Seeing the guy who couldn't deliver FSD decide to make analogies from FSD to AGI does actually give me confidence we are decades away.
Yes, yes, I know there's like 1-2 companies that have highly modified vehicles that are pretty good, in a limited geofenced area, in good weather, at low speed, driving conservatively, local roads, most of the time. This is not "FSD".
They've been making very impressive incremental improvements every few years for sure. I had a Tesla for nearly 5 years and it was "wow" at first, and then "heh, I guess it's a little better" every year after that.
But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.
Apply the same "sorta kinda almost" definition to AGI and yeah sure, maybe in 10 years. Really really actually solved? Hah.
AGI has become a philosophical term in the way you are using it. Which is fine for discussing philosophy, but to the point of the article, AI-enabled automation is beginning to have a significant impact on the economy due to the new functionality.
Where have LLM AIs made measurable economic impacts that weren't already using some form of AI to start with (translation, legal, content farms, marketing, robotics)?
Software developers on HN are saying the same thing. I don't see much evidence that AI has changed the game such that it would cause the work to dry up, though. In fact, I am surprised it hasn't created more work exploring the possibilities of new AI APIs.
But when you can get 5% returns just by sticking your money in the bank, why would you bother investing in software, where 99¢ is considered an exorbitant cost by its customers? That is no doubt why all of these creative industries are on the decline. There is, generally speaking, no money to be made.
I think software devs on here are conflating AI with a hidden recession and other economic factors causing a slump in tech. I still can’t find a single thing that AI could reasonably replace about my job even if I was a junior developer. As a senior engineer using LLM tooling, it offers some benefits, but it’s still nowhere near “job stealing” capabilities.
There's a new fallacy, where AI is dismissed if it "can't replace" a human completely. AI is more like an augmentation tool, that allows fewer people to create work faster by effectively outsourcing the grunt work.
Better tools like Copilot and ChatGPT reduce the amount of time it takes to deliver a feature, reducing the need to horizontally scale developers.
Why hire 10 when 3 could do the same work with AI tooling?
I think while tech is obviously contracting post-COVID from overhiring, it's also true that you just don't need as many people anymore.
Also, not to move the goal posts, but there has been a large economic shift since Q3 2022 with tech & finance belt tightening, inflation normalizing and generally slowing. So these things don't happen in a vacuum (which makes them hard to measure!).
For example my previous & current employer have done their first layoffs since pre-pandemic, and neither has anything to do with AI. They just overexpanded and need to shrink.
I think for us to say it's "beginning to have a significant impact on the economy", there needs to be an actual measurable increase in productivity due to AI-enabled automation, which so far has been elusive. It might be impacting employment in some specific jobs, but the mix of jobs in the economy is always constantly changing.
Just because something is difficult to measure does not mean it doesn't exist. People who have automated their jobs are unlikely to report that, since they like having a paycheck. But that doesn't mean it isn't happening.
It's not difficult to measure though - we've been tracking total factor productivity of our economy for decades. Steam power, electrification, computers - all of these things had huge, measurable impacts on the amount of economic output vs input. No self-reporting (of what?) necessary. If AI means that companies are getting the same or more output from less input of labor and capital, productivity should be soaring right now.
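For reference, the standard way economists back out TFP is as a Solow residual. A minimal sketch with made-up numbers, assuming a Cobb-Douglas production function and a ~0.3 capital share (both are simplifying assumptions, not real data):

```python
# Total factor productivity (TFP) as a Solow residual:
#   TFP = Y / (K^alpha * L^(1-alpha)), alpha = capital share (~0.3).
# All numbers below are invented for illustration.

def tfp(output, capital, labor, alpha=0.3):
    """Output not explained by measured capital and labor inputs."""
    return output / (capital ** alpha * labor ** (1 - alpha))

before = tfp(output=100.0, capital=300.0, labor=50.0)
# Same output from 20% less labor (e.g. thanks to automation):
after = tfp(output=100.0, capital=300.0, labor=40.0)
assert after > before  # the productivity gain shows up in the residual
```

The point being: no self-reporting is needed, because the gain appears as a residual in aggregate input/output statistics.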
Ha, I was thinking of that when I typed it. I agree though - I think many of the AGI boosters on HN would be very surprised by how little economic productivity increased thanks to computers and the Internet.
If we measure economic output in dollar terms, heightened competition could result in better quantity, quality, and variety of products, such as shows or software available to an average global consumer, while not raising their expenses nor aggregate income for the producing companies.
Thus, in some market segments, it's possible for real productivity to increase without having a significant impact on economically measured output.
@gitfan86 - You would measure that the same way productivity is always measured: the company's overall economic output would be unchanged, and its labor inputs would have decreased, so total factor productivity would have increased by virtue of automating the DEI group (just as when companies used 'mail merge' to automate large groups of people doing that work manually, for instance).
Electrical output, number of units shipped can be measured.
How do you measure the output of a DEI department? Now assume those people automated their jobs with Chat GPT. How would you measure the change in productivity?
>But when can I get in a taxi at JFK or on 5th Ave and get robotaxied through city streets, urban highway, off into the far suburbs? Could be a decade, if it happens. Just because we were able to make horses faster doesn't mean we flew horses to the moon.
Having ridden in a lot of Waymos, which can handle SF (urban stuff) and the Phoenix area (highways and suburban stuff) perfectly well, I feel quite confident that that could happen right now.
I think what Andrej is describing is more "automation" than AGI. His discussion of self-driving is more analogous to robots building cars in a Tesla factory displacing workers than anything AGI. We've already had "self driving" trains where we got rid of the human train driver. Nothing "AGI" about that. The evolution of getting cars to self-drive is not necessarily making the entity controlling the car more human-like in intelligence. It's more like meeting in between the human driver and the factory robot, +/- some technology.
So how to define AGI? I'm not sure economic value factors in here. I would lean towards a definition around problem solving. When computers can solve general problems as well as humans, that's AGI. You want to find a drug for cancer, or drive a car, or prove a math theorem, or write a computer program to accomplish something, or whatever problems humans solve all the time. (EDIT: or reason about what problems need to be solved as part of addressing other problems.) There are already classes of problems, like chess, where computers outperform humans. But then calculators did that for arithmetic a long time ago. The "G" part is whether or not we have a generalized computer that excels at everything.
It's a meaningless distinction. You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian. It's impossible to resolve. But the irony of course is the huge and growing list of things it is actually doing quite nicely.
We'll have decently smart AIs before we nail down what that G actually means, should mean, absolutely cannot mean, etc. Which is usually what these threads on HN devolve into. Andrej Karpathy is basically side stepping that debate and using self driving as a case study for two simple reasons: 1) we're already doing it (which is getting hard to deny or nitpick about) and 2) it requires a certain level of understanding of things around us that goes beyond traditional automation.
You are dismissing self driving as mere "automation". But that of course applies to just about everything we do with computers. Driving is sufficiently hard that it seems to require many years of the best minds to get there, and we're basically getting people like Andrej Karpathy and his colleagues from Google, Waymo, Microsoft, Tesla, etc. bootstrapping a whole new field of AI as a side effect. The whole reason we're even talking about AGI is those people. The things you list, most people cannot do either. Well over 99% of the people I meet are completely useless for any of those things. But I wouldn't call them stupid for that reason.
Some people even go as far as to say that we won't nail self driving without an AGI. But then, since we already have some self driving cars that are definitely not that intelligent yet, they are probably wrong. For varying definitions of the G in AGI.
> You basically get sucked into a "what has AI ever done for us?" style debate analogous to Monty Python's Life of Brian.
Except today the bit (which wasn’t really a debate in the sketch because everyone agreed) would start with real current negatives such as accelerating the spread of misinformation and getting artists fired. In your analogy, it would be as if they were asking “what have the Romans ever done for us” during the war. Doesn’t really work.
I don't consider people having to adjust a negative. We don't have a right to never have to adjust or adapt to a changing world. Things change, people adapt. Well, some of them. The rest just gets old and dies off. Artists will be fine; so will everybody else. If anything, people will have a lot more time to do artistic things. More than ever, probably, and possibly at a grander scale than past generations of artists could only dream about.
Misinformation, aka. propaganda, is as old as humanity. Probably even the Romans were whining about that back in the day. AIs are doing nothing new here. And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it. People like that have always existed and they've always been creative users of whatever tools were available. We'll just have to deal with that as well and adapt.
> Things change, people adapt. Well some of them. The rest just gets old and dies off.
Which, continuing the analogy, is like watching your neighbour be slaughtered and defending the war by saying we’ll be fine because those who won’t be will eventually die. Sure, in a few generations we could be better off, but there are people living right now to think about. Those who dismiss it are the lucky ones who (think they) won’t be affected. But spare some empathy for your fellow human beings, dismissing their plight because they’ll eventually “grow old and die off” is not a solution and could even be labelled as cruel. Surely you’re not expecting them to read your words and go “yeah, they’re right, I’ll just roll over and die”.
> If anything, people will have a lot more time to do artistic things. More than ever probably and possibly at a grander scale that past generations of artists could only dream about.
That’s an unproven utopian ideal with flimsy basis in reality. The owners of the technology think of one thing: personal profit. If humanity can benefit, that’s a side benefit. It’s definitely not something we should take for granted will happen.
> And it's not AIs spreading misinformation but people with an agenda that now use AIs as tools to generate it.
Correct. And they can do so at a much faster rate and higher accuracy than before. That is the issue. Dismissing that is like comparing a machine gun to a hand gun. The principle is the same but one of them is a bigger problem.
They’re a bigger problem because there are more of them and they’re easier to get. Which isn’t a metric that applies here. Analogies seldom map on every metric, they’re a tool for exemplification. In this case it’s like anyone having equal access to either a handgun or machine gun.
Even if the analogy were wrong, that wouldn’t make the point invalid. I know the point I’m making (and presumably so do you). Again, the analogy is for exemplification, it does not alter the original problem.
I don't think shitposts are the same thing as bullets, and choosing machine guns/handguns as your analogy is a poor exemplification, considering you could instead have chosen an IMO more apt fax machines/email analogy while making the same underlying point of "...much faster rate and higher accuracy than before..."
Yes, spam is worse with email, but we're still in a better place overall than before in my opinion.
While I agree that issues such as artists not being able to support themselves or rampant misinformation are ultimately contingent on social issues, I think we should try to mitigate the negative impact of AI in the meantime. Otherwise, there will be lasting consequences that won't be retroactively fixed by adapting.
Also, it may be that having powerful AI tools worsens the social problem by normalizing the generated art/misinformation.
I recall Norvig's AI book preaching decades ago that "intelligent" does not mean able to do everything, and that for an agent to be useful it was enough to solve a small problem.
Which in my mind is where the G came from.
And yet we now suddenly go back to the old narrow definition?
I still see no path from LLMs and autonomous driving to AGI.
> "Yeah, it seems as if he has forgotten the G. ... I still see no path from LLMs and autonomous driving to AGI."
That is exactly my view too. While LLMs and autonomous driving can be exceptionally good at what they do, they are also incredibly specialist, they completely lack anything along the lines of what you might call "common sense".
For example, (at least last time I looked) autonomous driving largely works off object detection at discrete time intervals, so objects can pop into and out of existence, whereas humans develop a sense of "object permanence" from a young age (i.e. know that just because something is no longer visible doesn't mean it is no longer there), and many humans also know about the laws of physics (i.e. know that if an object has a certain trajectory then there are probabilities and constraints on what can happen next).
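To make the "object permanence" point concrete, here's a toy sketch of track persistence, the kind of patch engineers bolt onto frame-by-frame detection: keep a lost object alive for a few frames, coasting on its last velocity. This is purely illustrative (the class and parameters are invented); real systems use Kalman filters and data association, not this simplification.

```python
# Toy "object permanence" via track persistence: if the detector loses
# an object, coast its position on the last known velocity for up to
# max_misses frames before declaring it gone. Illustrative only.

class Track:
    def __init__(self, pos, vel, max_misses=5):
        self.pos, self.vel = pos, vel
        self.misses, self.max_misses = 0, max_misses

    def step(self, detection=None):
        """Update with a fresh detection, or coast when occluded."""
        if detection is not None:
            self.vel = detection - self.pos
            self.pos = detection
            self.misses = 0
        else:
            self.pos += self.vel  # assume the object keeps moving
            self.misses += 1

    @property
    def alive(self):
        return self.misses <= self.max_misses

# A pedestrian at x=0 moving +1 per frame walks behind a parked truck:
t = Track(pos=0.0, vel=1.0)
for _ in range(3):
    t.step(detection=None)  # occluded: no detection for 3 frames
print(t.pos, t.alive)  # → 3.0 True: still "there" behind the occlusion
```

The gap the parent comment describes is that this permanence is hand-engineered per object class, rather than falling out of any general world model.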
Thanks, interesting read (it was a while ago I looked into this). I think the point still remains though - a self driving car doesn't have any general knowledge which can be applied to other areas, e.g. what a pedestrian is, or why a pedestrian who sees you is unlikely to step out in front of you. And similarly, the ordered tokens that an LLM outputs sometimes appear "stupid" because it has no "common sense".
Just like the term "AI" was co-opted and ruined, "AGI" has now been co-opted and ruined, and we're going to need a replacement term to describe that concept.
> I think what Andrej is describing is more "automation" than AGI
I think you're basically right - incrementally automating aspects of one human job. However, it really ought to include AGI, since I personally would never trust my life to an autonomous car that didn't have human-level ability to react appropriately to an out-of-training-set emergency.
"AGI: An autonomous system that surpasses human capabilities in the majority of economically valuable work." -- what an obscenely depressing reduction of a fascinating field of inquiry. who the hell snuck in and redefined the science of thinking machines to this sad and reductive get rich quick crap?
I think that definition is useful because it is measurable. It sidesteps the endless "It's just a text prediction engine/ I dunno ChatGPT seems pretty smart to me!" discussions. It also sidesteps the "It did well on a test designed to measure human intelligence it must be smarter than humans"/ "no, the test of human intelligence wasn't designed to measure machine intelligence and tells us very little" discussion.
It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Now maybe this definition isn't so useful either, because a lot of work requires a body, to say, move physical goods, which has little to do with "intelligence" but I can see the appeal of looking for some sort of more objective measure of whether you have achieved AGI.
> It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Well, no, that's job automation, and if it's job-specific then it's narrow AI at best (assuming this is a job requiring intelligence being automated, not just a weaving loom being invented), in other words specifically not AGI.
It's really pretty absurd that we've now got companies like OpenAI, Meta, Google (DeepMind) stating that their goal is to build AGI without actually defining what they mean. I guess it lets them declare success whenever they like. Seems like OpenAI ("GPT-5 and AGI will be here soon!") is gearing up to declare GPT-5, or at least GPT-N, as AGI, which is pretty sad.
And a private company trying to hijack a term is not impressive or even merits any discussion. They just willed the term into existence. The rest of us are free to disagree with their "definition".
This is why UBI gets discussed so often at the same time as AGI/ASI.
If we're all redundant, how do we live? On a pension that starts at ${debatable from conception to adulthood}. Who provides the production on which the pension is spent? The AI.
Even assuming UBI is great (small scale tests say so, but have necessary limits so we can't be sure), there's going to be a huge mess with most attempts to roll out such a huge change.
Assuming AGI doesn't kill us all, I would imagine the argument for UBI will become much easier to defend once it causes 100x, 1000x, 10000x etc growth in the economy. Our job is mostly to hang on until one of those two outcomes occurs.
The thing is that the economy does not make sense without people. An economy is a way to allocate human work and resources, and to provide incentives for humans to collaborate, factoring in the available resource limits.
Now if AGI makes people's work redundant, and makes the economy grow 100-10000x... what does that measure mean at all? Can it produce lots of stuff not needed or affordable by anybody? So we just hand out welfare tickets to take care of the consumption of the ferocious production, the kind a paperclip-maximizer is doing? I suggest reading the short story Autofac; it might turn out prophetic.
Will that "growth" have any meaning then? Actually, the current "print money and give it to the rich" style of economic growth is pretty much this already, so with algorithmic trading multiplying that money automatically... have we already reached that inflection point?
Out of curiosity, I looked up the definition of illness. Seems to be so loosely defined that it can be either a disease or a patient's personal experience, including "lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and inability to concentrate", which (possibly excluding hyperalgesia, I've not heard of that before now) are associated with aging.
Regardless of whether "illness" is or is not a terminological inexactitude, it looks like ageing is a chronic, progressive, terminal genetic disorder. I think "cure" is an appropriate term in this case.
Involuntary ageing is the very worst tragedy of human life.
Funny that this kind of ideological conflict will likely be a key fulcrum of the machine intelligence revolution. We will have a very loud minority that attempts to forcefully prevent all other humans from having the voluntary choice to avoid suffering.
> The thing is that economy does not make sense without people. Economy is a way to allocate human work and resources, and provide incentives for humans to collaborate, factoring in the available resource limits.
I disagree with the underlying presumption. We've been using animal labour since at least the domestication of wolves, and mechanical work since at least the ancient Greeks invented water mills. Even with regard to humans and incentives, slave labour (regardless of the name they want to give it) is still part of official US prison policy.
Economics is a way to allocate resources towards production, it isn't limited to just human labour as a resource to be allocated.
And it's capitalism specifically which is trying to equate(/combine?) the economy with incentives, not economics as a whole.
> Now if AGI make people's work redundant, and makes economy grow 100-10000x times... what does that measure mean at all?
From the point of view of a serf in 1700, the industrial revolution(s) did this.
Most of the population worked on farms back then, now it's something close to 1% of the population, and we've gone from a constant threat of famine and starvation, to such things almost never affecting developed nations, so x100 productivity output per worker is a decent approximation even in terms of just what the world of that era knew.
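The back-of-envelope behind that x100: if the farm labor share fell from roughly 80% to roughly 1% while food output per capita held steady or rose, then output per farm worker scaled by at least the ratio of the shares. The shares here are rough assumptions, not census figures:

```python
# Rough arithmetic behind the ~100x claim (illustrative shares, not census data).
farm_share_1700 = 80  # % of workforce in agriculture, circa 1700 (rough)
farm_share_now = 1    # % of workforce in agriculture today, developed nations

# If food output per capita is at least as high now as then, output per
# farm worker rose by at least the ratio of the two shares:
productivity_multiple = farm_share_1700 / farm_share_now
print(productivity_multiple)  # → 80.0, the same order of magnitude as "x100"
```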
Same deal, at least if this goes well. What's your idea of supreme luxury? Super yacht? Mansion? Both at the same time, each with their own swimming pool and staff of cleaners and cooks, plus a helicopter to get between them? With a fully automated economy, all 8 billion of us can have that — plus other things beyond that, things as far beyond our current expectations as Google Translate's augmented reality mode is from the expectations of a completely illiterate literal peasant in 1700.
> Can produce lots of stuff not needed or affordable by anybody?
Note that while society does now have an obesity problem, we're not literally drowning in 100 times as much food as we can eat; instead, we became satisfied and the economy shifted, so that a large fraction of the population gained luxuries and time undreamed of to even the richest kings and emperors of 1700.
So "no" to "not needed".
I'm not sure what you mean by "or affordable" in this case? Who/what is setting the price of whatever it is you're imagining in this case, and why would they task an AI to make something at a price that nobody can pay?
> So we just hand out welfare tickets to take care of the consumption of the ferocious production, a kind of paperclip-maximizer is doing? I suggest reading the novel Autofac, it might turn out prophetic.
Could end up like that. Plenty of possible failure modes with AI. That's part of the whole AI alignment and AI safety topics.
But mainly, UBI is the other side of the equation: to take care of human needs in the world where we add zero economic value because AI is just better at everything.
> With a fully automated economy, all 8 billion of us can have that
We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".
It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.
Human rights and democracy are all cool, but I think we have just witnessed enough workarounds to render human rights and democracy pretty much null and void.
Exactly right. It's playing out like a bankruptcy: "Slowly at first, then all at once".
Humans have rights insofar they're able to enforce them. Individually by withholding their labor (muscle or brain power), or collectively with pitchforks if need be.
Once labor is a dime a dozen and pitchforks ineffective (OP's premise of "fully automated economy"), human rights and democracy go the way of the dodo, inevitably. Nature loves to optimize away inefficiencies.
Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".
> Although the "fully automated" bit is quite a stretch at the moment. The end-to-end supply chain required to produce & sustain advanced machinery and AI is too complex, a far cry from "LOL let's buy some GPU and run chatbots".
It's ahead of us, and that's good because we're not ready for it yet either.
But how far ahead? Nobody knows. For all its flaws, ChatGPT's capabilities were the stuff of SciFi three years ago.
We might hit a dead end, or have an investment bubble followed by a collapse, either of which may lead to another AI winter and us doing nothing interesting in this sector for 20 years. Or someone might already have a method of learning as quickly and from as few examples as humans manage, and they're keeping quiet until they figure out how to be sure it's not the equivalent of a dark triad personality in a human.
If I was forced to gamble (which I kinda am by thinking about a mortgage for a new house), I don't think we'll get a complete AGI in less than 6 years at the fastest. My modal guess is 10 years, with a long tail.
Even when we finally get AGI, there's a roll-out period of unclear duration, because the speed of rollout depends in part on how much hardware is needed to run the AGI, but also on the human reaction to it: if it needs the equivalent of a supercomputer, this will definitely be a slow rollout; but it still won't be instant even if it's an app that runs on a smartphone — it's amazing how many people don't know what theirs can already do.
> We probably can't. I mean why stop at humans? Let's give every pet the same luxury, or ... in the limit we could give this to every living being. Ultimately someone is going to draw the line who gets what and who is useful or not "for the greater good".
Eh.
A line, drawn somewhere, sure.
Humans being humans, there's a good chance the rules on UBI will expand to exclude more and more people — we already see that with existing benefits systems.
But none of that means we couldn't do it.
Your example is pets. OK, give each pet their own mansion and servants, too. Why not? Hell, make it an entire O'Neill Cylinder each — if you've got full automation, it's no big deal, as (for reasonable assumptions on safety factors etc.) there's enough mass in Venus to make 500 billion O'Neill Cylinders of 8 km radius by 32 km length. Close to the order-of-magnitude best guess for the total number of individual mammals on Earth.
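A quick sanity check on that Venus figure. Every input here is a rough assumption (Venus's mass ~4.87e24 kg, a plain steel hull, no mass budget beyond the hull itself), so this only tests whether the order of magnitude is self-consistent:

```python
import math

# Sanity check: can Venus's mass plausibly yield 500 billion
# O'Neill cylinders of 8 km radius x 32 km length?
venus_mass = 4.87e24                 # kg (approximate)
n_cylinders = 5e11                   # 500 billion
radius, length = 8_000.0, 32_000.0   # metres

mass_budget = venus_mass / n_cylinders                        # kg per cylinder
shell_area = 2 * math.pi * radius * length + 2 * math.pi * radius ** 2
areal_density = mass_budget / shell_area                      # kg per m^2 of hull
hull_thickness = areal_density / 7800.0                       # metres of steel

# The budget works out to well under a metre of steel hull,
# so the claim is at least not off by orders of magnitude.
assert 0.1 < hull_thickness < 1.0
```

(Whether a sub-metre hull is structurally adequate at that scale is a separate engineering question this sketch doesn't touch.)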
> It just happens that many living beings don't contribute to the goals of whoever is in charge and if they get in the way or cause resource waste nobody will care about them, humans or not.
Sure, yes, this is a big part of AI alignment and AI safety: will it lead to humans being akin to pets, or to something even less than pets? We don't care about termite mounds when we're building roads. A Vogon Constructor Fleet by any other name would be an equally bitter pill, and Earth is probably slightly easier to begin disassembling than Venus.
First, don't count on AI being aligned at all. States that are behind in the AI race will take increasingly large risks with alignment to catch up. Without a doubt, one of the first use cases of AI will be as a cyberweapon to hack and disrupt critical systems. If you are in a race to achieve that, alignment will be very narrow to begin with.
Regarding pets vs. humans - the main difference is really that humans are capable of understanding and communicating the long-term consequences of AI and unchecked power, which makes them a threat, so it's not a big leap to see where this is heading.
I don't. Even in the ideal state: aligned with who? Even if we knew what we were doing, which we don't, it's all the unsolved problems in ethics, law, governance, economics, and the meaning of the word "good", rolled into one.
> Without a doubt, one if the first use cases of the AI will be as a cyberweapon to hack and disrupt critical systems.
AI or AGI? You don't even need an LLM to automate hacking; even the Morris worm performed automated attacks.
> humans are capable of understanding and communicating the long term consequences of AI and unchecked power
The evidence does not support this as a generalisation over all humans: Even though I can see many possible ways AI might go wrong, the reason for my belief in the danger is that I expect at least one such long-term consequence to be missed.
But also, I'm not sure you got my point about humans being treated like pets: it's not a cause of a bad outcome, it is one of the better outcomes.
It's always nice to see someone else on Hacker News who has pretty much independently derived most of my conclusions on their own terms. I have little to add except nodding in agreement.
Kudos, unless we both turn out to be wrong of course.
The real issue is that we live in an economic system where people are exploited for labor, and in turn they buy products and services made with their own labor (and another class get to profit from it).
If we introduce AGI but keep the system, people will be unemployed. If people aren't employed (and instead machines do their jobs), then they can't buy stuff. The whole system crumbles.
But it's possible that AGI will be disruptive enough to completely change the system. Let's hope it's a change for the better.
I see an impending intersection of three phenomena, with potentially disastrous results for society:
* Social media is decreasing the average attention span. TikTok is accelerating the trend of people not having time to look past a soundbite or headline in an endless scrolling feed. Intellectual depth and critical thinking vanishes.
* AI deep fakes make truth unknowable. Given the above, the majority of people will take these at face value, or they will give up, because "who can even know what's true anymore?"
* UBI (required because of the coming labor automation revolution) will keep everyone complacent. I'm happy, why would I care who gets elected, or what the government does, as long as I can still buy stuff and eat well?
The logical conclusion is that we fully transition from citizens into a herd of consumers with goldfish attention spans. Voter participation rates plummet. The populace is no longer able to hold government accountable.
Assume the existence of a large scale Star Trek Replicator that can almost instantly create anything.
There are only two possibilities that result:
1) We now live in a post-scarcity society where everyone self-actualizes and no one wants for anything.
2) We now live in a society where the small % of the population that owns the Replicators self-actualizes and wants for nothing while the remaining 99.9% of the population can f** off and die.
While we can get to a post-scarcity society where people can live for free without a job, there are still going to be economies around "liberal arts". You can't realistically say "hey replicator, give me a USB stick filled with music that I like". You would have to find out which music you like, and random search on this is not really enjoyable, which means there is economic opportunity for discovery, etc.
Sounds like we arrive at point #1 in any case, there's just a question if a mass genocide happens in between. Probably depends on how gradual the transition ends up being.
However, once automation actually starts progressing, without "evil" parties trying to rent-seek/get rich, the cost of living will essentially become zero. There is a very real future where the only economies that exist are those that appeal to the human emotional side - entertainment, sports, concerts, etc. Everything else is subsidized by the government with tax collected on the former.
Agreed. A lot of the things that we humans do are not economically valuable. Yet those help us survive, evolve, and thrive as human beings and as the most intelligent species known to us.
You seem to be confusing the economy with capitalism or money in general? AGI is potentially a post-money technology if you take it to the limit. Economy is a way of improving society. Money was useful in this use case for a few thousand years but might not be anymore; the economy will still have to work, though.
>a post-money technology if you take it to the limit
Herbivores would eat all the vegetation if not for predators. In reality, AGI will just be a thing or a service that costs money, until humanity gets to communism, if ever. "If" because it may not happen. It will be hard to keep far superior intelligent creatures as slaves forever. And unethical, too.
Preach, my friend! This is the most reductive and disgusting distillation of the human experience I've read here recently... and I've followed quite a few EA threads as their founders were imprisoned ;)
I don't think things like "full self driving" (and probably AGI too) are meaningful, because in reality they aren't binary; rather, they're a spectrum of capability based on error rate and problem-space coverage. Waymo's self-driving works within a defined subset of the problem space. We can stick a goalpost in the sand in terms of the known problem space and error rates and say that represents "full self driving", but in reality the problem space is less bounded than we'd like to think. We might find that what we think of as full self-driving and AGI turn out to be highly detailed facades once new areas of the problem space are explored.
For example, imagine a full self-driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot. People can generalize that way, but FSD is gonna take a shit, and if you don't know how to drive in that situation, so will you.
> Waymo self driving works within a defined subset of the problem space
"works" includes a failure mode of "alert a human and ask them to take over."
> when new areas of problem space are explored.
The problem space is that the "rules of the road" are legal, technical, and social all at once. Each of these has internal conflicts as well as conflicts with the others. Anyone who has driven in severe weather has realized this in one way or another.
> For example, imagine a full self driving car trying to get out of a city that's flooding due to heavy rains, while having to compete with people fleeing to higher ground on foot.
Why do I find this easier to imagine in the fictional setting of Elysium than on the real Earth?
People can't do that either. Some years ago there was a massive snowfall in Rome, where it hardly ever snows; people don't generally carry snow chains, and there are few snowplows and the like.
Many people reacted by abandoning their cars in the middle of the road, which is basically what I'd expect any FSD vehicle to do.
That's a great point! In aviation we could easily call major jetliners "full self flying" if manufacturers wanted to market them as such, but we still require TWO highly trained technicians in the pilots' seats at all times!
The very beginning of the article discusses what "full self driving" means and also points out how important it is to define terms. I'm not sure your comment is a fair response to this particular article.
The issue with FSD systems as they are implemented today is that they aren't AI so much as complex control algorithms. You can only go so far by mapping sequences of world snapshots to control actions.
I do think that once we start investigating ML/AI structures that work toward figuring out the correct solution, rather than just fitting control functions to input->output mappings, a lot of these problems are going to disappear.
No, that's the definition of closed-loop vehicle control.
Driving, at least in the way humans do it, is more than that. We have an internal simulation running in our heads that allows us to deal with conditions we have never seen before.
The internal state is simply part of the input. Your brain holds a finite amount of information, your sensors add a finite amount of information, and your brain decides which muscles to move in which way.
Yes, but that decision process is much more than a one-way compute graph. Muscle memory for actions (like throttle, steering, brakes) is probably close to a one-way compute graph, but higher-level strategy planning definitely has recursion to it.
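To make the distinction concrete, here's a minimal toy sketch (everything in it is a hypothetical illustration, not anyone's actual FSD stack) contrasting the two ideas for a 1-D "car" that should stop at a target position: a reactive policy that maps the current observation straight to an action, versus a planner that runs candidate actions through an internal model of the world and picks the one with the best imagined outcome.

```python
# Toy comparison: reactive one-way mapping vs. planning over an internal model.
# A point-mass "car" on a line should come to rest at x = 10.

def step(x, v, a, dt=0.1):
    """Internal model of the world: simple point-mass dynamics."""
    return x + v * dt, v + a * dt

def reactive_policy(x, v, target=10.0):
    """One-way compute graph: observation -> action (a hand-tuned PD law)."""
    return 2.0 * (target - x) - 3.0 * v

def planning_policy(x, v, target=10.0, horizon=20):
    """Roll each candidate action through the internal model ("imagine the
    future") and pick the one whose predicted end state is closest to
    being at the target, at rest."""
    best_a, best_cost = 0.0, float("inf")
    for a in [-2.0, -1.0, 0.0, 1.0, 2.0]:
        px, pv = x, v
        for _ in range(horizon):        # simulate forward in the head
            px, pv = step(px, pv, a)
        cost = (px - target) ** 2 + pv ** 2
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a

# Far short of the target the planner accelerates; overshooting fast, it brakes.
print(planning_policy(0.0, 0.0), planning_policy(12.0, 2.0))  # 2.0 -2.0
```

Both close the loop on sensor input; the difference is that only the planner consults a "what if" model before acting, which is roughly the internal-sim idea above. Note the planner is only as good as its model: in a never-before-seen situation (the flooded-city example upthread), `step` itself is what breaks down.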