Three things that get me about current AI discourse:
- The public focus on AGI is almost a distraction. By the time we get to AGI, highly-specialised models will have taken jobs from huge swaths of the population; SWE and CS are already in play.
- That AI will need to carry out every task a role does to replace it. I see this a lot on HN. What if SWEs get 50% more efficient and they fire half? That's still a gigantic economic impact. Even at the current state of the art this is plausible.
- The notion that everyone laid off above will find new employment from the opportunities AI creates. Perhaps it's just a gap in my knowledge. What opportunities are so large they'll make up for the economies we're starting to see? I understand the inverting population pyramid in the Western world helps offset this somewhat (more retirees, fewer workers).
> What if SWEs get 50% more efficient and they fire half?
Zero sum game or fixed lump of work fallacy. Think second order effects - now that we spend less time repeating known methods, we will take on more ambitious work. Competition between companies using human + AI will raise the bar. Software has been cannibalizing itself for 60 years, with each new language and framework, and yet employment is strong.
New products that push that bar can command a decent margin (and good staff salaries) as long as there's a business case/demand, while feature-sets that currently command a decent margin will be available at dirt-cheap prices (managed by one- or two-person outfits).
Your comment really got me thinking, it's time to upskill haha. Aside from biotech and robotics do you see any areas particularly ripe for this push?
For example, if the core field of innovation is biotech, there will be unexpected needs in downstream and upstream fields like medical tooling, biosensors, carbon capture and novel materials. The internet blossomed into a thousand businesses, and I expect the same thing to happen again: we gain new capabilities, they open up demand for new products, so we get new markets and industries. Desires always fill up the existing capability space like a gas fills a room.
It's probably true, but just not for SWEs.
Many roles will go the way of secretaries; the cost of making an administrative tool will decrease to the point where there is less need for a specialised role to handle it.
The question is going to be about the pace of disruption: is there something special about these new tools?
Just like robo-taxis are supposed to be driving us around or self driving cars. Not to mention the non-fiat currency everyone can easily use to buy goods nowadays.
Waymo was providing 10,000 weekly autonomous rides in August 2023, 50,000 in June 2024, and 100,000 in August 2024.
Not everything has this trajectory, and it took 10 years more than expected. But it's coming.
Not saying AI will be the same, but underestimating the impact of having certain outputs 100x cheaper, even if many times crappier, seems like a losing bet, considering how the world has gone so far.
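For scale, the Waymo figures quoted above imply roughly exponential growth. A rough sketch, using only the quoted numbers and assuming a steady exponential over the Aug 2023 to Aug 2024 period:

```python
import math

# Weekly autonomous rides quoted above: 10,000 (Aug 2023) -> 100,000 (Aug 2024)
start, end, months = 10_000, 100_000, 12

# Compound monthly growth rate implied by a 10x increase over 12 months
monthly_growth = (end / start) ** (1 / months)            # ~1.21, i.e. ~21%/month
doubling_months = math.log(2) / math.log(monthly_growth)  # ~3.6 months per doubling

print(f"~{monthly_growth - 1:.0%}/month, doubling roughly every {doubling_months:.1f} months")
```

Whether that rate survives contact with harder markets is exactly the question the replies below raise.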
Waymo is a great example, actually. They serve Phoenix, SF and LA. Those locations aren’t chosen at random; they represent a small subset of all the weather and road conditions that humans can handle easily.
So yes: handling 100,000 passengers is a milestone. The growth from 10,000 to 100,000 implies it’s going to keep growing exponentially. But eventually they’re going to encounter stuff like Midwest winters that can easily stop progress in its tracks.
About driverless cars, new tech adoptions often start slow, until the iceberg tips and then it's very quick change. Like mobile phones today.
I remember thinking, before smartphones had all-day batteries and good touchscreens: these people really think the population will use phones more than desktop computers? Here we are.
I wouldn't say so, because the cars are not at all autonomous in our understanding of autonomous.
The cars aren't making all their decisions in real-time like a human driver. They, Waymo, meticulously mapped and continue to map every inch of the traversable city. They don't know how to drive, they know how to drive THERE.
It would be like if I went to the DMV to take a driving test. I would fail immediately, because the parking lot is not one I've seen and analyzed before.
"true" self driving is not possible with our current implementation of automobiles. You cannot safely mix automobiles that self-drive with human drivers. And the best solution is to converge towards known routes. We don't even necessarily how to program the routes - we can instead encode them in the road itself.
It might occur to you that I'm speaking about rail. The reality is it's trivial to automate rail systems, but the variables of free-form driving can't be automated.
In the first case there are inherent safety constraints preventing it, and thus it's not available for the public to freely use. It's highly regulated. With GPT writing code, it is already generally available and in heavy use. In the main, there are no such life-and-death concerns.
In the second case there are inherent technical challenges to using non-fiat currency, and the FX volatility against fiat is wild. There are also barriers and inconveniences to conversion. With GPT writing code, the user can review for quality and still be many times more productive, and there are far fewer fees and less risk of loss.
It's risky to take two failed or slow innovations and assume that all innovations will be failed or slow.
Only on a small subsection of US roads, though; British roads, for example, don’t make any sense to these systems.
However, generally I think being a software developer might not be a career in 10 years, which is terrible to think about. Designer too. And all of this is done by taking people's work and passing it off as their own.
These models are not repositories or archives of others work that they simply stitch together to create output. It's more accurate to say that they view work and then create an algorithm that can output the essence of that work.
For image models, people are often pretty surprised to learn that they are only a few gigabytes in size, despite training on petabytes of images.
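Back-of-the-envelope, with round illustrative numbers (not the exact figures for any particular model or dataset):

```python
# A model of a few GB trained on billions of images cannot be an archive of them.
model_size_bytes = 4 * 10**9    # assume a ~4 GB image model
training_images = 2 * 10**9     # assume ~2 billion training images

bytes_per_image = model_size_bytes / training_images
print(f"~{bytes_per_image:.0f} bytes of weights per training image")
```

Even a small JPEG thumbnail is tens of kilobytes, so at a couple of bytes per image the weights physically cannot be storing the training set; what survives is something much more compressed than the originals.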
Non-general AI won't cause mass unemployment, for the same reason previous productivity-enhancing tech hasn't. So long as humans can create valuable output machines can't, the new, higher-output economy will figure out how to employ them. Some won't even have to switch jobs, because demand for what they provide will be higher as AI tools bring down production costs. This is plausible for SWEs. Other people will end up in jobs that come into existence as a result of new tech, or that presently seem too silly to pay many people for — this, too, is consistent with historical precedent. It can result in temporary dislocation if the transition is fast enough, but things sort themselves out.
It's really only AGI, by eclipsing human capabilities across all useful work, that breaks this dynamic and creates the prospect of permanent structural unemployment.
We do have employment problems arguably caused by tech; currently the bar of minimum viable productivity is higher than before in a lot of countries. In Western welfare states there aren't jobs anymore for people who were doing groundskeeper-ish things 50 years ago (apart from public-sector subsidized employment programs).
We need to come up with ways of providing meaningful roles for the large percentage of people whose peg shape doesn't fit the median job hole.
The irregularities of many real-world problems will keep even humans of low intelligence employable in non-AGI scenarios. Consider that even if you build a robot to perform 99% of the job of, say, a janitor, there's still that last 1%. The robot is going to encounter things that it can't figure out, but any human with an IQ north of 70 can.
Now, initially this still looks like it's going to reduce demand for janitors by 99%. So it's still going to cause mass unemployment, right? Except, it's going to substantially reduce the cost of janitorial services, so more will be purchased. Not just janitorial services, of course. We'll deploy such robots to do many things at higher intensity than we do today, and as well as many things that we don't do at all right now because they're not cost effective. So in equilibrium (again, the transition may be messy), with 99% automation we end up with an economy 100x the size, and about the same number of humans employed.
I know this sounds crazy, but it's the historical norm. Today's industrialized economies already have hundreds of times the output of pre-industrial economies, and yet humans mostly remain employed. At no point did we find that we didn't want any more stuff, actually, and decide to start cashing out productivity increases as lower employment rather than more output.
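The equilibrium arithmetic in the janitor example above can be sketched in round numbers (purely illustrative assumptions, not a forecast):

```python
# Suppose a robot handles 99% of each janitorial job and a human the irregular 1%.
human_share = 0.01
janitors = 1_000          # hypothetical workforce before automation
sites_before = 1_000      # one human per site

# After automation, each human covers only the 1% residue, so one human
# can in principle support 1 / 0.01 = 100 sites' worth of work.
sites_per_human = 1 / human_share
sites_after = janitors * sites_per_human

print(f"Same {janitors} humans now cover {sites_after:,.0f} sites "
      f"({sites_after / sites_before:,.0f}x the serviced volume)")
```

The point of the sketch is that "99% automated" only maps to "99% unemployed" if demand stays fixed; if the cost drop expands demand to fill the new capacity, headcount can stay flat while output multiplies.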
We're quickly approaching the limit of how smart the average human can get; that's the problem, and what sets this apart from the historical norm.
This worked before because commonly people couldn't even read or do basic math. We figured that out and MUCH more, and now everyday people are taught higher-level thinking for many years. People today are extremely smart compared to all of human history.
But IMO we've kind of reached a ceiling. We can't push people further than we already have. In the last two decades this became very evident. Now almost everyone goes to college, but not all of them make it through.
The bar at the low end has been steadily rising, to the point that now you need a degree for a 20-bucks-an-hour job. That's with our technology NOW. We're already seeing the harmful effects of this as average or below-average people struggle to make even low incomes.
It's true that humans will always find new stuff to do. The issue is as time goes on this new stuff goes higher and higher. We can only push humans, as a whole, so far.
If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?
A lot of the predictions of job loss are predicated on an unspoken assumption that we’re sitting at “task maximum” so any increase in productivity must result in job loss. It’s only true if there is no more work to be done. But no one seems to be willing or able or even aware that they need to make that point substantively—to prove that there is no more work to be done.
Historically, humans have been absolutely terrible at predicting the types and volumes of future work. But we’ve been absolutely incredible at inventing new things to do to keep busy.
> If an AI can do my job, why would my employer fire me? Why wouldn’t they be excited to get 200% productivity out of me for the marginal cost of an AI seat license?
They’d be excited at getting 100x of 100% output out of an AI for 20 dollars a month and laying you off as redundant. If you aren’t scared of the potential of this technology you are lying to yourself.
“Fixed lump of work fallacy” as noted by commenter above.
If a company can get 100% more output they don’t fire half their people so they stand still/get no additional productivity gain.
You're relying on theoretical work needed by employers to be unlimited. You're also assuming all of this additional work can't be handled by an LLM.
First of all, fixed lump of work is not a fallacy. We do know there is a limit, as there are limits to the amount of work human brains can even comprehend. A limit exists. We don't know where exactly this limit is, but a limit DOES exist, and an LLM may possibly cover that limit.
Second, you have to assume that this "additional work" can't be handled by the LLM. How can you be sure? Did you think about what this work actually is? My first thought was "cleaning the toilets."
>What forum is this???
I assume it's a forum of people who don't base their lives off of concepts with buzzwords. “Fixed lump of work fallacy” is a fancy phrase for a fancy concept... that doesn't mean it's an actual fallacy or actually true. Literally you just threw that quote up there as if the slightly clever wording itself proves your point.
What exactly is this additional work that will pop up once LLMs are around and so powerful they can do all human intellectual work? Can you even do a concrete, real-world analysis without jumping to vague hypotheticals covered by fancy-worded conceptual quotations? The last guy used analogies as part of his logical baseline of reasoning. Wasn't convincing to me.
This assumes that the bottleneck to profitability is the limit of software engineers they can afford to hire.
If they’re happy with current rate of progress (and in many companies that is the case), then a productivity increase of 100% means they need half the current number of engineers.
Is the reason for development on features going slow usually the number of developers though? Nowhere I’ve worked has that really been the case, it’s usually fumbled strategic decision making and pivots.
And the “current rate” is competitively defined. So if AI can make software developers twice as productive, then the acceptable minimum “current rate” will become 2x faster than it is today.
A computer already does in seconds what it used to take many people to do. In fact the word “computer” was a job title; now it describes the machine that replaced those jobs.
Yet people are still employed today. They are doing the many new jobs that the productivity boost of digital computing created.
I don't know why people think analogies from the past predict or prove anything about the future. It's as if a past situation applies completely to the current situation via analogy, EVEN though both situations are DIFFERENT.
The computer created jobs because it takes human skills to talk to the computer.
It takes very little skill to talk to an LLM. Why would your manager ask you to prompt an LLM to do something for you when he can do it himself? You going to answer this question with another analogy?
Just think reasonably and logically. Why would I pay you a 300k annual salary when ChatGPT can do it for next to nothing? It's pretty straightforward. If you can't justify something with a straightforward answer, likely you're not being honest with yourself.
Why don't we use actual evidence-based logic to prove things, rather than justify them by leaping over some unreasonable gap with an analogy? Think about the current situation; don't base your hope on a past situation and assume the current one will play out the same because of analogy.
My job is not to do a certain fixed set of tasks, my job is to do whatever my employer needs me to do. If an LLM can do part of the tasks I complete now, then I will leave those tasks to the LLM and move on to the rest of what my employer needs done.
Now you might say AI means that I will run out of things that my employer needs me to do. And I'll repeat what I said above: you've got to prove that. I'm not going to take it on faith that you have sussed out the complete future of business.
Future events that haven't happened yet can't be proven out, because they're an unknown.
What we can do is make a logical and theoretical extrapolation. If AI progresses to the point where it can do every single task you can do in seconds, what task is there for you left to do? And how hard is the task? If LLMs never evolve to the point where they can clean toilets, well then you can do that, but why would the boss pay you 300k to clean the toilet?
These are all logical conjectures on a possible future. The problem here is that if AI continues on the same trendline it's traveling on now I can't come up with a logical chain of thought where YOU or I keep our 300k+ engineering jobs.
This is what I keep hearing from not just you, but a ton of people. That analogy about how technology only created more jobs before with no illustration of a specific scenario of what's going on here. Yeah if LLMs replace almost every aspect of human intellectual analysis, design, art and engineering what is there left to do?
Clean the toilet. I'm not even kidding. We still have things we can do but the end comes when robotics catches up and is able to make robots as versatile as the human form. That's the true end when the boss has chatGPT clean the toilet.
If they're high growth yes, if they're in the majority of businesses that are just trying to maximise profit with negligible or no growth then likely not.
When electricity got cheap, we used MORE electricity.
Think how many places you see shitty software currently.
My wife was just trying to use an app to book a test with the doctor - did not work at all. The staff said they know it doesn’t work. They still give out the app.
We are surrounded by awful software. There’s a lot of work to do- if it could be done cheaper. Currently only rich companies can make great software.
Well, that probably happens to some extent, but I am quite confident that some smaller shops will just say "Hey make an app that works 50% of the time and that's good enough." then fire half of the staff.
Oh, not just smaller shops. I have many issues with Android and other Google products -- from bugs to things that just don't work -- that have existed for years with no action on them. Surely Google has the resources? Right? Riiight?
This is a human problem, not a technology problem.
> We are surrounded by awful software. There’s a lot of work to do- if it could be done cheaper. Currently only rich companies can make great software.
Lots of the awful software is made by awfully rich companies - and lots of good software is made by bootstrapped devs.
To mention some interesting examples, both Amazon and Google have gone from great to meh soon after they went from startups to entrenched market leaders.
I guess this is why I’m excited. AI will give smaller motivated teams a lot more firepower. One committed person can (or may soon be able to) take on the might of a big company.
These companies are making crap software because their scale makes them hard to compete with. They know there’s no other good options.
I think Sam Altman’s right that there’ll be a 1 person unicorn company at some point.
On the third point, I think we've always seen this happen even in massive shocks like the Industrial Revolution (and the Second Industrial Revolution with assembly lines etc. and the Computer Age)
It might be hard for people to retrain to whatever the new opportunities are though. Although perhaps somewhat easier nowadays with the internet etc.
The myth that the Industrial Revolution was a wonderful time is just that, a myth. The actual reality of the AI revolution will likely be the same. Record number of billionaires and record number of people in deep poverty at the same time.
Do people really think the Industrial Revolution was “a wonderful time”? Basically the first thought that comes to mind for me is massive migration to urban centers, along with huge amounts of poverty, squalid living conditions and dissociation from your own labor. I feel like that was basically what was taught to me in high school too, not some recently learned insight.
And I agree with you. Further, the argument about economic prosperity isn’t equal for everyone. And increased worker efficiency isn’t directly (or sometimes at all) linked to worker satisfaction or even increased wages.
I’ve heard some people say it. That economic disruption doesn’t matter because “all the pieces fall into place” eventually and the Industrial Revolution being an example.
Well, yeah, but right now we're reaping many benefits from the Industrial Revolution. Far less malnutrition, for sure. Not saying it's the same as the AI boom though.
Not trying to value life in general at all, just the nature of the jobs. You might reply "distinction without a difference," and well, the fact that you'd think so would be one of my points about the labor ;).
Personally, preindustrial life sounds pretty rough, but it's all just apples and oranges! The future will continue to happen; to critique the present and how we got here is not to extol the past (unless, you know, you are a particularly conservative person I guess).
> What if SWEs get 50% more efficient and they fire half?
This is kinda ironic in a thread that's basically about the AI hype landscape, but you've just reduced the amount of SWE "power" your example organization has there by 25%.
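The arithmetic behind that 25% figure, spelled out:

```python
# 100 engineers at baseline productivity vs. 50 engineers at 1.5x productivity.
baseline_output = 100 * 1.0
after_layoffs = 50 * 1.5     # half the headcount, each 50% more efficient

reduction = 1 - after_layoffs / baseline_output
print(f"Net SWE capacity change: {reduction:.0%} reduction")  # 75 vs 100 units
```

So "50% more efficient, fire half" isn't a wash: an org that actually did this would end up with less total engineering capacity than it started with.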
Buy stocks and try to own the means of production. Things are going to begin to flatten out in terms of salary or even decrease as competition increases due to productivity gains.
> SWE and CS are already in play.
> What if SWEs get 50% more efficient and they fire half?
You know what happened last time we got 50% more efficient? It was when GitHub and npm arrived. LLMs are saving time and making us more efficient, but that's peanuts compared to the ability to just “download a lib that does X” instead of coding this shit on your own. And you know what happened after that? SWE positions skyrocketed.