I think what many people don't realise is that there will be a glut of cheap computer parts including CPUs, GPU cards, and memory when the AI and AI-adjacent businesses go bust and a bunch of data centres get pulled down.
Unfortunately, data center computers are not something you can just use as a consumer. They usually have custom connectors, and the parts are soldered down into rack-scale computers. They use custom water cooling that needs building-sized pumps, and so on. A Blackwell rack uses 140,000 watts and weighs 3,500 lbs. A typical house in the US has 40,000-50,000 watts of power at most, and its floors can only support about 40 lbs per square foot. These things are never going to be usable by consumers.
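Rough numbers on the floor-loading point (the ~8 sq ft footprint below is my own assumption for a standard rack, not a spec):

    # Back-of-envelope floor loading for a 3,500 lb rack.
    rack_weight_lbs = 3_500
    footprint_sqft = 8            # assumed ~24" x 48" rack footprint
    residential_rating = 40       # typical residential live-load rating, lbs/sq ft

    loading = rack_weight_lbs / footprint_sqft
    print(f"{loading:.0f} lbs/sq ft, roughly {loading / residential_rating:.0f}x what a house floor is rated for")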
If the AI boom slows, it will free up manufacturing capacity for the consumer supply chain, but there is going to be a long drought of supply.
This is a nice idea, because it physically limits where the child can call from. Even a very under-powered phone sets them free.
However, a new, severely under-powered phone with no graphics or apps would probably meet the requirement of not being sucked into the grown-up world too early, and the kids could maintain their own contact lists.
And they'll grow super-fast thumbs like we had to in order to text :)
This is what I hate about people trusting it. If you rely on AI to operate in a domain you don't have a firm handle on yourself, you will be tricked, and hackers will take advantage.
"AI! Write me gambling software with true randomness, but a 20% return on average over 1000 games"
Who will this hurt? The players, the hackers, or the company?
When you write gambling software, you must know that the house wins and that it can't be hacked.
A better example would be using LLMs to generate passwords or secret keys. Then even if the output looks random to a human, the inherent bias would make it a security disaster.
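To make the contrast concrete, a minimal sketch of how a secret should actually be generated, assuming Python's standard secrets module (which draws from the OS CSPRNG rather than a model's token distribution):

    import secrets
    import string

    # Draw from the operating system's CSPRNG, never from an LLM's sampler.
    alphabet = string.ascii_letters + string.digits
    password = "".join(secrets.choice(alphabet) for _ in range(24))

    # A 256-bit, URL-safe key for API-token style secrets.
    api_key = secrets.token_urlsafe(32)

    print(password)
    print(api_key)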
You just went and created the worst example. The model knows how to create an RNG; that's not its weakness. In fact, if you give it a randomness MCP, it won't do that.
If you use AI to write gambling software that you run in production without reviewing the code, or without a solid testing strategy to verify the intended odds, then I have a bridge to sell you.
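As a rough idea of what "verify the intended odds" could look like, here's a Monte Carlo sketch. The game is made up, and I'm assuming the quoted "20% return" means the house keeps 20% (an 80% return to player):

    import random

    def play_round(stake=1.0, rng=random):
        # Hypothetical game: the player wins 4x the stake with probability 0.2,
        # so the expected return to the player is 0.2 * 4 = 0.8 (house keeps 20%).
        return stake * 4 if rng.random() < 0.2 else 0.0

    def estimate_rtp(rounds=1_000_000):
        returned = sum(play_round() for _ in range(rounds))
        return returned / rounds

    rtp = estimate_rtp()
    assert abs(rtp - 0.80) < 0.01, f"return to player drifted: {rtp:.3f}"
    print(f"empirical return to player: {rtp:.3f}")

A real test suite would also need to check the distribution of outcomes, not just the mean, and that the randomness source is actually unpredictable.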
I am not strictly the person this question was aimed at, but I will answer just in case.
(Language is a bit different in Australia.)
I completed a Bachelor of CS degree in 1995. I think that counts as a "CS major program".
It was very theoretical, in that the languages we learnt were either too old or too new, and not industry-led: Eiffel for OO, COBOL(!), and some proper maths thrown in.
It got me a solid 25 years of work.
After about a five-year gap away from software development as a job, I am now doing a Master of Computer Science at the same place (by name alone, maybe), and the tech they teach is ten years old.
I'm not averse to this so far. I finish in a year, and I'll know then whether it was a waste of time for getting back into the industry.
However, I have done six of the twelve subjects and they ALL filled gaps in my understanding left by both my original Bachelor's and my work experience. I am a better programmer now.
I am currently in an interview process where I surprised myself with my own knowledge. YMMV of course.
"But is this shift actually worth worrying about? Or are younger people just projecting their own anxieties about screen time onto their parents and grandparents?"
False dichotomies can either be the worst thing that happened to humankind or a pathway to a new way of understanding each other.
I'm still using a 2010 MacBook Pro with a 1TB SSD for Logic Pro and MainStage. Does it struggle? Yes. Does it work? Yes. It's still amazing technology that makes my keyboards and guitars sound bananas. To be fair, I just muck around with it, but it still has more than I'll ever need or be able to discover.