I have not read much about this issue, but I wonder: if it is as serious as you are saying, why are populations across Europe not voting against it? Or are they voting but getting ignored by their rulers?
> or they are voting but getting ignored by the rulers?
That is what happened in at least:
- the Netherlands (votes for Wilders)
- Sweden (votes for SD)
- Germany (votes for AfD)
- France (votes for whatever that party is called nowadays)
- the UK (votes for Brexit largely based around migration issues)
- Belgium (votes for Vlaams Blok)
- Italy (votes for Meloni)
- Austria (votes for FPÖ)
...and I can go on. In most countries in Europe, voters have spoken out against what they consider to be excessive asylum-based migration from non-Western cultures without requirements regarding assimilation or integration, but not much has happened politically until very recently, when the EU parliament voted to increase deportation efforts [1]. It remains to be seen whether this law will have any measurable impact, given that the EU actively supports many NGOs which aim to achieve the opposite of what the law states.
Pulling a Trump requires a polarized electorate where both parties are mostly in the 48-52% range, with the only real fights in a few battleground states and no absurd change in total vote share. Even Trump wouldn't pull a Trump if the other party were nearing a two-thirds majority. I am not even sure what would happen to American politics if a party reached a two-thirds majority in both houses; a long list of pending reforms might finally become possible.
It's worth noting that the party vote share here was 53% for Tisza vs. 44% for the even-more-right-wing parties. That this results in a two-thirds majority is because the electoral system inflates the strongest party: Orbán has previously achieved two-thirds majorities multiple times while winning less than 50% of the party vote. Most seats are assigned not through party lists but in single-member constituencies with first-past-the-post voting, same as in America. So it's not "convince two thirds of the people to vote for you", it's "convince a very slim plurality in two thirds of the constituencies to vote for you".
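The inflation effect can be sketched with made-up numbers; this is a toy first-past-the-post model, not Hungary's actual seat formula:

```python
# Toy illustration (hypothetical vote shares, not real election data):
# under first-past-the-post, a slim plurality in a constituency wins 100%
# of that seat, so a minority of the national vote can become a supermajority.

def fptp_seats(constituencies):
    """Each constituency is a dict of party -> vote share; the plurality wins the seat."""
    seats = {}
    for votes in constituencies:
        winner = max(votes, key=votes.get)
        seats[winner] = seats.get(winner, 0) + 1
    return seats

# 100 identical constituencies: party A has a 40% plurality against a split opposition.
constituencies = [{"A": 0.40, "B": 0.35, "C": 0.25}] * 100

seats = fptp_seats(constituencies)
print(seats)  # party A: 40% of the vote, 100% of the seats
```

With a split opposition, 40% of the national vote translates into every single seat; mixing in some party-list seats softens but does not remove the effect.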
I'd like that. But this system is very attractive for the strongest party, so it will be a real test of their commitment to actually representative, multi-party democracy. Also, the general system (a mix of single-member constituencies and party list seats, with more of the former than the latter) isn't a Fidesz invention, it has a long legal tradition in Hungary. So there might be a lot of resistance to a purer party-list system on those grounds too.
Obvious tweaks exist, of course: Even if you keep more individual constituencies than party list seats, they should use some sort of instant runoff/ranked choice/etc. system. But other first past the post countries are dragging their feet on this too, so... we'll see.
It's a bit like computer security: you have to get it right all of the time, while the perps mostly need only one lucky shot, and then it takes many years to undo the damage.
We should approach democracy with more of the kind of insight that goes into making computers secure. Oh, wait...
Even if people assume the worst impacts of LLMs on white-collar work, there is simply not enough demand for electricians and plumbers for that to work; right now these professions only work because the number of people going into them is limited.
Don't get fixated on plumbing itself. The point is if a bunch of people rush into any profession it leads to wage depression. Unless the amount of plumbing needed increases, the overall amount of money flowing to the plumbing populace is likely to stay roughly the same.
> The point is if a bunch of people rush into any profession it leads to wage depression
Eventually. Wage depression does not happen linearly. You're asserting that demand is maxed out and there's no more money to go around, and that's just not true. A lot of people just don't bother because tradespeople are famously difficult to work with because they are so overbooked.
It takes a week because if you want it fast they charge you an emergency rate. This aspect of the trades is independent of demand and is one of the perks of the line of work, much like overtime in other fields.
If things play out as I see them, there will be two classes of low-paid developers in a decade or so. The first are the vibe coders, who earn a subsistence wage because most people can do it (not everyone: there will still be a cost of entry, paying for the tools, which will exclude some groups). The second are the more “artisanal” developers working on the things that can't (yet) be vibe coded and fixing up the problems caused by insufficient care from the vibers and those employing them. These will be low paid because, while the work is important, demand will be low and there will still be a fair few people with the skills and desire (they'll make ends meet between good jobs by taking on gig-economy vibe-coding work themselves). There will be a lucky few still making a decent living, but a much lower proportion than now.
I'm hoping to arrange retirement before things get that far… Failing that I'll do something else (I could be a sparky, though if all the youngsters are training for that perhaps that industry will gain a bad supply/demand picture from the worker's PoV too!) to pay the bills and reclaim dicking around with tech as a hobby.
Domain knowledge as in the non-public aspects of the work you or your workplace does. The AI tools are very good at whatever is public but very clueless about proprietary domains. Let's say you make CRUD apps for some confidential domain. The CRUD skills might be a commodity, but the confidential domain knowledge is even more important.
As long as there's internal documentation, which virtually every serious shop has, it can be processed and combined with AI. There are startups selling this product already. I've seen first hand some very narrowly focused domain knowledge becoming more accessible when you can ask the chatbot and the thing is right. It works.
Come to think of it, domain knowledge should be an LLM's strong suit as long as you can provide the right documentation, which is working pretty well already.
Right now the main issue I see with AI is that it doesn't do well with scaling. It's great for building demos and examples but you have to fix its code for real production work. But for how long?
In ERP software there are millions of lines of code without any technical documentation, and nobody would spend a dime to create it. So the deep expert knowledge of how business processes are supposed to work (in full detail) and how they are implemented is mostly in the heads of a couple of people.
AI is most excellent at reading and understanding large codebases and, with some guidance docs, can easily reproduce accurate technical documentation. Divide and conquer.
Reading a large codebase... perhaps, if it is not too large. Understanding why a tool exists, the motivation for its design, or the external human-system requirements for successfully using the internal-facing tools... especially when that knowledge exists only in the memories of a few developers and PMs... not so much.
Deep domain expertise is a long way from AI capability for effective replacement.
Again, nobody would spend a dime to create the technical documentation, even if it could be done somewhat faster with AI support. Also, in my experience AI is not so great explaining the consequences to business processes when documenting code.
Accuracy/faithfulness to the code as written isn't necessarily what you care about though, it's an understanding of the underlying problem. Just translating code doesn't actually help you do that.
No, current LLMs are already good enough to read the subtexts from documents, email, call transcripts where available. They're extremely good at identifying unwritten business practices, relationships, data flows, etc.
But everyone at the company has that private domain knowledge. The only thing you're bringing to the table that anyone in any other role doesn't offer is the commoditized skill set.
Right, and you'll not keep everything out of materials like AI-generated meeting notes for every repeat of every process, so the company doesn't really need many experts in its existing operations.
HN is full of people saying ABCD should know better, and honestly I thought the same. But when I look at almost all of my friends working in critical domains, as judges or engineers or lawyers or even doctors, they seem to trust ChatGPT more or less blindly. People get defensive when I point out to them that ChatGPT will make things up and that this is widely known, and some even tell me it is the fault of "tech people" for not fixing it and that they can't be expected to double-check every ChatGPT conversation. So I am very sure this problem is more prevalent than what we see, and also that it is going to keep increasing.
Every single person, every one of them, that I have watched google something since AI Overviews launched will instantly reference the AI overview. And that model is some bottom-rung, high-volume model, not even Gemini.
Yes and the world should be utopia and everyone should be happy and we all wish for world peace and yada yada yada. What you are saying is a vision of ideal world as it should be, but doesn't help anyone understand the real world problems.
You can't seriously compare the problem of world peace with the problem of exercising the most basic level of critical thinking w.r.t. LLM output after it has already proven itself unreliable. That's not a utopian dream, it's a level of prudence on par with not sticking a fork in an electrical socket.
You're seriously overestimating the average person's ability to understand what llms are.
Look at all the influences, streamers, podcasters constantly asking em things and taking it as fact - live.
Isn't the Joe Rogan Experience the most watched podcast or something? Every episode I've ever stumbled upon, he "fact checks" multiple things via their sponsor, which is just an LLM provider specialized in news.
People aren't good at statistics. If something is close enough to the truth enough times, and talks authoritatively on everything in good English... guess what, they're gonna trust it.
You don't need to know how an LLM works to realize "sometimes the magic ChatGPT box tells me wrong things". Even if you fully fall for the anthropomorphism, this only requires the same level of awareness as realizing that after the third or fourth thing your weird uncle tells you that turns out not to be true, maybe you shouldn't take him at his word.
If human psychology worked like that, lotteries wouldn't be a thing. Nor prayer. There wouldn't be horoscopes in newspapers, nor homeopathy.
One of the various oddities going on with LLMs in particular is them being trained with feedback from users having a chance to upvote or downvote responses, or A/B test which of two is "better". This naturally leads to things which are more convincing, though this only loosely correlates to "more correct".
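The convincing-over-correct pressure can be sketched with a toy preference objective; the numbers and function below are made up for illustration and are not any provider's actual training pipeline:

```python
# Minimal sketch of preference-based feedback (hypothetical scores):
# a Bradley-Terry-style objective rewards whichever response users *chose*
# in an A/B comparison -- "chosen" need not mean "correct".
import math

def preference_loss(reward_chosen, reward_rejected):
    """Negative log-likelihood that the chosen response outscores the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If users upvote a confident-but-wrong answer over a hedged correct one,
# the model giving the confident answer a higher reward gets *lower* loss:
loss_if_confident_wins = preference_loss(2.0, 0.5)  # small loss, reinforced
loss_if_correct_wins = preference_loss(0.5, 2.0)    # large loss, discouraged
print(loss_if_confident_wins < loss_if_correct_wins)  # True
```

Nothing in the objective distinguishes persuasive from true; the gradient simply pushes toward whatever users prefer, which is the loose correlation the comment describes.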
No shit. Why do people in this thread keep telling me that people are stupid like that's a news flash to me? The fact remains that it is stupid, and especially for educated people like the lawyers/doctors/etc. mentioned upthread, it's sufficiently obvious stupidity that there's no excuse. Yes, I know, that describes a lot of other stupidity. Much of our history as a species is inexcusable.
Edit: though I should be clear: people demonstrably do often learn to discount obviously unreliable sources. Not all the time, but pretty often in the easily verifiable cases, especially where they don't have a major emotional stake.
You may demand that of yourself, but for others we must design around the fact that they are stupid. You do not have the power to change their stupidity, only your response to it.
I would happily bet that you too have fallen for this at least once. Unless you cut AI out of your life completely and do not interact with others.
AI output is like that COVID video of contamination, you almost can't avoid it unless you scrupulously check each and every thing that is presented as fact that you are exposed to. And absolutely nobody does that.
Pretty close. I only touched ChatGPT a couple times a few years ago, haven't used the others (on purpose at least. Google forces its Gemini summaries on me but I mostly avoid them, because, umm, see above.)
> and do not interact with others.
Most people I interact with are on the same page about AI. But I try to keep my critical thinking online anyway, like I always have. If someone tried to feed me AI slop, I would consider that person to have betrayed my trust and would, to put it gently, try to interact with them less.
That makes you an extremely rare exception. I use AI as a private tutor on various subjects that interest me, to save time and not have to watch hours of low-content videos. But I've separated it out to the degree that I'm running that stuff on a separate laptop, to make sure my work product is never going to be contaminated. I quite literally treat it as though it is radioactive and should never touch the rest of what I do.
This answer really isn’t good enough. The providers can’t both aim to replace search and claim PhD level intelligence that will do all the jobs, but hide behind “it makes mistakes” in small print.
I think it's the fluency. Other tools fail visibly. A bad search result looks like a bad search result. A hallucinated quote reads exactly like a real one. There's no signal in the output itself that something is wrong. You have to go back to the source to check, and the whole point of using the tool was to not have to do that.
> almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly.
We do not live in a meritocracy, because society has no means to judge merit. We live in a society ruled by people who crammed before the tests, and who wrote the papers to agree with and flatter the teacher. Now they are the teachers (and bosses), and
1) expect to be flattered (and LLMs have been built as the ultimate flatterers),
2) feel that a good, ambitious student (or subordinate) will not question them and their work, but instead learn to conform to it, and
3) are not particularly interested in the quality of their work as such, but rather the acceptance of their work. In certain professions, such as judges, doctors, high-level lawyers and engineers, or politicians, they feel like (with good reason) that they can demand acceptance of their work, and punish those who don't accept it.
This position is what they worked so hard as young people for. They were not working to become the best at their jobs. They were working to get the most secure jobs. The most secure jobs are the ones that bad or lazy work doesn't endanger.
I think this is an issue with anyone who relies on any LLM. But yeah, I agree, and I have had similar issues where someone gets defensive because they just don't want to admit they (that is, the LLM's response) were wrong. It's hard to tell someone in a "nice/nonchalant" way:
"It's fine, the LLM just lied to you, but hallucinations and making claims based off of assumptions is just something they do and always have done!"
People don't like to feel dumb, and they don't want to feel betrayed by the same tool that gave them incredible, factually correct results one time, only to give them complete and utter bullshit (that sounded legitimate) another time.
Also, yeah, it feels like it's everywhere these days and isn't showing any signs of slowing down (visited my parents, and my dad's using Siri to ask ChatGPT stuff now - URGHHHH), and I really hope we're both wrong.
>but when I look at almost all of my friends working in critical domains like as a judge or engineer or lawyer or even doctor, they seem to trust ChatGPT more or less blindly
That's why I lost trust and faith in people who end up in positions of doctor, lawyer or judge. When I was young I used to think they must be the smartest, highest-IQ people in society, having read the most books and having the highest levels of critical thinking and debate skills ever. In fact, they were only good at memorizing and regurgitating the right information that the school required to pass the exam that granted them that prestigious title, and that's it.
Now, in my mid-30s, when I talk to people from these professions over a beer, a barbecue or any other casual gathering, I realize they're really not that sharp or well read or immune to propaganda and misinformation, and anyone could be in their place if they put in the grind at the right time. It's a miracle our society functions at all.
Pichai has been a very poor CEO, but Google's position was so strong that it is still doing fine. I am sure he is in the founders' good graces, so as long as the company's stock doesn't take a big dive he is gonna stay at the helm and keep raking in the big bucks.
Poor CEO my abs. When ChatGPT came out Microsoft was singing victory songs, and predicted Google's imminent death. 3 years later Google has one of the best models and Microsoft is still borrowing OpenAI's model. Not only that, Google is running their models on their own hardware, not Nvidia's.
One of the things that a CEO drives is vision and innovation.
Sundar misses the mark on these things. AI is a good example. Google invented the transformer architecture, but simply published it for its competitors to use. It took a code red in 2023 to finally push Google to develop products based on this.
Cloud. Years late to the game. All it would have taken is a letter similar to the famous Bezos memo to eventually get all of Google's world-class scaled infra pointing externally and generating revenue. Instead, Google Cloud started late, and couldn't reuse much of the internal infrastructure.
Stadia, another example. That architecture is probably the future. It's not clear how gamers in developing countries are going to afford thousands of dollars in hardware that sits idle 90% of the time.
> Google invented the transformer architecture, but simply published it for its competitors to use.
That's how innovation works in this industry. If companies didn't allow researchers to publish their work it would set us back decades. Researchers building on each other's work is how this industry was built.
> It took a code red in 2023 to finally push Google to develop products based on this.
So Google executed. Ability to execute is one of the things that makes a good CEO. Other CEOs have additional qualities such as vision, and getting others to believe in the vision. But not every CEO needs to be a Steve Jobs!
Plenty of innovations are coming out of Google, just look at Nano Banana Pro for example.
Google invented the basis of LLMs, but under Pichai failed to come up with the idea of ChatGPT. Getting Gemini into a workable state required the return of Page and Brin. It seems to be working out for Google, but how they got here is a very big mark against Pichai's leadership.
1. Proprietary Data (Youtube, docs, gmail, cloud logs, waymo, website analytics, ads, search, the list is huge)
2. Commercial Datacenters (theyre ahead at least)
3. Chip production (Google is manufacturing proprietary chips)
4. Consumer OS (Chrome, Android)
5. Consumer Hardware (Pixel)
Basically google has access to data that OpenAI will never have access to, can lower costs below what OpenAI can, and is already a leader in all the places OpenAI will need massive capex to catch up.
You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini. Otherwise random people can ask it your home address.
So it matters less than one would think. Also, ChatGPT can already do 'internet search' as a tool, so it already has access to, say, Google Maps' POI database of SMBs.
And ChatGPT also gets a lot of proprietary data of its own as well. People use it as a Google replacement.
>You can't train LLMs on proprietary data, at least not if you want to make that LLM as accessible as Gemini. Otherwise random people can ask it your home address.
If this is your only criteria I think you have a misunderstanding of what proprietary data is and ways companies can mitigate the situation in the inference stage.
What if the CEO isn't just telling the company how much to invest, but also has influence on how that money is used? Google's relative success (if it even exists; I'd rather not judge) isn't from investing more than everybody else, because the money just keeps pouring into these things for all contenders.
Why do you say this? I’m not familiar with him, and really haven’t paid much attention to Google’s strategy beyond cultural awareness, but I think Google has done well with staying competitive in AI, is dominating the self driving battle with Waymo, and has mostly kept its good brand intact (no small feat when you are so big). Are there some big mis-steps I don’t know about?
Not the person you're replying to, but something that has bothered me about him (and a lot of SV tech) is how they did rapid over-hiring in 2022 and then a year later fired a bunch of people, while he claimed he took "full responsibility" but still got a nice happy bonus that year. I'm not sure I know what "taking full responsibility" actually means, because to me it seems like if you have to lay off thousands of people in a year, that would be a good reason not to get a bonus.
These are peoples' lives. People almost certainly quit decent jobs because there was a prestige factor in working for Google, potentially moved to the overpriced world of California, just to be fired less than a year later because apparently Pichai thought that interest rates would never increase and there would be free money for forever. These people have families, and they almost certainly thought that moving to Google would be a "stable" position, because it's one of the biggest SV companies.
I don't know if he's good for the stock price, that's tougher to gauge, but I do think he's a short-sighted jerk.
The "I take full responsibility" thing has been entirely meaningless.
I guess it's supposed to convey that it's not the laid-off folks' fault, and that it was "his decision", but as you said: "taking full responsibility" without any real impact to his life? I may as well take full responsibility for the layoffs. It'd mean just as much.
Yeah, that's the thing; if he's acknowledging that it was his decision to do this, then maybe he shouldn't be getting bonuses and maybe be fired? Why are the regular schmucks the ones being punished for his terrible decisions and not him?
Maybe it was the right decision at the time to lay them off? I think that's why he got the bonus, actually! I'm sure the layoff was difficult for him as well: he certainly lost a lot of goodwill with his workforce and I'm sure the internal politics were tricky for anyone involved.
No one is getting "punished" - there was no promise of ten years of employment from Google. Like when an employee leaves, you wouldn't say they're "punishing" the employer.
> Maybe it was the right decision at the time to lay them off?
It probably was the right decision to lay everyone off. What was not the right decision, and this should have been obvious, was hiring 10k+ more employees than you actually need because you assume that the free money will last forever. He was almost certainly aware of and signed off on this mass hiring. Other companies didn't make this mistake; Tim Cook didn't take a bonus that year, to avoid mass layoffs.
> he certainly lost a lot of goodwill with his workforce and I'm sure the internal politics were tricky for anyone involved.
He probably did, because he's a bad CEO. He was right to lose goodwill.
> No one is getting "punished" - there was no promise of ten years of employment from Google.
No, there isn't a legal promise or anything, but people go to these BigCos primarily for stability. If you want an exciting job with lots of interesting new things, it's much easier to find that in a startup, but startups can be frustrating because they're inherently unstable. This is partly why startups tend to be made up of very young people; it's much easier to deal with volatility if you don't have a family.
You're obviously not "entitled" to a job, but the people who run Google aren't complete idiots; they know people are joining BigCo because they think it's going to be relatively stable. They depended on that in order to do all this overhiring.
Well I hope people won't perceive this (nonexistent) stability in the future.
I'm not trying to "absolve" Google, nor do I think they're guilty. They used their reputation to hire people. It turns out that needs to be updated. Perhaps in the future they will do things to improve their reputation again? Who knows...
It just feels a little victim-blamey. Google manipulated thousands of people, and they got screwed in the process. Should they have known that big corporations are evil? Maybe, but I'm not going to blame someone who was misled by dishonest people.
If you're agreeing that they misled people by using their reputation in a way that's dishonest, how are they "not guilty"?
I agree Google's reputation misled people. But importantly, I don't think Google can be held accountable for their reputation and for what other people believed.
To give a somewhat contorted example: If people believe you give 1 Bitcoin to anyone who can recite the whole Beowulf, they will perhaps spend a lot of time learning Beowulf, forgoing other things. Then they find out you in fact have not promised them that and that you have no such obligation. I don't think you've misled them! Do they have a right to be angry with you? Or should they have checked with you what the precise conditions were before upending their life?
If I happily let them waste their time reciting Beowulf on purpose under false pretenses then I would be a douchebag.
Google knew that people would join based on a perception of stability. Did they hire 10,000 people knowing that they would fire them six months later? If so, they are jerks. If not, then they are so categorically idiotic as to think that they will just have free money for forever and interest rates would never ever go up. In either situation they are bad.
I would argue that Google has had declining quality in search results, bordering on completely unusable in the past few years, and that has resulted in people using LLMs for things that they would have searched for years ago. Although they are competitive in AI, I think it is surprising that their product continues to frustrate people and that they are a distant second place.
Without taking a stance on whether their search has improved or degraded, we can observe that the same claim (“search is so degraded it’s unusable”) has been common for like 5 years at this point. If it’s really such a problem, why haven’t people already switched? Google’s search is at 90% market share [1]. Surely if it was perceived as a problem to customers there should be some measurable effect?
No offense to Kagi, but they don’t rank in the top 6. They are behind even Baidu, which I had forgotten exists. I think they have good mind-share among power users, but probably not in the general population.
But the question is whether or not Kagi is a competitor — not just in regards to the market share it currently holds, but what it could come to hold. Let's see where it is next year.
Google has succeeded in enshittifying their search in a way that the vast majority of users (not customers -- those are the advertisers) have not noticed.
If the users aren’t bothered by the “enshittification”, does that reflect poorly on the CEO? The CEO is supposed to make money, and maybe has personal aspirations to improve the world. They’re not making art.
Like I said originally, I think the rise of ChatGPT is a partly a consequence of this. It’s not that people are choosing a different search engine, they’re not searching at all because LLMs will give a better answer faster.
Also, whether it’s ChatGPT or something else, five years is really not that long. Time will tell, but does it really seem like decreasing quality in the name of profits is such a good long-term strategy?
Sundar was at the helm when the decision to worsen search results for the sake of ad revenue was made.
Previously, the two concerns were "firewalled" so as to prevent the money-generating side of the company from eroding the user experience.
This is a theme that's been at the core of every Titan of Industry's decline: chasing short-term results with disregard for the long-term consequences. Alphabet is just so big and dominant in search that it will likely take quite a long time for the negative effects to appear. And they have other large businesses that haven't been as aggressively enshittified (YouTube, GCP).
It's like when the Titanic struck the iceberg and the crew mostly thought the ship would be fine.
Just because they're still making money doesn't mean the company hasn't already been damaged beyond repair. But in this case by the time it's clear the damage is fatal, those at the helm have jumped ship with piles of cash.
They missed the boat with ChatGPT; the research paper behind it originally came from Google. There's no real focus between Android, ChromeOS, and Fuchsia. The AI results box was possible a decade ago, but not giving money to the sites the info was taken from was too far a stretch. How I feel is that the company doesn't really know what it's doing; there's no real leadership. KilledByGoogle is a website. With Stadia the technology was there, but it didn't have the right backing to make it in the market. Though it turns out those GPUs are useful for GCP for AI, so that might have been the real reason. He's just not much of a leader. He doesn't need to go full Elon, but some amount of character would be nice.
Pichai is being evaluated for his effect on stock price. His shareholders don't care if every product and service they offer has gotten worse for users in the meantime.
Gemini keeping pace with Claude and ChatGPT is clearly some kind of management victory, because Zuckerberg and Musk don't seem to be able to do it despite having limitless cash to spend.
Don't give Pichai credit for that. Google had the strongest ML research org on the planet before he took over, and it had Demis, arguably the best researcher in the field (and it had Geoffrey Hinton before that). The fact that Google was so far behind OpenAI despite Demis blazing frontiers was a major management failure.
Sundar's enshittification has also juiced short term share prices at the cost of long term health. It might turn out to be a decent decision for search because it's in the midst of being disrupted, but that's a happy accident for Sundar, not 4d chess (and you can argue the enshittification hastened the disruption).
Text search (without Gemini) and Gmail are much worse than they used to be. Android is less open, Chrome doesn't allow proper ad-blocking, YouTube has insane ads if you don't have Premium.
I think this refers less so on "Pichai did a great job" and more that Google is in a good position right now. One COULD say that Pichai is responsible for this - but probably many other semi-competent CEOs could have done about an equally solid job here. Google would have profited either way.
That may have been bad for users, but you can hardly claim it was bad for the company - not even in the long run. Ten years is like 40% of Google's lifetime, that is the long run! And if indeed he went all-in on AI in 2015, that seems to me like a damn near prophetic vision. Dislike AI by all means, but you can't say it's not the Current Big Thing or that Google is doing badly because of it. To see that coming so early as 2015 looks rather skilful to me.
I did not know this about Pichai and if true, it makes me feel rather better about his leadership.
> if indeed he went all-in on AI in 2015, that seems to me like a damn near prophetic vision.
Also note that 7 years later, when ChatGPT came out, built on top of Google Brain research (transformers), Google was caught flat-footed.
Even supposing that Pichai really had the right vision a decade ago, he completely failed in leading its execution until a serious threat to the company's core business model materialized.
I'm surprised people still think this. Google has the strongest position of any company in the world on AI. They have expertise and capability across the entire stack from chips to data centers to fundamental research to frontier models. Just because they weren't first-to-market with a chatbot doesn't mean they almost lost or made some terrible durable blunder.
That's about Google, though. The picture about Sundar specifically is harder to evaluate. The pessimistic take is that Google had that position already and Sundar failed to proactively lead through a fundamental product shift, forcing the company onto the defensive for some time. The optimistic take is that Sundar, having occupied the top spot since 2015, prioritized investments in the company's overall technology development, then successfully executed a rapid product pivot when the market changed, securing a dominant position in both research and product that nobody else can compete with long-term.
People give him way too many breaks, he's a money manager. He was asleep at the wheel when OpenAI absolutely steamrolled them, even though they very easily could have won that race.
I view Claude Code the same as how I used to use JetBrains IDEs. I mean, yes, they are not the same, but even when I first learned of PyCharm Pro and its features I had this urge to make a lot of random idea apps. The landscape has changed but the solution is the same: prefer spending your time on things that give you long-term happiness in any way.
The problem is it's not limited to code. I have Claude Code maintaining my Obsidian Vault, managing my Home Assistant setup via SSH, helping me buy life insurance and file my taxes and and manage my home orchard and...
If it gains enough adoption in India for the average Indian to ask it political questions, it will have zero chance of not being heavily regulated. Sadly misinformation is a problem which has no good solutions.
Sadly in India talking about the problems facing the country has become a taboo, and can easily get one labeled as anti national. See "Kompact AI" and its online discourse. While China practiced "Hide your strength, bide your time". India seems to practice the opposite.
This is like asking why people are buying so much stuff from a company that was founded as a compiler/language tool seller. How much compiler do they need?
The above would be Microsoft for context. For some reason your comment assumes that what a company was "founded as" should dictate what they do decades later.