Are you serious? People get upset about losing jobs because they need jobs to pay their bills. Further, we often build our life identities around work; if you're a good car mechanic or a successful restaurant owner, you're proud of that. It's a part of you.
Having to repeatedly restart your career is risky, painful, and demoralizing. I have no problem seeing why people don't like that and why it can lead to populist backlash or even violent revolutions (as it did in the past).
By the way, to address your closing comment: people don't like dying either and tend to get upset when others die?
I don't think the point is that the transition isn't difficult. It is that there is an overall benefit that outweighs the challenges of the transition.
The sad part is that industrializing societies have not been very good at reconciling the benefits with the costs. The benefits go first to a select few and seep out to the masses slowly. Railroads in the US are a good example. The wealth accumulated by the Vanderbilts, Hills, and Harrimans did not get redistributed in any kind of equitable manner. However, everyday people did eventually gain a lot of benefit from those railroads through economic expansion. (None of which addresses the losses of the Native Americans, which should also be part of the equation.)
My impression is that the transition is such an open-ended process that you can’t really call it that. It’s unclear if and when the challenges will be overcome.
You're missing my point. Job losses are a fact of life, just like death. Why should people get upset about the fact that someone might lose their job, or die? It's not immoral. These things happen constantly to millions of people; if we got upset over every one, we'd be worn out. We happily send young, healthy people to their deaths fighting in wars so we don't have to. I don't see people weeping because the armed forces exist.
Or is this just some sort of PC bullshit, that we can't talk about this sort of progress without carefully lamenting job losses? If you're not useful doing a job, why should you be employed in it? That's the bottom line.
Society is better if we sacrifice one horse and buggy driver job for two engineering jobs. The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the driver or just pay them off rather than support a job that is dying anyway.
> Society is better if we sacrifice one horse and buggy driver job for two engineering jobs.
That's a "statistic" you're pulling out of your butt, and it's doing a lot of work. No one ever knows if something like that will actually happen.
It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
> The drivers suffer from that, but the net win for society is so plainly obvious that it's a better investment to retrain the driver or just pay them off rather than support a job that is dying anyway.
But what actually happens is our free-market society doesn't give a shit. No meaningful retraining happens, and no meaningful effort goes into cushioning the blow for the "horse and buggy driver." Our society (or, more accurately, the elites in charge) tells those harmed to fuck off and deal with it.
> It could actually turn out that AI sacrifices 100 engineering jobs for 10 low-level service or prostitution jobs and a crap-ton of wealth to those already rich.
That's where wealth redistribution (Taxation) comes in. The USA is not good at progressive taxation, but everyone could be better off if it were implemented properly.
Which I find funny in a way: I'm sure that almost no one here understands the article, grasps the significance of this problem in mathematics, or can meaningfully comment on the difficulty of solving it. But we'll still have opinions because the article mentions a popular tool some of us like, some of us dislike, and some are ambivalent about.
It would be surreal if a carpentry forum was regularly abuzz about mountaineering because climbers use a hammer-shaped tool.
Nevanlinna theory isn't that obscure (by mathematics standards, I suppose), but it is very difficult (for me; probably less so for Tao) to hold the whole of 21st-century analysis in your head at once while working and to see what could be applied where. I can see how an LLM would be quicker than a human at recognizing a context where a theorem from an apparently unrelated subfield could be applied.
From a few older posts, I estimate that there are at least 10 mathematicians here, some doing math research and some doing other stuff. This is bleeding-edge math, so probably only 100 people in the world are working on something close enough to understand this right now. [I guess I could understand it if I took a month [1] to study it and dropped everything else.] I worked in harmonic analysis [2], but this looks more related to maximal functions, a topic I tried to avoid. I'm not sure about the areas of the other mathematicians here.
Anyway, a lot of topics come up in HN discussions, and many times someone can give some insight. Sometimes it's about compilers and there are a few users that know about that. Sometimes it's about rockets and there are a few users that know about that. Sometimes it's about steel alloys and there are a few users that know about that. ... Not everyone here is a programmer.
I'm using very little AI, but my wife has been using it a lot. We both agree that it's at the level of an IMO gold medal, which means it can kick our asses hard. Anyway, sometimes the AI hallucinates and sometimes it makes stupid errors, so it's necessary to verify the output.
[1] It looks short, so perhaps a week is enough, but I feel I'm being too optimistic. Let's say a month just to be safe.
[2] Protip: If you see a circle, take the Fourier Transform and cross your fingers. You can't believe how many problems it solves. If it's useful, add me as a co-author.
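(To unpack that protip a little, and this gloss is mine, not the parent's: any reasonably nice function on the circle decomposes into a Fourier series, so problems with circular symmetry often reduce to bookkeeping on the coefficients:

    f(\theta) = \sum_{n=-\infty}^{\infty} \hat{f}(n)\, e^{i n \theta},
    \qquad
    \hat{f}(n) = \frac{1}{2\pi} \int_0^{2\pi} f(\theta)\, e^{-i n \theta}\, d\theta

Once a problem is restated in terms of the \hat{f}(n), decay and summability arguments often do the rest.)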
We've always been bikeshedders. For example, back in Slashdot days, some company would decide to migrate something from Windows to Linux. Immediately the debate became whether they should have gone with Debian or SuSE instead of Red Hat.
> How can you seriously think you've created something when you're just using someone else's software?
It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.
I know of techies who ask LLMs for relationship advice, let them coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there are plenty of people who reject this notion and think we've already achieved AGI.
I can't imagine why a 100 MHz digital signal at 2.5 V would be even particularly challenging on a small two-layer PCB. A lot of signal integrity lore has to do with passing EMI compliance (this almost certainly wouldn't), as well as with people extrapolating from Rick Hartley videos without pausing to think if it really applies to hobby stuff.
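For anyone curious, here's the back-of-envelope check I'd do, with assumed numbers (a ~2 ns edge and ~15 cm/ns propagation in FR4; neither figure comes from the parent comment):

    # Rule of thumb: a trace needs transmission-line treatment only
    # when it is longer than ~1/6 of the distance the signal edge
    # travels during its rise time. Assumed values, not measurements.
    rise_time_ns = 2.0          # plausible edge rate for 100 MHz logic
    velocity_cm_per_ns = 15.0   # roughly c/2 in FR4 microstrip

    edge_cm = rise_time_ns * velocity_cm_per_ns  # spatial extent of the edge
    critical_cm = edge_cm / 6                    # rule-of-thumb threshold

    print(f"edge spans ~{edge_cm:.0f} cm of trace")
    print(f"worry about traces longer than ~{critical_cm:.0f} cm")
    # ~30 cm edge, ~5 cm threshold: on a small two-layer board,
    # most traces come in well under that.

With faster edges the threshold shrinks proportionally, which is where the lore starts to matter.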
> If you're only talking specifically about your program that no one else has access to, I don't think there is any battle? Do whatever you want, no one cares nor would even know about it.
No, because a phone, despite being made from the same parts as a computer, is actually a completely different thing.
You can't just run programs on your phone. You have to run apps, which require approval from the government and from the company that made the phone, which tacks on additional fees as well. The phone also has constant cellular/GPS/wifi/bt-mesh location tracking, and it can never be completely turned off by the user without destroying the phone, because even the batteries are glued in.
It's basically the perfect slave device for your average goy. And everyone will need one to access their bank account, receive insecure SMS authentication, talk to other NPCs, and generally participate in the neo-economy.
If you don't think this is right, you are literally going to empty the bank account of my dumb-ass grandma, who can't stop installing malware and is in every way better served by a flip phone from the early 2000s.
> If you don't think this is right, you are literally going to empty the bank account of my dumb-ass grandma, who can't stop installing malware and is in every way better served by a flip phone from the early 2000s.
Then why are you demanding that everyone else's mobile computers have to be locked down instead of demanding that somebody make a mobile phone that only makes phone calls?
If you buy a Google Pixel 9 (the last version for which Google released device trees), you can do anything you want on your phone. My Pixel runs a version of Android I built myself.
I think that theoretical math and physics are special, but probably not in the way you assume. It's just that there isn't a whole lot of grant money, prestige, or influence associated with them (unless you accomplish something truly exceptional).
Computer science is very close to math and should be even easier to verify, but there's plenty of dubious results published every year, simply because it's more profitable to game the system. For example, I'd wager that 50%+ of academic claims related to information security are bogus or useless. Similarly, in the physics-adjacent world of materials science, a lot of announcements related to metamaterials and nanotech are suspect.
I would point out that most products are useless, and either fail or replace other products which weren't any worse. None of which prevented me from cashing my paychecks for the first half of my career when I worked in private industry.
Most scientific research represents about the same amount of improvement over the state of the art as the shitty web app or whatever that you're working on right now. It's not zero, but very few are going to be groundbreaking. And since the rules are that we all have to publish papers[*], the scientific literature (at least in my field, CS) looks less like a carefully curated library of works by geniuses, and more like an Amazon or Etsy marketplace of ideas, where most are crappy.
[* just like software engineers have to write code, even if the product ends up being shitty or ultimately gets canceled]
Neither of us are going to be changing how the system works, so my advice is to deal with it.
There are dubious results published in every subject, including math and physics (whether theoretical or experimental). The difference is that such results are less likely to be widely cited and accepted by the field. For math and theoretical physics, the reader can (assuming sufficient knowledge and skill) verify the result themselves, so if your proof is incorrect or not rigorous enough, you won't get cited. For experimental physics, it is more common for different teams to reproduce a result, or verify a result using a different method, so papers aren't usually widely cited unless they have been independently verified. Part of that is cultural, part is that attempting to reproduce results is relatively straightforward compared to, say, experiments involving human subjects, and part is that results are usually quantitative, so "we did the same thing as paper X, but with more precision" is still interesting enough to be published.
Your response is even more misleading than the misconception you're trying to correct. The complexes formed in (charged) lithium batteries are unstable and reactive in ways quite similar to the base metal. The salt molecule, in contrast, is pretty unreactive. Salt shakers don't catch fire if dropped.
The substances similar to Prussian blue are very stable. During charge and discharge, the ionic charge of the iron ions varies between +2 and +3, and the structure of the electrode has spaces that are empty when the charge of the iron ions is +3 and filled with sodium ions when the charge is +2.
Both states of the electrode are very stable, being neutral salts. The composition of the electrolyte does not vary depending on the state of charge of the battery and it is also stable.
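A simplified half-reaction for one such cathode material (Prussian white; treat this as an illustrative sketch, since the exact stoichiometry and which iron site oxidizes first vary by material):

    % charging extracts Na+ and oxidizes one Fe(II) to Fe(III);
    % discharging runs the reaction in reverse
    \mathrm{Na_2Fe^{II}[Fe^{II}(CN)_6]}
      \;\rightleftharpoons\;
    \mathrm{NaFe^{III}[Fe^{II}(CN)_6]} + \mathrm{Na^+} + e^-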
The only part of the battery that can be unstable is the other electrode, which stores neutral sodium atoms intercalated in some porous material. If you took a fully charged battery, cut it open, and extracted the electrode with the sodium atoms, that electrode would react with water, but more slowly than pure sodium, so it is not clear how dangerous such an electrode would be in comparison with the similar lithium electrodes.
Fine, now show a video of what happens if you pierce the Na-ion cell with something metallic. Because explosion doesn't even begin to cover what happens next in that situation. And you are suggesting that everyone should be 2 ft from such a cell, traveling at 60 mph, in all weather conditions. These things should be restricted to grid stabilization batteries and nothing else and you know it. Don't mislead people on such things.
Piercing a Na-ion cell is not good, but the effect is pretty much the same as piercing a Li-ion cell.
In both cells the electrode that stores alkali-metal atoms is highly reactive, but in both cases the reactivity is much lower than that of a compact piece of metal, so the reaction with substances like water proceeds much more slowly than in the videos where someone throws an alkali metal into water.
If you pierce the cell but the electrode does not come into contact with something like water or your hand, nothing much happens; the air will oxidize the metal, but that cannot lead to explosions or other violent reactions.
The electrolyte of lithium-ion batteries is an organic solvent that is very easily flammable if you pierce the battery. The electrolyte of sodium-ion batteries is likely to be water-based, which is safer, because such an electrolyte is not flammable. It would be caustic, but the same is true for any alkaline or acid battery, which have already been used for a couple of centuries without problems.
Overall, sodium-ion batteries should be safer than lithium-ion batteries, so safety is certainly not something that can be held against them.
It seems to me there is a word or two missing between “rich” and “slowly”. If I read the whole thing aloud I cannot parse it into a sentence. Or the word “rich” could be removed. That would be clunky but at least grammatically sensible.
“Make data get smoothed out” is a very strange way of saying “smooths out data”
> The weird, rare, surprising patterns [that make data rich] slowly get smoothed out when an AI model trains on outputs from a previous model.
i.e., the patterns are responsible for making data rich, and they are slowly lost as each new generation model trains on the prior generation's output.
Or, if you'd prefer an analogy, we're using a copy machine to output new documents by taking the last copy spit out by the machine, adding some marks to it, and running it through the copier again. Over time, details present in much older copies blur and fade away in Nth generation copies.
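Here's a toy version of that copier, my illustration rather than anything from the article: fit a crude model (a Gaussian) to data, sample from it, fit again, and watch the heavy tails, the "weird, rare, surprising patterns," vanish:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_t(df=5, size=100_000)  # heavy-tailed "real" data

    def kurtosis(x):
        return ((x - x.mean()) ** 4).mean() / x.std() ** 4

    print(f"gen 0: kurtosis = {kurtosis(data):.1f}")  # ~9: fat tails
    for gen in range(1, 4):
        mu, sigma = data.mean(), data.std()    # "train" a crude model
        data = rng.normal(mu, sigma, 100_000)  # next gen trains on outputs
        print(f"gen {gen}: kurtosis = {kurtosis(data):.1f}")  # -> ~3: tails gone

One generation of fitting-and-resampling is enough to erase the tails; later generations just reproduce the blandest summary statistics.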
It might be weird if you haven't read a lot of English. It's actually quite normal to say that process X is a way to make effect Y happen. "Makes your mouth water" is more effective than "waters your mouth". "Makes your breath fresh and tolerable" is better than "freshens and tolerablerizes your breath". Etc.
Actually, what you are describing is what happens when LLM-generated prose cycles and then trains humans to use equally dull thinking.
> Which is all fine and dandy. But why play the "You simply don't understand it as well as I do"
I'll say this from the perspective of a person who publishes content online: because people's revealed preference is for content written this way. You can spend weeks polishing thoughtful, original content that will get few clicks, or you can crank out throwaway op-eds about AI and get thousands of likes and upvotes from people who just wanted to hear their own beliefs explained back to them.
My stuff appeared on HN a couple of times over the years and the less effort I put into it, the better it fared. The temptation to change your writing style and to offer increasingly more provocative and shallow opinions is difficult to resist.
My point is probably this: if you want to see better stuff, I think you gotta stop engaging with articles like this. Patrol /newest and upvote cool in-depth stuff.
> and then the machines run directly from one leg to neutral (230v)
And then every machine has a switching power supply to convert this to low-voltage DC, and then probably random point-of-load converters in various places (DC -> AC -> DC again) for stuff like the CPU / GPU core, RAM, etc. Each of these stages may be ~95% efficient with optimal load, but the losses add up, and get a lot worse outside a narrow envelope.
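The compounding is easy to eyeball (illustrative numbers only, assuming four stages at the optimistic 95% figure):

    # Four conversion stages in series, each ~95% efficient
    stages = [0.95, 0.95, 0.95, 0.95]  # PSU, bus converter, two point-of-load

    total = 1.0
    for eff in stages:
        total *= eff

    print(f"end-to-end efficiency: {total:.1%}")  # ~81.5%
    print(f"lost as heat: {1 - total:.1%}")       # ~18.5%, before off-peak penalties

Nearly a fifth of the power gone as heat before any work happens, and that's the best case.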
Yes, but it's not like any other layouts avoid those issues.
You could feed your servers off fat 12/24/48 volt supplies, but with how much power a modern server can pull, you're already converting in bulk even if you don't do that, limiting the potential advantages. For running CPU/GPU/RAM, there is no other option. When you need hundreds of amps at 1-2 volts, you convert that centimeters away if at all possible.
A datacenter using DC distribution is still using high voltages and stepping them down in layers. The hassle it avoids is in other aspects of power delivery.