Honest question. Are you fearful of the moral implications of AI? Whenever I hear someone who is fascinated by AI and thinks of it only as an intellectual pursuit, I'm curious whether they are thinking at all about the consequences of powerful AI.
Things looked very different in the early 1970s; that's when I took my first AI class. Now, naturally, I worry about technology that can have such deep and broad impacts on humans. Back then most of my programming was on punch cards submitted to mainframe operators behind a glass window.
These types of worries always strike me as worrying about keyhole surgery going wrong before we are capable of making scalpels, anaesthetic, or even video cameras -- or hell, before we even know what a tendon is or what the purpose of blood is. Or worrying about the Challenger explosion when we can't even make gunpowder. Or worrying about the logistics of flying cars in three dimensions, and traffic crashes, before we can even build a steam engine to drive a train.
We are so far away from GAI at the moment that I don't for one second actually worry about the moral implications of General Artificial Intelligence.
I don't see the point in worrying about something when we know literally nothing about it and barely have a path to making it. It's very likely that by the time we are capable of making GAI (setting aside the quite probable scenario that we manage to simulate a brain, but not at any useful speed -- see the three-body problem and the challenges of simulating virtually any other physical system), there will be half a dozen problems we do need to worry about that we cannot foresee. There will also be half a dozen limitations that render our current worries essentially worthless. It's the same with all new technology.
It's also interesting that people who tend to worry about GAI never worry about current levels of AI, especially in a military context. They seem entirely unconcerned about crappy, half-baked neural networks being deployed in drones. They seem entirely unconcerned about the lack of the proper dataset balancing and curation that would keep current AI models free of racial bias (or, indeed, other types of bias).
Just last year I saw a Twitter post about a startup that was re-creating literal phrenology, using AI to profile whether people were criminals based on facial shape. The typical Less Wrong / MIRI folks never seem to be worried about that; no, they spend their time in fear of Roko's Basilisk and other currently-impossible scenarios. They literally purged posts, threads, and comments that made any mention of it, out of utter and complete fear that, in the far-flung future, a very bad simulation of them (unless their brains are cryogenically frozen, I guess, though brain structure would very likely degrade over that immense timespan anyway) would be tortured for their current actions -- by a good AI that had apparently gone so insane it thought torturing low-fidelity simulations of people in the future could affect the past and cause it to be created sooner.
Speculating about the future can be a positive thing, but I don't see how this is at all useful or healthy.
Worrying about the future of surgical technology is very different. The end goal of surgery is to save a life or improve the quality of life, and it involves restoring a single person to working order.
The end goal of AI is to _think_. The upper bound on that is horrifying. Once something can think it can build. Once something can build it can multiply. The upper bound on AI is replacing the human species.
I'm not saying I'm nervous about this happening next year. I know how terribly inept we are at true GAI. I'm thinking purely abstractly, and in that light I think we should be more serious about ground rules for AI.
Can you re-read my post more closely and actually critique it? You chose one of my points, arguably the weakest (partly because it's an analogy -- analogies are mostly for flavour; they don't make a good argument, but they help you appreciate where I am coming from), and ignored the stronger criticisms I posted after it.
Why is the upper bound on thinking horrifying? We are currently the upper bound within our own domain, and on the whole, we've been getting better as we've lifted that bound.
From where I sit, the bad outweighs the good.