
I think another problem is the expectation of what AI will look like at all. We think of intelligent human-like robots like something from science fiction, when in reality AI could just be a bit of code running on a standard machine with specific and likely specialized inputs and outputs.

Think of an AI machine working with atmospheric data as one "sense" combined with seismic data and some others, with the directed goal of predicting certain types of disasters (tsunamis...?).
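A minimal sketch of what such a narrow, specialized system might look like. Everything here is invented for illustration — the function name, thresholds, and weights are assumptions, not a real model; an actual system would learn them from historical data:

```python
# Hypothetical sketch: a narrow "AI" that fuses two sensor streams
# (seismic and atmospheric) into a single tsunami-risk score in [0, 1].
# All constants below are made up for illustration.

def tsunami_risk(seismic_magnitude: float, pressure_drop_hpa: float) -> float:
    """Combine a seismic reading and an atmospheric reading into a risk score."""
    # Scale magnitude 6.0-9.0 onto [0, 1], clamped.
    seismic_score = max(0.0, min(1.0, (seismic_magnitude - 6.0) / 3.0))
    # Scale a 0-20 hPa pressure drop onto [0, 1], clamped.
    pressure_score = max(0.0, min(1.0, pressure_drop_hpa / 20.0))
    # Weight seismic evidence more heavily (assumed weighting).
    return 0.7 * seismic_score + 0.3 * pressure_score

print(tsunami_risk(8.1, 5.0))   # strong quake, mild pressure drop -> 0.565
print(tsunami_risk(5.0, 0.0))   # quiet conditions -> 0.0
```

The point isn't the arithmetic — it's that the whole "intelligence" is a bit of code with specialized inputs and one narrow output, nothing humanoid about it.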

Free will and emotions are other assumptions we would likely not build into these machines, so the worry of self-interest may not exist either, which would help make them good at something useful for us.



I think the best definition of "free will" basically boils down to "unpredictable in detail absent simulation". This sounds weird at first, but the results probably match your intuitive understanding of free will. Given that, I think that any general intelligence approaching human level will have free will.

That doesn't mean that it won't have certain goals, though it remains to be seen whether it will be possible to design a clean goal system with a top-level goal (see also "Friendliness"). Humans clearly do not have this kind of goal system.


Would a good Pseudo-Random number generator satisfy your definition?

(By the way, humans, when asked directly for random numbers, are terrible at the task.)


Well, it satisfies the "free" part, but probably not the "will" part, which implies reasons for actions. I'm not actually sure humans have free will under my definition, but I think we probably do. It could be that some algorithm that's much simpler than an actual human could predict in detail what a given human will do without simulation, in which case I'd be forced to admit that humans don't have free will by my definition.
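To make the PRNG point concrete, here is a toy illustration (my own, not from the thread) using a linear congruential generator with the well-known Numerical Recipes constants: the output looks unpredictable to an observer, yet anyone holding the seed predicts every value exactly, with one multiply-add per step rather than any simulation:

```python
# A linear congruential generator (LCG). Its output "looks random",
# but it is fully determined by its internal state: knowing the seed
# makes the whole sequence trivially predictable.
# Constants are the widely published Numerical Recipes parameters.

def lcg(seed: int):
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

gen = lcg(42)
observed = [next(gen) for _ in range(3)]

# A "predictor" seeded identically reproduces the sequence perfectly,
# so the generator is not "unpredictable absent simulation" at all.
predictor = lcg(42)
predicted = [next(predictor) for _ in range(3)]
assert observed == predicted
```

So a PRNG fails the definition from the other direction: it only looks free to someone who lacks its state, which is why "will" (reasons for actions) has to carry the rest of the definition.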


I took a class with leading AI researcher Patrick Winston in the spring... it really is one of the most fascinating fields right now. If there is to truly be an "AI spring" then it will likely require massive collaboration between not just computer scientists, but researchers in a wide span of fields including neuroscientists, general biologists, psychologists, and even philosophers.

> Free will and emotions are other assumptions we would likely not build into these machines, so the worry of self-interest may not exist either, which would help make them good at something useful for us.

The question is, is it even possible to mimic human intelligence without emotion or free will? Who is to say they aren't wholly dependent on one another?

And if any group of people finds out how to imbue a machine with free will, I'd bet my life they'll go through with it.


There's nothing particularly magical about emotion - it's just a low-level computation by our subconscious that we don't have conscious access to. Happiness, anger, fear, etc. are all responses to the environment that we evolved because they produce (or at least once produced) useful actions. It's the mechanism the brain used to make predictions based on data before it developed a consciousness.

An AI would make predictions based on data just like a human would, but the mechanism it used to do it would certainly be different.


Indeed. So goes the James-Damasio argument, at any rate.

Perhaps the parent was actually referring to the qualia component of emotion--or indeed qualia in general--which is much more difficult to explain.



