The part of the article that bothers me the most is a leading neuroscientist asserting that the brain is not computable. That is demonstrably false: the brain is a quantum system, just like everything else in the universe. All quantum systems containing n qubits can be simulated by 2^n classical bits. It may very well be impractical to compute a decent-sized brain, but it's still technically computable, which is a serious problem for his whole "you can't reduce human nature to a computer algorithm" tirade. Yes, you can. It will be impossibly huge and slow, but you can.
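The "2^n classical bits" claim can be made concrete with a toy statevector simulator. This is a hedged sketch (nothing here comes from the original comment): the point is only that the memory cost is 2^n amplitudes, computable but exponentially expensive.

```python
# Toy statevector simulation: n qubits need 2**n complex amplitudes,
# so a quantum system is classically computable but exponentially costly.
def apply_hadamard(state, qubit):
    """Apply a Hadamard gate to `qubit` of the statevector `state`."""
    s = 2 ** -0.5
    new = state[:]
    for i0 in range(len(state)):
        if (i0 >> qubit) & 1 == 0:      # basis index with target bit = 0
            i1 = i0 | (1 << qubit)      # its partner with target bit = 1
            a0, a1 = state[i0], state[i1]
            new[i0] = s * (a0 + a1)     # |0> -> (|0> + |1>) / sqrt(2)
            new[i1] = s * (a0 - a1)     # |1> -> (|0> - |1>) / sqrt(2)
    return new

n = 3
state = [0j] * (2 ** n)                 # memory already scales as 2**n
state[0] = 1 + 0j                       # start in |000>
for q in range(n):
    state = apply_hadamard(state, q)    # ends in a uniform superposition
```

Every amplitude ends up at 1/sqrt(8); add a qubit and the list doubles, which is exactly the "technically computable, impossibly huge" trade-off.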
The whole question of whether everything in the universe is computable is usually regarded as open; it's sometimes called the Church-Turing-Deutsch thesis. This Wikipedia article has a little more: http://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93D...
Lots of possibilities allowed by general relativity (e.g. computers orbiting black holes and information traveling backwards through time) that may or may not correspond to our universe allow for computation beyond that of a Turing machine. Until we understand quantum gravity it seems premature to write off those possibilities.
Even simpler, there is the question of whether the universe allows for arbitrary precision measurement of a physical quantity. If it does, then it may turn out that an observable physical quantity (e.g. the mass or charge of a fundamental particle) is a noncomputable real number. If it doesn't, then it may be possible to represent a neural network with noncomputable real weights, but impossible to actually measure them to enough accuracy to simulate the network on a given input.
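The measurement-precision worry can be illustrated with a single threshold unit whose true weight sits just below the decision boundary (an invented toy example, not a real neural model): any finite-precision "measurement" of the weight flips the output.

```python
# Toy threshold neuron: the true weight lies just under the boundary,
# so a finite-precision measurement of it changes the unit's behavior.
def neuron(w, x, threshold=1.0):
    return 1 if w * x >= threshold else 0

true_w = 1.0 - 1e-13            # stand-in for an unmeasurable real value
measured_w = round(true_w, 12)  # a 12-decimal "measurement" rounds to 1.0

out_true = neuron(true_w, 1.0)          # 0: the true weight falls short
out_measured = neuron(measured_w, 1.0)  # 1: the rounded weight fires
```

The simulated network built from measured weights disagrees with the "real" one on this input, even though both are perfectly deterministic.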
> Lots of possibilities allowed by general relativity (e.g. computers orbiting black holes and information traveling backwards through time) that may or may not correspond to our universe allow for computation beyond that of a Turing machine. Until we understand quantum gravity it seems premature to write off those possibilities.
But that's completely irrelevant when it comes to the functioning of the brain, which operates in the oh so mundane medium size, medium velocity, medium energy scale which modern physics is able to model extremely accurately.
I know that a few folks (Penrose, most notably) think that quantum gravity may be relevant to the brain, but that's an extreme fringe opinion, and in Penrose's case it's practically earned him full-fledged crackpot status despite his numerous indisputable successes in physics.
While it seems unlikely that relativistic effects would play a role in the brain, the original argument also said that the same logic should apply to anything in the universe, e.g. a solar system or a galaxy, which is why I mentioned it.
>Yes, you can. It will be impossibly huge and slow, but you can.
A major problem is that many of the brain's processes follow "chaotic models". These are what we associate with the "butterfly effect", where extremely tiny deviations from the true value make the model diverge from actual behavior. Most models have some tiny deviation from the true value, but it usually doesn't matter. In this case it does, and it may be an insurmountable problem.
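The butterfly effect is easy to demonstrate with the logistic map in its chaotic regime (a standard textbook example, not anything specific to the brain): two trajectories starting 1e-12 apart become completely uncorrelated after a few dozen steps.

```python
# Logistic map at r = 4 (chaotic regime): sensitive dependence on
# initial conditions makes long-run prediction impossible in practice.
def trajectory(x, steps, r=4.0):
    out = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-12, 60)  # perturbed by one part in a trillion

early_gap = max(abs(p - q) for p, q in zip(a[:10], b[:10]))   # still tiny
late_gap = max(abs(p - q) for p, q in zip(a[50:], b[50:]))    # order 1
```

Early on the trajectories are indistinguishable; by step 50 the gap has blown up to the full scale of the system, which is the divergence problem the comment describes.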
The brain might be computable, but that doesn't mean a classic Turing machine would work. Maybe one day we can build a biological protein-based computer that shares the "irreducible properties" of brains, but then all we've really built is a brain.
I'm just wondering, how does this statement make sense if we replace the "brain" with "a block of wood" since they are both a "quantum system, just like everything else in the universe":
"The part of the article that bothers me here the most is a leading lumberjack asserting that a block of wood is not computable. That is demonstrably false: a block of wood is a quantum system, just like everything else in the universe. All quantum systems containing n qubits can be simulated by 2^n classical bits. It may very well be impractical to compute a decent sized block of wood, but that's still technically computable."
What would it mean to say that a block of wood is computable?
You're being overly tight with your definition. We still don't completely know how the brain works, and while we have a top-down discipline and a bottom-up discipline, we still haven't figured out how they meet in the middle - and the fundamental point he's making is that there's nothing to say that we will get those algorithms. Until we get to that stage, regardless of whether it's theoretically feasible if you had a magic emulation tool, it's not feasible because we don't have the algorithms to emulate. And if we get to that stage, 'we made a brain emulator!' is one of the smaller stories, since things like 'we have answered the questions of whether free will and the soul exist' will be far more profound.
Technically, you'd have to say that the brain's output is a combination of a computable system and a perfect random noise source.
That is because a simple source of quantum noise should not be computable. Any computable pseudo-random noise generator produces strings of digits with asymptotically constant Kolmogorov complexity, while the Kolmogorov complexity of a true random number generator goes to infinity.
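One rough way to see the constant-complexity point (an illustrative sketch only; Kolmogorov complexity itself is uncomputable): the complete description of a pseudo-random string never grows beyond the generator plus its seed.

```python
import random

def pseudo_random_digits(seed, n):
    # This short function plus one integer seed is a complete description
    # of the output, however large n gets: the Kolmogorov complexity of
    # the string is bounded by a constant as n grows without bound.
    rng = random.Random(seed)
    return ''.join(str(rng.randrange(10)) for _ in range(n))

short = pseudo_random_digits(42, 100)
longer = pseudo_random_digits(42, 1_000_000)
# The same fixed-size description regenerates both strings. A true
# random source would need roughly n digits of description for n
# digits of output.
```

The million-digit string looks statistically random, but its shortest description is the few lines above plus the seed; a genuinely random string admits no such shortcut.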
All that said, the person in the article is certainly overreaching to imply we can know the brain definitely isn't computable. Clearly he doesn't have any kind of knowledge that would tell him that. This might be enough to say, since the rest of his argument seems to hinge on it.
What you're saying is merely postulation, it's not "demonstrably" true, not to mention your use of the term "impractical" is probably the most misleading use of that word I've read today.
You're writing off thousands of years of philosophy and dozens of years of experimental results in maybe a paragraph, if I'm being generous.
It actually IS demonstrably true that the brain is made of atoms. That makes it a quantum system, which our modern physics tells us how to simulate. The use of the term "impractical" here refers to the fact that we don't have quantum computers and that we don't have the computing power required to simulate such a large quantum system.
I'd say dismissing thousands of years of philosophy here is fair, since it was likely produced before we had the tools to understand what we're dealing with. All experimental physics points towards an understanding of how quantum systems work, and that's all we need to model any quantum system. The brain is not any different just because it's a brain.
Quantum theory is a hypothesis. Perhaps it's the best one so far for what the brain is made from. But its truth is only as demonstrable as this: it has not yet been falsified.
You're confusing hypothesis with theory. A hypothesis is a proposed explanation that can be tested (i.e. is falsifiable). A theory, on the other hand, is a hypothesis that has undergone extensive testing and has been shown to be a plausible explanation for observed phenomena. Quantum mechanics has undergone rigorous testing, and has proven time and time again that it can accurately explain many of the properties of our universe.
You can say the exact same thing about any scientific model. My view on this has always been that as long as the model accurately predicts experimental results, assume it is correct for your calculations until it is proven to be wrong.
Even then, we never stopped using classical mechanics even though they were proven to be wrong at a variety of scales. They just happen to very closely approximate reality in some contexts and are useful.
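The "closely approximate reality in some contexts" point can be made numerical with kinetic energy (a small sketch; the mass and speeds are chosen arbitrarily for illustration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    return 0.5 * m * v ** 2

def relativistic_ke(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

m = 1.0
slow = 3.0e5      # 300 km/s, already far faster than any everyday object
fast = 0.5 * C

slow_err = abs(newtonian_ke(m, slow) - relativistic_ke(m, slow)) / relativistic_ke(m, slow)
fast_err = abs(newtonian_ke(m, fast) - relativistic_ke(m, fast)) / relativistic_ke(m, fast)
# slow_err is under one part per million; fast_err is nearly 20%.
```

Classical mechanics is "wrong" at both speeds, but at the slow one the error is far below anything you could measure, which is why we never stopped using it.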
The fact of the matter is, we have tools that are correct as far as we know and they point towards thinking that every quantum system is computable. Until this has been proven wrong, the fallacy is believing the brain is different, not the other way around.
Theories not only have to predict outcomes of events, they must also be falsifiable (and must expand on something thus far unexplained by other theories, you can't just recreate gravitational theory, for example).
You are, by your own admission, working with an incomplete understanding of how a scientific model functions. So I ask you: why should you even be commenting on this topic? Why should anyone take what you have to say seriously on this specific topic?
So no one should be commenting on this topic unless they have a perfect understanding of scientific theory? That seems terribly counter-productive.
I'm commenting on this topic to share my opinion and, to the extent of my knowledge, try to explain why I believe someone else's reasoning is flawed.
Now if you believe my reasoning is false, you're free to call that out. You're not free, however, to dismiss my contribution to the discussion simply because I'm not operating under perfect understanding of a field that isn't mine.
Call it out, explain why, participate in the discussion, and drop the personal attacks. I think at least part of my point is valid, even after what you pointed out.
I'm pretty sure I am free to dismiss your contribution "simply" because you don't know what you're talking about.
But let's not get caught in the weeds here; I don't think you're correctly conveying the level of certainty with which we understand quantum mechanics. There's a ton we don't have the slightest idea about in this area of science, so let's not forget that.
Quantum theory is a "hypothesis"? The entire foundation of modern electronics is based on quantum theory. It's something that has been tested over and over in labs and the regular world for over a century.
Yes, the brain is made up of atoms, but our modern physics does not, at least not with the degree of certainty you're waving about, tell us that it is therefore, necessarily, simulatable.
I believe this to be the case, and there is some evidence that this is the case, but it is not anywhere near as certain as you're claiming.
As for your word choice re: impractical, the word shouldn't be used in place of 'impossible', which is the correct word you're looking for.
> What you're saying is merely postulation, it's not "demonstrably" true
I don't think the parent disagrees with you. The question of whether or not something is _computable_ in the technical sense of the word does not need any sort of demonstration or experimental results.
Actually, when we assume that any particular physical system is computable, that's exactly what it is: an assumption. There is currently no strong evidence that all physical systems are computable, and although most people (including experts) seem to believe they are, it's still usually considered an open question.
> Viewed from the right angle, the CTD Principle still is shocking. All we have to do is look at it anew. How odd that there is a single physical system – albeit, an idealized system, with unbounded memory – which can be used to simulate any other system in the Universe!
>“Fallacy is what people are selling: that human nature can be reduced to [something] that [a] computer algorithm can run! This is a new church!”
Actually, the "new church" is that human nature is something more than physics. Even if Kurzweil is wrong in his predictions, certainly this neuroscientist is also wrong with his metaphysical beliefs that the brain is something more than a mere "machine".
He doesn't suggest that human nature is more than physics; he is saying that consciousness isn't computable. If neural subsystems exhibit hierarchically arranged, internally generated chaotic dynamics, there is the problem of long-term prediction and causal inference.
>It may seem paradoxical that a deterministic phenomenon is inherently unpredictable, but in systems that exhibit chaotic behavior, small uncertainties are amplified over time by the nonlinear interaction of a few elements. The upshot is that behavior that is predictable in the short run becomes intrinsically unpredictable in the long term. As a result, physiologists cannot make strict causal inferences from the level of individual neurons to that of neural mass actions, nor from the level of receptor activity to internal dynamics. The causal connection between past and future is cut.
That seems irrelevant to making a working brain thingy. We don't want to predict how another given brain will evolve, we want to mimic intelligence. Your brain evolves differently than mine, yet we both are intelligent (I think). I don't need to predict your thoughts to have thoughts of my own.
Perhaps the randomness is even necessary because otherwise some situations could never be resolved (like the classic who should go first to go through a door - after you - no, after you...).
Why does this matter? If you run a simulation of the brain, you'll soon get different output than the original would output, but does that mean the simulation isn't working?
It'll still be intelligent behaviour, even if it isn't the exact same behaviour. It'll still be the same person; if such behavioural differences mattered, then turning up the temperature slightly would make you a different person. Thermal noise bubbles up to the macroscopic level all the time.
That has always been my response to John Searle's Chinese Room [1] thought experiment. The argument boils down to the human consciousness either being magic or not. Magic being something beyond physics.
Thank you for summing that up so concisely. I'm going to try to remember the "so you think human nature is something more than physics?" argument for the future.
I believe the argument is that consciousness is not a causal product of the relations represented in the brain's structures, but a property of the substrate.
Consider that a brain model on a Turing machine is equivalent to a sticks-and-stones model running the same program. Do you believe some person moving a bunch of sticks and stones around can produce conscious experience?
If the first paragraph is correct and the crux of his argument is that the brain is not computable, then there's nothing to see here. Once you assume that, then of course you don't think it can be simulated.
That this further devolves into meandering about consciousness just says to me that even at the point where Kurzweil's singularity has already happened, he wouldn't call it AI. Yes - if you take it as an axiom that humans have special sauce that you can't reproduce with an algorithm, AI is impossible.
I keep asking Siri "Can entropy be reversed?" hoping she'll respond with that. And Google, hoping it'll show just that answer, no links. Wolfram gets it.
When Newton came up with his universal law of gravitation, he also made an interesting observation. He had no idea why gravity worked the way it did, he could only model its behavior numerically.
It's been over 300 years and we're still trying to figure out exactly how gravity works. But in a way it doesn't matter: his law has given humanity tremendous abilities it didn't have before.
When Watson won the Jeopardy contest a couple of years ago, it was obvious that it wasn't intelligence in the sense that we commonly understand it. Yet it was able to beat the human champions. Watson wasn't a model of a human player, but it didn't matter, because for the purposes of its construction it performed just as well as one.
My money says the singularity happens the exact same way -- we are able to "fake" more and more things that look exactly like intelligence until one day we're able to fake intelligence to a degree that it's virtually indistinguishable from our own. We're eventually able to do something that looks like moving our sentience into a computer even though it "won't really" be doing it.
My guess is that we're hundreds of years away from that date, but whether it happens or not, the fact that an expert right now in the complexity of the underlying physical system has an opinion on the computational problem that probably won't be solved until after 2100 doesn't seem to me to be very relevant. Of course it's complicated. Of course we don't understand it. And of course we can't duplicate the structure of things we don't understand. I believe for any layman in the field all of that goes without saying?
That's not really true about Newton by the way. Newton most likely believed he did not have a convincing argument he wanted to put his name to or advance that wouldn't also drag him into yet another round of having to deal with someone like Robert Hooke. At that point of his life, his ambition was greatly lessened and his success somewhat assured.
There is firm evidence he had at least three serious views (and one fairly archaic one not many want to talk about) on the cause of physical gravitation. It is only the "modernists", who need Newton to not be beholden to any of them, who elevate and reiterate "Hypotheses non fingo" as some sort of rallying cry, as if he had it tattooed on his torso.
It is possible that the easiest way to simulate the brain in a way indistinguishable from an actual brain isn't to have something work in the same way.
There is nothing to say that evolution has arrived at the perfect method.
Kurzweil is saying much more. In The Age of Spiritual Machines he states "Artificial Intelligences claim to be conscious and openly petition for recognition of the fact. Most people admit and accept this new truth." What cluster of running software by which algorithms produces this consciousness 16 years from now?
Watson's Jeopardy performance was in large part due to its buzzer speed. It also managed to answer questions correctly, but game mechanics played a role too.
http://ken-jennings.com/blog/archives/2554
The singularity does not have to do with our perception of the intelligence of a machine. In a singularity, the importance/validity of that machine's perception of our intelligence is higher than ours. Therefore, I wonder whether we will even be able to know when we reach the singularity.
The "singularity" is never going to happen, because it assumes intelligence is a single thing that can be increased with more processing power and better algorithms, completely ignoring things like information theory, overfitting, insanity, training data, and chaos theory.
We can and probably will build reasonably general AI, but you're far more likely to see each generation of AI being an ever-smaller improvement than any sort of runaway exponential progression. Not to mention hardware progress has slowed to a relative standstill.
Suppose one has the means to build a flexible general AI with modern silicon technology. How could this not result in a thing whose abilities could then be continuously increased at the rate regular silicon technology increases?
The whole "diminishing returns will stop us" line just seems like a comforting fairy tale for those who don't want to think about the consequences (which I suspect a bit of thought shows won't be as rosy as Kurzweil imagines).
Edit: Hardware progress is still mostly following "Moore's Law". The only thing that's not increasing is processor speed. But if we build a "reasonably general AI", how could that box's capacities not be increased by tight integration with other similarly intelligent boxes?
When a dumb AI comes up with the correct answer, using a smart AI is not going to give you a better answer; the old one is still correct. However, it could give you a wrong answer for whatever reason. As to diminishing returns, there is a window between a self-driving car that's better than people and a 'perfect' self-driving car, but clearly that's a diminishing-returns situation: if it was already giving you the perfect answer 99.99% of the time, there is not a lot of room for improvement.
PS: S-curves often look like exponential curves, but the real world has real limits, so you can't have unlimited exponential progression of any type, period, end of story. And it looks like we are on the down slope when it comes to transistors: http://www.extremetech.com/computing/123529-nvidia-deeply-un... "Nvidia deeply unhappy with TSMC, claims 20nm essentially worthless". And that's for video cards, which are embarrassingly parallel.
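The S-curve-vs-exponential confusion is easy to show numerically (a minimal sketch with an arbitrary logistic curve; the parameters are invented for illustration):

```python
import math

def logistic(t, limit=1.0, k=1.0, midpoint=10.0):
    # S-curve: looks exponential early on, then saturates at `limit`.
    return limit / (1.0 + math.exp(-k * (t - midpoint)))

early = [logistic(t) for t in range(0, 5)]
late = [logistic(t) for t in range(15, 20)]

early_ratio = early[1] / early[0]  # ~e**k: constant-ratio growth,
                                   # indistinguishable from exponential
late_ratio = late[-1] / late[-2]   # ~1: growth has flattened out
```

Sampled early, the curve grows by a near-constant factor per step, exactly like an exponential; sampled late, it is already pinned against its limit. That is the down-slope point about transistors.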
I'd say that strong artificial intelligence is far more likely to happen via bottom-up modeling of the existing human brain as a starting point. The computational capacity for reductionist simulations will probably exist 20 years from now, and as computational capacity increases those simulations will start to become emulations. I don't expect to see massive gains over that time from the other way of doing it.
Once you have crude human brain models, a lot of very unethical things will start to happen in the course of further development. (e.g. countless deaths of intelligent beings if you're in the continuity identity boat, and probably a lot of unavoidable pain and suffering regardless of your take on identity). That seems hard to prevent given the enormous advantages that will accrue as a result; there is a strong incentive for people to adopt the pattern identity point of view so as to justify what they are going to do with mind copies in the course of development.
The important point is that once you have human brain models, from there the path to many different forms of strong artificial intelligence is just a matter of iterating those models. This seems far more likely to produce results than constructing new forms of intelligence progressively and de novo.
>Addressing fellow scientists, he dismissed the singularity as “a bunch of hot air,” and went on further to declare that “the brain is not computable and no engineering can reproduce it.”
The human mind is one of the few things that we encounter during everyday life that we may not even have the right conceptual framework to understand.
It is an open question whether the physical processes of the brain can be simulated by a computer, and it is even an open question whether the physical processes of the brain account for the full range of human conscious experience. I look forward to seeing this field evolve during my lifetime, but significant progress may continue to elude us.
We're just waiting for a really smart simulation programmer with access to a super-powerful computer to create a simulation of a small, Earth-like world, so we can watch as accelerated evolution creates some human-like (or not) being with a similar brain. I think we can write an algorithm for an AI that can pass the Turing test, but I think serious AI that will precipitate the singularity will come as a result of evolutionary algorithms and very powerful computers.
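For concreteness, here is what the evolutionary-algorithm idea looks like at its absolute smallest (a toy genetic algorithm on bitstrings; every name and parameter below is invented for the sketch and has nothing to do with evolving minds):

```python
import random

def evolve(target, pop_size=60, mutation_rate=0.02, generations=500, seed=0):
    """Minimal genetic algorithm: evolve random bitstrings toward `target`."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            break
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)         # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

target = [1, 0] * 10
best = evolve(target)
```

Selection, crossover, and mutation reliably climb toward the target without any explicit design of the solution, which is the intuition behind "evolve it rather than write it". Whether that scales from 20 bits to a brain is, of course, the entire open question.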
I remember reading a short story along those lines.. it ended with the simulated beings hacking physics and dropping off into a pocket universe, without helping at all.
Well, I guess they didn't kill us all. Might have been written by Greg Egan. Sounds familiar?
Some of this actually fits perfectly into the framework of Kurzweil's singularity, in the sense that you could argue that most neuroscientists are too focused on the complexity of the brain and aren't anticipating the progress that may be possible through rapid evolution of tools for looking at the brain.
That said, I think Kurzweil's plan for constructing an intelligence actually oversteps his basic approach. Tools are advancing on multiple fronts. Not only does that give us multiple ways a singularity could happen (from brain simulation to simplistic-but-massive AI to clever AI to bio-computers), but the multiple advancing fronts could go around apparent walls (a sufficiently sophisticated computer could make brain processes look less opaque, etc.).
Personally, I'd say Paul Allen's counter-argument, which I recall as boiling down to the inherent limits of human-produced software, is the most plausible counter-argument.
> Personally, I'd say Paul Allen's counter-argument, which I recall as boiling down to the inherent limits of human-produced software, is the most plausible counter-argument
I often wonder if the ultimate end counter argument may be that the same things that make us "intelligent" are also those that give us our human flaws - exactly the same things that we were trying to avoid in the first place by using computers. Perhaps we can't have human-like intelligence without also being forgetful, inaccurate, selfish, lazy, irrational, greedy, angry, sad etc. If someone did invent a computer with all those attributes, would it be useful?
> there's no reason to expect a mind that isn't produced by the same process to possess them
True ... but then will those minds be able to perform the feats of intelligence that we hope for from the "singularity"? Will a mind unable to put aside the fact that valves can make a t.v. set run, unable to dream, imagine, or love, and lacking the motivation of greed and competition with its peers, be able to discover the transistor as an alternative? Perhaps our evolutionary psychology is part of the reason we exhibit what intelligence we have, not just an unnecessary relic of our past?
Oh good, a few more years until I'm coerced into uploading my brain into bizarre-o-land by someone's deliberately-create-the-singularity-project gone horribly right.
It may be more accurate to say "parts of the brain of an average human may be computable". Of course, I have no data to prove it either way. There are incredible human beings, past and present, whose brains are faster than the fastest computers put together. But simulating superhumans like Ramanujan: is that even possible?
Ramanujan isn't different from you or me in that regard. His brain was an assembly of neurons connected via synapses. There's nothing pointing towards this assembly being any different from any machine, regardless of his achievements in the field of mathematics. That we don't yet fully understand it doesn't mean it transcends physics; that would be absurd.