
Happy to! And I'm not sure I am, but you're right that I should be more explicit about my point (I have a tendency toward logorrhea). If I'm attacking anything, it's what I believe to be the subtext of your comments: 'AI = Bad = The Terminator'.

Pardon me if the following comes off as pedantic; I don't know your level of expertise and want to continue the discussion from a shared baseline of knowledge.

To my game-playing point, and to start from a simple example: in Tic-Tac-Toe, an AI opponent will never lose, because it can search through every possible line of play to the end of the game and always make an optimal move (against a perfect opponent, the result is a draw). For games like Chess and Go, the search space is far too large to search to a terminal board position, so we need ML and heuristics to evaluate the value of a given board. These evaluation functions and heuristics are designed using human insight into the game. IBM's retrospective on Deep Blue is a fascinating read, and I suggest anyone interested in AI and game playing check it out [1]. You'll see how they built their evaluation function with input from chess grandmasters, particularly in implementing opening books and prioritizing center play in the early game. AlphaGo's system is not entirely dissimilar [2]. You'll note that significant advances were made as AlphaGo continued to play and learn from human opponents.
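To make the Tic-Tac-Toe point concrete, here is a minimal minimax sketch. The board representation (a 9-element list of 'X', 'O', or None) and all names are my own for illustration; this is the textbook algorithm, not any particular engine's code.

```python
# Minimal minimax for tic-tac-toe: exhaustively searches every line of play.
# Board: list of 9 cells, each 'X', 'O', or None; 'X' is the maximizing player.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for 'X' with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full with no winner: a draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None  # undo the move before trying the next one
    return max(scores) if player == 'X' else min(scores)
```

Because the entire game tree is searched, the empty board evaluates to 0: with perfect play on both sides Tic-Tac-Toe is a draw, which is why such an agent never loses rather than always wins.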

The point is that both of these systems (and every game-playing AI agent I know of) search through possible board states and make decisions that are fundamentally reliant on human intuition. Humans make decisions this way too: we generate the possible choices and rule out invalid ones either implicitly (through subconscious bias and knowledge) or explicitly (thinking a decision through). Computers do not have the advantage of implicit evaluation, so we must program that evaluation explicitly and use massive amounts of data (for deep learning, anyway) with ML techniques to validate those intuitions.
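As a toy illustration of encoding human intuition explicitly, here is a hand-written evaluation heuristic in the spirit of the center-play preference mentioned above. The piece values, the 0.25 center bonus, and the board encoding are all invented for the sketch; real engines tune thousands of such weights.

```python
# Toy chess-style evaluation: material count plus a bonus for center control.
# board: dict mapping (row, col) -> (piece_letter, color), color +1 (us) or -1 (them).

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER = {(3, 3), (3, 4), (4, 3), (4, 4)}  # the four central squares of an 8x8 board

def evaluate(board):
    score = 0.0
    for square, (piece, color) in board.items():
        score += color * PIECE_VALUES[piece]     # material: an explicit human valuation
        if square in CENTER:
            score += color * 0.25                # human-designed bonus: occupy the center
    return score
```

The heuristic is pure human judgment turned into arithmetic: a pawn in the center scores 1.25, an opponent's rook anywhere scores -5. Everything the search "prefers" was put there by a person.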

Both Go and Chess are deterministic games. Given that even these games haven't been solved, the stochastic domains you described are orders of magnitude more complex, and we honestly need several breakthroughs before we come close (I have no sources here; this is just my opinion. I feel that most of AI's current success follows the standard M.O. of a lot of academic CS: things 'work' for certain definitions of 'work'. The breakthroughs are genuinely impressive, but the constant extrapolation by pundits and the general media is both irresponsible and fallacious).

And yes, it really is a lot about processing power. Neural networks have been falling in and out of fashion since the 1950s. One big factor in the recent resurgence is GPU utilization and cloud computing (admittedly, the availability of data via the internet is another large factor, among others). That's why Google, Nvidia, Apple, and others are investing so much in ML-specific hardware.

And let's not kid ourselves: training any ML model takes a lot of time and a lot of manual adjustment of hyper-parameters. We're talking about possibly hundreds of hours of manual input for a single model (mostly for novel ones). That's why every minor breakthrough merits a white paper (sort of joking, sort of not...)!
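That manual knob-turning is essentially a search over a hyper-parameter grid. A toy sketch, where the scoring function is a made-up stand-in for a real training-and-validation run (which would cost hours of GPU time per call):

```python
import itertools

def train_and_score(learning_rate, hidden_units):
    """Hypothetical stand-in for training a model and measuring validation score.
    This invented formula simply peaks at lr=0.01 with 64 hidden units."""
    return -abs(learning_rate - 0.01) * 100 - abs(hidden_units - 64) / 64

# The grid of settings a practitioner might sweep by hand.
grid = {
    "learning_rate": [0.1, 0.01, 0.001],
    "hidden_units": [32, 64, 128],
}

# Try every combination and keep the best-scoring one.
best = max(
    (dict(zip(grid, values)) for values in itertools.product(*grid.values())),
    key=lambda params: train_and_score(**params),
)
print(best)
```

Even this tiny 3x3 grid means nine full training runs; real sweeps over more dimensions explode combinatorially, which is where those hundreds of hours go.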

I think we're making the same argument with your linear algebra example: that machines can't reasonably replace humans. My amended version is that machines can and should extend and augment human capability. Despite all the linear algebra involved, any form of machine decision making and cognition is in some way designed by a human. So even though an AI sees the world as vectors, it processes that world through the lens of human cognition, because I don't think we can build systems that don't somehow stem from our own cognitive processes.

As to your specific fears, I seriously doubt we'll make enough progress within 17 years for AI to dominate those fields completely. I agree that AI will probably become a presence in many of those domains, but I do not think we will be "giving up control". Remember, these systems will be operated by individuals. So I would say there is some evidence to suggest that within 10-20 years, humans will be using AI to produce higher-quality art, jurisprudence, etc. It is also true that this raises the possibility of humans abusing the technology, but that is inescapable for almost any human achievement. I think openness and transparency are the best safeguard against this possibility, and I would encourage everyone to vocally oppose any integration of AI into public systems without extreme transparency.

Beyond that, humans always have been, and always will be, the cause of wealth inequality. Also, you provide no evidence that inequality is greater now. Inequality has increased in the 21st century [I don't feel like sourcing this, but Google is reasonably good here], but I would like to see research confirming that we're worse off than at the heights of feudal society, or other equally tyrannical periods of human history. How do you foresee AI contributing to wealth inequality? I only see AI as a contributing factor if it remains in the hands of a few.

I've definitely rambled here, but I blame that on the few drinks I've had. I think my point still stands: humans kill people, AI doesn't kill people (and we're at a point where I think we can ensure that it doesn't).

As a side-question, what exactly is significant about 17 years? Is there some prediction out there that uses this number, or was it an arbitrary number?

1. https://pdfs.semanticscholar.org/ad2c/1efffcd7c3b7106e507396...
2. https://storage.googleapis.com/deepmind-media/alphago/AlphaG...



Drinks? Fun. But the one area I would like to push back on your assertions is that:

"these systems will be operated by individuals"

Maybe, and maybe not. In some major sense, even today's systems are bigger than any one individual.

Just because something was designed by people doesn't mean they will be operating it years from now.

There was a period when "centaurs" - combinations of grandmasters and computers - would beat computers alone. Judging by his latest book, Kasparov still thinks that's the case. But where is the evidence?

Eventually doctors will just press a button and out will come a diagnosis and a dietary program. They will have only a vague idea as to why. Actually, even that is too conservative: there will be no doctor and no button. The system will know exactly when and where to intervene. Humans will live in a zoo, taken care of the way animals are now. And this is the rosy picture.

Already, Watson can outperform people, and we don't have a great way to explain why - any more than the proof of the four-color theorem is an explanation of why.

Explanations are reductions to simple things. We humans derive meaning from relatively simple things with few moving parts. Something that requires 10,000,000,000 moving parts to explain may as well be "chaotic" - it is not really explained. But if predictions can be made far beyond what simple explanations allow, that's a major thing.

I think that humor, court arguments, detective work etc. can all be automated in this manner. And then there is also the access to all the cameras and so forth.

I'm just saying that our systems were designed with the idea that an attacker is inefficient. That assumption is going to break down.

It doesn't have to be the Terminator. It just means computers will write better books, jokes, etc. a million times a second and devalue everything we hold dear. They will first be wielded by individuals - at least that's a comfort. But later, the automated and decentralized swarms are the scariest part, because they are so totally different from us in goals and everything else too.


Human skill (and intuition/wisdom?) improves and is sustained by practice in the domain. The real unknown is how things will pan out when automation reaches the point where we humans just don't bother putting in the hours on the trivial tasks, or simply lose touch. What happens if a driver relies entirely on the autopilot system and gradually loses the skill to drive? Stuff does fail, and complex systems will fail in unknown ways.



