
Upvoted, but I wonder about this statement:

> The decision-making needs to be developed (at some core level) by a human.

With neural nets you essentially throw a ton of data at them; they get better and better at recognizing certain patterns, and can then 'see' those patterns in new data you provide. As this gets more and more advanced (less training data, yet fewer and fewer false positives), we will start to stray into the territory of 'emergent behaviour', where we really are no longer in charge of making the decisions.



I'm not sure where this idea of NNs as black boxes came from. It doesn't really hold in real-world applications. Even basic multi-layer perceptrons require adjustment and fine-tuning: there are tons of hyperparameters to tune, feature engineering to do, and even just cleaning your data sets is a non-trivial task that can't be completely automated (yet).
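To make that concrete, here's a minimal sketch of what the hyperparameter fiddling looks like in practice, using scikit-learn's small built-in digits dataset. The particular knobs and value ranges below are illustrative choices, not anything from the comment above:

```python
# Even a tiny MLP exposes many knobs: layer sizes, regularization
# strength, learning rate, activation, and more. A grid search only
# scratches the surface of the tuning work described above.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(32,), (64,)],
        "alpha": [1e-4, 1e-2],           # L2 regularization strength
        "learning_rate_init": [1e-3, 1e-2],
    },
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_)
print(search.score(X_test, y_test))
```

Even this toy grid trains 8 configurations 3 times each; real tuning grids are far larger, which is where the human time goes.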

Also, training a model is not as easy as dumping data in. NNs often suffer from high variance, so you need to make constant small adjustments. This adjust-process-analyze cycle is very time-consuming, both in computing time (even on Google's servers, training can take hours) and in human time.
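That cycle can be sketched as a plain loop: change one knob, retrain, check a held-out validation score, repeat. This is a hypothetical illustration on scikit-learn's digits dataset, with the learning rate standing in for whatever you happen to be adjusting:

```python
# Adjust-process-analyze: tweak a hyperparameter ("adjust"), retrain
# ("process", the slow part), inspect validation accuracy ("analyze").
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

history = []
for lr in (1e-1, 1e-2, 1e-3):                      # adjust
    net = MLPClassifier(hidden_layer_sizes=(64,),
                        learning_rate_init=lr,
                        max_iter=200, random_state=0)
    net.fit(X_train, y_train)                       # process
    history.append((lr, net.score(X_val, y_val)))   # analyze

best_lr, best_score = max(history, key=lambda t: t[1])
print(best_lr, best_score)
```

Each pass through the loop is cheap on a toy dataset and expensive on a real one, which is exactly the cost being described above.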

Sometimes you'll get lucky. You can build a NN that gets accuracy in the ~80% range for handwriting recognition with ~50 lines of code, if not fewer. But that missing 20% is critical for any important task, and getting there requires a lot of "parenting". And most of the time you won't be working with a vanilla NN, and you won't be starting above ~50%.
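For a sense of what "~50 lines, if not fewer" means: the sketch below does digit recognition in roughly a dozen lines. It uses scikit-learn's small 8x8 digits set rather than full MNIST, so it is an easier problem than real handwriting recognition, but it illustrates the point that a passable baseline is cheap while the last few percent are not:

```python
# Minimal digit recognizer: decent accuracy is easy to reach,
# closing the remaining gap is the hard, human-intensive part.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
net.fit(X_train, y_train)
print(net.score(X_test, y_test))
```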

It's also important to note that NNs are not a panacea; in fact, they're often not the right tool for the job. Simple statistical learning techniques outperform them on a variety of tasks. Deep NNs can do a lot, but they require a lot of data and constant hyperparameter adjustment.
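A quick way to see this for yourself: on a small, clean dataset, a plain linear model is often competitive with an MLP at a fraction of the tuning effort. The comparison below is a sketch, again on scikit-learn's toy digits set:

```python
# Baseline first: compare a simple linear model against an MLP
# before assuming a neural net is the right tool.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=2000).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                    random_state=0).fit(X_train, y_train)
print(linear.score(X_test, y_test), net.score(X_test, y_test))
```

The linear model has essentially nothing to tune, which is the whole argument for trying it first.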

The biggest advances and most impressive predictions these days come from combinations of techniques and models, and these ensemble methods require a lot of work on the part of us humans. Ensemble learning is where the magic really happens, and by magic I mean tons of work.
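One simple form of that combining is voting: train heterogeneous models and average their predicted probabilities. The specific models below are illustrative choices, not a recipe from the comment:

```python
# A minimal ensemble: three different model families combined by
# soft voting (averaging predicted class probabilities).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=2000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                              random_state=0)),
    ],
    voting="soft",  # requires every member to support predict_proba
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```

Note that every member still needs its own tuning, cleaning, and validation, which is why ensembles multiply rather than reduce the human effort.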



