
1. For DRL, you usually don't want a ResNet, since credit assignment is difficult.

2. Almost always, 0.5 is a good value, but you can tune this.

3. You usually flatten right before the final layers.

4. Yes, ReLU is preferred.

5. You're referring to double Q-networks. Although this helps, the optimality bounds are even better.
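Points 2–4 can be sketched together in a few lines of numpy: ReLU activations, a flatten right before the final layers, and inverted dropout with p = 0.5. All shapes and names here are illustrative assumptions, not anything from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Inverted dropout: zero units with prob p, scale survivors by 1/(1-p)."""
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# A conv feature map (batch, channels, H, W) gets flattened right
# before the classification head; shapes are made up for illustration.
feats = rng.standard_normal((4, 32, 7, 7))
flat = feats.reshape(feats.shape[0], -1)   # (4, 1568)
out = dropout(np.maximum(flat, 0.0))       # ReLU, then dropout at the default 0.5
print(flat.shape, out.shape)               # -> (4, 1568) (4, 1568)
```

At inference time you would call `dropout(..., training=False)`, which is why the survivors are scaled by 1/(1-p) during training: the expected activation stays the same in both modes.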

You'll learn eventually, don't worry. Cheers and welcome to deep learning! ;)
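The double Q-network idea from point 5 is small enough to sketch: the online network picks the argmax action, and the target network evaluates it, which reduces the overestimation you get from taking a max over noisy estimates. The Q-values below are random stand-ins for network outputs, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.99
reward = np.array([1.0, 0.0, -1.0])       # one reward per transition in the batch
q_online = rng.standard_normal((3, 4))    # online net's Q(s', a) for 3 states, 4 actions
q_target = rng.standard_normal((3, 4))    # target net's Q(s', a) for the same states

a_star = q_online.argmax(axis=1)                              # select action with online net
td_target = reward + gamma * q_target[np.arange(3), a_star]   # evaluate it with target net
print(td_target.shape)                                        # -> (3,)
```

A vanilla DQN would instead use `q_target.max(axis=1)`, selecting and evaluating with the same network; decoupling the two is the whole trick.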



> 3. You usually flatten right before the final layers.

The newer trend appears to be fully convolutional networks even for classification, since they appear to overfit less than flatten + dropout.
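The fully convolutional alternative can be sketched in numpy: a 1x1 convolution maps feature channels to class scores, and global average pooling replaces the flatten + dense head. Shapes and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
feats = rng.standard_normal((4, 32, 7, 7))       # (batch, C, H, W) feature map
w = rng.standard_normal((10, 32)) * 0.1          # 1x1 conv == per-pixel linear map C -> classes
scores = np.einsum('oc,bchw->bohw', w, feats)    # per-pixel class scores, (4, 10, 7, 7)
logits = scores.mean(axis=(2, 3))                # global average pool -> (4, 10)
print(logits.shape)                              # -> (4, 10)
```

Because the head has no fixed-size flatten, the same network runs on any input resolution, and there are far fewer head parameters to overfit than in a `Flatten -> Dense` stack.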


Yup, but he asked me where to use flatten, not whether to use it. But you're right: fully convolutional is the way to go for classification.




