1. Almost always; 0.5 is usually a good default, but you can tune it.
2. You usually flatten right before the final dense layers.
3. Yes, ReLU is generally preferred.
4. You're referring to double Q-networks; although this helps, the theoretical optimality bounds are even better.
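On the double Q point: a minimal tabular sketch of the double Q-learning idea behind double Q-networks, assuming a toy MDP with hypothetical sizes (5 states, 2 actions) and made-up hyperparameters. One table selects the greedy action and the other evaluates it, which is what reduces the overestimation bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 5, 2        # hypothetical toy MDP sizes
q_a = np.zeros((n_states, n_actions))
q_b = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99          # assumed learning rate / discount

def double_q_update(s, a, r, s_next):
    # Double Q-learning: with probability 0.5 update table A using
    # table B to evaluate A's greedy action, and vice versa.
    if rng.random() < 0.5:
        best = np.argmax(q_a[s_next])
        q_a[s, a] += alpha * (r + gamma * q_b[s_next, best] - q_a[s, a])
    else:
        best = np.argmax(q_b[s_next])
        q_b[s, a] += alpha * (r + gamma * q_a[s_next, best] - q_b[s, a])

# One transition: state 0, action 1, reward 1.0, next state 2.
double_q_update(0, 1, 1.0, 2)
```

Double DQN applies the same decoupling with the online and target networks instead of two tables.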
You'll learn eventually, don't worry. Cheers and welcome to deep learning! ;)
The newer trend appears to be fully convolutional networks even for classification, since they seem to overfit less than the flatten + dropout approach.
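The flatten/dropout points and the fully convolutional alternative can be sketched in plain NumPy, assuming a made-up 8-channel 4×4 feature map and a hypothetical 10-class output; this is an illustration of the two head styles, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU activation: the usual default for hidden layers.
    return np.maximum(0.0, x)

def dropout(x, rate=0.5, training=True):
    # Inverted dropout: zero units with probability `rate`, then
    # rescale so the expected activation is unchanged.
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Fake convolutional feature map: (channels, height, width).
feature_map = rng.standard_normal((8, 4, 4))

# Classic head: flatten right before the final dense layer,
# with dropout at 0.5 on the flattened activations.
flat = feature_map.reshape(-1)             # shape (128,)
w = rng.standard_normal((10, flat.size))   # hypothetical 10-class layer
logits = w @ dropout(relu(flat), rate=0.5)

# Fully convolutional alternative: global average pooling gives one
# value per channel, so no flatten + dropout stage is needed.
pooled = feature_map.mean(axis=(1, 2))     # shape (8,)
w_fcn = rng.standard_normal((10, pooled.size))
logits_fcn = w_fcn @ relu(pooled)

print(logits.shape, logits_fcn.shape)      # both (10,)
```

The pooled head has far fewer parameters (10×8 vs 10×128 here), which is one intuition for why it overfits less.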