
>if there is a sample threshold where it's worth exploring deep learning

Not especially, but there are tasks where DL models occasionally outperform by a small margin. If you really want to squeeze out extra accuracy, it can be worth trying a DL model; if it performs as well or better, you can either replace the GBM or ensemble the two, though it's rarely worth the effort. If you read the winners' writeups for tabular-data Kaggle competitions, most use GBMs, or an ensemble for a tiny boost over a GBM alone.

Assuming limited time to work on the problem, you'd almost always want to focus on further feature engineering first and likely some hyperparameter tuning second.
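The ensembling mentioned above is usually just a weighted blend of the two models' predictions. A minimal sketch, assuming you already have validation-set predictions from each model (the values and the weight here are hypothetical; in practice you'd tune the weight on held-out data):

```python
# Blend GBM and DL predictions with a single weight.
# `gbm_preds` and `dl_preds` are hypothetical validation-set
# probabilities from each model, not real outputs.
gbm_preds = [0.82, 0.10, 0.55]
dl_preds  = [0.78, 0.15, 0.60]

w = 0.7  # weight toward the GBM, chosen on a validation set
blend = [w * g + (1 - w) * d for g, d in zip(gbm_preds, dl_preds)]
```

The weight is typically picked by a small grid search against the validation metric; only when the DL model is genuinely competitive does the optimal weight move far from 1.0.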



What's a GBM?


I assume Gradient Boosting Machine.
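To make the term concrete: a gradient boosting machine fits a sequence of small trees, each one trained on the residuals of the ensemble so far. A toy pure-Python sketch with depth-1 "stumps" on a single feature (illustrative only; real use would reach for LightGBM or XGBoost):

```python
# Toy Gradient Boosting Machine for 1-D regression, stdlib only.
# Each round fits a depth-1 tree (a "stump") to the residuals of
# the current ensemble, then adds it with a learning rate.

def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda x: lmean if x <= thr else rmean

def fit_gbm(xs, ys, n_rounds=50, lr=0.3):
    base = sum(ys) / len(ys)  # initial prediction: the mean
    preds = [base] * len(xs)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Fit a step function: the model should learn the jump at x = 5.
xs = list(range(10))
ys = [0.0] * 5 + [10.0] * 5
model = fit_gbm(xs, ys)
```

Libraries like LightGBM do the same thing with deeper trees, histogram-based splits over many features, and regularization, but the fit-to-residuals loop is the core idea.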


In practice, almost always LightGBM.



