It's best to think of instruct-tuned LLMs as mirrors rather than intelligences. They largely reflect what you put into them, but in a way that can easily masquerade as wisdom. I think this makes it very easy for people to delude themselves.