
The idea that the tool you're using can "hallucinate" is quite asinine. It makes me question why anyone is building products on ML systems that can do that. Like, why put a rudder on your plane that very rarely just decides to fly you into the ground?


For precisely that reason, I would never put an LLM in my product (if I had one), whether financial ("tell me which two products sell well together in the winter season near XYZ location") or user facing ("how can I turn off photo sharing so that only I see what I take with my camera?"), because trusting an LLM when its output is wrong might lead to serious trouble.

Just dry-run through the scenarios and assume the LLM's output is wrong, even if only 10% of the time.
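
A back-of-the-envelope sketch of that dry run in plain Python (the 10% per-answer error rate is just the figure above, not a measurement): even a modest error rate compounds quickly across a session of independent queries.

    error_rate = 0.10  # assumed per-answer error rate, taken from the comment above

    for queries in (1, 10, 50, 100):
        # Chance that at least one answer in the session is wrong,
        # assuming errors are independent.
        p_any_wrong = 1 - (1 - error_rate) ** queries
        print(f"{queries:>3} queries -> {p_any_wrong:.0%} chance of at least one wrong answer")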


LLMs were never designed to be fact checkers; they're text generators, not fact generators.

Additionally, you don't need ML to get your computer to confidently show you something that is completely wrong; all it takes is multiplying certain floating point numbers repeatedly.
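
To make that concrete, a minimal example in plain Python (no ML involved): a value like 0.1 has no exact binary representation, so multiplying it accumulates rounding error, and the machine reports the wrong answer with complete confidence.

    print(0.1 * 0.1)         # 0.010000000000000002, not 0.01
    print(0.1 * 3)           # 0.30000000000000004, not 0.3
    print(0.1 * 3 == 0.3)    # False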



