LLMs generate 'fluent nonsense' when reasoning outside their training zone (venturebeat.com)
8 points by cintusshied 6 months ago | hide | past | favorite | 2 comments


LLMs operate on numbers: they are trained on massive numerical vectors, so every request is simply a numerical transformation approximating learned patterns. Without adequate training on a given domain, their output can be completely irrational.
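A minimal sketch of the point being made, using a hypothetical toy "model" (the vocabulary, embeddings, and weight matrix here are invented for illustration, not from any real LLM): each step is only arithmetic on vectors, and the model will always emit a fluent-looking token even when its weights encode nothing meaningful about the input.

```python
import math
import random

random.seed(0)

# Hypothetical 4-token vocabulary with 3-dim embeddings (illustrative only).
VOCAB = ["the", "cat", "sat", "mat"]
EMB = {i: [random.uniform(-1, 1) for _ in range(3)] for i in range(len(VOCAB))}
# Random 3x4 projection standing in for learned weights.
W = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def softmax(xs):
    # Normalize logits into a probability distribution.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_probs(token_id):
    # One "forward pass": embed, project, normalize. Pure numeric
    # transformation -- no notion of meaning anywhere.
    v = EMB[token_id]
    logits = [sum(v[i] * W[i][j] for i in range(3)) for j in range(4)]
    return softmax(logits)

probs = next_token_probs(0)
# The argmax is always a valid-looking token, however untrained the weights.
print(VOCAB[probs.index(max(probs))])
```

Note the output is always a well-formed token from the vocabulary regardless of whether the weights encode anything useful, which is the sense in which an undertrained or out-of-distribution model still produces fluent nonsense.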


I mean, define 'reasoning'.



