LLMs generate 'fluent nonsense' when reasoning outside their training zone (venturebeat.com)
8 points by cintusshied 6 months ago | 2 comments
VivaTechnics 6 months ago
LLMs operate on numbers; they are trained on massive numerical vectors. Every request is simply a numerical transformation approximating learned patterns; without proper training, the output can be completely irrational.
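The comment's point can be made concrete with a toy sketch: a tiny, hypothetical "model" (random matrices, not any real LLM) in which predicting the next token is nothing but multiplying learned vectors and normalizing. All names and shapes here are illustrative assumptions.

```python
import numpy as np

# Toy illustration only: a "forward pass" as pure numerical transformation.
rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat"]        # hypothetical 3-token vocabulary
embed = rng.normal(size=(3, 4))      # stand-in for learned token embeddings (dim 4)
weights = rng.normal(size=(4, 3))    # stand-in for a learned projection to vocab logits

def next_token_distribution(token_id):
    h = embed[token_id] @ weights    # linear transformation of the input vector
    exp = np.exp(h - h.max())        # softmax: logits -> probabilities
    return exp / exp.sum()

p = next_token_distribution(vocab.index("cat"))
```

With untrained (random) weights, `p` is a valid probability distribution but encodes no learned pattern at all, which is the commenter's point: the arithmetic always runs; only training makes it meaningful.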
rsynnott 6 months ago
I mean, define 'reasoning'.