Language models need redundancy (it's the structure they latch onto). Not surprising, since they're trained on human language. It's hard to train a model on a high-entropy language. I haven't tried it, but I suspect LLMs would perform quite badly on languages such as APL, where meaning is packed so densely into the symbols that there's little redundancy left to exploit.
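
As a rough illustration of the "high entropy / low redundancy" point, here's a toy Python sketch that estimates unigram redundancy (1 - H/H_max over character frequencies) for an English sentence versus an equivalent APL one-liner. Both snippets are made up for illustration, and real redundancy also lives in higher-order structure, so treat the numbers as suggestive only:

    import math
    from collections import Counter

    def redundancy(text: str) -> float:
        # Redundancy = 1 - H/H_max over single characters:
        # 0 means every observed symbol is equally likely (nothing predictable);
        # values near 1 mean most of each symbol is guessable from frequencies.
        counts = Counter(text)
        total = len(text)
        h = -sum((n / total) * math.log2(n / total) for n in counts.values())
        h_max = math.log2(len(counts))  # entropy if all observed symbols were equiprobable
        return 1 - h / h_max

    # Illustrative snippets: English vs. APL "sum of the squares of 1..10".
    english = "the sum of the squares of the numbers from one to ten"
    apl = "+/(⍳10)*2"

    print(f"English: {redundancy(english):.2f}")  # > 0: repeated letters and words
    print(f"APL:     {redundancy(apl):.2f}")      # 0.00: every character appears once

Even at this crude unigram level the APL line has zero redundancy: every character occurs exactly once, so nothing about it is predictable from frequencies alone, while the English sentence gives a model plenty of repetition to learn from.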