
An LLM would complain that its internal model does not reflect its current input/output.

Since an LLM knows that people knock things off, test things, run afoul of rules, and make mistakes, it would raise that as a possibility and likely inquire about it.



This isn't prompt engineering; it's grammar-constrained decoding. The model literally cannot respond with anything but tokens that satisfy the grammar.
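For anyone unfamiliar with the technique, here is a minimal sketch of the idea. The toy vocabulary, the yes/no "grammar", and the random stand-in logits are all invented for illustration; real implementations (e.g. llama.cpp's GBNF grammars or the Outlines library) apply the same kind of logit masking against a full grammar and an actual model:

    # Toy grammar-constrained decoding: at every step, tokens that would
    # violate the grammar get their logits set to -inf, so the sampler
    # literally cannot choose them.
    import math
    import random

    VOCAB = ["y", "e", "s", "n", "o", "maybe", "<eos>"]
    TARGETS = {"yes", "no"}  # the whole "grammar": output must be exactly yes|no

    def allowed(prefix: str, token: str) -> bool:
        # A token is allowed if prefix+token is still a prefix of some valid
        # string, or <eos> if the prefix is already a complete valid string.
        if token == "<eos>":
            return prefix in TARGETS
        cand = prefix + token
        return any(t.startswith(cand) for t in TARGETS)

    def fake_logits() -> list[float]:
        # Stand-in for a language model's next-token scores.
        return [random.uniform(-2.0, 2.0) for _ in VOCAB]

    def constrained_decode(max_steps: int = 10) -> str:
        out = ""
        for _ in range(max_steps):
            logits = fake_logits()
            # Grammar mask: forbid any continuation the grammar cannot accept.
            masked = [
                logit if allowed(out, tok) else -math.inf
                for logit, tok in zip(logits, VOCAB)
            ]
            # Greedy pick for simplicity; softmax sampling works the same way.
            tok = VOCAB[max(range(len(masked)), key=lambda i: masked[i])]
            if tok == "<eos>":
                break
            out += tok
        return out

    print(constrained_decode())  # always "yes" or "no", never "maybe"

Because every disallowed token's logit is -inf before sampling, the sampler never sees it as an option: the grammar is enforced at decode time rather than merely requested in the prompt.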



