> What would a "hallucination free" LLM do with that?
To me, there’s a qualitative question of which details to include (ideally the most important ones), and a binary question of whether it included any details not in the original.
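The binary half is at least mechanically checkable. Here’s a minimal sketch of the idea (the `unsupported_details` helper and the number/capitalized-token heuristic are purely illustrative, not a real faithfulness metric): flag any numbers or proper-noun-like tokens in the summary that never appear in the source, since those are the details most likely to have been invented.

```python
import re

def unsupported_details(source: str, summary: str) -> set[str]:
    """Crude faithfulness check: numbers and capitalized tokens that
    appear in the summary but nowhere in the source. A sketch only;
    a real system would use NLI entailment or entity linking."""
    detail = re.compile(r"\b(?:\d[\d,.]*|[A-Z][a-z]+)\b")
    source_tokens = {t.lower() for t in detail.findall(source)}
    return {t for t in detail.findall(summary)
            if t.lower() not in source_tokens}

source = "The meeting on March 3 drew 40 attendees."
summary = "The March 3 meeting drew 45 attendees in Boston."
print(unsupported_details(source, summary))  # e.g. {'45', 'Boston'}
```

More serious approaches use NLI- or QA-based consistency checks instead, but the pass/fail framing is the same.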
A related issue is that preference tuning loves wordy responses, even when a shorter answer would be factually equivalent.