
I don’t think you’re intending to impute intention; it’s just an implication of the statements you made. “Making stuff up on the spot” and “bullshit generation” vs. unknowingly erring: these are all metaphors for human behaviors that differ in their backing intention. Your entire message changes if you use some form of “unknowingly erring” instead, but then you lose the rhetorical effect and your argument becomes much weaker.

> that's not even remotely true and if you've worked with these technologies at all you'd know that

I have spent a good amount of time working with LLMs, but I’d suggest that if you think humans don’t do the same thing, you might spend some more time working with them ;)

If you try, you can find really bad edge cases, but otherwise wild deviations from truth in an otherwise sober conversation with e.g. ChatGPT rarely occur. I’ve certainly seen it in older models, but I don’t think it’s come up once while working with ChatGPT. (I’m sure I could provoke it into doing this, but that kind of deflates the whole unpredictability point; I’ll concede that if I had no idea what I was doing, I could also just accidentally run into this kind of scenario once in a while and not have the sense to verify.)

> If I'm talking to a human I can make some reasonable inferences about what they might get wrong, where their biases lie, etc.

Actually, with the right background knowledge you can do a pretty good job reasoning about these things for an LLM, whereas you may be assuming you can do it better for humans in general than the reality of the situation warrants.



