I'd strongly suggest addressing the points I've made rather than resorting to a lazy dismissal like claiming I "sound like a generated account." That's a non-argument and contributes nothing.
For the record, I'm not affiliated with OpenAI. I use ChatGPT often. Many times it solves my problem; many times it fails miserably. I neither love it nor hate it. It is what it is.
My frustration stems from the sheer laziness of some posts and comments about LLMs here on HN. The ones that irritate me the most are those that expect LLMs to somehow possess magical abilities, like being able to inherently distinguish correct outputs from incorrect ones. Where does this expectation even come from? Have these people spent no time at all understanding how LLMs actually work? No? And yet they expect magic? It's baffling!
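To make that concrete, here is a toy sketch of the one step an autoregressive LLM actually performs: sampling the next token from a learned probability distribution. The Python below is purely illustrative; the hard-coded probability table and the function name are my own stand-ins for a real model's output.

    import random

    # Toy stand-in for the next-token step of an autoregressive LLM.
    # The probability table is made up for illustration; a real model
    # computes these scores with a neural network over a huge
    # vocabulary, but the sampling step works the same way.
    def sample_next(context: str) -> str:
        candidates = {"Paris": 0.90, "Lyon": 0.07, "Atlantis": 0.03}
        tokens = list(candidates)
        weights = list(candidates.values())
        # Note what is missing here: there is no lookup against any
        # source of truth. The model emits whatever is probable, and
        # "probable" is not the same thing as "correct".
        return random.choices(tokens, weights=weights)[0]

    print(sample_next("The capital of France is"))

Run it enough times and it will eventually print "Atlantis", and nothing in the loop can notice that it just did. That is the whole point: correctness is not a quantity this process has access to.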
If you have a substantial critique of the points I raised, by all means, share it. If not, please spare me the hand-waving dismissals.
Your argument is upside down. If people are insisting LLMs aren’t magic, it’s only because too many people argue that they are. Just like you’re frustrated at the repeated con arguments, other people are frustrated at the repeated pro arguments.
Your "argument" is so poorly constructed that it could easily be considered trolling and therefore feels like it has no point to engage it, thats one of the main reasons it reads as generated by OpenAI bot accounts, like just today someone realized they could ask about the time Jimmy Carter kicked a klansmen in the nuts and GPT would explain with excruciating detail how and when it went down, except this never happened, but with your "argument" we should just assume that GPT making things way too frequently its just a part of life and that its still a pretty useful tool despite misleading thousands of people in a daily basis.