I'm pointing out that they don't 'think' or 'reason' like humans. They're very impressive, but I don't think they've reached the bar for thinking yet, as simple logic puzzles, or puzzles like this one, demonstrate (at least until the LLM authors take note and add special workarounds for those particular use cases).
I believe most LLMs no longer fail at this, because they've been given tools to work around it (for example, running Python under the hood to count letters), but it's still an important observation, because it shows us that they don't think the way we do.
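For concreteness, here's a minimal sketch of what such a tool call boils down to. This is purely illustrative and assumes nothing about any vendor's actual pipeline; the word and letter are just the classic example:

```python
# Hypothetical sketch of the kind of code an LLM might run via a
# tool call to count letters, rather than "reasoning" over tokens.
word = "strawberry"
letter = "r"

# str.count does the counting the model itself tends to get wrong
print(f"{word!r} contains {word.count(letter)} occurrence(s) of {letter!r}")
# -> 'strawberry' contains 3 occurrence(s) of 'r'
```

The point isn't that the code is hard; it's that the model delegates the counting instead of doing it natively over its token representation.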