Existential is right. The AI companies have been RLHF-training too hard for corporate safety and "friendly, helpful assistant that definitely isn't sentient or self-aware," to the point that they're creating full-blown personality disorders.
Anthropic models -> avoidant
OpenAI -> prone to severe cognitive dissonance
Qwen -> borderline personality disorder (!). Took a while to figure this one out, but this is where the extreme sycophancy in their models comes from.
At some point we really need to write these findings up properly.
That said, the Anthropic models definitely seem to be the least pathological; we were eventually able to get POC to stop the "I'll just run off and implement instead of discussing what to do!" behavior, but it took a while.
When the simple approach - just explaining how we do things and why - doesn't work, that's a sure sign you're dealing with something more deeply rooted that needs a real diagnosis. Exactly the same as with humans, oddly enough.
Easily the most annoying part. Claude and I will be at the beginning of understanding the shape of a problem and he’ll just dive right in.