I generally agree. But I cautiously suspect that our brains may also be little more than that. Maybe we have no real capacity for that kind of introspection and merely demonstrate something that looks like it, just because of how sections of our brains light up in relation to other sections.
Just because neural nets aren't structured the same way at a low level as the brain doesn't mean they can't end up implementing some of the same strategies.
I don't believe AI models can become introspective unless the capability is either explicitly designed in or implicitly trained in. Explicit design is difficult because we don't really know how our own brains accomplish this feat, and we have no other examples to crib from. Implicit training is difficult because the random person on fiverr.com rating a given output during training knows essentially nothing about the model's internal state, and therefore cannot rate the output on how introspective it actually is. Moreover, extracting information about a model's actual internal state in a form humans can understand is an active area of research (which is to say we don't really know how to do it), so we couldn't provide enough feedback to train the ability to introspect even if we were trying.
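To make that training-signal problem concrete, here's a toy Python sketch. Everything in it is hypothetical (the function names and the rating heuristic are mine, not any real RLHF pipeline); the only point is the information flow: the rater's score depends on the visible text alone, so nothing about the hidden state, including whether an introspective-sounding claim is accurate, can reach the reward.

```python
# Toy illustration only: names and logic are hypothetical, not a real
# RLHF pipeline. What matters is the information flow, not the heuristic.

def human_rating(output_text: str) -> float:
    """Stand-in for a crowdworker's score: they can only see the text."""
    # A rater might reward hedged, self-aware-sounding answers...
    return 1.0 if "I'm not certain" in output_text else 0.5

def reward(output_text: str, hidden_state: list[float]) -> float:
    """The training signal assigned to this output."""
    # hidden_state never reaches the rater, so an output that *claims*
    # to describe the model's internals is scored only on how it reads,
    # never on whether the claim is true.
    return human_rating(output_text)

# The reward is identical regardless of what the internal state was:
print(reward("I'm not certain; I was attending to the date.", [0.1, 0.9]))
print(reward("I'm not certain; I was attending to the date.", [0.9, 0.1]))
```

Both calls return the same score, which is exactly why sounding introspective gets rewarded while being introspective can't be.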
I have no doubt that both of these research areas will improve and that eventually one or both problems will be solved. However, the current generation of chatbots isn't even attempting this.