It's why the "hallucination" concern is IMO not a helpful way for people to conceive of the remaining challenges. These things aren't meant to be search engines; search engines already exist, and I don't see the utility of using the model itself as one (I do understand having the model search for you and summarise what it finds, like an integrated assistant).

The model is better conceived of as the part that does the thinking, and if you want it to work reliably with some body of knowledge, that knowledge has to be made accessible to the model in some other format. We already know how to store information. What is interesting and useful about these models is not their ability to recall facts off the cuff without access to any resources; that's a party trick in humans and in AI alike. What is interesting is their ability to be given a piece of information, understand it, and use it for logical reasoning: answering questions about it, combining it with other information, and so on. That is new for a natural language interface, and it has really interesting implications for what we can build with it.
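To make the "give it the information" point concrete, here's a minimal sketch in Python of the pattern I mean. The names are all hypothetical (`build_prompt` and `call_model` are made up, and the actual model call is left as a stub for whatever client you use); the shape is the point: your own storage/retrieval supplies the facts, and the model is only asked to reason over what it's handed.

```python
# Sketch: ground the model in retrieved text rather than asking it to recall.
# `call_model` is a hypothetical stand-in for a real chat-completion client.

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that restricts the model to the supplied sources."""
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        "Answer the question using ONLY the sources below. "
        "Cite sources by number; say 'not found' if they don't cover it.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\n"
    )

def call_model(prompt: str) -> str:
    # Placeholder: swap in your actual model API call here.
    raise NotImplementedError

if __name__ == "__main__":
    docs = ["The Rosetta Stone was found in 1799 near the town of Rashid."]
    print(build_prompt("When was the Rosetta Stone discovered?", docs))
```

The instruction to answer only from the sources (and to say "not found" otherwise) is doing the real work here: it turns "recall this fact" into "reason over this text", which is the task these models are actually good at.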