I don't think Norvig thinks differently; he's just using the term "AI" differently. "AI" in the context of the OP is "strong AI" or AGI (artificial general intelligence). Dr. Norvig takes the term "AI" to also encompass "machine learning" in the sense of developing algorithms that learn within limited problem domains. When Norvig means AGI, he says it explicitly (see the video at around 7:30 to 8:30), and he also says specifically that Google is not interested in general intelligence.
Well, he's obviously much more qualified than I am to make that observation. But that doesn't automatically make him right, does it?
"AI" carries a certain level of expectation, and I don't see Google matching that level in any of its released products.
Computers, to date, do what we tell them to do. The moment that changes, we call it a bug, not a manifestation of intelligence. As long as that view persists, I would argue that we have not yet achieved "AI".