I don't know how it's developed since, but from what I remember that was how it started - the AIs weren't following the standard corner sequences (joseki) that humans had built up over centuries, and human players were thrown off by nonstandard responses that worked better than expected.
I wonder if an AI could be built to continually adapt, so that instead of playing one optimal strategy, it chooses between various suboptimal strategies. If humans train to play against the optimal strategy, then maybe the AI could do better by playing in suboptimal but less expected ways.
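The simplest version of that idea is just to sample among moves whose estimated value is within some small margin of the best move, instead of always playing the single top choice. This is a toy sketch of that, not how any real engine works; the move names, values, and parameters are all made up for illustration:

```python
import math
import random

def sample_move(move_values, temperature=1.0, margin=0.5):
    """Pick a move by softmax-sampling among moves whose estimated
    value is within `margin` points of the best move, instead of
    always playing the single top-rated move."""
    best = max(move_values.values())
    # Keep only near-optimal candidates: deviating costs at most `margin` points.
    candidates = {m: v for m, v in move_values.items() if best - v <= margin}
    # Softmax over values; higher temperature means more variety.
    weights = {m: math.exp(v / temperature) for m, v in candidates.items()}
    total = sum(weights.values())
    r = random.random() * total
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return move

# Hypothetical example: three continuations with nearly equal estimated values.
values = {"standard_joseki": 0.0, "odd_attachment": -0.2, "early_3-3": -0.4}
move = sample_move(values, temperature=0.5)
```

With a tight margin it collapses back to always playing the book move; widening the margin or raising the temperature trades a fraction of a point for unpredictability.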
This is already happening. The point cost of even large deviations is often minuscule, so such lines are worth having in one's repertoire. The same is largely true of purely human play, too: these are trick moves.