For _probability_ vectors, there is a distance-like measure called cross-entropy. It's a standard error measure in classification problems. It has some important properties that differ from the Euclidean distance (especially for low probabilities), and there is an information-theoretic interpretation.
I understand it; I just disagree with the presentation. If you're sweeping complexity under the rug, say so and provide a link to further reading. I don't think cross-entropy is so common that someone in the target audience for this course would quickly and easily see the nuance.
More on it:
- http://stats.stackexchange.com/questions/80967/qualitively-w...
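A minimal sketch of the difference (the vectors and values here are illustrative, not from the course): as the predicted probability of the true class shrinks toward zero, cross-entropy diverges, while the Euclidean distance stays bounded.

```python
import math

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i * log(q_i); asymmetric, blows up as q_i -> 0
    # for any class i with p_i > 0.
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# True distribution (one-hot) vs. two predictions: one nearly right,
# one that puts almost no mass on the true class.
p = [1.0, 0.0]
q_good = [0.99, 0.01]
q_bad = [0.01, 0.99]

print(euclidean(p, q_good), euclidean(p, q_bad))          # ~0.014 vs ~1.4
print(cross_entropy(p, q_good), cross_entropy(p, q_bad))  # ~0.010 vs ~4.6
```

The Euclidean distance is capped (here at sqrt(2)), while cross-entropy grows without bound as the true class's predicted probability approaches zero, which is why it penalizes confident wrong answers so much harder.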