This is beyond the scope of Unicode. Unicode should be about how text is displayed; if a character carries meaning that is not reflected in its display, that meaning is outside Unicode's scope.
After all, we have many uses for the letter 'a', such as a) a*b=c and b) a as in apple. Should those 'a's have different Unicode code points?
Semantic meaning comes from context, and there is no context for a Unicode code point. Trying to insert semantic meaning is both a mistake and a patently impossible task. The article points out some of the wreckage that attempts to do this have caused.
> Should those 'a's have different Unicode code points?
Unicode does have separate code points for mathematical symbols. See for instance U+1D44E MATHEMATICAL ITALIC SMALL A. They see little use in practice though, apart from people using them for funky fonts on Reddit and such.
They are used by OpenType math fonts and the systems built on them (e.g. the new equation editor in Word from 2007 onwards, and Unicode-capable TeX engines with the `unicode-math` package).
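As a quick illustration of the point above, here is a small Python sketch (using only the standard-library `unicodedata` module) showing that U+1D44E really is a distinct code point from plain 'a', while NFKC compatibility normalization folds it back to the ordinary letter:

```python
import unicodedata

plain = "a"
math_a = "\U0001D44E"  # MATHEMATICAL ITALIC SMALL A

# The two are distinct code points with distinct names
print(unicodedata.name(math_a))   # MATHEMATICAL ITALIC SMALL A
print(plain == math_a)            # False

# NFKC normalization maps the math variant back to plain 'a',
# discarding the "mathematical italic" semantics
print(unicodedata.normalize("NFKC", math_a) == plain)  # True
```

That NFKC round-trip is one reason these code points see little general-purpose use: any pipeline that applies compatibility normalization (as many search engines and identifiers do) erases the distinction.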