It's much better at vectorising code (using SSE and AVX) than MSVC and GCC, its loop-unrolling heuristics are better (it's better at working out when unrolling isn't worth it or would actually slow things down), and its maths functions are much faster than the native ones on all platforms.
In my experience writing high-end VFX software for three platforms, Linux's libc is the slowest for the standard maths functions, and Windows's is the fastest.
Intel's maths libs are often 4-5x faster - if you microbenchmark a tight loop of powf() or sin() calls, that's roughly the speedup you'll see with the Intel libs.
If building with fast floating-point maths (e.g. -ffast-math), Intel's fast versions are also quite a bit more accurate, and don't produce NaNs or Infs as often.
> Linux's libc is the slowest with normal maths functions
Do you happen to remember which one? There have been a few forks of glibc in the last few years and, if memory serves me correctly, Debian now uses eglibc.
I think the forks were done to address bloat, but I'm also curious whether they address speed.