
It's much better at vectorising code (using SSE and AVX) than MSVC and GCC, it's got better loop unrolling heuristics (it's better at working out when unrolling isn't worth it or will slow things down), and its maths functions are much faster than the native ones on all platforms.


> Its maths functions are much faster than the native ones on all platforms.

If this includes OS X, I would love to see a source.


In my experience writing high end VFX software for three platforms, Linux's libc is the slowest with normal maths functions, and Windows is the fastest.

Intel's math libs are often 4-5x faster - a microbenchmark looping over powf() or sin() calls will show that kind of speedup with the Intel libs.

If building with a fast floating-point math mode (fmath=fast), Intel's fast versions are also quite a bit more accurate, and don't produce NaNs or Infs as often.
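For reference, the usual spellings of that mode on the three compilers (the exact flag the commenter means by "fmath=fast" isn't stated, so these are my guesses at the equivalents):

```shell
gcc -O2 -ffast-math file.c        # GCC/Clang
icc -O2 -fp-model fast=2 file.c   # Intel
cl  /O2 /fp:fast file.c           # MSVC
```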


> Linux's libc is the slowest with normal maths functions

Do you happen to remember which one? There have been a few forks of glibc in the last few years and, if memory serves me correctly, Debian now uses eglibc.

I think the forks were done to address bloat, but I'm also curious whether they address speed.


Why OS X in particular?


Worth noting that auto-vectorization is new in MSVC.

http://msdn.microsoft.com/en-us/library/hh872235.aspx



