It lets you translate freshman-level math problems with the same indices. As soon as you deal with Fourier transforms, Vandermonde matrices, polynomials, or pretty much anything where the array index is related to the value in the array, zeros make more sense. People like 1-based because they're familiar with it, and that's fine, but pretending that it makes more sense because they're familiar with some math that uses 1-based is silly.
I'm not interested in the language lawyering, because yes, I know the standard provides more freedom to compilers. I just think those definitions are very universal for any real computer that would otherwise run my software. Please don't bring up Windows 3.1; that's about as relevant to most of us as a PDP-11.
And for what it's worth, using typedefs based on the above provides more readable printf strings. This is hideous:
Well, there are a few things to keep in mind. For one, `sizeof(char)` is guaranteed to be `1` - the standard simply says so. That doesn't mean that a `char` is 8 bits long, however. On certain systems, like TI DSPs, `char` is 16 bits long, and thus `sizeof(short)` is also 1. `sizeof(int)` may be either 1 or 2, I can't recall (both are standards-compliant).
All that said, POSIX requires `CHAR_BIT == 8`, which basically ensures that what you wrote is true. So if you're willing to target POSIX (or POSIX-supporting systems), then such a thing is perfectly fine.
The `stdint.h` types are still better though, IMO, because they make your intentions a lot more clear.
I haven't programmed on the PS4, but an 8-byte `int` sounds very suspect and fairly unlikely (but not impossible). That said, it wouldn't be a huge issue. `short` could be either a 2-byte or a 4-byte int (either would be standards-compliant), and `char` would presumably still be byte-sized (not doing so would be a fairly big issue to deal with).
That leaves out either the 2-byte or the 4-byte int from the standard data types, but you can gain that back by simply using a compiler attribute or compiler-defined type to allow access to it. While that sounds non-standard, it really wouldn't be that bad, because it could simply be used in `stdint.h` to expose the standard `int16_t` and `int32_t` types, which could then be used as normal.