From your comment, I can't understand why defining a special struct in C is different from defining a special type in Haskell.
I understand that many C programmers don't do it, mind you, but how is that different from a Haskell programmer not using the proper type for a certain operation?
In Haskell, there are types that you have probably heard of but not used. Functors, applicative functors, and monads. In Haskell, I could define a value and its type:
> value :: ConstantTime Int
But how do I obtain an "Int" from that, since I know it somehow relates to an integer type? The answer is: you must ask the implementation of ConstantTime for the value, and if it doesn't want to give it to you, it doesn't have to. Constructors can be one-way.
The library author could define a number of additional operations. These operations are privileged: because they are written as part of the library, they can inspect the inner data at any time. This is the "small core" that needs to be verified.
> addInt :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
> subInt :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
> mulInt :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
> divInt :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
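A minimal sketch of what that small core might look like, assuming only the names in the signatures above (the constant-time property itself would live in the audited implementations, not in the types):

```haskell
-- Sketch of the trusted "small core". The data constructor CT is
-- NOT exported, so outside this module the only way in or out is
-- through the functions listed here.
module ConstantTime
  ( ConstantTime        -- type exported, constructor hidden
  , fromInt             -- assumed entry point for plain Ints
  , addInt, subInt, mulInt, divInt
  ) where

newtype ConstantTime a = CT a

fromInt :: Int -> ConstantTime Int
fromInt = CT

-- Privileged: these can see the wrapped value because they live
-- inside the library. Real implementations would be written (and
-- audited) to run in constant time; these are stand-ins.
addInt, subInt, mulInt, divInt
  :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
addInt (CT a) (CT b) = CT (a + b)
subInt (CT a) (CT b) = CT (a - b)
mulInt (CT a) (CT b) = CT (a * b)
divInt (CT a) (CT b) = CT (a `div` b)
```

Because `CT` is absent from the export list, client code cannot pattern-match a `ConstantTime Int` apart, which is what makes the constructor one-way.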
Then to compute something like Newton's method one could write a function in terms of those primitives.
> newtonsMethod :: ConstantTime Int -> ConstantTime Int
> newtonsMethod = ...
And one can be certain that the laws that apply to `addInt` apply to `newtonsMethod`.
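To make that concrete, here is a hypothetical single Newton-Raphson step for an integer square root, written purely in terms of the core operations (plus an assumed `fromInt` for constants). Because it never unwraps its arguments, it inherits whatever guarantees `addInt` and friends provide:

```haskell
-- One Newton step for an integer square root of n, given a current
-- guess x:  x' = (x + n `div` x) `div` 2.
-- Written only with the library's privileged operations, so it
-- cannot branch on, or even see, the wrapped values.
newtonStep :: ConstantTime Int -> ConstantTime Int -> ConstantTime Int
newtonStep n x = divInt (addInt x (divInt n x)) (fromInt 2)
```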
This composition has wonderful benefits: if the library defines no branching operation, users cannot branch inside ConstantTime. To apply addition, multiplication, etc. in sequence, one writes in prefix (Polish) notation:
> square :: ConstantTime Int -> ConstantTime Int
> square input = mulInt input input
Using the multiplication operator, (*), would be illegal there. And that function, square, cannot inspect the value of input. Because it can't inspect the value, it can't make choices based on that value.
I still fail to see how this is fundamentally different from a similar approach in C (but maybe that's because I don't understand Haskell at all). In C you would define your ConstantTimeInt as a struct, and define a bunch of functions that operate on this struct. The + and * operators are likewise not defined for this struct. You'd have to use the functions the library provides.
In your example, any attempt to extend the core library is fraught with error. There is no way to safely do so and transitively trust the authorship. For example:
This is legal C! With the ConstantTimeInt structure, composing safe operations does not guarantee that the composition is itself safe!
This is illegal Haskell:
> square :: ConstantTime Int -> ConstantTime Int
> square input = do
>     randomRIO (0, 100) >>= threadDelay
>     return $ mulInt input input
In Haskell, tight, small libraries can be written using impure constructs (as these ConstantTime things likely would be) and verified logically. Then, using the type system, that kernel can be used to create an amazing variety of strongly typed systems safely from a compact representation.
With your envisioned C ConstantTime library, you would never finish writing it. If someone wanted to perform date arithmetic, you would have to implement that in your library, check it by hand, monitor it, and maintain it. You could never let your guard down on any commit that touched an ever-expanding body of code, all of which would have to be rigorously checked to handle increasingly complex scenarios.
For example, to implement a constant time big (but fixed size) integer multiplication operation, you would not be able to trust that some other author's composition of your functions is safe purely by its type.
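In the Haskell version, that trust is exactly what the types buy you. A third party can build, say, a schoolbook multi-limb multiply out of nothing but `addInt`, `mulInt`, and an assumed `fromInt`, and the signature alone tells you no limb was ever inspected (carry propagation is omitted to keep the sketch short):

```haskell
-- Schoolbook product of two little-endian limb lists, written by a
-- third party using only the library's operations. Carries are left
-- unpropagated for brevity; the point is the type: nothing here can
-- unwrap a limb, so no limb-dependent branching is possible.
mulLimbs :: [ConstantTime Int] -> [ConstantTime Int] -> [ConstantTime Int]
mulLimbs xs ys =
  [ foldr addInt (fromInt 0)
      [ mulInt x y
      | (i, x) <- zip [0 ..] xs
      , (j, y) <- zip [0 ..] ys
      , i + j == k ]
  | k <- [0 .. length xs + length ys - 2] ]
```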
OP's point is that the onus is on the library developer to not just use Int. You're treating it like the library developer is a user of another library.
I guess the point is that Haskell is more expressive and declarative, which makes this almost (though not totally) the default behaviour of Haskell library developers. And I guess the resulting code would be easier for a human to parse because of its declarative nature. It would also be easier to test due to its functional purity.
Whether that merits dumping the incredible amount of man hours that have gone into OpenSSL is debatable. I'd probably say it's worth fixing OpenSSL and trying to refactor it into a more provably sound system.
In Haskell I can validate and be sure that I'm using "the proper type" in 100% of operations - in C it's hard to verify that there is no code (intentional or accidental) that touches the inside of that struct in some way.
For example, C is full of operations (pointer arithmetic, read from an unbounded array) that may read the inside of that struct without that code mentioning that struct in any way.