The response to a subtle weakness in cryptographic software should not be to reimplement the library from scratch. That inevitably introduces far more problems than it solves.
Heartbleed is not exactly a "subtle weakness"; it is a gaping hole -- and it is a gaping hole that is only possible in languages with raw pointers and unchecked pointer arithmetic. In other words, using better languages for the next generation of crypto implementations will prevent this from happening, and then we can all get back to worrying about subtle weaknesses.
I'd rather say that the long list of overflow-related vulnerabilities is a good argument for ensuring that all crypto libraries are implemented in some kind of strict language that disallows a large class of vulnerabilities, even if it costs us some time and effort.
Seriously, in most languages bugs cause unintended behavior within the bounds of the implemented logic -- such as the recent logic bug that returned an 'okay' response even when the certificate was invalid. The fact that ordinary bugs tend to have effects like "read arbitrary memory" or "execute arbitrary code" is a very specific specialty of C and friends.
In a language appropriate for security, a bug in the heartbeat extension would break the heartbeat extension -- maybe heartbeats would stop working, maybe they would behave differently -- but it would be unable to affect the rest of the application. In a language appropriate for security, if a single module is secure (i.e., the limited number of API/method calls it exposes are secure), then bugs in all the other modules shouldn't be able to reach the insides of that module; so if your app has a tiny core that does some signing w/o exposing the key secrets, then the rest of the app can't touch it even deliberately, much less through an accidental bug.
Haskell is a particularly odd choice for a project like this, as lazy evaluation makes it difficult to reason about the runtime of various operations under various conditions.
Something with a bit more ML in its blood would be better, and for a library that pretty much means OCaml.
Edit: I do not at all mean to imply that it is impossible to implement TLS securely in Haskell. Only that there are more natural choices if one wants the advantage of a strong algebraic type system.
This is an excellent point, and I didn't mean to imply that it's impossible to implement TLS securely in Haskell (though my original comment did seem to; I plan to fix that shortly). Only that there are more natural choices that give the same primary advantage (a strong algebraic type system).
Why don't you test the library before making assumptions about its runtime characteristics? How about doing the slightest bit of research on the creator?
In cryptography, for most operations, you need to be sure that the operation takes the same amount of time for all possible inputs. Otherwise, you leak information.
The classic example of this is checking string equality with strncmp(): this takes a different amount of time depending on how similar the strings are. If one string is secret and the other is controlled by an attacker, the attacker can use a clock and multiple attempts to discover the secret.
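To make that concrete, here is a sketch of the two comparison styles in C (function names are mine, not from any particular library): the leaky version bails out at the first mismatch, while the constant-time version touches every byte and only decides at the end.

```c
#include <stddef.h>

/* Variable-time comparison, strncmp-style: returns at the first
 * mismatching byte, so the running time reveals how long a prefix
 * of the secret the attacker has guessed correctly. */
int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) return 0;   /* early exit leaks timing */
    return 1;
}

/* Constant-time comparison: always processes all n bytes, folding
 * the differences together with OR and deciding only at the end. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

Real libraries ship this as a primitive (e.g. OpenSSL's CRYPTO_memcmp) precisely so nobody reaches for strncmp() on secret data.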
Obviously this particular one isn't relevant to SSL, but there are a number of other possibilities to worry about in most languages, most obviously short-circuiting operations like boolean AND/OR. Lazy evaluation makes every operation short-circuit, so you need to worry about this in every operation.
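The short-circuit problem is easy to show even in C. In this sketch (hypothetical names, with a counter standing in for observable work the way real work shows up as elapsed time), `&&` skips the second check when the first fails, while bitwise `&` always runs both -- and lazy evaluation makes every Haskell operation behave like the `&&` case by default.

```c
/* Counter standing in for "observable work": each check that
 * actually runs increments it. */
static int checks_run = 0;

static int check(int ok) { checks_run++; return ok; }

/* With &&, a failed first check skips the second entirely, so the
 * two failure modes take observably different amounts of time. */
int accept_shortcircuit(int mac_ok, int pad_ok) {
    return check(mac_ok) && check(pad_ok);
}

/* With bitwise &, both checks always run regardless of results.
 * (Sketch only: a real implementation must also compute the flags
 * themselves in constant time.) */
int accept_branchless(int mac_ok, int pad_ok) {
    int a = check(mac_ok);
    int b = check(pad_ok);
    return a & b;
}
```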
It can be done, but it's harder than it needs to be.
So we should prefer the approach that gives us trivial attacks revealing 64K straight out of Compton over the approach that may be slightly harder to defend against timing attacks?
This is such an obvious false dichotomy that I'm sure most people will notice, but I'm pointing it out anyway.
We could use something that gives both advantages, like the OCaml I already mentioned. Or, we could take a hybrid approach, where something like Haskell generates C code that provably can't have buffer problems. Or, we could statically verify that the library is written in a known-memory-safe subset of C++. Or, we could use a language like Rust, which (once it's eventually complete) seems ideal for this sort of application.
I think the "define a strict, branchless DSL" approach is the right one, if you're going with Haskell. Then use the type system to ensure that only that stuff can touch key data. No problems with laziness or timing attacks, if the core of that is implemented correctly.
Nope. This is a common troll whenever any Haskell project is mentioned. It's despicable FUD and just plain wrong. Haskell makes it very easy and lightweight to add strictness annotations to your data structures; far, far easier than it is to add laziness to a strict-by-default language.
Not him, but I don't think the usual lazy-evaluation worries (building up a million thunks, GC hell) would be as much of a problem as the awkwardness of keeping operations from finishing too quickly and opening up timing side channels.