Seems quite advanced for an '88 educational toy. For comparison, I got a VTech SmartStart for Christmas in the early '90s, and it had just a short 14-segment display line and could mainly be used together with the accompanying paper books :-)
I imagine G.H. Hardy might put it like this: e is the unique base of a logarithm function, ln(n), such that the average distance between prime numbers less than n converges proportionally to ln(n); that is, the ratio between ln(n) and the actual average distance between primes < n converges to 1 as n goes to infinity.
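A quick numerical check of that statement (my own sketch, not from Hardy): sieve the primes below n and compare the average gap with ln(n). The convergence is quite slow, but the trend is visible.

```python
import math

def avg_prime_gap(n):
    # Sieve of Eratosthenes; average gap = n / (number of primes below n).
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return n / sum(sieve)

for n in (10**4, 10**5, 10**6):
    ratio = avg_prime_gap(n) / math.log(n)
    print(n, round(ratio, 3))  # the ratio creeps toward 1 as n grows
```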
For me, it only clicked why this particular "e" was special and useful when we got to radioactive decay in physics. Background: exp(x) was introduced as the inverse of log(x), which was defined as the area under 1/x, etc. This was already in the first year of high school (before integrals had even been introduced!). We showed how you could express other bases as e^(ax), but you could have done that with any other base, so nothing special about e there. In calculus we later learned that exp'(x) = exp(x). This made it clear that e was special, but not why it was useful.
The clear usefulness appeared in the case of radioactive decay. We first learned about half-life, and clearly you can write the decay function as N(t) = N0 * 0.5^(at) with 1/a being the half-life. But we then learned that the decay values in the data book are often not the half-life but a special "k" that you put in a decay equation with e as the base: N(t) = N0 * e^(-kt). My reaction was: why did they pick e here? Why not just use 0.5 as the base, so the book could list the half-life directly? But then came the kicker: we learned the formula dN/dt = -k*N, meaning that the activity (decays per time unit) is proportional to the amount of material, with k being the proportionality constant. We hadn't learned about differential equations yet, and I was confused about what k was, having first seen it in the decay equation and now in the activity equation, which were seemingly very different... how on earth could the same number work in both capacities? Then I read a bit ahead in the math book and it all made sense. I understood that it was because of e as the base that the constant k got this property, which it would get in no other base. So this showed me how e was special, allowing the same constant to serve in both capacities.
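The two roles of k can be checked numerically (my own illustration, with an arbitrary half-life): the half-life form and the base-e form are the same curve, and the same k is the proportionality constant between activity and amount.

```python
import math

N0 = 1000.0                   # initial amount
half_life = 5.0               # arbitrary half-life in some time unit
k = math.log(2) / half_life   # the "k" a data book would list

def N_half(t):
    return N0 * 0.5 ** (t / half_life)

def N_exp(t):
    return N0 * math.exp(-k * t)

t = 7.3
assert abs(N_half(t) - N_exp(t)) < 1e-9   # same curve, different base

# Activity = -dN/dt: estimate the derivative numerically and compare to k*N.
h = 1e-6
activity = -(N_exp(t + h) - N_exp(t - h)) / (2 * h)
assert abs(activity - k * N_exp(t)) < 1e-5  # the same k works in both roles
```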
Agreed. While manufacturers do show an impressive increase in the number of noisy qubits, we have yet to see a demonstration of quantum error correction at anywhere near the levels needed to pull off a QC that breaks e.g. RSA.
The proposal of this paper is that Grover's doesn't mean we need to double the size of symmetric keys to account for quantum attacks; rather, just a few extra bits are enough to prevent the attack from being efficient.
Of course, there may be new quantum algorithms that don't suffer from this, and, again, it's best to start preparing now rather than later. But I think it's noteworthy for the discussion about quantum preparedness.
This conversion O(2^n) -> O(2^(n/2)) only applies to Grover's algorithm, which can be used to break symmetric crypto (AES etc.). That is not the usual concern in the context of quantum computers, because it just corresponds to halving the key space (in bits), which you can protect against by doubling the key size. So use AES-256 instead of AES-128. And the benefit from Grover's can only be had once. And as the paper you link to points out, maybe even that benefit is in question.
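To make the halving concrete, here's a back-of-envelope sketch (my own numbers): Grover needs on the order of sqrt(2^n) iterations over a 2^n key space, so the effective security level in bits is cut in half.

```python
import math

def grover_iterations(n_bits):
    # Optimal Grover iteration count over a search space of 2^n keys
    # is about (pi/4) * sqrt(2^n), i.e. on the order of 2^(n/2).
    return (math.pi / 4) * math.sqrt(2 ** n_bits)

# Effective security in bits is halved:
assert math.isclose(math.log2(grover_iterations(128)), 64, abs_tol=1)
assert math.isclose(math.log2(grover_iterations(256)), 128, abs_tol=1)
```

Which is why moving from AES-128 to AES-256 restores the ~128-bit level against this attack.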
However, the main concern with quantum computers vs cryptography is Shor's algorithm, which works on RSA and ECC. Shor's has the potential to take the problem from exponential (or at least close to exponential but sub-exponential, in the case of RSA) to polynomial O(n^3), which is much more powerful (essentially taking a log of the running time!). That is still assuming, of course, that workable quantum computers of the required size will appear and that all the associated problems (noisy qubits etc.) are solved. Other challenges could also show up: difficulties scaling to longer computations, more passes through quantum gates, larger superpositions, or even that the quantum laws governing e.g. probability amplitudes don't hold with full accuracy at such scales.
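To put rough numbers on "taking a log of the running time" (my own back-of-envelope sketch using the standard heuristic GNFS exponent, so treat the constants as approximate, not a precise cost model):

```python
import math

def gnfs_bits(n_bits):
    # Heuristic classical GNFS cost for factoring an n-bit modulus N,
    # in bits: log2(exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))).
    ln_n = n_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)

def shor_bits(n_bits):
    # Shor's algorithm needs on the order of n^3 quantum gates;
    # in bits that's just 3 * log2(n).
    return 3 * math.log2(n_bits)

# For RSA-2048: classically on the order of 2^100+ operations,
# versus roughly 2^33 quantum gates.
assert gnfs_bits(2048) > 100
assert shor_bits(2048) < 40
```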
Yeah, my point was more that there's a lot of "this QM version of the algorithm destroys that crypto scheme" when it may be a lot more nuanced. I used Grover's as an example of that - on its face Grover's implies that you need to double the key size to be safe but there's reason to believe that that's not true in practice. I assume that may be the case with Shor's as well.
The best thing to do is to layer them, as is standard.
Many-worlds etc. does not have such a limit. But there are interpretations that posit that collapse is triggered by the system reaching a certain size (GRW theory etc.). Such interpretations would preclude big quantum computers. Note that these aren't purely interpretations, because they actually posit measurable new physical phenomena. But since the relevant experiments can't currently be carried out, and since they do have significant interpretational consequences, they are often called interpretations anyway.
True - on the other hand, that fear/principle could be applied to any algorithm that could theoretically be broken in the future. There's a lot of uncertainty concerning the scalability of quantum computers and quantum error correction. In the near future it seems more likely that the post-quantum algorithms will be broken (even on normal computers) than that quantum computers will crack RSA/ECC (so one should at least combine the two to get the best of both worlds).
That algorithm was never chosen as a final candidate, unlike Dilithium and Kyber. Nevertheless, it remains intriguing because it advanced far in the competition before this vulnerability was identified, one that allows it to be cracked in just a few minutes on a standard computer. This underscores the inherent risk of rapidly introducing new algorithms, post-quantum or not. While RSA might become vulnerable to future quantum computers, its resilience since its public introduction in 1977 (aside from the need to increase key sizes) is quite an achievement. This is why new algorithms should always be paired with trusted classical algorithms to get the best of both worlds: if the new post-quantum component is flawed, at least you're no worse off than if you had used classical algorithms alone. On the other hand, if quantum computers capable of breaking practical sizes of RSA or ECC emerge, there's still the hope that the post-quantum element remains intact.
Isn't it possible to decouple the two components, that is, break the post-quantum one on a classical computer and the classical one on a quantum computer, essentially making the combination worthless?
Yes, absolutely: if both security elements fail (quantum computers that break classical crypto appear, and the supposed post-quantum element turns out to be insecure as well), then you're screwed. By combining you get a chain that is as strong as its strongest link - but not stronger! The motivation for combining is to avoid a scenario where you start using a new post-quantum algorithm that turns out to be really insecure (as happened to SIKE), leaving you actually worse off.
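The layering idea can be sketched like this (my own illustration; real protocols use a proper KDF such as HKDF with labels and context, and the input byte strings here are just placeholders):

```python
import hashlib

def combine_secrets(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # Derive the session key from BOTH shared secrets, so an attacker
    # must break both the classical and the post-quantum component.
    return hashlib.sha256(b"hybrid-kdf|" + classical_ss + b"|" + pq_ss).digest()

k1 = combine_secrets(b"ecdh-secret", b"kem-secret")
k2 = combine_secrets(b"ecdh-secret", b"attacker-guess")
assert k1 != k2       # recovering the key requires both inputs
assert len(k1) == 32
```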
Hello everyone! I'm thrilled to see my project trending here on Hacker News. It's a pure toy implementation inspired by the paper and its reference. While it passes all the provided test cases, I wrote it primarily for fun and to see it work seamlessly with the standard JCE interfaces. If you have any questions or feedback, please don't hesitate to ask. Thanks for checking it out! Best regards, the author. :-)
I'm not aware of any production-grade libraries for these algorithms, but they might exist. While NIST did pick Dilithium among the winners in summer 2022, it still hasn't been fully standardized. The mathematical principles are final, of course, but they still need to be documented in the form of a standard, with many other details specified, such as the ASN.1/binary encodings of keys, signatures, etc., so they can be used in the context of a broader PKI and certificates. Some of this is specified in the Dilithium submission (primarily because some of the values need to be encoded and processed as part of the algorithms themselves), but it doesn't cover everything and doesn't specify other details like OIDs, etc. This specification is also necessary before validation programs like FIPS 140-2 can get off the ground.