> But rarely in these discussions will you find relevant mathematical considerations. If the goal is to compute a Fibonacci number or a factorial, the proper solution is not a recursive function, but rather knowledge of mathematics.
If you want precise results for large sizes, then the integer-arithmetic recurrence is faster than computing the square root of 5 to very high precision and raising it to the nth power.
The author’s “proper solution” might better be named the “naive undergraduate introductory linear algebra student solution”. Or maybe the “18th-century analytic solution, ignorant of computer number formats”.
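To make "precise results for large sizes" concrete, here is a minimal sketch of the integer-arithmetic recurrence (my own code, not from the article), using BigInt so there is no floating-point rounding at any size:

```javascript
// Exact Fibonacci via the integer-arithmetic recurrence, using BigInt
// so there is no floating-point rounding at any size.
// Costs O(n) big-integer additions.
function fib(n) {
  let a = 0n, b = 1n;      // a = F(0), b = F(1)
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];   // advance the recurrence one step
  }
  return a;                // a = F(n)
}
```

No choice of floating-point precision to get wrong: fib(100) is exactly 354224848179261915075n.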
> They run in constant [...] time
If we are going to artificially limit ourselves to numbers that fit exactly in e.g. a double-precision float, then the integer algorithms might just as well be considered “constant time”, where the constant is simply whatever it takes to compute the largest value we are considering.
Or alternately we can just precompute and cache the entire set of them if we want. There are only 78.
I want to make the same objections (among others), but... interestingly, all of these actually reinforce the author's overarching point, which is less that everyone should be using closed forms for Fibonacci and factorial computations, and more that an analytical understanding of both the conceptual model and the computing implementation often matters more than it is generally given credit for. You're able to make the critical evaluation of closed-form solutions using doubles vs. other options precisely because you're practicing the approach he's recommending (albeit, apparently, a little bit better!).
You need to know a lot more about computers than about mathematics to know that floating-point numbers are nothing like real numbers, and that machine-implemented integers aren't either.
Not necessarily. Error analysis with floating-point numbers is closely tied to the mathematics; there is no other way to do it. You can use several heuristics, like avoiding catastrophic cancellation or using better summation algorithms, to improve error bounds, and there is even an automatic approach like Herbie [1], but ultimately you can't be as sure without error analysis. (Note that interval arithmetic or unum or posit or whatever are crude but useful approximations to the proper error analysis.)
> Note that interval arithmetic or unum or posit or whatever are crude but useful approximations to the proper error analysis.
"Crude but useful approximations" is such a strange view. The purpose of machine-controlled error arithmetic is to give the programmer a powerful language construct that enables evaluation of a function or algorithm to an arbitrarily small margin of error, using automatically revised accuracy in the intermediate steps. Error analysis is a mathematical subject that gives us better algorithms. One needs both to hit the desired margin of error with certainty in as little time as possible.
> a powerful language construct that enables evaluation of a function or algorithm to an arbitrarily small margin of error, using automatically revised accuracy in the intermediate steps.
AIUI, this is what "constructive (or 'exact') real numbers" give you, and it's quite computationally intensive. Error arithmetic/interval arithmetic/etc. are only approximations to it. In many cases people don't even bother with the real deal because they're computing way more than one single result, so roundoff errors can be thought of as just one additional source of noise that gets averaged out over many 'steps'.
I agree, interval arithmetic/unums/$other_fancy_number_concept is just a start, not the end, of getting reliable results. However, the roundoff errors the FPU generates are hardly controllable. In general they are not just simple noise that cancels out on averaging. And some calculations don't do any averaging at all.
Moreover, the C implementation quoted in the article overflows to a negative -1323752223 after the 46th number (1836311903). So while I understand the author's point, I think this particular example is not a really good choice.
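The wraparound is easy to reproduce even without a C compiler; a sketch that mimics 32-bit signed `int` arithmetic in JavaScript via `| 0` (the function name `fib32` is mine):

```javascript
// Emulate C's signed 32-bit int arithmetic: `| 0` truncates a JS number
// to a signed 32-bit integer, so the sum wraps around past 2^31 - 1.
function fib32(n) {
  let a = 1, b = 1;             // F(1) = F(2) = 1
  for (let i = 2; i < n; i++) {
    [a, b] = [b, (a + b) | 0];  // overflows exactly like int in C
  }
  return b;
}
```

fib32(46) is still the correct 1836311903, but fib32(47) wraps to -1323752223, matching the figure above.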
AIUI, you don't need to "compute" sqrt5 to high precision in order to use that closed form! Just "adjoin" it to Q and compute symbolically over Q(sqrt5).
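For Fibonacci specifically that works out very neatly. A sketch of my own (not from the article): represent elements of Z[phi], phi = (1 + sqrt5)/2, as integer pairs, reduce products with phi^2 = phi + 1, and read F(n) off of phi^n = F(n-1) + F(n)*phi. No square root is ever evaluated numerically.

```javascript
// Symbolic arithmetic in Z[phi], where phi = (1 + sqrt5)/2 and phi^2 = phi + 1.
// An element a + b*phi is stored as the BigInt pair [a, b].
// (a + b*phi)(c + d*phi) = ac + (ad + bc)*phi + bd*(phi + 1)
//                        = (ac + bd) + (ad + bc + bd)*phi
function mul([a, b], [c, d]) {
  return [a * c + b * d, a * d + b * c + b * d];
}

// phi^n by square-and-multiply; since phi^n = F(n-1) + F(n)*phi,
// the phi-coefficient of the result is exactly F(n).
function fib(n) {
  let result = [1n, 0n];                    // 1 = phi^0
  let base = [0n, 1n];                      // phi
  for (; n > 0; n = Math.floor(n / 2)) {
    if (n % 2 === 1) result = mul(result, base);
    base = mul(base, base);
  }
  return result[1];
}
```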
> If you want precise results for large sizes, then the integer-arithmetic recurrence is faster than computing the square root of 5 to very high precision and raising it to the nth power.
Know what's even faster? Exponentiating a 2x2 matrix of integer coefficients. And again, that's what a bit of math tells you, so I think the point of the author holds.
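A sketch of that idea (my own code, with BigInt so large indices stay exact): [[1,1],[1,0]] raised to the n-th power is [[F(n+1), F(n)], [F(n), F(n-1)]], and square-and-multiply gets there in O(log n) matrix multiplications.

```javascript
// 2x2 matrices stored row-major as [a, b, c, d] = [[a, b], [c, d]].
function matmul([a, b, c, d], [e, f, g, h]) {
  return [a * e + b * g, a * f + b * h, c * e + d * g, c * f + d * h];
}

// [[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so exponentiate
// by squaring and read F(n) out of the off-diagonal entry.
function fib(n) {
  let result = [1n, 0n, 0n, 1n];            // identity matrix
  let base = [1n, 1n, 1n, 0n];              // [[1,1],[1,0]]
  for (; n > 0; n = Math.floor(n / 2)) {
    if (n % 2 === 1) result = matmul(result, base);
    base = matmul(base, base);
  }
  return result[1];                          // the F(n) entry
}
```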
Then I might have misunderstood what you meant by "integer-arithmetic recurrence"; my understanding was that it meant the traditional recursive implementation of Fibonacci, which has much worse complexity than matrix exponentiation and is not the same algorithm at all.
There are several ways we might try to optimize this calculation. Really depends on the context/constraints.
Since the recurrence is always the same, we can e.g. use the identities F_{m+n-1} = F_mF_n + F_{m-1}F_{n-1} and F_{m+n} = F_mF_n + F_mF_{n-1} + F_{m-1}F_n to precompute and cache all of the terms for 2^{n-1} and 2^n, and then multiply the appropriate ones together to find whatever value we need, based on the binary expansion of the index of the Fibonacci number we want.
This is the same idea as fast matrix exponentiation but saves low-level arithmetic and half of the space, because two of the terms in the matrix are basically redundant.
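Concretely, setting m = n in those identities gives the fast-doubling recurrences F(2n-1) = F(n)^2 + F(n-1)^2 and F(2n) = F(n)*(F(n) + 2F(n-1)). A sketch of my own, using BigInt:

```javascript
// Fast doubling: fibPair(n) returns [F(n-1), F(n)], using
//   F(2k-1) = F(k)^2 + F(k-1)^2
//   F(2k)   = F(k) * (F(k) + 2*F(k-1))
// O(log n) steps, carrying two values instead of a 2x2 matrix.
function fibPair(n) {
  if (n === 0) return [1n, 0n];               // [F(-1), F(0)] = [1, 0]
  const [a, b] = fibPair(Math.floor(n / 2));  // a = F(k-1), b = F(k)
  const odd  = b * b + a * a;                 // F(2k-1)
  const even = b * (b + 2n * a);              // F(2k)
  return n % 2 === 0
    ? [odd, even]                             // n = 2k
    : [even, odd + even];                     // n = 2k+1: F(2k+1) = F(2k-1) + F(2k)
}

function fib(n) {
  return fibPair(n)[1];
}
```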
I am wondering if all three methods (the sqrt(5) method, 2x2 matrix exponentiation, and jacobolus's power-of-two method) take approximately the same amount of time: slightly less than O(log(F_n) * log(F_n)^2), using Fürer's algorithm for integer multiplication.
> Or alternately we can just precompute and cache the entire set of them if we want. There are only 78.
Here’s a JavaScript implementation:
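(The original code wasn't preserved here; what follows is my own sketch of the precompute-and-cache idea. F(78) = 8944394323791464 is the last Fibonacci number below 2^53, i.e. the last one a double represents exactly.)

```javascript
// Precompute every Fibonacci number a double can hold exactly.
// Number.MAX_SAFE_INTEGER is 2^53 - 1; the table ends up holding
// F(0) through F(78), after which results stop being exact.
const FIB = [0, 1];
for (;;) {
  const next = FIB[FIB.length - 1] + FIB[FIB.length - 2];
  if (next > Number.MAX_SAFE_INTEGER) break;  // no longer exact: stop
  FIB.push(next);
}

// "Constant time": just a table lookup.
function fib(n) {
  if (n < 0 || n >= FIB.length) {
    throw new RangeError(`F(${n}) does not fit exactly in a double`);
  }
  return FIB[n];
}
```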