I've read this article twice now and I'm confused about its intended audience.
If it's meant to be read by programmers, it's basically all wrong. Per IEEE 754, dividing any nonzero number by ±0.0 gives you ±Infinity (and 0.0/0.0 gives NaN) -- so I'm not exactly sure how that's "ok." Also, the idea that it's "important to fit things into some relatively small amount of digits" because of "the speed of light" is just bizarre. It's like saying the reason we build bridges is because of gravity. I mean, yeah, I guess? On the other hand, if it's for laypeople, it does a terrible job of explaining much of anything. You'd need prior knowledge of floating point numbers, scientific notation, division by zero, etc.
Are people actually reading it or just upvoting on a whim?
That's handwaving the "OKness of dividing by zero" – yeah, computers can only do limited-precision arithmetic performantly. But by allowing overflow to infinity, we are essentially making the relative error infinite instead of bounded. After reading about Posit arithmetic ( https://www.posithub.org/docs/Posits4.pdf ) I think that IEEE 754 under- and overflows and multiple representations of zero are a design mistake.
They round to non-infinite values (the maximum and minimum representable finite values). This is not ideal, of course, but the relative error stays bounded, unlike with rounding to infinity.
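To make the contrast concrete, here is a toy sketch in Python: plain IEEE 754 doubles round overflow to infinity, while a hand-rolled saturating multiply clamps to the largest finite value instead. This is only an illustration of the bounded-relative-error idea, not actual posit arithmetic.

```python
import math
import sys

MAX = sys.float_info.max  # largest finite double, ~1.8e308

# IEEE 754 default behavior: overflow rounds to infinity,
# making the relative error of the result infinite.
assert math.isinf(MAX * 2.0)

def saturating_mul(a: float, b: float) -> float:
    """Toy saturating multiply: clamp overflow to the largest
    finite value, keeping relative error bounded. Not a posit."""
    r = a * b
    if math.isinf(r) and not (math.isinf(a) or math.isinf(b)):
        return math.copysign(MAX, r)
    return r

assert saturating_mul(MAX, 2.0) == MAX
assert saturating_mul(-MAX, 2.0) == -MAX
```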
Additionally, they have a neat encoding that makes the size of the mantissa and exponent fields variable. (The whole bitsize of the number is still fixed at 32 or 64 bits.) This gives "tapered" accuracy at super large and extra tiny numbers, which allows for more gradual under- and overflow, and buys more accuracy where most of the calculations tend to be done, around 1.
This article has a speed-of-light theme, so we may as well talk about it.
> How far do you think light travels in one nanosecond given the optimal conditions? In one nanosecond, the light travels only about 30 centimeters. This means that if you want something computed in under 1 nanosecond, the signal starting your computation has to go through all the transistors that do the computation by the path that is only 30 centimeters in length.
In real circuits, light travels in a medium, not in free space or air. The speed of light in a medium depends on the relative dielectric constant K of the insulator material. For standard SiO2, K is approximately 4, giving a speed of 0.5c, or only 15 centimeters per nanosecond. (Coincidentally, a typical circuit board has roughly the same dielectric constant, and even coax, CAT-6, and fiber-optic cables are not far off, so 15 cm/ns is a useful number to remember.)
In the history of process node improvements, various doping techniques have been developed to decrease (or increase) K for better performance. The primary motivation for lowering K is not increasing the speed of light but reducing parasitic capacitance: charging and discharging a capacitor creates a much longer delay than the theoretical propagation delay at c / sqrt(K).
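The c / sqrt(K) figure is easy to check with a few lines of Python (a quick sanity calculation, nothing more):

```python
import math

C_CM_PER_NS = 29.9792458  # speed of light in vacuum, cm per nanosecond

def propagation_speed(K: float) -> float:
    """Signal speed in a medium with relative dielectric constant K."""
    return C_CM_PER_NS / math.sqrt(K)

# SiO2, K ≈ 4: roughly 0.5c, i.e. about 15 cm per nanosecond.
assert abs(propagation_speed(4.0) - 15.0) < 0.1
```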
I really wish languages would stop following this antipattern of throwing exceptions instead of allowing normal handling of Inf and NaN values. There are some cases where I really do want to perform comparisons against a non-finite value, or do arithmetic with one. Non-finite values are part of the floating point spec, and partitioning off this part of FP functionality is non-standard behavior.
It used to be the case that nans, infs, denormals, anything that wasn't a normal float, were really really slow to process. This seems to have improved somewhat over the last decade, but it's easy to see why it soured people on the concept of propagating nans - it's basically saying "there's a value called the Slow, and it turns everything it touches Slow, and you should let the Slow propagate throughout your data." No, I don't want the Slow to propagate through my data, I want to catch it where it is created so I can fix the Slow there and stop it from becoming an issue in the first place.
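Part of why a propagating NaN is so annoying to track down, illustrated with plain IEEE doubles in Python: it poisons every result it touches, and every comparison against it is false, so it slips silently past range checks.

```python
import math

nan = float("nan")

# A single NaN poisons every arithmetic result it touches...
assert math.isnan(nan + 1.0)
assert math.isnan(nan * 0.0)
assert math.isnan(sum([1.0, nan, 2.0]))

# ...and comparisons against NaN are always False, so it can
# slip past range checks unnoticed.
assert not (nan < 1.0)
assert not (nan > 1.0)
assert nan != nan  # the classic NaN self-inequality

# Catching it where it is created, instead of letting it spread:
def checked(x: float) -> float:
    if math.isnan(x):
        raise ValueError("NaN produced here")
    return x
```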
("Demons of Slow! Be thou bound by the secret power of Fast, whose sign is DAZ and FTZ! By the command of fesetexcept(FE_ALL_EXCEPT) I shall expose thee!")
My first math teacher taught us that division by zero was infinity. He justified it by showing division by 1, then by fractions, and how the line goes up towards infinity. Today this is seen as incorrect, but it still makes more sense to me than other divide-by-zero explanations...
Well, if you add nuance, it’s fine. The limit of a positive number divided by x, as x approaches zero from above, is infinity.
That’s almost the same thing.
Much of math just doesn’t work if you try to add infinity as just another number. Dividing by zero is left undefined because once you have that infinity, almost anything further you do with it is also going to be infinity; it breaks the way you prove things, and it breaks the idea of a number line.
There is plenty of math that deals with infinities, but it is all handled delicately.
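The "infinity as just another number breaks arithmetic" point shows up directly in IEEE 754 floats, e.g. in Python:

```python
import math

inf = math.inf

# Treating infinity as just another number breaks ordinary algebra:
assert inf + 1.0 == inf          # so a + 1 == a, true for no finite a
assert math.isnan(inf - inf)     # indeterminate, not zero
assert math.isnan(inf / inf)     # indeterminate, not one
assert math.isnan(inf * 0.0)     # indeterminate, not zero
assert inf > 1e308               # but it does order above every finite value
```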
Incidentally, this is also why +0.0 is distinct from -0.0. <+something> / +0.0 is +inf, but <+something> / -0.0 is -inf. And if you want to do a >/< comparison afterwards, you need the +/-.
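A small demonstration of signed zero in Python (one caveat: Python raises ZeroDivisionError on float division by zero rather than returning ±inf, so the round trip is shown via 1/±inf instead):

```python
import math

# -0.0 compares equal to +0.0, but the sign bit is really stored:
assert -0.0 == 0.0
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, 0.0) == 1.0

# The sign matters on the way back from infinity: 1/±inf gives ±0.0,
# so signed zero remembers which direction an underflow came from.
assert math.copysign(1.0, 1.0 / -math.inf) == -1.0

# And some functions distinguish the two zeros outright:
assert math.atan2(0.0, -0.0) == math.pi  # atan2(0.0, 0.0) is 0.0
```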