[flagged] Why is it ok to divide by 0.0? (wordsandbuttons.online)
24 points by okaleniuk on Aug 24, 2020 | 15 comments


I've read this article twice now and I'm confused about its intended audience.

If it's meant to be read by programmers, it's basically all wrong. Per IEEE 754, dividing any nonzero number by ±0.0 will give you ±Infinity (and 0.0/0.0 gives NaN) -- so I'm not exactly sure how that's "ok." Also, the idea that it's "important to fit things into some relatively small amount of digits" because of "the speed of light" is just bizarre. It's like saying the reason we build bridges is because of gravity... I mean, yeah... I guess? On the other hand, if it's for laypeople, it does a terrible job of explaining much of anything. You'd need prior knowledge of floating point numbers, scientific notation, division by zero, etc.

Are people actually reading it or just upvoting on a whim?


That's handwaving the "OKness of dividing by zero" – yeah, computers can only do limited-precision arithmetic performantly. But by allowing overflow to infinity, we are essentially making the relative error infinite instead of bounded. After reading about Posit arithmetic ( https://www.posithub.org/docs/Posits4.pdf ) I think that IEEE 754 under- and overflows and multiple representations of zero are a design mistake.


What do posits do when you run out of bits? 32 is ... not very many bits.


They round to non-infinite values (the maximum and minimum representable finite values). This is not ideal, of course, but the relative error stays bounded, unlike with rounding to infinity.

Additionally, they have a neat encoding that makes the sizes of the mantissa and exponent fields variable. (The total bit width of the number is still fixed at 32 or 64 bits, though.) This allows for "tapered" accuracy: more gradual under- and overflow at very large and very small magnitudes, and more accuracy where most calculations tend to happen, around 1.


This article has a speed-of-light theme, we may as well talk about it.

> How far do you think light travels in one nanosecond given the optimal conditions? In one nanosecond, the light travels only about 30 centimeters. This means that if you want something computed in under 1 nanosecond, the signal starting your computation has to go through all the transistors that do the computation by the path that is only 30 centimeters in length.

In real circuits, light travels in a medium, not in free space or air. The speed of light in a medium depends on the relative dielectric constant K of the insulator material. For standard SiO2, K is approximately 4, giving a speed of 0.5c, or only 15 centimeters per nanosecond. (Coincidentally, a typical circuit board has roughly the same dielectric constant, and even coax, CAT-6, or fiber-optic cables are not far off, so 15 cm/ns is a useful number to remember.)

In the history of process node improvements, various doping techniques have been developed for decreasing (or increasing) K for better performance. The primary motivation for lowering K is not increasing the speed of light in the material, but reducing parasitic capacitance: charging and discharging a capacitor creates a much longer delay than the theoretical propagation delay of c / sqrt(K).

https://en.wikipedia.org/wiki/Low-%CE%BA_dielectric

Another area of research is optical interconnects.

https://en.wikipedia.org/wiki/Optical_interconnect


I really wish languages would stop following this antipattern of throwing exceptions instead of allowing normal handling of Inf and NaN values. There are some cases where I really do want to perform comparisons against a non-finite value, or do arithmetic with one. Non-finite values are part of the floating point spec, and partitioning off this part of FP functionality is non-standard behavior.


It used to be the case that nans, infs, denormals, anything that wasn't a normal float, were really really slow to process. This seems to have improved somewhat over the last decade, but it's easy to see why it soured people on the concept of propagating nans - it's basically saying "there's a value called the Slow, and it turns everything it touches Slow, and you should let the Slow propagate throughout your data." No, I don't want the Slow to propagate through my data, I want to catch it where it is created so I can fix the Slow there and stop it from becoming an issue in the first place.

("Demons of Slow! Be thou bound by the secret power of Fast, whose sign is DAZ and FTZ! By the command of fesetexcept(FE_ALL_EXCEPT) I shall expose thee!")


My first math teacher taught us that division by zero was infinity. He justified it by showing division by 1, then by fractions, and how the curve goes up towards infinity. Today this is seen as incorrect, but it still makes more sense to me than other divide-by-zero explanations...


Well, if you add nuance, it’s fine. The limit of a positive number divided by x, as x approaches zero from above, is infinity.

That’s almost the same thing.

Much of math just doesn’t work if you try to add infinity as just another number. You say dividing by zero is undefined because once you have that infinity, anything further you do with it is also going to be infinity; it breaks the way you prove things, and it breaks the idea of a number line.

There is plenty of math that deals with infinities, but it is all handled delicately.


Sounds nice at first, but if you poke some more you run into inconsistencies. This 9 minute video by Eddie Woo shows some.

https://www.youtube.com/watch?v=J2z5uzqxJNU

Spoiler: A major one goes like this. Suppose X/0 is infinity. Then:

1/0 = infinity

2/0 = infinity

Multiply each equation through by zero and you get 1 = infinity × 0 = 2. Therefore, 1 = 2.


I think this shows something similar to this:

https://en.wikipedia.org/wiki/Monty_Hall_problem

1 = 2 can't be evaluated in isolation, because in this case it's 1 = 2 only AFTER assuming x/0 = infinity?


But why not minus infinity? The same argument holds if you approach zero from the negative side: dividing by -1, then -1/n.


Incidentally, this is also why +0.0 is distinct from -0.0. <+something> / +0.0 is +inf, but <+something> / -0.0 is -inf. And if you want to do a >/< comparison afterwards, you need the +/-.


What happens when you divide 0 by 0 though?


NaN?

I have gotten "NaN" in dialog boxes where it was really inappropriate, on occasion.



