Nobody uses "floating point number" to mean "a number that could theoretically exist in a floating point system I will invent for you after you tell me the number". That's not being pedantic, that's insisting on a wrong definition.
Nobody uses "floating point number" to mean "a number that could theoretically exist in a floating point system I will invent
Put a full stop right there: oh yes, people do mean that! Most of the time, "floating point number" refers to a bit representation in a specific system such as IEEE 754.
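To make the "bit representation" point concrete, here is a small sketch (Python assumed; the helper name `float64_bits` is mine) that exposes the IEEE 754 binary64 pattern behind an ordinary Python float:

```python
import struct

def float64_bits(x: float) -> str:
    """Return the IEEE 754 binary64 bit pattern of x as a 64-character string."""
    (packed,) = struct.unpack(">Q", struct.pack(">d", x))
    return format(packed, "064b")

bits = float64_bits(1.5)
# Layout: sign (1 bit) | biased exponent (11 bits) | mantissa (52 bits)
print(bits[0], bits[1:12], bits[12:], sep=" | ")
# → 0 | 01111111111 | 1000000000000000000000000000000000000000000000000000
```

When someone says "1.5 is a floating point number," this fixed 64-bit pattern is usually what their mental model bottoms out in, not an abstract real.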
a floating point system I will invent for you after you tell me the number"
The above part of the sentence isn't a definition of "floating point number." It's how I use the common notion of "floating point number" in my argument. Your objection only looks like it works because you conflate the two concepts. If you don't conflate them, then you are arguing that most low-level programming that deals with 32-bit floats doesn't actually deal with "floating point numbers." That is a reasonable assertion for a reasonable set of definitions. However, it's not the one I'm using, and mine also matches the mental models most people actually have.
In terms of program correctness, there are many examples where reasoning with an abstract concept of "floating point" instead of the concrete representation produces errors. So why is the abstract notion "the correct one," as you say?
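A standard illustration of the kind of error meant here (my example, not the commenter's): if you reason about floats as abstract reals, `0.1 + 0.2 == 0.3` should be true, but the concrete IEEE 754 representation says otherwise:

```python
import math

a = 0.1 + 0.2
# None of 0.1, 0.2, or 0.3 is exactly representable in binary floating point,
# so the abstract-reals prediction fails:
print(a == 0.3)  # → False
print(a)         # → 0.30000000000000004

# Correctness-minded code compares with a tolerance instead of exact equality:
print(math.isclose(a, 0.3))  # → True
```

The bug only exists at the level of the concrete representation, which is the point: the representation, not the abstraction, decides program behavior.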
I can't parse what you're saying. What two concepts are you accusing me of conflating?
There are many different floating point systems, but they all share certain attributes. None of them can exactly represent sqrt([insert large prime]). If you want to invent such systems after the fact, you are abusing the term. The union of all systems that can reasonably be called "floating point" is not the set of real numbers; as a very loose upper bound, it's countable.
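The sqrt claim can be checked directly (a sketch; the prime `1_000_003` is my arbitrary choice, and any non-perfect-square would do): every finite float is a rational number, while the square root of a non-square integer is irrational, so the nearest double squared can never land exactly on the prime.

```python
from fractions import Fraction

p = 1_000_003                 # a large prime (arbitrary; any non-square integer works)
approx = p ** 0.5             # nearest IEEE 754 double to sqrt(p)

# Fraction(float) recovers the float's exact rational value, so this
# comparison is exact, with no further rounding involved:
exact_square = Fraction(approx) ** 2
print(exact_square == p)      # → False: the double's square misses p exactly
```

The same argument applies to any finite-precision radix system, which is why no "floating point system" invented after the fact can contain sqrt(p).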