For folks developing or analyzing new applied math techniques (for solving differential equations, function approximation, or whatever), it is helpful to prove formal results about their behavior, bound their error, and so on, and from the papers I have looked at those proofs are often (usually?) built on top of measure-theoretic models.
It might be possible to develop alternative proofs using purely finite/approximate mathematics, but for a working applied mathematician who has already gone through the standard math grad school curriculum, that is probably more trouble than it’s worth.
The users of those mathematical tools (whether software implementors or people just calling some software library) usually don’t need to care about the details of the proofs.
This is similar for other kinds of science/engineering.
> The users of those mathematical tools (whether software implementors or people just calling some software library) usually don’t need to care about the details of the proofs
Oh for sure, and perhaps this is just a confusion of terms, but I think that's what the thread parent meant by "applied statistics". In academia, "applied math/statistics" can mean "I'm doing theoretical math with an eye towards applications, but it still requires heavy mathematical machinery", but it can also mean "I'm using mathematical tools to solve empirical problems, and I'm never going to need to worry about Lebesgue measures".