Not only for audio, but for everything that doesn’t have to be an exact base 10 representation (like money). Anything that represents something “analog” or “measured” is perfectly fine to store in a float: temperature, humidity, wind speed, car velocity, rocket acceleration, etc. Calculations with floats are perfectly accurate, and given the same bit length floats are as accurate as decimal types. The only thing they can’t do is exactly(!) represent base 10 decimals, but for a very large number of applications that doesn’t matter.
That’s not really true, and it depends on what you mean. If your decimal datatype has the same number of bits, it’s not more accurate than base 2 floats. This is often hidden because many decimal implementations aren’t 64 bit but 128 bit or more. What decimal types can do is exactly represent base 10 numbers, which is not a requirement for a lot of applications.
You can use floats everywhere you don’t need numbers to be exactly base 10. With base 2 floats the operations couldn’t be more accurate given the limit of 64 bits. But if you write f64 x = 0.1;
and assume that the computer somehow stored 0.1 inside x, you have already made a wrong assumption. 0.1 can’t be converted into a float exactly because it is periodic in base 2. A very, very pedantic compiler wouldn’t even let you compile that and would force you to pick a value that actually can be represented.
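To see this for yourself, here is a minimal sketch (written in Rust only because the snippet above uses the f64 spelling; any language with IEEE 754 doubles behaves the same) that prints the value actually stored when you write 0.1:

    fn main() {
        // The literal 0.1 is rounded to the nearest representable f64.
        let x: f64 = 0.1;
        // Printing with enough digits reveals the value that was really stored:
        println!("{:.55}", x);
        // 0.1000000000000000055511151231257827021181583404541015625
    }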
Down the rabbit hole: https://zeta.one/floats-are-not-inaccurate/
But that’s not because floats are inaccurate. A very, very pedantic compiler wouldn’t even let you write f64 x = 0.1;
because 0.1 (and also 0.2 and 0.3) can’t be converted to a float exactly (note that 0.5, 0.25, 0.125, etc. can be stored exactly!).
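A quick way to convince yourself of that, again sketched in Rust and assuming IEEE 754 f64: sums and products of negative powers of two are exact, while the base 10 classics are not.

    fn main() {
        // 0.5, 0.25 and 0.125 are finite fractions in base 2, so they are stored
        // exactly and these comparisons need no tolerance at all.
        assert_eq!(0.5_f64 + 0.25, 0.75);
        assert_eq!(0.125_f64 * 8.0, 1.0);
        // 0.1, 0.2 and 0.3 are repeating fractions in base 2, so their nearest
        // representable neighbours are stored instead, and the sum misses 0.3.
        assert_ne!(0.1_f64 + 0.2, 0.3);
        println!("all assertions hold");
    }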
The moment you write f64 x = 0.1;
and expect the computer to store exactly that inside a float, you have already made a wrong assumption. What the computer actually stores is the float value that is as close as possible to 0.1. That is not because floats are inaccurate, but because floats are base 2. Note that floating point types in general don’t have to be base 2: they can be any base (decimal types, for example, are base 10), but the common IEEE 754 binary formats are base 2 because that allows for simpler hardware implementations.
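You can make that “closest base 2 value” visible by dumping the bit pattern. A small sketch (Rust again, treating f64 as the IEEE 754 binary64 format):

    fn main() {
        let x: f64 = 0.1;
        // IEEE 754 binary64 layout: 1 sign bit | 11 exponent bits | 52 significand bits
        println!("{:064b}", x.to_bits());
        // 0011111110111001100110011001100110011001100110011001100110011010
        // The repeating 1001 pattern is the base 2 expansion of 1/10, cut off
        // after 52 significand bits and rounded up in the last place.
    }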
An even more pedantic compiler would only let you write floating point literals in binary, like 10.10110001b,
and make you do the conversion yourself, because that would make it blatantly obvious that most base 10 decimals can’t even be converted without information loss. So the “inaccuracy” is not(!) because float calculations are inaccurate but because many people wrongly assume that the base 10 literal they wrote can be stored exactly inside a float.
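If you have never seen why 0.1 is periodic in base 2, this little long-division sketch (Rust, though it is just integer arithmetic) prints the binary digits of 1/10:

    fn main() {
        // Long division of 1 by 10 in base 2: double the remainder, emit the next bit.
        let mut remainder = 1u32;
        print!("1/10 in binary = 0.");
        for _ in 0..24 {
            remainder *= 2;
            print!("{}", remainder / 10);
            remainder %= 10;
        }
        // Prints 0.000110011001100110011001... and the 0011 block repeats forever,
        // so no finite number of bits can hold it exactly.
        println!("...");
    }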
Floats are actually really accurate (ignoring some Intel FPU hardware bugs). I skipped a lot of details which you can find here: https://zeta.one/floats-are-not-inaccurate/
Equipped with that knowledge, your calculation 0.1+0.2 != 0.3
can simply be translated into: “the closest float to 0.1” + “the closest float to 0.2” is not equal to “the closest float to 0.3”. Keep in mind that the addition itself is correctly rounded on every IEEE 754 conforming implementation: it returns the float closest to the exact sum of the two stored values.
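Spelled out in code (a Rust sketch, assuming IEEE 754 f64 with round-to-nearest), that sentence looks like this:

    fn main() {
        let a: f64 = 0.1; // the closest f64 to 0.1
        let b: f64 = 0.2; // the closest f64 to 0.2
        let c: f64 = 0.3; // the closest f64 to 0.3
        println!("{:.20}", a);     // 0.10000000000000000555
        println!("{:.20}", b);     // 0.20000000000000001110
        println!("{:.20}", a + b); // 0.30000000000000004441
        println!("{:.20}", c);     // 0.29999999999999998890
        // a + b is the correctly rounded sum of the two stored values, and that
        // sum lands one representable step above the closest f64 to 0.3.
        assert!(a + b != c);
    }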
Floating point numbers and arithmetic are not inaccurate. They are actually very accurate, but a lot of developers have inaccurate assumptions about them. They can’t exactly represent base 10 decimals; that’s the only “inaccuracy”. If you take two floating point numbers and, say, add or multiply them, the result is always the closest floating point representation of the real result.
The list of misconceptions wouldn’t reasonably fit in a comment, but if you are really interested and have a few minutes you could give that a read: https://zeta.one/floats-are-not-inaccurate/
Floats are accurate. Could you name a situation (other than money) where you think floats are not accurate enough?