• dejected_warp_core@lemmy.world · 11 months ago

    There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there’s an understanding about precision and significant digits/bits. It’s a space where fixed-point would absolutely destroy performance, because you’d need as many bits as required to store your largest terms. Yes, NaN and negative zero are utter disasters in the corners of the IEEE spec, but so is trying to do math with 256-bit integers.
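
    To make that range trade-off concrete, here is a minimal Rust sketch (the Q32.32 split and the printed bounds are just an illustration I picked, not something from the comment above):

        fn main() {
            // A 64-bit fixed-point format with 32 integer and 32 fractional bits ("Q32.32"):
            let max_fixed = (i64::MAX as f64) / (1u64 << 32) as f64; // largest value, about 2.1e9
            let step_fixed = 1.0 / (1u64 << 32) as f64;              // smallest step, about 2.3e-10
            println!("Q32.32 range: about +/-{max_fixed:e}, resolution {step_fixed:e}");

            // The same 64 bits as an IEEE 754 double cover roughly 2.2e-308 .. 1.8e308
            // (even smaller with subnormals), at the cost of carrying only ~15-16
            // significant decimal digits.
            println!("f64 range: {:e} .. {:e}", f64::MIN_POSITIVE, f64::MAX);
        }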

    For a practical illustration of how stark the difference is: the original PlayStation did its 3D math in fixed point, with integer vertex coordinates (it didn’t even have a z-buffer). That’s responsible for the vertex popping/warping the platform is known for. Floating-point transform pipelines and depth buffers became the norm almost immediately after, and we’ve used them ever since.

  • Blackmist@feddit.uk · 11 months ago

    I know this is in jest, but if 0.1+0.2!=0.3 hasn’t caught you out at least once, then you haven’t even done any programming.
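
    A quick way to see it for yourself (Rust here as a sketch, but any language with IEEE 754 doubles behaves the same):

        fn main() {
            let sum = 0.1_f64 + 0.2_f64;
            println!("{}", sum == 0.3); // false
            println!("{:.17}", sum);    // 0.30000000000000004
            println!("{:.17}", 0.3);    // 0.29999999999999999
        }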

    • wischi@programming.dev · 2 days ago

      But that’s not because floats are inaccurate. A very very pedantic compiler wouldn’t even let you write f64 x = 0.1; because 0.1 (and also 0.2 and 0.3) can’t be converted to a float exactly (note that 0.5, 0.25, 0.125, etc. can be stored exactly!)

      The moment you write f64 x = 0.1; and expect the computer to store exactly that inside a float, you have already made a wrong assumption. What the computer actually stores is the float value that is as close as possible to 0.1. That is not because floats are inaccurate, but because floats are base 2. Note that floating-point types in general don’t have to be base 2; they can be any base (decimal types, for example, are base 10), but IEEE 754 floats are base 2 because it allows for simpler hardware implementations.

      An even more pedantic compiler would only let you write floating-point literals in binary, like 10.10110001b, and make you do the conversion yourself, because that would make it blatantly obvious that most base-10 decimals can’t even be converted without information loss. So the “inaccuracy” is not(!) because float calculations are inaccurate, but because many people wrongly assume that the base-10 literal they wrote can be stored exactly inside a float.

      Floats are actually really accurate (ignoring some Intel FPU hardware bugs). I skipped a lot of details which you can find here: https://zeta.one/floats-are-not-inaccurate/

      Equipped with that knowledge, your calculation 0.1+0.2 != 0.3 can simply be translated into: “the closest float to 0.1” + “the closest float to 0.2” is not equal to “the closest float to 0.3”. Keep in mind that the addition itself is correctly rounded: on every IEEE 754-conforming implementation the result is the representable value closest to the exact sum of the two operands.
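
      A short Rust sketch of that translation: printing enough digits shows the exact value each literal was actually rounded to (the 55-digit format width is just an arbitrary choice for the illustration):

          fn main() {
              // The exact double closest to each base-10 literal:
              println!("{:.55}", 0.1_f64);
              // -> 0.1000000000000000055511151231257827021181583404541015625
              println!("{:.55}", 0.2_f64);
              println!("{:.55}", 0.3_f64);
              // The correctly rounded sum of the first two doubles lands on a
              // different double than "the closest double to 0.3":
              println!("{:.55}", 0.1_f64 + 0.2_f64);
          }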

  • RustyNova@lemmy.world · 11 months ago

    Floats are only great if you deal with numbers that have no need for precision and accuracy. Want to calculate the F cost of an A* node? Floats are good enough.

    But every time I need any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits of memory.
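
    A sketch of what that looks like in Rust, assuming the third-party rust_decimal crate (my choice for the example; the thread doesn’t name a library):

        use rust_decimal::Decimal;
        use rust_decimal_macros::dec; // decimal literals stored exactly in base 10

        fn main() {
            let a: Decimal = dec!(0.1);
            let b: Decimal = dec!(0.2);
            assert_eq!(a + b, dec!(0.3)); // holds, unlike with f64
            println!("{}", a + b);        // 0.3
        }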

    • wischi@programming.dev · 2 days ago

      That’s not really true, and it depends on what you mean. If your decimal datatype has the same number of bits, it’s not more accurate than base-2 floats. This is often hidden because many decimal implementations aren’t 64-bit but 128-bit or more. But what a decimal type can do is exactly represent base-10 numbers, which is not a requirement for a lot of applications.

      You can use floats anywhere you don’t need numbers to be exactly representable in base 10. With base-2 floats the operations couldn’t be more accurate given the limit of 64 bits. But if you write f64 x = 0.1; and assume that the computer somehow stored 0.1 inside x, you’ve already made a wrong assumption. 0.1 can’t be converted into a float exactly because it’s periodic in base 2 (a repeating binary fraction). A very, very pedantic compiler wouldn’t even let you compile that and would force you to pick a value that actually can be represented.
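
      A tiny sketch of that periodicity: long division of 1 by 10 in base 2, just to show that the remainders start repeating (illustrative code, not from the comment):

          fn main() {
              // Binary long division of 1/10: after the leading zeros the digit
              // pattern "0011" repeats forever, so 0.1 has no finite base-2 form.
              let mut rem = 1u32;
              let mut bits = String::from("0.");
              for _ in 0..20 {
                  rem *= 2;
                  if rem >= 10 {
                      bits.push('1');
                      rem -= 10;
                  } else {
                      bits.push('0');
                  }
              }
              println!("{bits}"); // 0.00011001100110011001
          }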

      Down the rabbit hole: https://zeta.one/floats-are-not-inaccurate/

      • RustyNova@lemmy.world · 1 day ago

        Good and bad use-cases for floats

        Floats can be used everywhere where it doesn’t matter that you can’t store a 100% accurate base-10 representation. For example: positions and speeds in 3D games and animations, “analog” values like temperatures, the speed of a vehicle, geo positions with longitude and latitude, a person’s weight or blood pressure. In fact, if you develop games there is no way around 32-bit floats, because GPUs are f32 number-crunching beasts. Modern 3D games wouldn’t be possible without all those fast f32 calculations.

        You shouldn’t use binary floats if you need or expect accurate base-10 calculations (addition, subtraction, multiplication; note that division also introduces errors quickly even in decimal types), or for quantities that have a smallest unit that can’t be broken down, like money. If you need to handle money, just store the amount of cents as integers and only divide by 100 in your display function.

        This is exactly my point. Don’t use floats when you need to get accurate stuff, but use it when you need a “feel” for it
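
        As a tiny sketch of the cents-as-integers idea from the quoted text above (the amounts are made up for the example):

            fn main() {
                // Money as an integer number of cents: the arithmetic is exact.
                let price_cents: i64 = 19_99;
                let tip_cents: i64 = 3_50;
                let total_cents = price_cents + tip_cents;
                // Only convert to a decimal string in the display layer.
                println!("total: {}.{:02}", total_cents / 100, total_cents % 100); // total: 23.49
            }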

        • wischi@programming.dev · 1 day ago

          Don’t use floats when you need to get accurate stuff

          Floats are accurate. Could you name a situation (except money) where you think floats are not accurate enough to handle it?

  • TotalSonic@lemmy.world · 11 months ago

    Obviously floating point is of huge benefit for many audio DSP calculations, from my observations (non-programmer, just a long-time DAW user, from back in the day when fixed point with relatively low-precision accumulators was often what we had to work with, versus now, when 64-bit floating-point processing is more the rule). E.g. fixed-point equalizers can potentially lead to DC offset in the results. I don’t think peeps would be getting as close to modeling the non-linear behavior of analog processors with just fixed-point math either.

    • wischi@programming.dev · 2 days ago (edited)

      Not only for audio, but for everything that doesn’t have to have an exact base-10 representation (like money does). Anything that represents something “analog” or “measured” is perfectly fine to store in a float: temperature, humidity, wind speed, car velocity, rocket acceleration, etc. Calculations with floats are correctly rounded and, given the same bit length, as accurate as those with decimal types. The only thing they can’t do is exactly(!) represent base-10 decimals, but for a very large number of applications that doesn’t matter.

  • jabjoe@feddit.uk · 11 months ago

    As a programmer who grew up without an FPU (Archimedes/Acorn), I have never liked floats. But I thought this war had been lost a long time ago. Floats are everywhere. I’ve not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All the geometry you load in is in floats. The shaders all work in floats.

    For a while ARM MCU work avoided floats, but loads of those chips have hardware float support now.

    I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don’t like. Sometimes the market/bazaar has spoken and it’s wrong, but you still have to grudgingly go with it or everything becomes too difficult.