• areyouevenreal@lemm.ee
      5 months ago

The problem is that some people, like me, won't get that reference and will instead think AI is universally bad. A lot of people already think this way, and it's hard to know who believes what.

      • leftzero@lemmynsfw.com
        5 months ago

        The problem is that people selling LLMs keep calling them AI, and people keep buying their bullshit.

        AI isn’t necessarily bad. LLMs are.

        • areyouevenreal@lemm.ee
          5 months ago

LLMs have legitimate uses today, even if they are currently somewhat limited. In the future they will have more legitimate and illegitimate uses. The capabilities of current LLMs are often oversold, though, which leads to a lot of this resentment.

          Edit: also, LLMs very much are AI (specifically ANI, artificial narrow intelligence) and ML. It's literally a form of deep learning. It's not AGI, but nobody with half a brain ever claimed it was.

          • leftzero@lemmynsfw.com
            5 months ago

            > LLMs have legitimate uses today

            No, they don't. The only thing they can be somewhat reliable for is autocomplete, and the slight improvement in quality doesn't compensate for the massive increase in costs.

            > In the future they will have more legitimate and illegitimate uses

            No. Thanks to LLM peddlers being excessively greedy and saturating the internet with LLM-generated garbage, newly trained models will be poisoned and will only get worse with every iteration.

            > The capabilities of current LLMs are often oversold

            LLMs have only one capability: to produce the most statistically likely token after a given chain of tokens, according to their model.
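            To make that claim concrete, here is a toy sketch of the same mechanism: a tiny bigram model that, given the last token, emits the most statistically likely next token from its training counts. (This is a deliberate simplification; real LLMs condition on long contexts with neural networks, but the final step, picking a likely next token from a learned distribution, is the same idea. The corpus and function names are made up for illustration.)

```python
# Toy next-token predictor: counts which token follows which in a
# tiny corpus, then always emits the most frequent successor.
from collections import Counter, defaultdict

training = "the cat sat on the mat the cat ate the fish".split()

# Count successors for each token.
follows = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # Return the successor seen most often after `token` in training.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("the cat" appears twice, beating "mat" and "fish")
```

            Scale the corpus up to most of the internet and the counting up to billions of learned parameters, and you have the core of an LLM: it has no model of truth, only of what token tends to come next.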

            Future LLMs will still have only this capability, but since their models will have been trained on LLM-generated garbage, their output will quickly diverge from anything even remotely intelligible.