• Rivalarrival@lemmy.today
    7 hours ago

    It found that 51% of all AI answers to questions about the news were judged to have significant issues of some form.

    How good are the human answers? I mean, I expect that an AI’s error rate is currently higher than that of an “expert” in their field.

    But I’d guess the AI is quite a bit better than, say, the average Republican.

    • Balder@lemmy.world
      edited
      6 minutes ago

      I guess you don’t get the issue. You give the AI some text and ask it to summarize the key points. The AI gives you wrong info in a percentage of those summaries.

      There’s no point in comparing this to a human, since this is usually done for automation, that is, to serve a lot of people or process a large quantity of articles. At best you can compare it to the automated summaries that existed before LLMs, which might miss some of the info, but won’t make up random facts that aren’t in the article.