Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • _g_be@lemmy.world · 8 hours ago

    Could be a great idea if people could be trusted to correctly interpret things outside their scope of expertise. The parallel I’m thinking of is IT, where people will happily and repeatedly call a monitor “the computer”. Imagine telling the AI your heart hurts when it’s actually muscle spasms or indigestion.

    The value of medical professionals is not just the raw knowledge but the practice of objectively assessing and deducing symptoms, and I don’t foresee a public-facing system being able to replicate that.

    • Buddahriffic@lemmy.world · 5 hours ago

      Over time, the more common mistakes would be integrated into the decision tree. If some people feel indigestion as a headache, then “headache” would carry some probability of being caused by “indigestion”, along with questions to help the user differentiate between the two (see the sketch below).
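
      A toy sketch of that tree, since the comment describes it only abstractly: each symptom maps to candidate causes with a rough prior and a differentiating question, asked most likely first. All symptoms, probabilities, and questions below are invented for illustration, not taken from the study or any real triage system.

      ```python
      # Toy probabilistic symptom tree: each symptom maps to candidate
      # causes, a rough prior, and a question meant to tell them apart.
      # All entries are made up for illustration.
      SYMPTOM_CAUSES = {
          "headache": [
              ("tension headache", 0.70,
               "Does the pain ease with rest or an OTC painkiller?"),
              ("indigestion", 0.20,
               "Did it start after eating, with bloating or nausea?"),
              ("emergency (seek care)", 0.10,
               "Did it come on suddenly, like the worst headache of your life?"),
          ],
      }

      def triage(symptom: str) -> None:
          """Ask differentiating questions, most likely cause first."""
          for cause, prior, question in sorted(
              SYMPTOM_CAUSES.get(symptom, []), key=lambda c: -c[1]
          ):
              if input(f"{question} (y/n) ").strip().lower().startswith("y"):
                  print(f"Possible cause: {cause} (prior ~{prior:.0%}); "
                        "confirm with a clinician.")
                  return
          print("No candidate matched; escalate to a nurse or doctor.")

      if __name__ == "__main__":
          triage("headache")
      ```

      The greedy ask-in-prior-order loop is the simplest possible policy; a real system would update probabilities after each answer rather than stopping at the first “yes”.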

      And it would supplement doctors rather than replace them. Early questions could be handled by the users themselves, but at some point a nurse or doctor would take over and use it as a diagnostic aid.