“Life-and-death decisions relating to patient acuity, treatment decisions, and staffing levels cannot be made without the assessment skills and critical thinking of registered nurses,” the union wrote in the post. “For example, tell-tale signs of a patient’s condition, such as the smell of a patient’s breath and their skin tone, affect, or demeanor, are often not detected by AI and algorithms.”

“Nurses are not against scientific or technological advancement, but we will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care,” they added.

  • ArbitraryValue · 7 months ago

    My experience with the healthcare system, and especially hospitals, is that the people working there are generally knowledgeable and want to help patients, but they are also very busy and often sleep-deprived. A human may be better at medicine than an AI, but an AI that can devote attention to you is better than a human that can’t.

    (The fact that the healthcare system we have is somehow simultaneously very expensive, bad for medical professionals, and bad for patients is a separate issue…)

    • _lilith@lemmy.world · 7 months ago

      Those are all good points, but they assume hospitals will use AI in addition to the workers they already have. The fear here is that hospitals would use AI as an excuse to lay off medical staff, making the intentional understaffing even worse and decreasing the overall quality of care while burning through the staff who remain.

    • hoshikarakitaridia@lemmy.world · 7 months ago

      Isn’t there a way to do both, using AI as an additional layer of information for every patient?

      The dangerous part is not the AI itself, but the idea that AI can REPLACE everything. And that’s usually on management.

    • SeaJ@lemm.ee · 7 months ago

      It really depends on how well the dataset the AI was trained on is curated. If it is shitty, the model will amplify biases or hallucinate. An AI might be able to give a patient more attention, but if it is providing incorrect information, no attention is better than a lot of attention.
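
      A minimal sketch of that bias-amplification point, in Python; the triage labels and base rates here are invented for illustration, not taken from the article. The idea: a model fitted to a skewed dataset can report high accuracy while missing exactly the patients who most need attention.

      ```python
      # Hypothetical triage data: 95% low-acuity (label 0), 5% high-acuity (label 1).
      train = [0] * 950 + [1] * 50

      # A naive "model" that only learns the base rate always predicts
      # the majority class.
      majority_label = max(set(train), key=train.count)

      # On a test set with the same skew, accuracy looks great...
      test = [0] * 95 + [1] * 5
      accuracy = sum(majority_label == y for y in test) / len(test)
      print(f"accuracy: {accuracy:.0%}")  # 95%

      # ...but every single high-acuity patient is missed.
      missed = sum(1 for y in test if y == 1 and majority_label != y)
      print(f"high-acuity patients missed: {missed} of {sum(test)}")  # 5 of 5
      ```

      Headline accuracy hides the failure mode that matters in a hospital: the rare, urgent cases are precisely the ones the skewed model never flags.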