• Echo Dot
    link
    fedilink
    English
    3
    10 hours ago

    The headline and the article are completely mismatched.

    Basically all the article is saying is that doctors sometimes use AI. Which is a bit like saying sometimes doctors look things up in books. Yeah, course they do.

    If somebody comes in with a sore throat and the AI prescribes morphine, the doctor is probably smart enough not to do that, so I don’t really think there’s a major issue here. They are skilled medical professionals; they’re not blindly following the AI.

  • magnetichuman
    link
    fedilink
    31
    1 day ago

    I don’t fear skilled professionals using GenAI to boost their productivity. I do fear organisations using GenAI to replace skilled professionals.

    • @[email protected]
      link
      fedilink
      English
      14
      1 day ago

      This. It is like any tool: it is down to the skill/knowledge/experience of the user to evaluate the result.

      But as soon as management/government start seeing it as a cheat to reduce hiring, it becomes a danger.

      • @[email protected]
        link
        fedilink
        English
        1
        2 hours ago

        Imagine an AI with a model trained exclusively on a specific set of medical books, the same set of books all doctors have access to already. While there’s still room for error, it would guide the doctor to a very familiar reference. No internet junk, social media, etc.

        Exactly as you say. It’s a tool, not a replacement. Certainly not in healthcare anyway.
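
        The closed-corpus idea above can be sketched as retrieval restricted to a fixed set of reference texts. Everything here (the snippet texts, the naive keyword matching) is a hypothetical illustration, not a real medical system:

```python
# Sketch: lookups restricted to a fixed, trusted corpus -- nothing outside
# these (hypothetical) reference snippets is ever searched.
CORPUS = {
    "ref-001": "Strep throat often presents with fever and swollen lymph nodes.",
    "ref-002": "Morphine is indicated for severe pain, not for a simple sore throat.",
}

def search(query: str):
    """Return (id, text) pairs of snippets sharing at least one keyword with the query."""
    words = set(query.lower().split())
    return [(doc_id, text) for doc_id, text in CORPUS.items()
            if words & set(text.lower().rstrip(".").replace(",", "").split())]

for doc_id, text in search("sore throat fever"):
    print(doc_id, text)
```

        A real system would use proper indexing and citation of the matched passage, but the key property is the same: the answer can only come from the vetted shelf of books.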

      • @[email protected]
        link
        fedilink
        English
        9
        1 day ago

        I think the issue with this particular tool is it can authoritatively provide incorrect, entirely fabricated information or a gross misinterpretation of factual information.

        In any field I’ve worked in, I’ve often had to refer to reference material as I simply can’t remember everything. I have to use my experience and critical thinking skills to determine if I’m utilizing the correct material. I have not had to further determine if my reference material has simply made up a convincing, “correct sounding” answer. Yes, there are errors and corrections to material over time, but never has the entire reference been suspect, yet it continued to be used.

        • @[email protected]
          link
          fedilink
          English
          2
          11 hours ago

          I maintain that AI companies could improve their stuff a huge amount by simply forcing it to prefix “I think” to all statements. It’s sorta like how calculators shouldn’t show more digits than they can confidently produce: if the precision is only 4 decimals, then don’t show 8.
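
          A minimal sketch of that calculator analogy: cap the displayed precision at what is actually supported, and hedge low-confidence statements. The function names and the 0.9 threshold are made up for illustration:

```python
def confident_str(value: float, confident_decimals: int) -> str:
    """Show no more decimal places than the calculation actually supports."""
    return f"{value:.{confident_decimals}f}"

def hedge(statement: str, confidence: float, threshold: float = 0.9) -> str:
    """Prefix 'I think' whenever confidence falls below the threshold."""
    return statement if confidence >= threshold else f"I think {statement}"

print(confident_str(1 / 3, 4))            # 4 confident decimals -> "0.3333"
print(hedge("it is strep throat", 0.55))  # low confidence -> hedged
```

          The hard part in practice is getting a trustworthy confidence number out of an LLM in the first place; the display rule itself is trivial.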

      • @[email protected]
        link
        fedilink
        English
        -3
        24 hours ago

        I would prefer this to no healthcare until it’s too late, which seems to be the option in places with free healthcare.

        • Echo Dot
          link
          fedilink
          English
          2
          edit-2
          10 hours ago

          Yeah, we should all use the corporate system, which is brilliant. As long as you’re rich. Easy solution: just be rich and you’re fine.

          Thank you for your unhelpful and ignorant comment.

    • GreatAlbatrossM
      link
      fedilink
      English
      6
      1 day ago

      And I also fear overburdened professionals not having time to second-guess ML hallucinations.

      • Echo Dot
        link
        fedilink
        English
        1
        10 hours ago

        You were probably already at risk of a misstep then. If they don’t have time to think about the output, they probably didn’t have time before AI came along either, so the AI isn’t really adding to the issue here.

  • @[email protected]
    link
    fedilink
    English
    19
    1 day ago

    Using Generative AI as a substitute for professional judgement is a disaster waiting to happen. LLMs aren’t sentient and will frequently hallucinate answers. It’s only a matter of time before incorrect output will lead to catastrophic consequences and the idiot who trusted the LLM, not the LLM itself, will be responsible.

    • Echo Dot
      link
      fedilink
      English
      0
      10 hours ago

      If you read the article that’s not what’s happening here.

      Doctors are just using AI like they use any tool: to inform their decisions.

  • Don Piano
    link
    fedilink
    English
    13
    2 days ago

    They need to lose their licenses.

    Everyone anywhere using one on the job should be fired, but medical personnel are endangering people.

    • @[email protected]
      link
      fedilink
      English
      1
      4 hours ago

      It depends purely on how it’s used. Used blindly, yes, it would be a serious issue. It should also not be used as a replacement for doctors.

      However, if they could routinely put symptoms into an AI, and have it flag potential conditions, that would be powerful. The doctor would still be needed to sanity check the results and implement things. If it caught rare conditions or early signs of serious ones, that would be a big deal.

      AI excels at pattern matching. Letting doctors use it to do that efficiently, to work beyond their current knowledge base, is quite a positive use of AI.
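
      That symptom-flagging workflow could be sketched as a simple overlap score over a condition table. The conditions and symptoms below are placeholder examples, not clinical data, and the doctor still reviews every suggestion:

```python
# Hypothetical, hand-written lookup table -- placeholder data, not clinical guidance.
CONDITION_PATTERNS = {
    "influenza": {"fever", "cough", "muscle aches"},
    "strep throat": {"fever", "sore throat", "swollen lymph nodes"},
    "common cold": {"cough", "sore throat", "runny nose"},
}

def flag_conditions(symptoms, min_overlap=2):
    """Rank candidate conditions by symptom overlap, for a clinician to review."""
    reported = set(symptoms)
    scored = [(len(reported & pattern), name)
              for name, pattern in CONDITION_PATTERNS.items()]
    return [name for score, name in sorted(scored, reverse=True) if score >= min_overlap]

print(flag_conditions(["fever", "sore throat", "cough"]))
```

      Real systems use probabilistic models rather than raw overlap, but the shape is the same: the tool surfaces candidates, the human decides.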

    • @[email protected]
      link
      fedilink
      English
      20
      2 days ago

      The best use of AI at the moment is as a tool to search and present data quicker than humanly possible, not to act upon the findings blindly.

      It’s not as simple as saying anyone using AI should be fired. There needs to be a more nuanced approach to this. It wholly depends on what the GP did with the information it presented.

      An example: back in the day GPs had a huge book of knowledge they would defer to that was peer researched and therefore trusted. If you came in with an odd symptom they’d spend time (often in front of you) flipping through the book to find that elusive disease they read about that one time at university. Later that knowledge moved to a traditional search engine. Why wouldn’t you now use AI to make that search faster? The AI can easily be trained on this same corpus of knowledge.

      Of course the GP should double check what they are being told. But simply using AI is not the problem you make it out to be. If you have a corpus of knowledge and the GP uses this in a dangerous way then the GP should be fired. But you don’t then burn the book they found this information from.

      • @[email protected]OP
        link
        fedilink
        English
        14
        2 days ago

        I think the difference here is that medical reference material is based on a long process of proven research. It can be trusted as a reliable source of information.

        AI tools however are so new they haven’t faced anything like the same level of scrutiny. For now they can’t be considered reliable, and their use should be kept within proper medical trials until we understand them better.

        Yes human error will also always be an issue, but putting that on top of the currently shaky foundations of AI only compounds the problem.

        • @ShareMySims
          link
          English
          10
          2 days ago

          Let’s not forget that AI is known not only for failing to provide sources, or even falsifying them, but now also for flat-out lying.

          Our GPs are already mostly running on a tick-box system where they feed your information (but only the stuff on the most recent page of your file; looking any further is too much like hard work) into their programme, and it, rather than the patient or a trained physician, tells them what we need. Remove GPs any further from the patients and they’re basically just giving the same generic and often wildly incorrect advice we could find on WebMD.

      • @[email protected]
        link
        fedilink
        English
        7
        edit-2
        2 days ago

        Indeed. GPs have been doing this for a long time. It’s nothing new, and expecting every GP to know every single ailment that humanity has ever experienced, to recall it quickly, and immediately know the course of action to take, is unreasonable. They are only human.

        Like you say, if they’re blindly following a generic ChatGPT instance trained on whatever crap it’s scraped from the internet, then that’s bad.

        If they’re aiding their search using an LLM that has been trained on a good medical dataset, then taking that and looking more into it, then there’s no issue.

        People have become so reactionary to LLMs and other AI stuff. It seems there’s an “omg it’s so cool, everybody should use it to the max, let’s blindly trust it!” camp and an “it’s awful and shouldn’t exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!” camp.

        Both camps are just as stupid. There’s zero nuance in the discussion about this stuff, and it’s tiring.

        • @[email protected]
          link
          fedilink
          English
          7
          edit-2
          1 day ago

          You can build excellent expert systems that will definitely help a doctor remember all the illnesses, know what questions to ask to narrow things down or double check it’s not something weird, and provide options for treatment.

          These exist and are good

          ChatGPT isn’t an expert system, and doctors using it like one need a serious warning from the BMC and would eventually need to be struck off, same as using ouija boards or bones to diagnose illnesses.
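
          For contrast, a toy version of the rule-based inference such expert systems use. The rules here are invented placeholders, not medical guidance:

```python
# Toy forward-chaining expert system: a rule fires when all of its
# conditions are known facts, adding its conclusion until nothing changes.
RULES = [
    ({"fever", "stiff neck"}, "urgent: rule out meningitis"),
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"}, "consider chest X-ray"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "shortness of breath"})))
```

          Unlike an LLM, every conclusion such a system produces can be traced back to an explicit, auditable rule.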

          • streetlights
            link
            fedilink
            English
            3
            1 day ago

            These exist and are good

            Any examples off the top of your head? I would assume/speculate they are fairly expensive?

        • @YungOnions
          link
          English
          2
          1 day ago

          People have become so reactionary to LLMs and other AI stuff. It seems there’s an “omg it’s so cool, everybody should use it to the max, let’s blindly trust it!” camp and an “it’s awful and shouldn’t exist, burn it all! No algorithms or machine learning anywhere. New tech is bad!” camp.

          Both camps are just as stupid. There’s zero nuance in the discussion about this stuff, and it’s tiring.

          Well said.

    • @YungOnions
      link
      English
      6
      2 days ago

      ‘Everyone anywhere’? That’s an amazingly broad statement. What’re you defining as ‘using one’? If I use ChatGPT to rewrite a paragraph, should I be fired? What about if a non-native speaker uses it to remove grammatical errors from an email, should they be fired? How about using it for assisting with coding errors? Or generating draft product marketing copy? Or summarising content for third parties to make it easier to understand? Still a fireable offence? How about generating insights from data? Assistance with roadmap prioritisation? Generating summaries of meeting notes or presentations? Helping users with learning disabilities understand complex information? Or helping them with letters, emails etc? How about if I use it to remind me of tasks? Or managing my routines?

      • @[email protected]
        link
        fedilink
        English
        6
        edit-2
        13 hours ago

        Don’t you be bringing nuance into this.

        If you used an LLM to find that mistyped variable name, you deserve to lose your job. You and your family must suffer.

        If you are blind and you use a screen reader with some AI features, you should be fired and that tech needs to be taken from you. You must suffer.

        Honestly we should just kill them, even. In a very painful and torturous way.

        • @[email protected]
          link
          fedilink
          English
          4
          1 day ago

          There’s a difference between using LLMs to edit text, provide ideas, or give you information that you can double-check because you have the subject-matter experience, and relying on them as a substitute for skill. When something important is at stake, like someone’s well-being, that’s reckless at best.

          • @[email protected]
            link
            fedilink
            English
            2
            1 day ago

            That’s not what was said. What was said was anybody using it in any capacity for any job should be fired.

            Which is obviously a very, very stupid take.

          • @YungOnions
            link
            English
            4
            1 day ago

            Sure, but the original quote was:

            Everyone anywhere using one on the job should be fired

            There’s no nuance there; it’s just AI = bad. I agree that it shouldn’t, in its current form, be used as a substitute for skill in important situations. You’re totally right there.

            • Don Piano
              link
              fedilink
              English
              -5
              1 day ago

              I never said AI = bad. AI is much broader and contains worthwhile and non-plagiarized approaches.

              If it’s worth doing, do it properly.

              • Echo Dot
                link
                fedilink
                English
                0
                edit-2
                10 hours ago

                No you did say that.

                Everyone anywhere using one on the job should be fired

                You said anyone using an AI in any capacity should be fired. I have heard infinitely better takes from 4-year-olds arguing why they need more ice cream.

                • Don Piano
                  link
                  fedilink
                  English
                  1
                  5 hours ago

                  This is on a post about ChatGPT use. ChatGPT is one of the LLMs, which are a subset of AI.

                  AI is cool. The current batch of LLMs/PISS can leave.

  • streetlights
    link
    fedilink
    English
    10
    2 days ago

    20 years ago there were complaints that GPs were using Google; now it’s normal. Can’t help but feel the same will happen here.

    • @[email protected]
      link
      fedilink
      English
      2
      11 hours ago

      To be fair, back then Google just showed you what you searched for; I’m not as happy about people googling stuff these days. With AI we already know that it tends to make shit up, and it might very well only get worse as models start being trained on their own output.

      • Echo Dot
        link
        fedilink
        English
        0
        10 hours ago

        Actually, hallucinations have gone down as AI training has improved, mostly through things like prompting models to provide evidence. When you prompt them to provide evidence, they tend not to hallucinate in the first place.

        The problem really comes from the way the older AIs were originally trained. They were basically trained on data where a question was asked and a response was given; nowhere in the data set was there a question whose answer was “I’m sorry, I do not know”, so the AI was unintentionally taught that it is never acceptable not to answer a question. More modern AIs have been trained in a better way and told that it is acceptable not to answer. They also now have the ability to perform internet searches, so, like a human, they can go look up data when they recognize it isn’t in their current data set.

        That being said, Google’s AI is an idiot.

    • @[email protected]
      link
      fedilink
      English
      4
      2 days ago

      You’re right. Within 10 seconds I just found an article from 2006 saying just that. Earlier ones likely exist.