Cabinet Minister Judith Collins wants the government to expand the use of artificial intelligence (AI), starting with the health and education sectors where it could be used to assess mammogram results and provide AI tutors for children.

“It doesn’t do the work for them. It says some things like ‘go back, rethink that one, look at that number,’ those sorts of things. What an exciting way to do your homework if you’re a child.”

Deploying AI in education and health would be classified as high-risk uses under the European Union's recently passed legislation regulating AI, the AI Act.

In EU countries, AI used in those settings must meet high standards of transparency, accuracy and human oversight.

But New Zealand has no specific AI regulation and Collins is keen to get productivity gains from extending its use across government, including using it to process Official Information Act requests.

An OIA request by RNZ for a government Cabinet paper on AI was turned down (by a human) on the grounds that the policy is under live consideration.

  • RegalPotoo@lemmy.world
    Work paid for me to go to a “getting started with AI for businesses” seminar run by [redacted reputable organisation name] and holy crap the FOMO.

    • The whole premise of the thing basically boiled down to “LLMs are a massive game changing technology that is going to make huge amounts of human tasks obsolete and if you don’t get in on it now your competitors will and you’ll be bankrupt in a decade” which… idk. Useful technology for sure, but this isn’t the AI singularity. The vibe I got was all these people are old enough to see the fortunes won and lost when the internet exploded, and are terrified that this is going to be that all over again and that they’ll end up left behind.
    • People massively personify LLMs without thinking through the detail of how they actually work. Someone asked how you can rely on information the LLM gives you, and the suggestion was to just ask it how confident it is - which isn’t really how LLMs work. They are fancy autocomplete, with no theory of mind or actual reasoning - a model can’t know whether what it’s saying is true or not, but because it’s presented as something you can converse with, it feels like there is some deeper cognition you can interrogate.
    • TagMeInSkipIGotThis@lemmy.nz
      The more I see & hear, the more I think it’s all grift.

      I.e. the crypto bros left their coins for NFTs, and now that those have tanked they’re finding something else to burn the planet down with in order to scam suckers.

      • RegalPotoo@lemmy.world
        I don’t think it’s all grift - there are absolutely places where LLMs are the best tech out there, but it’s probably not going to take everyone’s jobs any time soon (at least not on merit - I’m sure there are plenty of places that’d accept a 50% drop in quality for a 90% drop in price)

        I’ve seen a pretty compelling case study of a company using an LLM as a “tier zero” support tech - instead of getting a tier 1 tech to classify a case, decide if they had the tools to address the issue or if it needs to go to tier 2, work out if it was an instance of a known issue etc before they actually start working on the problem, give the LLM some examples and get it to do the triage so the humans can do the more complicated stuff. It does about as well as a human, for a fraction of the price.
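        That kind of "tier zero" triage can be sketched roughly as below. This is a hypothetical illustration only, not the case study's actual system: the few-shot examples, the `TIER1 | known-issue` label format, and the stubbed-out model call are all invented for the sketch - in practice the prompt would go to whatever LLM API the company uses.

        ```python
        # Hypothetical sketch of LLM-based "tier zero" support triage:
        # build a few-shot prompt from labelled example tickets, then
        # parse the model's routing reply into structured fields.

        # Invented example tickets and labels, purely for illustration.
        EXAMPLES = [
            ("User cannot log in after password reset",
             "TIER1 | known-issue: password-sync-delay"),
            ("Database replication lag causing stale reads in prod",
             "TIER2 | new-issue"),
        ]

        def build_triage_prompt(ticket: str) -> str:
            """Assemble a few-shot classification prompt for the LLM."""
            lines = ["Classify each support ticket as TIER1 or TIER2, "
                     "and note whether it matches a known issue."]
            for text, label in EXAMPLES:
                lines.append(f"Ticket: {text}\nTriage: {label}")
            # The model is expected to complete the final "Triage:" line.
            lines.append(f"Ticket: {ticket}\nTriage:")
            return "\n\n".join(lines)

        def parse_triage(reply: str) -> dict:
            """Turn a 'TIER1 | known-issue: foo' style reply into routing fields."""
            tier, _, issue = (part.strip() for part in reply.partition("|"))
            return {"tier": tier,
                    "known_issue": issue if issue != "new-issue" else None}
        ```

        The point of the parse step is that the humans downstream get a structured routing decision, not free-form model text - which also makes the LLM's output easy to audit against what a tier 1 tech would have decided.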

        • TagMeInSkipIGotThis@lemmy.nz

          I’d have to see that in action before I pass judgement, but given LLMs’ predilection for hallucination and the vagaries of how humans report tech faults, I’d be surprised if it was significantly more accurate or effective than a human. After all, if it’s working out whether there’s a known issue then essentially it’s not much beyond a script at that point - and in that case, do you want to trade the unpredictability of what an LLM might recommend vs something (human or otherwise) that will follow the script?

          Even if an LLM were an effective level 0 helpdesk it would still need to overcome the user’s cultural expectation (in many places) that they can pick up the phone and speak to somebody about their problem. Having done that job a long long time ago, diagnosing tech problems for people who don’t understand tech can be a fairly complex process. You have to work through their lack of understanding, lack of technical language. You sometimes have to pick up on cues in their hesitations, frustrated tone of voice etc.

          I’m sure an LLM could synthesise that experience 80% of the time, but depending on the tech you’re dealing with you could be missing some pretty major stuff in the other 20% - especially if the LLM gives bad instructions, or closes a case without escalating it, etc. So you then need to pay someone to monitor the LLM and watch what it’s doing - at which point you’ve hired your level 1 tech again anyway.