ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.

  • eggymachus
    1 year ago

    Yeah, I was probably a bit too caustic, and there’s more to (A)GI than an LLM can achieve on its own, but I do believe that some, and perhaps a large, part of human consciousness works in a similar manner.

    I also think that LLMs must have models of concepts — otherwise they couldn’t do what they do. Probably models of truth and falsity too, though perhaps without external grounding?