• Geek_King@lemmy.world
    1 year ago

    If AI were to develop to the point of surpassing human understanding, we’d be in for some serious shit. If a hyperintelligent artificial general intelligence started figuring things out, making decisions, and asking for certain things to be done for it, would we as humans be able to understand its motives, its goals? What about new technology developed by the AI? Would it get to the point where we’d have all sorts of sci-fi type tech, but no understanding of how it actually functions?

    I don’t really buy into the whole “robots will kill us all” trope that sci-fi loves. But I think it’s important to consider that a true AI, unless built with rules it must follow, would not have a human’s perspective; it would truly be an “outside intelligence”. It’s not one of us, and it couldn’t be trusted to think like a human, to value life, to understand beauty. We’re a long way off from having to face anything like that, but if you had described how ChatGPT works to me 5 years ago, I would have had a hard time believing it.

  • SpacePace
    1 year ago

    I immediately think of the calculator - that must have been an existential hoop to jump through (a machine that could do maths better than me?!).

    Will we one day look back on human inputs such as decision-making, file selection, and mouse-pointing as primitive? I reckon we will.