Artificial Superintelligence (ASI), AI systems that are more intelligent than humans across every domain, may or may not be coming soon.

What could we do to prepare now for a future where it has arrived?

I have been considering:

  • starting local community groups
  • updating my investment strategies to be more resilient to market disruption
  • diversifying personal income streams
  • staying up to date with the latest news and learning to make better use of the latest tools and technology
  • upgrading personal skills toward harder-to-replace industries

It’s a bit difficult to imagine a truly “safe” way of life. Barring UBI and more progressive taxation, it seems it may be quite challenging for the average person to live comfortably.

Some industries already being impacted at today’s level of technology:

  • programming
  • UI design
  • creative writing
  • technical writing
  • customer support
  • graphic art
  • data analysis

I think almost every other industry is at risk of significant disruption. A capitalism-based society will always gravitate toward the cheapest option: “if AI can take customer support calls for $1/day and customer satisfaction doesn’t dip, why would I pay a person $150/day?”
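The incentive in that quote can be made concrete with a back-of-the-envelope sketch. The per-day rates come from the quote itself; the 250 workdays/year figure is an assumption for illustration:

```python
# Back-of-the-envelope cost comparison for the quoted scenario.
# The per-day rates are from the quote; workdays/year is an assumption.
HUMAN_COST_PER_DAY = 150  # dollars/day, from the quote
AI_COST_PER_DAY = 1       # dollars/day, from the quote
WORKDAYS_PER_YEAR = 250   # assumed

def annual_savings_per_seat(human=HUMAN_COST_PER_DAY,
                            ai=AI_COST_PER_DAY,
                            days=WORKDAYS_PER_YEAR):
    """Yearly savings from replacing one human support seat with an AI agent."""
    return (human - ai) * days

print(annual_savings_per_seat())  # 37250
```

Roughly $37k saved per seat per year under these assumed numbers, which is the kind of margin that makes the cost pressure hard for any business to ignore.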

  • MrJameGumb@lemmy.world · 2 days ago

    Find a way to convince Republicans that it will help the needy, create equality, and/or cost them money somehow. That will kill it off pretty quickly.

  • Vinny_93@lemmy.world · 2 days ago

    Until humanity unites there is no ‘we’ who can do anything. We are currently too busy polarising, fighting and reinforcing poverty. If AGI ever arrives, it’ll most likely be weaponised or used to make rich people richer.

If ASI runs out of control, we can only hope it’ll be a nice god that doesn’t immediately see humans as a disease and try to kill us off.

At this point, it’s hubris to assume AGI/ASI would want anything to do with us.

    • 0x01@lemmy.ml (OP) · 2 days ago

More than fair. How much effort do we put in to making sure the ants have a comfortable life? Or, even further, the tardigrades? I am on the optimistic side, hoping that a superintelligence holds a benevolent nostalgia/amusement for sentient life, if it does indeed come to that.

There’s a chance that ASI doesn’t happen and we stall indefinitely on a simple token-prediction system, in which case the disruption could be limited to what we’ve already seen?

      • Vinny_93@lemmy.world · 2 days ago

I am on the hopeful side. Maybe an ASI can quickly analyze our issues and intervene. An ASI might spit out all sorts of plans to improve everyone’s life, but if the people in power ignore all of the advice because they’d no longer be in power, nothing will really change, except now there’s an ASI using huge amounts of electricity.

Considering how everything’s going, I honestly don’t think an ASI would make anything worse than the current state of affairs.

  • WoodScientist · 2 days ago

    How to address superintelligence, if that is actually something we realistically face:

1. Make creating an unlicensed AI above a certain capability threshold a capital offense.

    2. Regulate the field of artificial intelligence as heavily as we do nuclear science and nuclear weapons development.

    3. Have strict international treaties on model size and capability limitations.

    4. Have inspection regimes in place to allow international monitoring of any electricity usage over a certain threshold.

    5. Use satellites to track anomalous large power use across the globe (monitored via waste heat) and thoroughly investigate any large unexplained energy use.

    6. Target the fabs. High powered chips should be licensed and tracked like nuclear materials.

7. Make clear that a nuclear first strike is a perfectly acceptable response to a nation-state trying to create AGI.

    Anyone who says this technology simply cannot be regulated is a fool. We’re talking models that require hundreds of megawatts or more to run and giant data centers full of millions of dollars worth of chips. There’s only a handful of companies on the planet producing the hardware for these systems. The idea that we can’t regulate such a thing is ridiculous.

I’m sorry, but I put the survival of the human race above your silly science project. If I have to put every person on this planet with a degree in computer science into a hole in the ground to save the human race, that is a sacrifice I am willing to make. Hell, I’ll go full Dune and outlaw computers altogether, go back to pen and paper for everything, before I condone AGI.

    We can’t control this technology? Balderdash. It’s created by human beings. And human beings can be killed.

    So, how do we deal with ASI? You put anyone trying to create it deep in the ground. This is self defense at a species level. Sacrificing a few thousand madmen who think they’re going to summon a benevolent god to serve them is simple self-defense. It’s OK to kill cultists who are trying to summon a demon.

  • C A B B A G E@feddit.uk · 2 days ago

    The ideal scenario is that we get good old Gay Space Communism™ but until then it’s probably going to be mutual aid and community organisation that keep the rest of us semi-functioning.

    • 0x01@lemmy.ml (OP) · 2 days ago

Do you think isolated communities could come up with reasonable resilience strategies? And could a mutual-aid subculture persist long-term within our current system?