Artificial Superintelligence (ASI), AI systems that are more intelligent than humans across every domain, may or may not be coming soon.

What could we do to prepare now for a future where it has arrived?

I have been considering:

  • starting local community groups
  • updating my investment strategies to be more resilient to market disruption
  • diversifying personal income streams
  • staying up to date with the latest news and learning to use the latest tools and technology more effectively
  • upgrading personal skills toward harder-to-replace industries

It’s a bit difficult to imagine a truly “safe” way of life. Barring universal basic income (UBI) and more progressive taxation, it seems like it may be quite challenging for the average person to exist comfortably.

Some industries that are already being impacted at the current level of technology:

  • programming
  • UI design
  • creative writing
  • technical writing
  • customer support
  • graphic art
  • data analysis

I think almost every other industry is at risk of significant disruption. A capitalism-based society will always gravitate toward the cheapest option: “if AI can take customer support calls for $1/day and customer satisfaction doesn’t dip, why would I pay a person $150/day?”
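As a rough back-of-envelope sketch of that incentive, here is the arithmetic behind the quote above. The $1/day and $150/day figures are the hypothetical numbers from the example, not real data:

```python
# Back-of-envelope comparison using the hypothetical figures quoted above.
ai_cost_per_day = 1          # assumed AI cost per support seat, USD/day
human_cost_per_day = 150     # assumed human cost per support seat, USD/day
working_days_per_year = 250  # assumed working days per year

annual_savings_per_seat = (human_cost_per_day - ai_cost_per_day) * working_days_per_year
print(f"Annual savings per replaced seat: ${annual_savings_per_seat:,}")  # -> $37,250
```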

  • WoodScientist · 2 days ago

    How to address superintelligence, if that is actually something we realistically face:

    1. Make creating an unlicensed AI above a certain capability threshold a capital offense.

    2. Regulate the field of artificial intelligence as heavily as we do nuclear science and nuclear weapons development.

    3. Have strict international treaties on model size and capability limitations.

    4. Have inspection regimes in place to allow international monitoring of any electricity usage over a certain threshold.

    5. Use satellites to track anomalously large power use across the globe (monitored via waste heat) and thoroughly investigate any large, unexplained energy use.

    6. Target the fabs. High-powered chips should be licensed and tracked like nuclear materials.

    7. Make clear that a nuclear first strike is a perfectly acceptable response to a nation state trying to create AGI.

    Anyone who says this technology simply cannot be regulated is a fool. We’re talking about models that require hundreds of megawatts or more to run and giant data centers full of millions of dollars’ worth of chips. There’s only a handful of companies on the planet producing the hardware for these systems. The idea that we can’t regulate such a thing is ridiculous.

    I’m sorry, but I put the survival of the human race above your silly science project. If I have to put every person on this planet with a degree in computer science into a hole in the ground to save the human race, that is a sacrifice I am willing to make. Hell, I’ll go full Dune and outlaw computers altogether, go back to pen and paper for everything, before I condone AGI.

    We can’t control this technology? Balderdash. It’s created by human beings. And human beings can be killed.

    So, how do we deal with ASI? You put anyone trying to create it deep in the ground. This is self-defense at a species level. Sacrificing a few thousand madmen who think they’re going to summon a benevolent god to serve them is simple self-defense. It’s OK to kill cultists who are trying to summon a demon.