I started a local vibecoders group because I think it has the potential to help my community.

(What is vibecoding? It’s a new word, coined last month. See https://en.wikipedia.org/wiki/Vibe_coding)

Why might it be part of a solarpunk future? I often see and am inspired by solarpunk art that depicts relationships and family happiness set inside a beautiful blend of natural and technological wonder. A mom working on her hydroponic garden as the kids play. Friends chatting as they look at a green cityscape.

All of these visions have what I would call a three-way harmony: harmony between humankind and itself, between humankind and nature, and between nature and technology.

But how is this harmony achieved? Do the “non-techies” live inside a hellscape of technology that other people have created? No! At least, I sure don’t believe in that vision. We need to be in control of our technology, able to craft it, change it, adjust it to our circumstances. Like gardening, but with technology.

I think vibecoding is a whisper of a beginning in this direction.

Right now, the capital requirements to build software are extremely high–imagine what it cost to build an app like Instagram, probably tens or hundreds of millions of dollars. At that price, only corporations can afford to build this type of software–local communities are priced out.

But imagine if everyone could (vibe)code, at least to some degree. What if you could build just the habit-tracking app you need, in under an hour? What if you didn’t need to be an Open Source software wizard to mold an existing app into the app you actually want?
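To make "just the habit-tracking app you need" concrete, here is a minimal sketch of the kind of small, personal tool a vibecoding session might produce. Everything here is illustrative–the file name `habits.json` and the function names are my own assumptions, not the output of any particular tool:

```python
# A tiny personal habit tracker: completions are stored as ISO date
# strings in a JSON file, one list per habit.
import json
from datetime import date, timedelta
from pathlib import Path

DATA_FILE = Path("habits.json")  # hypothetical storage location


def load_habits() -> dict:
    """Read the habit data from disk, or start fresh if none exists."""
    if DATA_FILE.exists():
        return json.loads(DATA_FILE.read_text())
    return {}


def save_habits(habits: dict) -> None:
    """Write the habit data back to disk as pretty-printed JSON."""
    DATA_FILE.write_text(json.dumps(habits, indent=2))


def mark_done(habits: dict, habit: str, day: date = None) -> None:
    """Record that a habit was completed on the given day (default: today)."""
    day = day or date.today()
    days = habits.setdefault(habit, [])
    if day.isoformat() not in days:
        days.append(day.isoformat())


def streak(habits: dict, habit: str, today: date = None) -> int:
    """Count consecutive days, ending today, on which the habit was done."""
    today = today or date.today()
    done = set(habits.get(habit, []))
    count = 0
    while (today - timedelta(days=count)).isoformat() in done:
        count += 1
    return count
```

Typical use would be a couple of lines–`mark_done(habits, "water the garden")`, then `streak(...)` to see your run–which is exactly the scale of software I mean: too small to ever justify hiring a developer, but easy to imagine describing to an AI in plain words.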

Having AI help us build software drops the capital requirements of software development from millions of dollars to thousands, maybe even hundreds. It’s possible (for me, at least) to imagine a future of participative software development–where the digital rules of our lives are our own, fashioned individually and collectively. Not necessarily by tech wizards and esoteric capitalists, but by all of us.

Vibecoding isn’t quite there yet–we aren’t quite to the Star Trek computer just yet. I don’t want to oversell it and promise the moon. But I think we’re at the beginning of a shift, and I look forward to exploring it.

P.S. If you want to try vibecoding out, I recommend v0 among all the tools I’ve played with. It has the most accurate results with the least pain and frustration for now. Hopefully we’ll see lots of alternatives and especially open source options crop up soon.

  • perestroika@slrpnk.net · 21 hours ago

The concept is new to me, so I’m a bit challenged to give an opinion. I will try, however.

    In some systems, software can be isolated from the real world in a nice sandbox with no unexpected inputs. If a clear way of expressing what one really wants is available, and more convenient than a programming language, I believe a well-trained and self-critical AI (capable of estimating its probability of success at a task) will be highly qualified to write that kind of software, and tell when things are doubtful.

    The coder may not understand the code, though, which is something I find politically unacceptable. I don’t want a society where people don’t understand how their systems work.

It could even contain a logic bomb and nobody would know. Even the AI that wrote it may fail to understand it tomorrow, once the software has become sufficiently unique through customization. So there’s a risk that the software lacks even a single qualified maintainer.

Meanwhile, some software is mission-critical: if it fails, something irreversible happens in the real world. This kind of software usually must be understood by several people. New people must be capable of coming to understand it through review. They must be able to predict its limitations, give specifications for each subsystem, and build testing routines to detect the introduction of errors.

Mission-critical software typically has a close relationship with hardware. It typically has sensors reading the real world and effectors changing the real world. Testing it resembles doing electronic and physical experiments. The system may have undescribed properties that an AI cannot be informed about. It may be impossible to code successfully without actually doing those experiments and discovering the limitations and quirks of the hardware, and thus it may be impossible for an AI to build from a prompt.

I’m currently building a drone system and I’m up to my neck in undocumented hardware interactions, but even a heating controller will encounter some. I don’t think people will have much success in the near future letting an AI build such systems for them. In principle it can be done: you can let an AI teach a robot dog to walk, and the learning itself will take only a few hours. But this will likely require giving it control of said robot dog, letting it run experiments and learn from outcomes–which may take a week, while writing the code by hand might have also taken a week. In the end, one code base will be maintainable; the other likely not.