AI singer-songwriter ‘Anna Indiana’ debuted her first single ‘Betrayed by this Town’ on X, formerly Twitter—and listeners were not too impressed.
This needs to be hammered into techbros’ heads until they shut the fuck up about the so-called “AI” revolution.
I’ve been doing a lot of using, testing, and evaluating LLMs and GPT-style models for generating code and text/prose. Some of it is just general use to see how it behaves, some has been explicit evaluation of creative writing, and a bunch of it is code generation to test out how we need to modify our CS curriculum in light of these new tools.
It’s an impressive piece of technology, but it’s not very creative. It’s meh. The results are meh. Which is to be expected since it’s a statistical model that’s using a large body of prior work to produce a reasonable approximation of what it’s seen before. It trends towards the mean, not the best.
This would explain why inexperienced users of AI inevitably get mediocre results. It still takes creativity to get even stolen mediocrity.
You have to know how to operate the oven to reheat a store-bought pie. Generative LLMs are machines like ovens, and turning the knobs is not creativity. Not operating the oven correctly gets you Sharon Weiss results.
I guess a protip is you have to tell it explicitly in the prompt who it’s supposed to steal from.
For instance, midjourney or SD will produce much better results if you put specific artstation channel names along with ‘artstation’ in the prompt.
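To make that concrete, here’s a toy sketch of what that kind of prompt steering amounts to: plain string assembly. The helper function and artist names are invented for illustration, not a real Midjourney/SD API.

```python
# Toy sketch of prompt steering; build_prompt and the artist names
# are made up for illustration, not a real Midjourney/SD API.
def build_prompt(subject, styles):
    """Append style tokens so the model leans toward those sources."""
    return subject + ", " + ", ".join(styles) + ", trending on artstation"

prompt = build_prompt("castle at dusk", ["by ExampleArtistA", "by ExampleArtistB"])
```

The model never “decides” to imitate anyone; you spell out who to imitate right in the input string.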
I’m curious if you’ve gotten anything decent out of them. I’ve tried to use it for tech/code questions, and it’s been nothing but disappointment after disappointment. I’ve tried to use it to get help with new concepts, but it hallucinates like crazy and always gives me bad results; sometimes it’s so bad that it gives me answers I’ve already told it were wrong.
Yeah, I’ve just set up a hotkey that says something like “back up your answer with multiple reputable sources” and I just always paste it at the end of everything I ask. If it can’t find webpages to show me to back up its claims then I can’t trust it. Of course this isn’t the case with coding, for that I can actually run the code to verify it.
What version are you using?
GPT-4 is quite impressive, and the dedicated code LLMs like Codex and Copilot are as well. The latter must have had a significant update in the past few months, as it’s become wildly better almost overnight. If trying it out, you should really do so in an existing codebase it can use as a context to match style and conventions from. Using a blank context is when you get the least impressive outputs from tools like those.
I’ve used GPT-3/3.5, Bing, Bard, and Copilot, and I’m not super stoked. Copilot gave me PowerShell DSC items that don’t actually exist, which was my most recent attempt at using an LLM.
I might see about figuring out if it can hook into my vs code instance so it’s a bit smarter at some point.
There’s an official plug-in to do this that takes like 15 minutes to set up.
I’m using it for an end-of-year AI project for school.
That’s where some of the significant advances over the past 12 months of research have been, specifically around using the fine tuning phase to bias towards excellence. The biggest advance there has been that capabilities in larger models seem to be transmissible to smaller models by feeding in output from the larger more complex models.
Also, the process supervision work to enhance CoT from May is pretty nuts.
So while you are correct that the pretrained models come out with a regression towards the mean, there are very promising recent advances in taking that foundation and moving it towards excellence.
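For anyone curious, that large-to-small transfer is usually framed as distillation: the smaller model is trained against the larger model’s softened output distribution rather than hard labels. A minimal sketch with toy logits (not any particular paper’s recipe):

```python
import math

def softmax(logits, temperature=1.0):
    # Softened distribution: higher temperature flattens the probabilities.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's soft targets."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

Minimizing this pushes the student’s whole output distribution toward the teacher’s, which is one way capability leaks from big models into smaller ones.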
I’m excited for how these tools will be used by human creators to accomplish things they could never do alone, and in that aspect it is a revolutionary technology. I hate that their marketing calls it “AI” though, the only intelligence involved is the human user that creates prompts and curates results.
It’s not the techbros leading this, it’s the BBAs and MBAs that wouldn’t know art if Michelangelo came to life and slapped them in the face with the sistine chapel.
I would never call an actual technician a techbro! Techbros are Rick&Morty ledditor “fuck yeah science!” dorks.
deleted by creator
The anger comes from the fact that companies are using AI instead of hiring artists.
There is a distinction between a human being inspired by an existing piece of art and an AI creating something from other art. The human has to experience it through the lens of the human experience and create using the human body. AI takes multiple pieces of art and essentially makes a collage.
Eh, humans still take inspiration from others even in their original art. Most professionals draw from reference, or emulate styles, or follow some common method. Drawing from a singular source is ethically questionable, but imitating elements from many sources is just part of the process.
Arguably, no human creation is purely original, the originality comes from the creativity of the remix.
I’m not arguing for originality. I’m saying that you can have a human connection with a human-made piece of art that, by definition, cannot exist for AI art.
For the thousandth fucking time, NO.
‘AI’ doesn’t feel joy, sadness, pity, entertained, or inspired when learning from others. Not even inspired to steal.
I think this is an important distinction. AI can be creative in that it can develop something new and unique, but it will have arrived at it by chance, through random inputs to an algorithm designed to mimic the evolutionary mutations that end up beneficial.
I agree that (at least for now) it would not be able to develop something out of inspiration or emotion. But that’s because we don’t understand enough about how emotion and inspiration are developed to create an algorithm that cultivates it.
I see it more as an inability to analyze, evaluate, and edit. A lot of “creativity” in the world of musical composition is putting together existing elements and seeing what happens. Any composer, from pop to the very avant-garde, is influenced by and sometimes even borrows from their predecessors (it’s why copyright law is so complex in music).
It’s the ability to make judgements (does this sound good or interesting? does this have value? would anyone want to listen to this?) and adjust accordingly that will lead to something original and great. Humans are so good at this that we might be making edits before the notes even hit the page (brainstorming). This AI clearly wasn’t. And deciding on value seems wildly complex for modern-day computers. Humans can’t even agree on it (if you like rock but hate country, for example).
So in the end, they are “creative”, but in a monkey-at-a-typewriter way. And who is going to sort through the billions of songs like this to find the one masterpiece?
One of the overlooked aspects of generative AI is that, almost by definition, generative models can also act as classifiers.
So let’s say you were Spotify and you fed into an AI all the songs as well as the individual user engagement metadata for all those songs.
You’d end up with a model that would be pretty good at effectively predicting the success of a given song on Spotify.
So now you can pair a purely generative model with the classifier, so you spit out song after song but only move on to promoting it if the classifier thinks there’s a high likelihood of it being a hit.
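A toy sketch of that generate-then-filter loop. Both models here are random stand-ins I’ve invented; a real system would plug in a trained generator and a classifier fit on engagement metadata.

```python
import random

def generate_song(rng):
    # Stand-in for a generative model: a random 4-feature "song".
    return [rng.random() for _ in range(4)]

def hit_score(song):
    # Stand-in for a classifier trained on engagement metadata.
    return sum(song) / len(song)

def promote_hits(n_candidates, threshold=0.7, seed=0):
    """Generate many candidates; promote only those the classifier rates highly."""
    rng = random.Random(seed)
    candidates = [generate_song(rng) for _ in range(n_candidates)]
    return [s for s in candidates if hit_score(s) >= threshold]
```

The generator can be as mediocre as it likes; the classifier does the quality control by discarding everything below the threshold.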
Within five years systems like what I described above will be in place for a number of major creative platforms, and will be a major profit center for the services sitting on audience metadata for engagement with creative works.
Right, the trick will be quantifying what is ‘likely to be a hit’, which if we’re honest, has already been done.
Also, neural networks and other evolutionary algorithms can inject random perturbations/mutations into the system, which operate a bit like uninformed creativity (something like banging on a piano and hearing something interesting that’s worth pursuing). So, while not ‘inspired’ or ‘soulful’ as we would generally think of it, these algorithms are capable of being creative in some sense. But it would need to be recognized as ‘good’ by someone or something… and back to your point.
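That “bang on the piano and keep what sounds interesting” loop is basically mutation plus selection. A toy sketch, where the fitness function is an invented stand-in for the someone-or-something doing the judging:

```python
import random

def fitness(melody):
    # Invented stand-in judge: rewards rising intervals.
    return sum(1 for a, b in zip(melody, melody[1:]) if b > a)

def mutate(melody, rng):
    # Uninformed creativity: randomly bang on one note.
    out = list(melody)
    out[rng.randrange(len(out))] = rng.randint(0, 11)
    return out

def hill_climb(steps, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 11) for _ in range(8)]  # random starting melody
    for _ in range(steps):
        candidate = mutate(best, rng)
        if fitness(candidate) >= fitness(best):  # keep what sounds "better"
            best = candidate
    return best
```

Nothing here is inspired; the perturbations are blind, and all the direction comes from the judge.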
What you described in your second paragraph is basically how image generation AI works.
Starting from random noise and gradually moving towards the version a classifier identifies as best matching the prompt.
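A heavily simplified version of that idea: start from noise and accept only perturbations the scorer prefers. This is hill-climbing on an invented match score, not actual diffusion (which uses learned denoising steps), but the start-from-noise, move-toward-the-prompt shape is the same.

```python
import random

def match_score(vec, target):
    # Invented stand-in for a classifier: negative squared distance
    # between the sample and the "prompt" target.
    return -sum((v - t) ** 2 for v, t in zip(vec, target))

def refine_from_noise(target, steps, seed=0):
    """Start from random noise; keep perturbations the scorer prefers."""
    rng = random.Random(seed)
    vec = [rng.uniform(-1.0, 1.0) for _ in target]  # pure noise
    for _ in range(steps):
        candidate = [v + rng.gauss(0, 0.05) for v in vec]
        if match_score(candidate, target) > match_score(vec, target):
            vec = candidate
    return vec
```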
Plenty of humans make those judgements about their own creations. And plenty of them get a shock when they release their creations to the masses and don’t get the praise that they expected.
I believe that’s vital to the creative process, but yeah, I basically agree.
“Generative” is such a misleading term. It’s not generating anything, it is replicative.
For now.
And don’t forget, humans are also trained on the inputs of others.
Removed by mod
Meat goes in. Sausage comes out.
The problem for a lot of the companies behind these things is that they’ve run into trouble now that their investors want them to turn meat into a Black Forest gateau.
I’m sceptical that they can manage that feat. But what do I know.
Are you saying the idea of a unicorn wasn’t new and original because it was drawing on the pre-existing features of a horse and narwhal?
deleted by creator
Removed by mod
Such a wonderful, thoughtful, creative retort. You must be an AI chat-bot.
Or I’m just sick of utter imbeciles saying the stupidest shit possible.
deleted by creator
deleted by creator
Still, AI is able to “create” new things by combining existing concepts. It can generate a Roomba in the style of Van Gogh, for example, which is probably not something that previously existed.
“Roomba in the style of Van Gogh” is a new combination of existing things, but it can never create something truly original. It’s derivative.
What is an example of something that is truly original and not derivative?
But all of human creation is derivative.
Right just as soon as all the people proclaiming that can point to the soul bit of my brain. There is absolutely no reason to say that AI cannot be creative there’s nothing fundamentally magic about creativity that means only humans can do it.
You’re equating creativity to the soul. They’re not the same thing. But we can definitely look at the brain and see what parts light up when we perform creative tasks.
Right so why can’t the same sections be simulated? If you accept that the human brain is simply an organic implementation of a neural network, then you have to accept that a synthetic implementation can achieve the same thing.
The idea that the human brain is special is ludicrous and completely without evidence
I mean, I’m not arguing anything other than your false equivalence. I’m sure, at some point, we’ll be able to mimic how the human brain actually works, not just imitate the results. But we’re not even close right now. Not in the same ballpark. Not in the same tri-state area. We still don’t really understand completely how it does what it does. We know some of the processes, and understand that it’s chemicals interacting with the meat in some way, but it’s still mostly kinda just weird stuff our body does. We’re mostly just pointing at areas that light up with activity when we do a thing and saying “yep, that’s the general area that’s doing stuff.”
And that’s just understanding it, let alone figuring out how to imitate it with technology. And none of those parts of the brain work independently. They’re spread out and they overlap and exchange and change information constantly, all with chemicals. Getting a computer to mimic the outcome is still something we’re far from, but without the same processes, it’s not really gonna come out the same. We’ve got just… so long to go before we actually get close to simulating a human brain.
And just for fun, I do think this line of yours is funny:
Again, I wasn’t saying anything of the sort, and I’m still not really taking any stance beyond “that shit’s complicated and we’re not there yet.” But you’re supposing that a “synthetic implementation can achieve the same thing”… without supporting evidence. This argument was clearly meant for someone else, but it’s not really fair to demand evidence from someone for their claim when you don’t support your own. Jumping to the conclusion that something is impossible is the same as assuming it’s definitely possible. You don’t know that. I don’t know that. No one really knows that until it’s done.
The belief that only humans can be creative is interestingly parallel to intelligent design creationism. The latter is fundamentally a religious faith, but it strongly appeals to the intuition that anything that happens needs a humanoid creator.
I don’t think the human brain is special either, but we are still two big steps ahead IMHO:
Yes, it is literally impossible for any AI to ever exist that can be creative. At no point in the future will it ever create anything creative, that is something only human beings can do. Anybody that doesn’t understand this is simply incapable of using logic and they have no right to contribute to the conversation at all. This has all already been decided by people who understand things really well and anyone who objects is obviously stupid.
Good job tearing down that strawman! 🙄
I was agreeing with you. I’m so sick of people thinking that “someday AI might be creative”. Like no, it’s literally impossible unless someday AI becomes human (impossible), because humans are the only ones capable of creativity. What have I said that you disagree with? You’re not one of them, are you? What’s with all this obsessive AI love?
LLMs aren’t intelligent. They’re jumped up chatbots lol
Yeah the current popular LLMs, absolutely they are, you couldn’t be more right.
We were talking about “AI” though. Are you implying that you think some day AI might be capable of creativity, and that creativity isn’t strictly a human trait?
I put “AI” in scare quotes specifically because I do not believe we are having an “AI revolution”. These are not AI.
I think AI can exist but that’s not what we have right now. What we have are jumped up algos that can somewhat fake it.
Even those future “real” AIs are going to be taking in human input and regurgitating it back to us. The only difference is that the algorithms processing the data will continue to get better and better. There is not some cutoff where we go from 100% unintelligent chatbot to 100% intelligent AI. It is a gradual spectrum.
I believe a real AI would be able to generate its own inputs without humans to give it input. It would have an actual subjective experience, able to actually imagine new things with zero external inputs. It could experience the redness of the color red.
Except that it’s wrong… AI is capable of creativity. It created the artist name. It’s clearly not a very developed or robust sense of creativity, because it clearly just hashed up the name Hannah Montana, and the song is probably likewise just a hashed-up existing song, but I’m guessing it probably did a better job of creating an original work than Vanilla Ice…
I’m sorry, anyone who says these so-called “AI” are capable of creativity are being hoodwinked by marketing. This is an algorithmic probability engine, it doesn’t think and it doesn’t have an imagination. It just regurgitates probabilistic responses from its large data set.
Can you prove your brain is more than an algorithmic probability engine, albeit a powerful one?
And here come the techbros to dehumanize themselves.
You and I feel. We don’t just generate outputs from inputs, we experience them. The color red isn’t just a datapoint recorded by photoreceptors, it’s a phenomenal experience that “I”, the self, experience as a being-in-the-world. Further, the color red that I experience is not the same as the color red you experience, even though it’s the same color at the same wavelength. Everything we think and feel relates to everything else, and while I can imagine how you might experience the color red and you can provide me with data points to make it easier for me to imagine it, that imagination will always be tainted by my own subjective experience.
To me it looks like you hold a lot of pride in being a human and consider humanity special. I’m here to tell you we are no different from amoebas and giraffes. We just specialize in our complex meat computers.
If you took a psychedelic or a cognitive psychology class, you would understand through feel that feel is just the result of you being a meat calculator. Our feelings are the cumulative result of all the inputs and outputs, all at once. Slap on some lived-experience filters for subjectivity and bam.
Feel is subjective. Not everyone’s a vicious crypto tech bro. Open your mind, it’s a good time ❤️
AIs aren’t meat calculators
I don’t think anyone here said that.
deleted by creator
What I’m saying is LLMs do not actually do that. They’re less creative than most animals, even if they’re more technically capable.
I’m not just a meat calculator, I’m also feedback loop of meat endlessly calculating itself. That’s what subjectivity is. When LLMs do this they hallucinate, and ironically while this is considered undesirable I think that’s actually closer to creativity than the song this AI wrote.
… what do you think imagination is? A gift from God? The probabilities are probably more chaotic, and the data set more biased… but they’re the basic foundation of human imagination.
Machine based “creativity” is nascent, and far less unique… but that doesn’t mean it isn’t a form of creativity.
The human imagination also involves the phenomenal experience. You do not just record the data coming at you and regurgitate it, you experience it and then your experience further changes the data itself. We call this “subjectivity” and it’s where creativity comes from.
I am not saying that machine creativity is impossible. What I’m saying is these LLMs are not creative because they don’t even know what they’re doing and they don’t even know “they” are doing it. There’s no “there” there. No more creative than rolling dice.
and experience is ongoing learning, so if an LLM were training on things after the pretraining period then that’d allow it to be creative in your definition?
but in that case, what’s the difference between doing that all at once, and doing it over a period of time?
experience is just tweaking your neurons to make new/different connections
This. Humans are just meat calculators when you zoom out.
Experience is ongoing learning through the subjective self. When you experience the color red you do not just record it with your photoreceptors, and your experience of the color red is different from mine because we don’t just record wavelengths of light. We don’t just continue to learn from continual exposure to new data, we also continue to learn from generating our own data. In this way our subjective experience is qualitative, not simply quantitative. I don’t just see the specific light wavelengths, I experience the “redness” of red.
When LLM is trained on that kind of data it just starts to hallucinate. This is promising! I think the hallucination phenomenon is actually a precursor to creativity and gives us great insights into the nature of subjective experience. In a sense, my phenomenal experience of the color red is actually much like a hallucination where I am also able to experience the color’s “warmth” and “boldness”. Subjectivity.
it’s only qualitative because we don’t understand it
when an LLM “experiences” new data via training, that’s subjective too: it works its way through the network in a manner that depends on what came before it… if different training data came before it, the network would look different, and the new data would change the network as a whole in a different way
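That order-dependence is easy to show with a toy online learner: a single weight standing in for a whole network (the update rule here is invented for illustration).

```python
def online_update(weight, examples, lr=0.5):
    # One-weight "network": each example nudges the weight toward itself.
    for x in examples:
        weight += lr * (x - weight)
    return weight

# The same training examples in a different order leave a different network:
a = online_update(0.0, [1.0, 0.0, 1.0])  # -> 0.625
b = online_update(0.0, [1.0, 1.0, 0.0])  # -> 0.375
```

Same data, different history, different final network, which is the sense in which each model’s “experience” of its training is particular to it.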
When an LLM feeds on its own outputs, though, it quickly starts to hallucinate. I think this is actually closer to creativity, but it betrays the fundamental flaw behind the technology - it does not think about its own thoughts and requires a curator to help it create.
I’ll believe something is an AI when it can be its own curator and not drive itself insane.
The same could be said of a lot of creatives. You speak of greater creativity, that which evokes depth and gravity. There is still more shallow creativity. Learning creativity. That which you do before you learn to do better. Kind of what these are doing.
I’m not saying it’s good or bad, though the people who hold the reins definitely don’t have the best intentions for their use, but underestimating it is the first step to allowing them to run rampant.
“Never attribute to malice that which you can attribute to stupidity” is the slogan of those who do nothing but look down on others… who underestimate the horrible things the “stupid” can do. Don’t assume stupidity just because you don’t like something. It makes it that much easier for it to bite you on the ass in the future.
I don’t think I’d actually call that shallow thought “creativity”.
Think of a word association game. I don’t think the first word that pops up in my head is creative at all, it’s just a thoughtless reaction.
That’s what LLMs are doing. Without that reflection and depth it’s just a direct input->output
Would you say that a random name generator is a creative algorithm?
That’s a hella skimpy example, but yes.
Your opinion is wrong.
deleted by creator
I’m so sorry you feel that way.