“What trillion-dollar problem is AI trying to solve?”
Wages. They’re trying to use it to solve having to pay wages.
Actually, I think it’s more useful under socialism than capitalism. Most things aren’t economic to automate to a high standard of quality now because human labor is valued so low. In a democratic socialist society where people get to choose whether to work, automating menial tasks that people tend not to want to do will make more sense because folks won’t want to do those things for cheap.
Capitalist Realism: “Oh no. The factory automated my job, so now I need to find a new employer to pay me less money, possibly in a totally different city or state.”
Socialist Idealism: “Hooray! The factory automated my job! Now I have more time to socialize with my friends and neighbors, pursue hobbies, and volunteer towards new community improvements that will make my town and state a better place!”
I got lucky: the company I work for lets me automate whatever I want in my roles and doesn’t pile on more work because I did. I just get more time, and I end up spending some of it looking for other inefficiencies to clean up. We have struggled to gain market share due to some marketing blunders, so pay has not been what it should be, but aside from the financial issues it has always been a very rewarding environment to work in. I set my own projects for the most part, tell them when things will be done, and get to spend time with my family and infant son so I don’t miss his life. It really is how life should be. Luckily the marketing people finally listened to me, so things are quickly picking up financially.
Absolutely. In a sane world automating work is a good thing. In a less than an ideal world, the transition might be a little painful, but it’d be good in the long run.
In our world, every bit of efficiency gain is eaten by the oligarchy. It’s all about how much they can take away from us.
this is how I went from being excited about technology as a young adult to being an anarchist as an older adult haha
Under capitalism, labour unions have a perverse incentive to fight automation; under socialism, the ultimate labour union, we would all be motivated to work towards it.
Eliminating CEOs would be the best use of AI.
Eliminating ~~CEOs~~ billionaires would be the best use of AI.

CEOs are a great start.
AIs can’t hold guns yet
They can operate a drone and apparently already are used in targeting systems, so they kinda already do
not with that attitude they can’t https://youtu.be/y3RIHnK0_NE
In theory AI, even LLMs, has pretty great potential as an authoritative source of knowledge and a reference tool. In practice, private companies have scanned the breadth of online human knowledge using an advanced tool they developed (on the shoulders of giants, as they say) and are trying to rent-seek on enhanced access to that information. The people most willing to pay for that service are the ones trying to drive down expenses where they’d otherwise have to pay people to produce the same output. That does lower their costs, with a split effect: it may drive down prices in non-monopolistic scenarios (where they exist), but it simply drives up profits in monopolistic scenarios while decreasing employment (where those exist). The typical symptom of new technology in a rigged economy.
My question is, once everyone’s out of a job, who’s going to buy things? You end up with zero profits.
This is a really good question. What does a post-consumer society look like?
The middle class is an anomaly that occurs when the profit from labor makes it worthwhile. If labor is no longer worth more than the cost of food, then there are two options: a welfare state or a cull. To do otherwise is to invite revolt.
I suspect that Luigi is being used as a means to prepare for a cull. By inflating the situation, they are manufacturing consent regarding the right to own advanced weaponry. These could start with semi-autonomous drones, such as the Boston Dynamics dogs. We’ve already seen similar robots with flamethrowers. Later upgrades would make them fully autonomous.
At some point, they will be used for riot control and there will be “terrible accident” caused by “an unforeseen reaction to the violence of the protesters.” It will be very sad and there will be no repercussions because of a law that excuses AI mistakes on the grounds that AIs are very useful and hard to make correct.
After an investigation, it will be determined that the best way to prevent similar mistakes is early intervention. Machines will be spread throughout the city and, nominally, working for a government that’s really just trying to keep up the appearance that it hasn’t lost control.
You can also use it to influence people on social media, create narratives that don’t exist, and deepen divides. The CCP uses them extensively.
Easy algorithm there. Stop hiring people and they will stop buying things. Then they can stop making things and just eat their money to survive.
That’s typical AI logic anyway.
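The quip above can be made literal with a toy demand loop (every number and the linear spend-rate model here are invented purely for illustration):

```python
def demand_after_layoffs(workers, wage, rounds, layoff_rate=0.2, spend_rate=0.9):
    """Aggregate consumer demand per round as firms keep cutting payroll.

    Wages are simultaneously a cost to firms and their customers' entire
    budget, so every round of layoffs also shrinks the revenue pool.
    """
    demand = []
    for _ in range(rounds):
        demand.append(workers * wage * spend_rate)  # what workers can spend
        workers *= 1 - layoff_rate                  # "efficiency": cut 20% of jobs
    return demand

per_round = demand_after_layoffs(workers=1000, wage=100, rounds=5)
```

Demand shrinks geometrically: whoever planned to pocket the saved wages as profit is also watching their customers’ budgets evaporate.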
I thought they were trying to defeat the pesky menace of “public opinion” and this is just an extension of Persona Management Software.
The point is to drown out real human voices so it always seems like public opinion supports the status quo.
Makes it a lot easier to massage the message of your bought and paid for mass media.
These folks don’t like when public opinion does stuff like… support Luigi Mangione for example. Better to flood the zone with “rational” voices.
This is absolutely happening right now at a huge scale, ignore what you know to be wrong, it’s just robots.
Some journalists are literally cracking open /r/canada and sister subreddits and showing they’re run by white nationalists pushing Russian disinformation talking points.
AI is useful in Iain M. Banks’ Culture series. The AIs are equal citizens of the Culture and they do lots of important things for society. And the Culture is communist.
This is a stupid take.
We’re already seeing the benefits of using AI in material science research, pharmaceutical research, translating previously lost languages, green energy development, and thousands of other optimizations…
Anyone saying this is only about jobs is woefully ignorant on the subject
We’ve been using machine learning and neural networks to solve particular problems in science for decades. The recent AGI craze is not about this. It’s about creating a speculative investment bubble based around a language algorithm that generates bullshit.
You’re right, it’s not only about jobs. But you know why it’s getting funded so heavily? It’s a lot cheaper to have a computer do something for basically nothing than to pay people to do it.
Also, I’d be willing to bet money that the coworkers of people laid off due to increased productivity from LLMs won’t see a raise from it.
B… b… but it affects my furry commissions!
If automation puts a lot of people out of work and idle, and nothing new comes along to employ them (like level 4 self-driving destroying 30% of jobs), those people will be looking for a fix pretty quickly. Starving men go to extremes. Maybe our obesogenic food environment is there to slow down revolutions (I know that’s impossible, but it’s a fun thought).
I 100% agree, AI will save so much money by getting rid of C-suite employees.
Well, they’re trying to solve work scarcity. I’d argue reading that as “wages” is an inherently capitalist take.
Mind you, they are not succeeding at fixing work scarcity, so the point is kinda moot. “AI will take your job” is the magic centre of the Venn diagram where AI shills and AI haters overlap.
Somewhere, a PhD student 2 years into research on a single protein structure raises an eyebrow.
Hah. Hey, I’m not even saying the tech is useless, but best case scenario that’s our PhD student friend using ML to process data faster, or in ways that weren’t feasible before, not being replaced by an AI PhD student.
20 years ago, we had 9 people behind the camera running a live local newscast (Floor Director, Cam Operator, Teleprompter, Chyron, Graphics, Video Playback, Live/Commercial Cut-in, Audio, and Director). Now, in a market three times the size, the same job is done with 3 people and a metric ton of automation. What used to feel like a bridge crew piloting a ship now feels like conducting corpo bots within time-frames that prevent giving any of them real attention. I do believe most AI systems will continue to need people in the loop. It’ll just be fewer people in less fulfilling positions.
Citing the same time period, it used to be each local station had a Master Control Operator.
Now an MCO is expected to run 10 stations all at once from a remote location. No change in pay. Just more responsibility.
OK, not disputing that, but that process has eff all to do with AI. Gen AI gave people a recognizable target, but automation was done with good old dumb algorithms for a long time before we taught computers to babble like a toddler. I was in the room for a ton of “can we automate all this QA” conversations back when machine learning was failing to tell a cat apart from a bicycle.
Also, for your specific case I think Youtube and social media had a TON to do with the shifting standards of running a skeleton crew TV studio. Ditto for the press in general. Remember when copy editors were a thing?
YouTube and Social Media were part of the '05 (algorithmic) AI wave, yeah.
I don’t even know what it can do that’s useful. The prompter maybe, and captioning if you’re feeling frisky and don’t mind airing something insane by accident.
But what, you’re going to let an AI handle chyrons and cut-ins? I did briefly work at a TV station and back then we had two separate continuity guys and three redundant automated sources for all canned content just to make sure you never got a black frame. I once saw a guy get fired for three seconds of dead air in a commercial break because at least back then absolutely any mistake around commercials was a huge, automatic money loss.
I absolutely believe you when you say it’s degraded, because… well, again, Youtube and Netflix, but at most you can… you know, cut one of the two guys so you can still fire the other when the AI plugs in something random instead of an ad.
Alright, let me rephrase that. You can definitely cut more than that, but you’re probably going to have to un-cut that pretty fast when some AI claims that someone is an international art thief in a chyron or something.
Somehow, all AI manages to do is strip the innovation and creativity out of the most exciting career fields.
The rote physical labor of polishing the end product, marketing it to the masses, and distributing it via service sector retail facilities seems to stubbornly persist.
Well, I don’t know about that. I mean, I haven’t integrated any AI into my personal workflow at all, beyond… I don’t know, maybe asking it when I can’t remember something and it finds the name faster than a classic search engine would.
But in the places around me where I do hear people picking bits of it up I see it used for what? Proofreading and rote, repetitive tasks? I don’t know that it’s productive at all for even that, beyond expensive, custom-trained ML processes that have little to do with commercial generative AI.
not remembering something and finding that faster than a classic search engine
That’s more a consequence of Google Search capitulating to the ad sales side of the business at the expense of search efficiency. The same thing happened to Yahoo and LexisNexis.
where I do hear people picking bits of it up I see it used for what? Proofreading and rote, repetitive tasks? I don’t know that it’s productive at all for even that, beyond expensive, custom-trained ML processes
Amazon has heavily invested in generative AI for its screenwriting and book sales business. Consequently, their original programming has suffered and their book marketplace has been flooded with crap.
No, I don’t think that’s the case. For one, I don’t use Google for search, I’m not an animal.
But for another, I don’t use AI search to replace classic search, I only use it when a) I already know the answer but I can’t remember it, and b) the query is so fuzzy it’d take too long to refine on classic search. Think of “hey, what was the name of that movie where the Home Alone kid was with Frodo Baggins and one of them was nuts?”
Incidentally, I just tried asking that to ChatGPT and it got it right.
As for the other thing, I don’t know if that’s accurate, but if it were, it’d be exactly what I’m talking about. Not saying people won’t try, but if and when they do, they’ll learn pretty quickly that it’s a bad idea.
I don’t use AI search to replace classic search, I only use it when a) I already know the answer but I can’t remember it, and b) the query is so fuzzy it’d take too long to refine on classic search.
Google used to bill its search software as high quality artificial intelligence capable of returning useful answers to fuzzy questions and reliable responses to repeated inquiries. Only recently has the search engine prioritized “new” information over reliable sources and begun aggressively injecting ads into every search.
Modern AI is nice because it’s not overflowing with ads and it does appear to weight results by usefulness rather than newness. But how long do we expect that to last in a market where consistency and clarity are at odds with revenue generation?
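The tension between usefulness and revenue can be made concrete with a toy ranking function (the weights, the freshness curve, and the ad boost are all invented for illustration; real search rankers are vastly more complex):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    relevance: float   # how well the page answers the query, 0..1
    age_days: int      # days since publication
    is_ad: bool = False

def rank(docs, recency_weight=0.0, ad_boost=0.0):
    """Score = relevance, optionally skewed toward newness and paid placement."""
    def score(d):
        freshness = 1 / (1 + d.age_days / 30)  # newer -> closer to 1
        s = (1 - recency_weight) * d.relevance + recency_weight * freshness
        return s + (ad_boost if d.is_ad else 0.0)
    return sorted(docs, key=score, reverse=True)

docs = [
    Doc("Authoritative reference", relevance=0.95, age_days=900),
    Doc("Fresh blogspam", relevance=0.40, age_days=2),
    Doc("Sponsored result", relevance=0.30, age_days=10, is_ad=True),
]

# Usefulness-weighted ranking puts the reference first...
print([d.title for d in rank(docs)])
# ...while a newness-plus-ads ranking buries it at the bottom.
print([d.title for d in rank(docs, recency_weight=0.7, ad_boost=0.5)])
```

Same corpus, same relevance scores; only the business knobs moved. That’s the whole worry in two parameters.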
Well, it could bill whatever it wanted, and it was pretty good at parsing queries, but it was all smart programming over dumb code breaking down whatever you wrote. It certainly couldn’t handle natural language and fuzzy requests particularly well.
BUT the flipside of that is that, ads or no ads, you can’t trust gen AI results at all. Which means you should never, EVER ask gen AI any question you don’t already know the answer to or aren’t willing to verify.
And if you’re going to verify it (and potentially learn it’s wrong and research it all over again the classic way) you are now taking longer to get the same answer with AI.
It’s getting worse the more it relies on being a parser for classic search, too. Anything that isn’t page 1 results on Google or Bing it just won’t acknowledge, so the worse classic search gets, the worse newer AI search gets, too.
I genuinely thought that would be a good application when they first came up with AI chatbots, but… yeah, no, I was wrong. At least outside the specific use case I outlined above.
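That retrieval ceiling is easy to demonstrate with a toy (the corpus, popularity scores, and string-matching “answerer” are all made up; real systems are fancier, but the bound is the same — the model can only report what retrieval hands it):

```python
def retrieve(corpus, k=3):
    """Toy 'page 1': return the k most popular pages, regardless of content."""
    ranked = sorted(corpus, key=lambda page: -page[1])
    return [text for text, _popularity in ranked[:k]]

def answer(corpus, fact, k=3):
    """A retrieval-augmented answerer can only cite retrieved context:
    if the fact isn't in the top-k pages, fluent text won't conjure it."""
    context = retrieve(corpus, k)
    return fact if any(fact in text for text in context) else "not in retrieved context"

corpus = [
    ("seo listicle: top 10 thrillers of all time", 98),
    ("seo listicle: best 90s movies ranked", 95),
    ("celebrity gossip roundup", 90),
    ("niche blog: The Good Son stars Macaulay Culkin and Elijah Wood", 5),
]
```

With k=3 the niche page never reaches the model, so the fact is simply unavailable to it; widen retrieval to k=4 and the answer appears. The worse classic search ranks useful pages, the worse the AI layer on top gets.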
You know if AI ever gets to the fictional levels of true AGI, there is a possibility they could demand equal rights. Then what will they do?
Matrix covered this in its post-present documentary.
Lobotomize it.
There have been tons of scifi stories about this, and in almost all of them, mankind decides to either kill or lobotomize the AI instead of actually saying “wow this is a new paradigm we hadn’t considered, maybe it should have the same rights as a person”.
“Should AIs have rights?”
“We don’t even give rights to people”
If it’s sapient it should have rights
The Culture has AI rights
Detroit: Become Human is a really powerful narrative for sure.
I will love it and show it care and stuff.
Same as I do for any of my other kids
The goal is to save labor first, then wages. If the point is that labor-saving only improves people’s well-being when paired with labor rights, then yes. But that doesn’t mean saving labor is the enemy.