- cross-posted to:
- becomeme
People Are Increasingly Worried AI Will Make Daily Life Worse: A Pew survey finds that a majority of Americans are more concerned than excited about the impact of artificial intelligence—adding weight to calls for more regulation.
Not really. It hallucinates so much I don’t use it for factual information. It has massive glaring issues in applications like driverless cars. I suppose that applications like a driverless train would be nice but it’s not something I expect anytime soon. I suspect I’ll be told to like it when it tries to get me to consume more.
Maybe better AI in video games will be nice.
Maybe I’ve just become a cranky old lady, but while I can acknowledge actual theoretical value in it, when I hear AI hype it feels at worst like listening to crypto bros, and at best like listening to an executive telling me I need to implement lean manufacturing while plugging their ears when I want to discuss the costs and risks.
Are you only thinking of current LLMs and not expecting them to improve?
I first rode a train without a driver about twenty-five years ago, so I think you’re a little behind on that one. They have pilotless planes too; there’s a lot of clever stuff going on.
I totally get that feeling that everything exists to make you consume more but what if an AI could help you consume less and more healthily? If it could reduce waste by using more efficient ways of doing things? If it could give you access to better things at a lower price and with less manufacturing related environmental issues?
What if it could sum up all the information on a product you need to buy, like saying ‘there are 3674 adverts for proprietary models, however consumer testing demonstrates one of these cheaper open source models would be more effective for your needs…’
If it could actually give you the information you need and filter out all the advertising junk?
If there were evidence AI was heading in that direction at all, that this direction was where society wanted to move AI, and that there was an understanding that we absolutely aren’t there yet… I’d be significantly more optimistic.
My problem is that currently, machine learning and expert systems are being implemented quietly by a number of companies, at best to improve their own commercial offerings and at worst to cut their human-staffed support teams to ribbons. Nearly everyone can relate to the frustration of seeking support from an automated system instead of a human. Those situations have continued to get worse, not better, as this tech has grown.
Additionally, thanks to how convincing LLMs are at appearing intelligent, they’ve become a fad rather than being evaluated and appreciated for what they actually are. There are countless startups now just trying to cash in on the hype by using the ChatGPT API to offer products that shove GPT at all sorts of entirely unsuitable use cases.
Lastly, there are a good many issues with the currently most popular AI tech, LLMs, that the industry appears to have no intention of addressing in good faith: the complete disdain for copyright, IP, or even fair use when it comes to the data the models have been trained on; the recent articles stating that removing material from a dataset would require effectively rebuilding the LLM; the lack of any methodology for citing true sources in responses; the lack of reproducibility of responses; and the lack of any auditability of these systems, because that would jeopardize the “secret sauce” or is simply impossible on a technical level. And when most people raise this they get shouted down by the “true believers” as just not understanding the technology, rather than getting any attempt at discussion in good faith. If you have concerns you’re either stupid or against technological advancement. Don’t you see all the good this could potentially do in the future, even though it isn’t doing it yet?
I would love the kind of trustworthy, helpful digital assistant it sounds like you’re describing. I’ve wanted that technology for well over a decade. We’re just not there yet.
That sounds really nice, and here we get to the root of my issue: I don’t think that’s what will happen. I’m not saying to ban the stuff or anything, but when I see how it’s being sold to investors, I’m not seeing reasonable and achievable plans of action that benefit everyone. I’m seeing gimmicks, ads, and moonshots, all while the dishonest are getting a lot out of it. At its most effective, I see it being a means to increase the power of the capital-holding class, because that’s who’s investing in it, and I don’t think training such things will get cheaper.
And yes, I expect them to improve, but I’m also concerned with methodological failures. I’m not saying it’ll never make life better, but right now in 2023 I’m not impressed by what I’m seeing. And that’s before I get into the tendency for trends like this to blind policy makers and business leaders. Hyperloop was sold as being for autonomous vehicles and was specifically designed not to be cheaply convertible to a known better solution. The whole fucking cloud computing craze comes to mind as well.
I will cede one thing here though. I do think it has a lot of room for use as one of many engineering tools to help with the design process. Being able to directly compare against known optimization methods is always going to be useful, and if it can automatically plug a layout or process into a model, that would be nice. I don’t know if I expect that to work as well as everyone seems to think, though.
I guess I just don’t trust the tech industry anymore. When I see something like LLMs it seems gimmicky as hell, and a lot of early adoption is either minor or harmful. I see driverless cars getting priority over public transit over and over, despite the fact that they’ve been 5 years away since I was a kid. I see people talking about using AI to fight climate change from the same people who won’t quit meat. Meanwhile surveillance increases, wages stay stagnant, and the world keeps getting hotter. Contrary to how I sound, I love technology. I’m an engineer for a reason. But there are just so many reasons to feel skeptical of it. So yeah, enjoy your hype. If it winds up useful for someone like me, I’ll try it. But I’m not buying into the hype, and I’ll be skeptical until I start seeing actual results.
Ha yeah, I agree on almost all of that — there’s one thing I disagree with, but yes, people who pretend to care about the environment while eating meat are annoying, and scammers pushing their big money-making ideas in our faces nonstop is infuriating. Honestly it’s the same with gardening: I get endless bullshit adverts for garden gadgets that do nothing but make the job harder. Trying to trick people into giving you money is the culture we live in.
What I disagree with is that only the rich are getting access to this; most of the actually important stuff is open source. I’m not just talking about how Adobe’s image gen is trash compared to a well set up SD — the knowledge of how to train and the tools to build NNs are all open source. The cost of training is high, but the cost of writing Wikipedia would have been astronomical if it were written by paid staff. ChatGPT cost about ten million to train using current technology, which is a lot of money, but the pet toy market is $7.5 billion annually and video game content revenue is fifty billion a year. As things progress, training will get cheaper and more community projects will get made; hopefully we’ll see people learn to support organisations that contribute to the commons rather than create walled gardens.
AI design tools are going to make things incredibly easy for people like me who design 3d printable things and share them on thingiverse. That alone will undermine a lot of shitty corporate monopolies and help change the structure of society for the better — imagine being able to just ask your computer to find a template for an item you need, describe how you want it customised, have the AI sort out all the strength and materials stuff, and then print it or farm the job out locally.
An AI that knows the content of a billion adverts, but also the little things posted in random corners of the internet that do exactly what you need and don’t come with any bullshit — it could be what we need to cut through the nonsense that floods us.
But yeah, I’m not asking you to like Sam Altman or any of those techbro Silicon Valley capitalist cultists — we need open source and free AI, for the people, by the people.