I can’t imagine who would hire him. He fucked Unity badly.
AI bots never had rights to waive. Their work is not their work.
This is only partially true. In the US (which tends to set the tone on copyright, but other jurisdictions will weigh in over time) generative AI cannot be considered an “author.” That doesn’t mean that other forms of rights don’t apply to AI generated works (for example, AI generated works may be treated as trade secrets and probably will be accepted for trademark purposes).
Also, all of the usual transformations which can take work from the public domain and result in a new copyrightable derivative also apply.
This is a much more complex issue than just, “AI bots never had rights to waive.”
Or at least require a decent font.
Artists, construction workers, administrative clerks, police and video game developers all develop their neural networks in the same way, a process that artificial neural networks (ANNs) simulate.
This is not “foreign to most artists”; it’s just that most artists have no idea what the mechanism of learning is.
The method by which you provide input to the network for training isn’t the same thing as learning.
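To illustrate the distinction, here is a minimal toy sketch (a hypothetical single-neuron example, not anything from the thread): presenting an input to the network is just a forward pass, while learning is the separate weight-update step driven by the error.

```python
# Toy gradient-descent example: "learning" in an ANN is the weight update
# driven by error, not the act of feeding the network input.
def train_step(w, b, x, y, lr=0.1):
    """One gradient-descent step for a single linear neuron."""
    pred = w * x + b              # presenting the input: a forward pass, no learning yet
    error = pred - y
    # learning happens here: the weights change in response to the error
    w -= lr * error * x
    b -= lr * error
    return w, b

w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=2.0, y=6.0)
# after many updates, w * 2 + b approximates the target 6
```

The forward pass alone leaves `w` and `b` untouched; only the update lines constitute learning, which is the point being made above.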
Problem is, their “experiment” is resulting in the return of previously eradicated diseases.
There are valid concerns with regard to bidet use. They do produce aerosolized particulates in greater numbers than wiping does, which means you are literally breathing more feces.
Is it enough to be problematic? Probably not, but that may also depend on how aggressively/frequently you use them.
AI/LLMs can train on whatever they want, but when these LLMs are then used commercially to make money, an argument can be made that the copyrighted material has been used in a money-making endeavour.
And does this apply equally to all artists who have seen any of my work? Can I start charging all artists born after 1990 for training their neural networks on my work?
Learning is not and has never been considered a financial transaction.
As someone who has worked extensively with the homeless, I’ve seen quite a few cases where supposedly anti-homeless takes were actually attempts to inject more nuance into the discussion than simple pro- or anti-homeless stances, both of which are practically meaningless positions.
Looking over their concerns, I’m not sure that they have a leg to stand on. The claim they’re making is that they’ve measured an increase in hate-related tweets (I’ll take them at their word on this) and then they associate this with Musk taking over.
They present no evidence for this latter claim and, as far as I can see, make no attempt to compare against increases in hate on other social media platforms.
Grooming, for example, is one topic they covered. But this is a topic that Republicans have been pushing increasingly as election season spins up. Musk didn’t cause that, and that kind of nonsense can be found on Facebook and reddit as well.
I’m inclined to sympathize with an underdog nonprofit, but in this case I just can’t see why they expected not to get pushback on such poorly grounded claims.
I wouldn’t say obsolete because that implies it’s not really used anymore.
I’m not sure where you heard someone use the word “obsolete” that way, but I assure you that there are thousands if not millions of examples of obsolete technologies in constant and everyday use.
In an effort to have a smooth and quick transition to this new infrastructure, we will migrate chat messages sent from January 1, 2023 onward. This change will be effective starting June 30th.
It really seems like everything reddit is doing is rushed and always chooses to harm the users as a default. It’s as if they’re actively sabotaging their own platform.
Yeah, this is important. Make it a really big number too so that I have to change my password lots of times in a row in order to put it back to what it was. ;)
It MIGHT not be as bad as you think. If the UI was just terrible at communicating and what it actually meant was, “that password is in our database of known compromised passwords,” then that would be reasonable. Google does this now too, but I think they only do it after the fact (e.g. you get a warning that your password is in a database of compromised passwords).
Fun fact: password controls like this have been obsolete since 2020. Standards that guide password management now focus on password length and external security features (like 2FA and robust password hashing for storage) rather than on the individual characters in passwords.
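As a rough sketch of what that modern guidance looks like in practice (the blocklist here is a tiny stand-in for a real breached-password corpus, and the function name is my own invention): enforce a minimum length and reject known-compromised passwords, with no character-composition rules at all.

```python
# Hedged sketch of a modern (NIST SP 800-63B-style) password check:
# minimum length plus a breached-password blocklist, instead of
# "must contain a digit and a symbol"-type composition rules.
BREACHED = {"password", "123456", "qwerty", "letmein"}  # stand-in for a real breach corpus

def password_acceptable(pw: str) -> bool:
    if len(pw) < 8:                # length matters, character classes don't
        return False
    if pw.lower() in BREACHED:     # reject known-compromised passwords
        return False
    return True

print(password_acceptable("correct horse battery staple"))  # True: long, not breached
print(password_acceptable("P@ss1!"))                        # False: short despite the symbols
```

A long all-lowercase passphrase passes while a short symbol-laden password fails, which is exactly the inversion of the old composition-rule approach.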
They cannot be anything other than stochastic parrots because that is all the technology allows them to be.
Are you referring to humans or AI? I’m not sure you’re wrong about humans…
Clearly the Founding Fathers were not advanced enough to have crafted the US Constitution unaided.
In a sense you are correct. They cribbed from lots of the most well known political philosophers at the time. For example, there are direct quotes from Locke in the Declaration and his influence over the Constitution can be felt clearly.
What you are describing is true of older LLMs. It’s less true of GPT4. GPT5, or whatever it is they are training now, will likely begin to shed these issues.
The shocking thing we discovered that led to all of this is that this sort of LLM continues to scale in capability with the quality and size of the training set. AI researchers were convinced that this was not possible until GPT proved that it was.
So the idea that you can look at the limitations of the current generation of LLM and make blanket statements about the limitations of all future generations is demonstrably flawed.
That’s not what Popper is talking about. He’s talking about maintaining the option to be intolerant of the act of intolerance, not of people.