silence7@slrpnk.net to Technology@lemmy.world · English · 3 months ago
When A.I.'s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it's increasingly likely to be ingested by future A.I., leading to worse results. (www.nytimes.com)
cross-posted to: [email protected]
daniskarma@lemmy.dbzer0.com · 3 months ago
If AI feedback starts going the other way around, we should be REALLY scared. Imagine it became sentient and superintelligent and read everything we're saying about it.
doodledup@lemmy.world · 3 months ago
This is completely unrelated. Besides, how does AI suddenly become sentient?
It was a joke.

leftzero@lemmynsfw.com · 3 months ago
LLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).