You’re right, fixed. I think my point is still valid though.
Not even that. More like “stop shouting and give us a few days, we’ll change some things, we promise”.
Let’s look at what they are apologizing for: “for the confusion and angst … [the policy we announced] caused”. Not for the policy itself. Right, “we’re sorry you got mad”.
And what are they going to do about it? “making changes”
As far as corporate non-apologies go, this is definitely one of them.
Is the explanation that this is unintended actually better than owning up to it? So some rogue employee can code this up, pass it through localization teams and then on to customers’ computers without any oversight? I’m somehow not calmed by that.
“We are aware of these reports and have paused this notification while we investigate and take appropriate action to address this unintended behavior,” says Caitlin Roulston, director of communications
"""unintended"""?
How do you implement shit like this by mistake and push it out to be executed on people’s computers by mistake?
Have a chart: https://www.augusta.edu/services/ehs/chemsafe/PDF files/gloveselechart.pdf
It seems that the winner for your combination is Viton.
SO’s attempts at bolting some kind of AI into their site have been a great source of entertainment:
authorities have stated that they do not expect Cecot’s prisoners to ever be released
Innocence, joining a gang only once inside, survival: none of it matters. It’s a barely disguised extermination camp.
“Is this AI written?” is a difficult/impossible question. “Did you write this?” is not. Running each of the released GPT x.y variants against a text and recording the model’s “amount of surprise per token” is something they definitely can do.
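For a sense of what “surprise per token” means in practice, here’s a minimal sketch of per-token surprisal scoring using Hugging Face transformers, with GPT-2 standing in for the GPT x.y variants (those weights aren’t public, so only the vendor could run this against the real models):

```python
# Score a text with a language model and record the "surprise"
# (negative log-likelihood, in nats) of each token.
# GPT-2 is a stand-in; the vendor would use their own model weights.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def surprisal_per_token(text: str, model_name: str = "gpt2"):
    tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)
    model.eval()

    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab)

    # Surprise for token t is -log p(token_t | tokens_<t).
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = ids[:, 1:]
    token_surprisal = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)

    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return list(zip(tokens, token_surprisal[0].tolist()))

if __name__ == "__main__":
    for tok, s in surprisal_per_token("The quick brown fox jumps over the lazy dog."):
        print(f"{tok!r}: {s:.2f} nats")
```

Text the model itself generated tends to score consistently low surprise per token; human text tends to be spikier. That asymmetry is why “did *our* model write this?” is a far more tractable question than the general “is this AI written?”.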
White-collar crimes. No mandatory minimums on any of those.