content warning: Zack Davis. so of course this is merely the intro to Zack’s unquenchable outrage at Yudkowsky using the pronouns that someone wants to be called by
This “Gettier” attack seems to me to have no more interesting content than a “stopped clock”. To use an extremely similar, extremely common phrase, the New York Times would have been “right for the wrong reasons” to call Scott Alexander a racist. And this would be conceptually identical to pointing out that, I dunno, crazed conspiracy theorists suggested before he was caught that Jeffrey Epstein was part of an extensive paedophile network.
But we see this happen all the time; in fact, it's such a key building block of our daily experience that we have at least two cliches devoted to capturing it.
Perhaps it would be interesting if we could pick out authentic Gettier cases which are also accusations of some kind, but it seems likely that in any case (i.e. all cases) where an accusation is levelled on complex evidence, the justification fails to be of the very kind that would generate a Gettier case. Gettier cases cease to function like Gettier cases when there is a swathe of evidence to be assessed, because our sense of justification is already partial and difficult to target with the precision characteristic of unexpected failure; such cases turn out to be just "stopped clocks". The sense of counter-intuitiveness here seems mostly to be generated by the convoluted grammar of your summarising assessment, but that is just bare recursivity, since you're applying the language of the post to the post itself.
I don’t think it’s counter-intuitive and the post itself never mentioned ‘epistemic luck’.
This seems easy enough to construct: just base an accusation on a Gettier case. So, in the case of the stopped clock, say we had an appointment at 6:00, and because my watch is stopped at 7:00 I believe it's 7:00; as it so happens, it actually is 7:00. When I accuse you of being an hour late, it is a "Gettier attack": a true accusation, but not one based on knowledge, because it rests on a Gettier case.
I suppose I must be confused: you said the piece was interesting just because it made you think about the phrase "Gettier attack"?
It made me think of epistemic luck in the rat-sphere in general; his inventing and then immediately fumbling 'Gettier attack' is just such a perfect example, but there are other examples in there, such as Yud saying:
Which @200fifty points out:
I suppose I get it, although I’m still a bit unsure how these examples count as “epistemic luck”
Zack thought the Times had all the justification they needed (for a Gettier case), since he thought they 1) didn't have a good justification but 2) also didn't need one. He was wrong about the second assumption (they did need a good justification), but also wrong about the first (they did have one), so the two errors cancelled each other out, and his conclusion that 'they have all the justification they need' is correct through epistemic luck.
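That cancellation can be sketched as a toy boolean model (the function name and the encoding of "has all the justification they need" are mine, not from the post):

```python
def justified_enough(has_justification: bool, needs_justification: bool) -> bool:
    """'They have all the justification they need': true if they have it,
    or if they don't need it in the first place."""
    return has_justification or not needs_justification

# Zack's two (both mistaken) premises about the Times:
zacks_conclusion = justified_enough(False, False)   # believes: don't have it, don't need it

# The actual situation:
actual_conclusion = justified_enough(True, True)    # reality: do have it, do need it

# Both premises wrong, yet the conclusions agree: the errors cancel.
assert zacks_conclusion == actual_conclusion == True
```

Flip only one of the two inputs and the conclusions come apart, which is what makes the agreement luck rather than knowledge.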
The strongest possible argument supports the right conclusion. Yud thought he could just dream up the strongest arguments and didn’t need to consult the literature to reach the right conclusion. Dreaming up arguments is not going to give you the strongest arguments, while consulting the literature will. However, one of the weaker arguments he dreamt up just so happened to also support the right conclusion, so he got the right answer through epistemic luck.
Ooooh, I get it for Yudkowsky now; I thought you were targeting something else in his comment. On Davis I remain a bit confused, because previously you seemed to be saying that his epistemic luck lay in having come up with the term, but that cannot be an example of epistemic luck, because there is nothing (relevantly) epistemic in coming up with a term.
No no, not the term (my comment is about how he got his own term wrong), just his reasoning. If you make a lot of reasoning errors, but two faulty premises cancel each other out, and you write, say, 17,000 words or sequences of hundreds of blog posts, then you're going to stumble into the right conclusion from time to time. (It might be fun to model this mathematically: can you err your way into being unerring? Unfortunately, in reality-land the number of premises an argument needs varies wildly.)
If I had to pick a mathematical model, it’d be a drunken walk.
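Under the (strong, simplifying) assumption that each faulty premise independently flips the conclusion's truth value, the drunken walk is easy to simulate: the conclusion comes out right exactly when an even number of premises are faulty, and piling on premises drives the success rate toward a coin flip.

```python
import random

def luckily_right(n_premises: int, p_error: float, rng: random.Random) -> bool:
    """One argument: each faulty premise flips the conclusion's truth value,
    so the conclusion is correct iff an even number of premises are faulty,
    i.e. the errors cancel out in pairs."""
    flips = sum(rng.random() < p_error for _ in range(n_premises))
    return flips % 2 == 0

def rate_right(n_premises: int, p_error: float, trials: int = 50_000) -> float:
    """Estimate how often you stumble into the right conclusion."""
    rng = random.Random(0)
    return sum(luckily_right(n_premises, p_error, rng) for _ in range(trials)) / trials

# Closed form: P(right) = (1 + (1 - 2*p_error)**n_premises) / 2, which
# tends to 1/2 as n_premises grows: you cannot err your way into being
# unerring, you can only random-walk your way down to a coin flip.
```

With few premises and a low error rate you are usually right for the right reasons; as the argument gets longer, being right at all becomes a matter of parity, which is the "drunken walk" picture.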
I had a long reply which I think made some errors of interpretation as to what you're saying. I find this "cancels" language confusing, but I don't have the energy to do any more in-depth clarification on this thing!
deleted by creator