Excerpt from a message I just posted in a #diaspora team internal forum category. The context here is that I recently got pinged about slowness/load spikes on the diaspora* project web infrastructure (Discourse, Wiki, the project website, ...), and looking at the traffic logs makes me impressively angry.
In the last 60 days, the diaspora* web assets received 11.3 million requests. That works out to 2.19 req/s - which honestly isn't that much. I mean, it's more than your average personal blog, but nothing that my infrastructure shouldn't be able to handle.
However, here's what's grinding my fucking gears. Looking at the top user agent statistics, these are the leaders:
2.78 million requests - or 24.6% of all traffic - are coming from Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot).
1.69 million requests - 14.9% - Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/600.2.5 (KHTML, like Gecko) Version/8.0.2 Safari/600.2.5 (Amazonb...
Evidence for the DDoS attack that bigtech LLM scrapers actually are.
the threshold is proportional to 1.5^(32-subnet_mask)
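To make that scaling concrete, a quick back-of-the-envelope in Python (the base of 5 fails for a single IP is an assumed example value, not the actual jail setting):

```python
# Assumed base: 5 fails ban a single IP (/32); the required total grows by 1.5x per prefix bit.
for plen in (32, 28, 24, 16):
    print(f"/{plen}: {5 * 1.5 ** (32 - plen):.0f} total fails to ban the whole range")
# /32: 5   /28: 25   /24: 128   /16: 3284
```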
what are you basing that prefix length decision off? whois/NIC allocation data?
is the decision loop running locally to any given f2b instance, or do you aggregate for processing then distribute blocklist?
either way, seems like an interesting approach for catching the type of shit that likes to snowshoe from random cloud providers while lying about its agent signature
CIDR ranges (a.b.c.d/subnet_mask) contain 2^(32-subnet_mask) IP addresses. The 1.5 I’m using controls the filter’s sensitivity and can be tuned to anything between 1 and 2:
Using 1 or smaller would mean that the filter gets triggered earlier for larger ranges (we want to avoid this so that a single IP can’t trick you into banning a /16)
Using 2 or more would mean you tolerate more fail/IP for larger ranges, making you ban all smaller subranges before the filter gets a chance to trigger on a larger range.
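For the curious, here's a toy sketch of that range-ban heuristic as a standalone script (assumptions throughout: the per-IP base of 5 fails, the candidate prefix lengths, and all names are made up for illustration; this is not the actual fail2ban configuration):

```python
#!/usr/bin/env python3
"""Toy sketch of the range-ban heuristic described above (not the real f2b setup).

Assumptions: per-IP fail counts were already parsed out of the logs, candidate
ranges are simply the /32, /24 and /16 parents of offending IPs, and BASE_FAILS
stands in for whatever per-IP maxretry the real jail uses.
"""
from collections import Counter
from ipaddress import ip_network

BASE_FAILS = 5        # fails needed to ban a single IP (a /32)
SENSITIVITY = 1.5     # tunable between 1 and 2, as explained above


def threshold(prefix_len: int) -> float:
    """Total fails required before a whole /prefix_len range gets banned."""
    return BASE_FAILS * SENSITIVITY ** (32 - prefix_len)


def ranges_to_ban(fails_per_ip, prefix_lens=(32, 24, 16)):
    """Aggregate per-IP fails into candidate ranges and return those over threshold."""
    banned = []
    for plen in prefix_lens:
        per_range = Counter()
        for ip, fails in fails_per_ip.items():
            per_range[ip_network(f"{ip}/{plen}", strict=False)] += fails
        banned += [(net, total) for net, total in per_range.items()
                   if total >= threshold(plen)]
    return banned


if __name__ == "__main__":
    # Snowshoe case: 40 IPs in one /24, each staying below the 5-fail single-IP limit.
    fails = {f"203.0.113.{i}": 4 for i in range(40)}
    for net, total in ranges_to_ban(fails):
        print(f"ban {net}: {total} fails >= threshold {threshold(net.prefixlen):.0f}")
    # -> ban 203.0.113.0/24: 160 fails >= threshold 128 (no single /32 gets banned)
```

The example run shows the snowshoe case the heuristic is meant to catch: 40 IPs in one /24 each stay below the single-IP limit, yet the /24 total crosses its threshold and the whole range gets banned.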
This is running locally to a single f2b instance, but should work pretty much the same with aggregated logs from multiple instances
I’m aware of the construction of a CIDR prefix, I meant what are you using to categorise IPs from requests to look up mask size? whois? using published NIC/RIR data? what’s in BGP/routedumps? other?