I used to get a lot of scrapers hitting my Lemmy instance, most of them coming from a bunch of IP ranges, some of them masquerading their user agent as a regular browser.
What’s been working for me is using a custom nginx log format with a custom fail2ban filter that lets me easily block new bots once I identify some kind of signature.
For instance, one of these scrapers almost always sends requests that are around 250 bytes long, while using the user agent of a legitimate browser that always sends requests of 300 bytes or larger. I can then add a fail2ban jail that triggers on seeing this specific user agent with the wrong request size.
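To give a rough idea of what that looks like (the format name, byte cutoff, user agent fragment, paths and jail settings below are illustrative, not my exact config): the log format just puts the request length next to the user agent, and the filter matches the combination.

```nginx
# nginx: log the request length next to the user agent
# (goes inside your existing http/server block)
log_format f2b '$remote_addr [$time_local] len=$request_length ua="$http_user_agent"';
access_log /var/log/nginx/f2b.log f2b;
```

```ini
# /etc/fail2ban/filter.d/lemmy-bots.conf (illustrative)
# fail2ban strips the timestamp it detects before matching, hence the empty [],
# same convention as the stock nginx-botsearch filter.
# Matches a "Firefox-like" UA with a request shorter than 300 bytes.
[Definition]
failregex = ^<HOST> \[\] len=[12]?\d{1,2} ua=".*Firefox/\d+.*"$
```

```ini
# /etc/fail2ban/jail.d/lemmy-bots.local (illustrative; my real bantime/findtime differ)
[lemmy-bots]
enabled  = true
filter   = lemmy-bots
logpath  = /var/log/nginx/f2b.log
maxretry = 3
findtime = 10m
bantime  = 1d
```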
On top of this, I wrote a simple script that monitors my fail2ban logs and writes out CIDR ranges that appear too often (the threshold is proportional to 1.5^(32-subnet_mask)). This file is then parsed by fail2ban to block whole ranges. There are some specific details I omitted regarding bantime and findtime that ensure a small malicious range can’t trick me into blocking a larger one. This has worked flawlessly to block “hostile” ranges with apparently zero false positives for nearly a year.
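The aggregation script itself is nothing fancy. A rough sketch of the idea (paths, the candidate prefix lengths and the proportionality constant are placeholders, and the bantime/findtime handling is deliberately left out):

```python
#!/usr/bin/env python3
"""Sketch of the range-aggregation idea, not my exact script.

Counts "Ban <ip>" events from the fail2ban log per candidate prefix and
writes any prefix whose count exceeds BASE ** (32 - prefix_len) to a
block file. How you pick candidate prefixes (fixed lengths, whois/RIR
data, routing data, ...) is a separate question; this just sweeps a few
fixed lengths for illustration.
"""
import re
from collections import Counter
from ipaddress import ip_network

F2B_LOG = "/var/log/fail2ban.log"            # default fail2ban log location
BLOCKLIST = "/etc/fail2ban/cidr-blocklist"   # illustrative path
BASE = 1.5                                   # sensitivity, tune between 1 and 2
PREFIX_LENGTHS = (24, 20, 16)                # candidate range sizes (assumption)

ban_re = re.compile(r"Ban (\d+\.\d+\.\d+\.\d+)")

def main() -> None:
    counts = Counter()
    with open(F2B_LOG) as log:
        for line in log:
            m = ban_re.search(line)
            if not m:
                continue
            # attribute this ban to every candidate range containing the IP
            for plen in PREFIX_LENGTHS:
                counts[ip_network(f"{m.group(1)}/{plen}", strict=False)] += 1

    with open(BLOCKLIST, "w") as out:
        for net, hits in counts.items():
            if hits >= BASE ** (32 - net.prefixlen):
                out.write(f"{net}\n")

if __name__ == "__main__":
    main()
```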
the threshold is proportional to 1.5^(32-subnet_mask)
what are you basing that prefix length decision off? whois/NIC allocation data?
is the decision loop running locally to any given f2b instance, or do you aggregate for processing then distribute blocklist?
either way, seems like an interesting approach for catching the type of shit that likes to snowshoe from random cloud providers while lying in agent signature
CIDR ranges (a.b.c.d/subnet_mask) contain 2^(32-subnet_mask) IP addresses. The 1.5 I’m using controls the filter’s sensitivity and can be tuned to anything between 1 and 2:
Using 1 or smaller would mean the filter gets triggered earlier for larger ranges (we want to avoid this so that a single IP can’t trick you into banning a /16).
Using 2 or more would mean tolerating more fails per IP for larger ranges, so you’d end up banning all the smaller subranges before the filter gets a chance to trigger on the larger one (some concrete numbers below).
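To put numbers on that (taking the proportionality constant as 1, so a single IP trips at one fail):

```python
# threshold(range) = base ** (32 - prefix_len), in units of the single-IP threshold
for base in (1.0, 1.5, 2.0):
    print(base, [round(base ** (32 - plen)) for plen in (32, 24, 16)])
# 1.0 [1, 1, 1]        -> a /16 trips as easily as a single IP
# 1.5 [1, 26, 657]     -> a /24 needs ~26 hits, a /16 ~657
# 2.0 [1, 256, 65536]  -> a /24 needs 256 hits, i.e. every IP in it banned first
```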
This is running locally on a single f2b instance, but it should work pretty much the same with aggregated logs from multiple instances.
I’m aware of how a CIDR prefix is constructed; I meant: what are you using to categorise IPs from requests and look up the mask size? whois? published NIC/RIR data? what’s in BGP/route dumps? other?