Got a warning for my blog going over 100GB in bandwidth this month… which sounded incredibly unusual. My blog is text and a couple images and I haven’t posted anything to it in ages… like how would that even be possible?

Turns out it’s possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for ‘Unknown robot’? This is actually bonkers.

Edit: As Thunraz points out below, there’s a footnote that reads “Numbers after + are successful hits on ‘robots.txt’ files”, so it’s not scientific notation after all.

  • slazer2au@lemmy.world · 1 month ago

    AI scrapers are the new internet DDoS.

    Might want to throw something in front of your blog to ward them off, like Anubis or a tarpit.

    • ikt@aussie.zone · 1 month ago

      the one with the quadrillion hits is this bad boy: https://www.babbar.tech/crawler

      > Babbar.tech is operating a crawler service named Barkrowler which fuels and update our graph representation of the world wide web. This database and all the metrics we compute with are used to provide a set of online marketing and referencing tools for the SEO community.

  • dual_sport_dork 🐧🗡️@lemmy.world · 1 month ago

    I run an ecommerce site, and lately they’ve latched onto one very specific product, hammering its page and any pages branching from it, for no readily identifiable reason, at a rate of several hundred requests every second. I found out pretty quickly, because our view stats for that page in particular suddenly rocketed into the millions.

    I had to insert a little script to IP-ban these fuckers, which kicks in if I see a malformed user-agent string or if you try to hit this one page more than 100 times. Through this I discovered that the requests are coming from hundreds of thousands of individual random IP addresses, many of them located in Singapore, Brazil, and India, and mostly resolving to ranges owned by local ISPs and cell phone carriers.

    Of course they ignore your robots.txt as well. This smells like some kind of botnet thing to me.
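
    (For anyone tempted to do the same, a filter like the one described above boils down to roughly the sketch below. The page path, threshold, and user-agent check are made up for illustration, and it assumes a Python app that can inspect each request; it is not the actual script.)

    ```python
    # Rough sketch of the ban logic described above: ban on a malformed
    # user-agent string, or on hammering one specific page too many times.
    # All names and thresholds are illustrative.
    import re
    from collections import defaultdict

    HOT_PATH = "/products/that-one-product"   # hypothetical page being hammered
    HIT_LIMIT = 100                            # ban an IP after this many hits on it
    UA_SANITY = re.compile(r"^[\x20-\x7e]{8,300}$")  # crude "looks like a real UA" check

    hits_per_ip: dict[str, int] = defaultdict(int)
    banned_ips: set[str] = set()

    def allow_request(ip: str, path: str, user_agent: str) -> bool:
        """Return False if the request should be rejected (and the IP banned)."""
        if ip in banned_ips:
            return False
        if not user_agent or not UA_SANITY.match(user_agent):
            banned_ips.add(ip)          # malformed UA string: ban immediately
            return False
        if path == HOT_PATH:
            hits_per_ip[ip] += 1
            if hits_per_ip[ip] > HIT_LIMIT:
                banned_ips.add(ip)      # over the limit on the one hot page
                return False
        return True
    ```

    In a real deployment you’d also push the banned IPs out to the firewall or the web server’s deny list rather than keeping them in process memory.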

    • panda_abyss@lemmy.ca · 1 month ago

      I don’t really get those bots.

      Like, there are bots that are trying to scrape product info, or prices, or scan for quantity fields. But why the hell do some of these bots behave the way they do?

      Do you use Shopify by chance? With Shopify, the bots could be scraping the product.json endpoint unless it’s disabled in your theme. Shopify seems to expose the database’s updated-at timestamp in its headers and product data, so inventory quantity changes result in a timestamp change that can be used to estimate your sales.

      There are companies that do that and sell sales numbers to competitors.

      No idea why they have inventory info on their products table, it’s probably a performance optimization.

      I haven’t really done much scraping work in a while, not since before these new stupid scrapers started proliferating.
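
      (Mechanically, that timestamp trick is just polling the store’s public JSON and diffing the updated_at fields, something like the sketch below. The /products.json path and field names are what Shopify storefronts commonly expose, but treat the details as assumptions rather than a spec.)

      ```python
      # Sketch of the updated_at-diffing trick described above: poll a store's
      # public products JSON and flag products whose timestamp changed.
      # Store URL, endpoint, and fields are assumptions, not a verified spec.
      import json
      import time
      import urllib.request

      STORE = "https://example-store.example"   # hypothetical shop URL

      def fetch_timestamps() -> dict[int, str]:
          with urllib.request.urlopen(f"{STORE}/products.json") as resp:
              products = json.load(resp)["products"]
          return {p["id"]: p["updated_at"] for p in products}

      previous = fetch_timestamps()
      while True:
          time.sleep(600)                        # poll every 10 minutes
          current = fetch_timestamps()
          for pid, ts in current.items():
              if previous.get(pid) != ts:
                  # A timestamp change with no visible price/description change
                  # gets read as "inventory moved", i.e. an estimated sale.
                  print(f"product {pid} changed at {ts}")
          previous = current
      ```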

      • dual_sport_dork 🐧🗡️@lemmy.world · 1 month ago

        Negative. Our solution is completely home-grown. All artisanal-like, from scratch. I can’t imagine I reveal anything anyone would care about much except product specs, and our inventory and pricing really don’t change very frequently.

        Even so, you’d think someone bothering to run a botnet to hound our site would distribute page loads across all of our products, right? Not just one. It’s nonsensical.

    • artyom@piefed.social · 1 month ago

      One corporation DDoSes your server to death so that you need the other corporations’ protection.

    • Lee@retrolemmy.com · 1 month ago

      A friend (works in IT, but asks me about server-related things) of a friend (not in tech at all) has an incredibly low-traffic niche forum. It was running really slowly (on shared hosting) due to bots. The forum software counts unique visitors per 15 minutes, and it was sitting at about 15k/15 min for over a week. I told him to add Cloudflare, and it dropped to about 6k/15 min. We experimented with turning Cloudflare off and on, and the numbers were pretty consistent. So then I put Anubis on a server I have and they pointed the domain at my server. Traffic dropped to fewer than 10/15 min. I’ve been experimenting with toggling Anubis/Cloudflare on and off for a couple of months now with this forum. I have no idea how the bots haven’t scraped all of the content by now.

      TLDR: in my single isolated test, Cloudflare blocked about 60% of the crawler traffic. Anubis blocked presumably all of it.

      Also, if anyone active on Lemmy runs a low-traffic personal site and doesn’t know how to run Anubis or can’t (e.g. shared hosting), I have plenty of excess resources and can run Anubis for you off one of my servers (in a data center) at no charge (there should probably be some language about it not being perpetual: I have the right to terminate without cause, for any reason, and without notice, no SLA, etc.). Be aware that it does mean HTTPS is terminated at my Anubis instance, so I could log or monitor your traffic if I wanted to; that’s a risk you should be aware of.

      • Scrubbles@poptalk.scrubbles.tech · 1 month ago

        > This dance to get access is just a minor annoyance for me, but I question how it proves I’m not a bot. These steps can be trivially and cheaply automated.

        I don’t think the author understands the point of Anubis. The point isn’t to block bots from your site completely; bots can still get in. The point is to put up a problem at the door to the site. That problem, as the author states, is relatively trivial for the average device to solve; it’s meant to be solvable by a phone or any other consumer device.

        The actual protection mechanism is scale: solving the challenge is cheap once and costly at scale. Bot farms aren’t one single host or machine; they’re thousands, tens of thousands of VMs running in clusters, constantly trying to scrape sites. Say calculating the hash once takes about 5 seconds. Easy for a phone. Now say that’s 1,000 scrapes of your site: that’s 5,000 seconds of compute, nearly an hour and a half. Now we’re talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I can tell you they know this. Most companies will back off from a page that takes too long to load or is too intensive, and that is why we see the dropoff in bot attacks: it simply isn’t worth it for them to scrape the site anymore.

        So with Anubis they’re “judging your value” by asking, “Are you willing to put your money where your mouth is to access this site?” For a consumer it’s a fraction of a fraction of a penny in electricity for that one page load, barely noticeable. For large bot farms it’s real dollars wasted on my little Lemmy instance/blog, and thankfully they’ve stopped caring.
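
        (Back of the envelope, using the illustrative numbers above and assuming, as the comment does, that every page fetch pays the full challenge cost:)

        ```python
        # Back-of-envelope scraper cost under proof-of-work, using the comment's
        # illustrative numbers. The cloud price and site count are made up;
        # none of these are measured values.
        SECONDS_PER_SOLVE = 5         # one challenge solve on a typical client
        PAGES = 1_000                 # scrapes of one small site
        SITES_CRAWLED = 1_000_000     # hypothetical number of sites a farm hits
        DOLLARS_PER_CPU_HOUR = 0.05   # hypothetical cloud compute price

        hours_per_site = SECONDS_PER_SOLVE * PAGES / 3600
        farm_hours = hours_per_site * SITES_CRAWLED
        print(f"{hours_per_site:.1f} CPU-hours (~${hours_per_site * DOLLARS_PER_CPU_HOUR:.2f}) per site")
        print(f"{farm_hours:,.0f} CPU-hours (~${farm_hours * DOLLARS_PER_CPU_HOUR:,.0f}) across the whole crawl")
        ```

        A single visitor pays a fraction of a penny per page load; a farm that pays it over and over across a million sites is looking at tens of thousands of dollars.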

        • deffard@lemmy.world · 1 month ago

          The author demonstrated that the challenge can be solved in 17 ms, however, and that it is only necessary once every 7 days per site. They need less than a second of compute time per site to be able to send unlimited requests 365 days a year.

          The deterrent might work temporarily until the challenge pattern is recognised, but there’s no actual protection here, just obscurity. The downside is real, however, for the user on an old phone who has to wait 30 seconds, or, like the blogger, for a user of a text browser that doesn’t run JavaScript. The very need to support an old phone is what defeats an approach based on compute power, since the required work is always a trivial amount for a data center.

          • [object Object]@lemmy.world · 1 month ago

            > The deterrent might work temporarily until the challenge pattern is recognised, but there’s no actual protection here, just obscurity.

            Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums. Anubis has a customizable difficulty for this proof-of-work challenge, but defaults to 5 leading zeroes.

            Please tell me how you’re gonna un-obscure a proof-of-work challenge requiring calculation of hashes.

            And since the challenge is adjustable, you can make it take as long as you want.
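
            (For the curious, the general shape of that kind of challenge is sketched below: find a nonce whose SHA-256 digest starts with N zero hex digits. It illustrates the scheme, not Anubis’s actual challenge format; the challenge string is made up and the difficulty of 5 matches the quoted default.)

            ```python
            # Minimal sketch of a leading-zero SHA-256 proof of work, the general
            # scheme behind Anubis-style challenges. Challenge string and difficulty
            # are illustrative; this is not Anubis's real wire format.
            import hashlib
            import time

            def solve(challenge: str, difficulty: int) -> int:
                """Find a nonce so sha256(challenge + nonce) starts with `difficulty` zero hex digits."""
                target = "0" * difficulty
                nonce = 0
                while True:
                    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
                    if digest.startswith(target):
                        return nonce
                    nonce += 1

            start = time.perf_counter()
            nonce = solve("example-challenge-token", difficulty=5)   # ~16^5 ≈ 1M hashes on average
            print(f"nonce={nonce}, took {time.perf_counter() - start:.2f}s")
            ```

            Even in plain Python this finds a nonce in a second or two on a desktop CPU, and compiled code brings it down to milliseconds, which is the scale of the 17 ms figure mentioned above.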

            • deffard@lemmy.world · 1 month ago

              You just solve it, as per the blog post, because it’s trivial to solve; your browser is literally doing so in a slow language on a potentially slow CPU. It’s only matching 5 leading zero digits of the hash by default.

              If a phone running JavaScript in a browser has to be able to solve it, you can’t just crank up the difficulty. Real humans will only wait tens of seconds, if that, before giving up.

  • pendel@feddit.org · 1 month ago

    I had to pull an all-nighter to fix an unoptimized query. I had just launched a new website with barely any visitors, and I hadn’t implemented caching yet for something I figured no one would use anyway. A bot found it and hammered the endpoint again and again until it broke my entire DB and nothing worked anymore.
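
    (The usual stopgap for that failure mode is a short-lived cache in front of the expensive query, so a bot hammering the endpoint only triggers the real query once per window. A minimal sketch, with hypothetical names:)

    ```python
    # Minimal time-bucketed cache in front of an expensive query, the kind of
    # stopgap described above. All names are hypothetical.
    import time
    from functools import lru_cache

    CACHE_TTL_SECONDS = 60   # serve the same result for up to a minute

    def _expensive_query() -> list[dict]:
        """Placeholder for the slow, unoptimized database query."""
        time.sleep(2)        # simulate a query that takes far too long
        return [{"id": 1, "value": "example"}]

    @lru_cache(maxsize=1)
    def _cached_query(ttl_bucket: int) -> list[dict]:
        # lru_cache keys on ttl_bucket, so a new bucket forces one fresh query.
        return _expensive_query()

    def get_results() -> list[dict]:
        """What the endpoint calls: at most one real query per TTL window."""
        return _cached_query(int(time.time() // CACHE_TTL_SECONDS))
    ```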

  • Thunraz@feddit.org · 1 month ago

    It’s 12,181 hits, and the number after the plus sign is the count of robots.txt hits. See the footnote at the bottom of your screenshot.