• @[email protected]
    link
    fedilink
    1589 months ago

    Everyone loves the idea of scraping, no one likes maintaining scrapers that break once a week because the CSS or HTML changed.

  • @[email protected]
    link
    fedilink
    739 months ago

    Just a heads up for companies that think it’s wrong to scrape: if you don’t want info to be scraped, don’t put it on the internet.

    • @[email protected]
      link
      fedilink
      219 months ago

      Much less beholden to arbitrary rules, too. Way too many times companies will just up and pull their API access or push through new restrictions. No ty, I’ll just access it myself then

      • @[email protected]
        link
        fedilink
        English
        69 months ago

        API starter kit

        • Outdated, unsupported, and not yet replaced, but still the standard way to use the service.
        • Lots of authorization tokens.
        • The example in the docs doesn’t work (if there is one).
        • You have no idea where the online tutorial got its information, because it doesn’t link to any resources and the docs have barely anything even though they’re giant.
        • Uses asynchronous programming to make it faster, but it’s still much, much slower than scraping without asynchronous programming.
  • @[email protected]
    link
    fedilink
    299 months ago

    Hold on, I thought it was supposed to be realism on the virgin’s behalf and ridiculous nonsense on the chad’s behalf:

    All I see is realism on both sides lol

  • darcy · 28 points · 9 months ago

    someone’s never used a good api. like mastodon

    • @[email protected]
      link
      fedilink
      99 months ago

      I created a shitty script (with ChatGPT’s help) that uses Selenium and can dump a Confluence page from work, all its subpages and all linked Google Drive documents.

      • @[email protected]
        link
        fedilink
        English
        109 months ago

        When a customer needs a part replaced, they send in shipping data. This data has to be entered into 3-4 different web forms and an email. This allows me to automate it all from a single form that has built-in error checking, so human mistakes are limited.

        Company could probably automate this all in the backend but they won’t :shrug:

        • @[email protected]
          link
          fedilink
          49 months ago

          Using Selenium for this is probably overkill. You might be better off sending direct HTTP requests with your form data. This way you don’t actually have to spin up an entire browser to perform that simple operation for you.

          That said, if it works - it works!

          • @[email protected]
            link
            fedilink
            29 months ago

            I’m guessing forms like this have CSRF protection, so you’d probably have to obtain that token and hope they don’t make a new one on every request.

            • @[email protected]
              link
              fedilink
              29 months ago

              Good point. This is also possible to overcome with one additional HTTP request and some HTML parsing. Still less overhead than running Selenium! In any event, I was replying in a general sense: Selenium is easy to understand and seems like an intuitive solution to a simple problem. In 99% of cases some additional effort will result in a more efficient solution.
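
            A minimal stdlib-only sketch of that flow (the field name `csrf_token`, the form URL, and the form fields are illustrative assumptions, not any particular site’s API):

```python
from html.parser import HTMLParser
from urllib import parse, request

class CSRFFinder(HTMLParser):
    """Finds the value of a hidden <input name="csrf_token"> field."""
    def __init__(self):
        super().__init__()
        self.token = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("name") == "csrf_token":
            self.token = a.get("value")

def extract_csrf_token(html: str):
    """Parse the form page's HTML and return the CSRF token, if any."""
    finder = CSRFFinder()
    finder.feed(html)
    return finder.token

def submit_form(form_url: str, fields: dict) -> bytes:
    # 1. GET the form page and pull out the fresh CSRF token.
    with request.urlopen(form_url) as resp:
        token = extract_csrf_token(resp.read().decode())
    # 2. POST the form data back with the token included.
    data = parse.urlencode({**fields, "csrf_token": token}).encode()
    with request.urlopen(request.Request(form_url, data=data)) as resp:
        return resp.read()
```

            Real forms often also tie the token to a session cookie, so a cookie-aware opener (`http.cookiejar`) might be needed on top of this.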

  • sebinspace · 21 points · 9 months ago

    I wanted to build a Discord bot that would check NIST for new CVEs every 24 hours. But their API leaves quiiiiiiite a bit to be desired.

    Their pages, however…
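
    For what it’s worth, NVD’s JSON API 2.0 does support filtering by last-modified date, which is roughly what an every-24-hours bot needs. A hedged sketch (the endpoint and the `lastModStartDate`/`lastModEndDate` parameters are from NVD’s public docs; rate limits and the optional API-key header are left out):

```python
import json
from datetime import datetime, timedelta, timezone
from urllib import parse, request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves_url(hours: int = 24, now: datetime = None) -> str:
    """Build an NVD query URL for CVEs modified in the last `hours` hours."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    fmt = "%Y-%m-%dT%H:%M:%S.000"  # ISO-8601 shape the API expects
    params = parse.urlencode({
        "lastModStartDate": start.strftime(fmt),
        "lastModEndDate": end.strftime(fmt),
    })
    return f"{NVD_API}?{params}"

def fetch_recent_cves(hours: int = 24) -> list:
    # The network call the bot would make on its 24h schedule.
    with request.urlopen(recent_cves_url(hours)) as resp:
        return json.load(resp).get("vulnerabilities", [])
```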

  • @[email protected]
    link
    fedilink
    209 months ago

    It’s all fun and games until you have to support all this shit and it breaks weekly!

    That being said, I do miss the simplicity of maintaining selenium projects for work

  • @[email protected]
    link
    fedilink
    179 months ago

    I use scrapy. It has a steeper learning curve than other libraries, but it’s totally worth it.

    • @[email protected]
      link
      fedilink
      419 months ago

      Websites and services create APIs for programmers to use them. Spotify, for example, has code that lets you build a program that can use its features. But you need a token they give you after you sign up. The token can be revoked, and it’s used to monitor how much of their service you’re using. That way they can restrict you if it’s too much.
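
      As a concrete sketch of that token dance: Spotify’s client-credentials flow exchanges an app’s id and secret, sent as a Basic auth header, for a short-lived bearer token (the id/secret values here are obviously placeholders):

```python
import base64
import json
from urllib import parse, request

TOKEN_URL = "https://accounts.spotify.com/api/token"

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """Spotify expects 'Basic ' + base64('client_id:client_secret')."""
    raw = f"{client_id}:{client_secret}".encode()
    return "Basic " + base64.b64encode(raw).decode()

def get_access_token(client_id: str, client_secret: str) -> str:
    # Exchange the app's credentials for a short-lived bearer token.
    req = request.Request(
        TOKEN_URL,
        data=parse.urlencode({"grant_type": "client_credentials"}).encode(),
        headers={"Authorization": basic_auth_header(client_id, client_secret)},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```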

      Scraping is raw dogging the web slut you met at the cougar ranch who went home with you because you reminded her of her dog

    • @[email protected]
      link
      fedilink
      14
      edit-2
      9 months ago

      ‘Scraping’ is the process of anonymously and programmatically collecting data from web pages, often without the website’s permission and limited to the content made publicly available. This is in contrast to using an API provided by the database owner, which is limited by tokens, access volume, available endpoints, etc.
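
      To make the contrast concrete: a bare-bones scraper needs nothing but an HTTP GET and an HTML parser. This stdlib-only sketch (the “collect every link” task is just illustrative) touches no tokens or endpoints at all:

```python
from html.parser import HTMLParser
from urllib import request

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def scrape_links(html: str) -> list:
    """Return all link targets found in an HTML string."""
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

def scrape_page(url: str) -> list:
    # No token, no signup: just fetch whatever the page serves publicly.
    with request.urlopen(url) as resp:
        return scrape_links(resp.read().decode())
```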

    • @[email protected]
      link
      fedilink
      49 months ago

      Every time I think I’m good with tech, something like this shows up in my feed and makes me realize I know jack shit.

  • @[email protected]
    link
    fedilink
    129 months ago

    Fuck, I think I’ve been doing it wrong and this meme gave me more things to learn than any YouTube video has