For years I’ve looked, on and off, for web archiving software that can capture most sites, including “complex” ones that rely heavily on AJAX and require logins, like Reddit. Which ones have worked best for you?

Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I’d also like to be able to open the instance myself to log into sites and install add-ons like uBlock Origin. (By the way, archiveweb.page must be started manually.)
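As a rough illustration of the workflow I’m after (a sketch only, not any particular tool’s built-in feature; it assumes Playwright for Python is installed via pip install playwright && playwright install chromium, and the URL and file names are placeholders):

    # Open a visible Chromium window so I can log in by hand, record all network
    # traffic to a HAR file, and save the rendered DOM once I'm done.
    from playwright.sync_api import sync_playwright

    URL = "https://old.reddit.com/"    # placeholder target
    HAR_PATH = "capture.har"           # raw request/response archive
    HTML_PATH = "rendered.html"        # DOM after JavaScript has run

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)              # visible window for manual login
        context = browser.new_context(record_har_path=HAR_PATH)  # capture every request/response
        page = context.new_page()
        page.goto(URL, wait_until="networkidle")

        input("Log in / scroll as needed, then press Enter to snapshot... ")

        # page.content() returns the DOM after scripts have run, which is the part
        # a plain HTTP cloner never sees.
        with open(HTML_PATH, "w", encoding="utf-8") as f:
            f.write(page.content())

        context.close()   # the HAR file is only written out when the context closes
        browser.close()

Loading an extension like uBlock Origin would additionally need a persistent browser context launched with Chromium’s --load-extension switch, which I’ve left out of the sketch.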

      • Xanza@lemm.ee · 6 days ago

        wget is the most comprehensive site cloner there is. What exactly do you mean by complex? wget works for anything static and public. If you’re trying to clone the server-side source files, like PHP, that obviously won’t work; if that’s what you mean by “complex,” then just give up, because you can’t.
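        For anything static and public, the invocation is roughly this (a sketch, wrapped in Python’s subprocess so it can be kicked off programmatically, assuming GNU wget is on your PATH; the URL is a placeholder):

          # Mirror a static, public site for offline browsing.
          import subprocess

          cmd = [
              "wget",
              "--mirror",            # recursion + timestamping (-r -N -l inf --no-remove-listing)
              "--page-requisites",   # also fetch the CSS, images, and scripts the pages need
              "--convert-links",     # rewrite links so the local copy works offline
              "--adjust-extension",  # add .html where the server omits it
              "--no-parent",         # never ascend above the starting directory
              "https://example.org/docs/",
          ]
          subprocess.run(cmd, check=True)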

        • TheTwelveYearOld@lemmy.world (OP) · 6 days ago

          For instance, I can’t completely download YouTube pages with their videos using wget, but I can with pywb (though pywb has issues with sites like Reddit).

          Not that I would necessarily use it for YouTube pages, but it’s an example of a complex page with lots of AJAX.

        • Paragone@piefed.social · 4 days ago

          There’s a “philosopher” the far-right techbro-oligarchs rely on, whose blog is grey-something-or-other…

          I tried using wget, and there’s a bug or something in the site, so it keeps inserting links to other sites into URIs, and you get bullshit like

          grey-something-or-other.substack.com/e/b/a/http://en.wikipedia.org/wiki/etc

          The site apparently works for the people who browse it, but wget isn’t succeeding in just cloning the thing.

          I want the items the usable site is made of, not endless failed requests chasing recursive errors forever…

          Apparently you have to be ultra-competent at configuring all the excludes and other command-line switches to get wget to handle any particular site.

          Sure, on static sites it’s magic, but on too many sites with dynamically constructed portions of themselves, it’s a damn headache at times…
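          For what it’s worth, the closest I’ve gotten is something like this: a sketch only, launched from Python so it can be scripted, assuming GNU wget 1.14 or newer (for --reject-regex); the URL and pattern are placeholders and untested against that particular site.

            # Recursive clone confined to the starting host, skipping the mangled
            # paths described above (paths with another URL embedded in them).
            import subprocess

            cmd = [
                "wget",
                "--recursive", "--level=inf",
                "--page-requisites", "--convert-links", "--adjust-extension",
                # wget stays on the starting host by default (no --span-hosts),
                # so what's left is rejecting same-host paths with junk URLs inside:
                "--reject-regex", "/http",   # skip any path that embeds another URL
                "https://example.substack.com/",
            ]
            subprocess.run(cmd, check=True)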

          _ /\ _

          • Xanza@lemm.ee · 3 days ago

            That’s not a bug. You literally told wget to follow links, so it did.

              • Xanza@lemm.ee · 1 day ago

                There is. wget doesn’t follow links recursively by default. If it is, you’re using an option that tells it to…
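                To spell out the defaults (a sketch; the URLs are placeholders and GNU wget is assumed to be on the PATH):

                  # Plain invocation: fetches exactly one document and follows nothing.
                  import subprocess
                  subprocess.run(["wget", "https://example.org/page.html"], check=True)

                  # Recursion only happens when you ask for it (-r/--recursive or --mirror),
                  # and even then wget stays on the starting host unless --span-hosts
                  # is added on top.
                  subprocess.run(["wget", "--recursive", "--level=2", "https://example.org/"], check=True)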