@nev What you need is a spider. There used to be a tool that let you download all the data from a site, or at least list all its links...
This list might help you.
https://en.wikipedia.org/wiki/Web_crawler#Open-source_crawlers
@nev You have options to download external links, and to choose whether or not to download pages outside a given path. It takes a bit of trial and error, but it's excellent for backups.
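For anyone following along, a minimal sketch of what such an httrack invocation might look like. The URL, output directory, and filter pattern are placeholders, not from this thread; check `httrack --help` on your version before relying on exact flags:

```shell
# Hypothetical httrack mirroring command (example.com and paths are placeholders).
#   -O <dir>        output (mirror) directory
#   "+pattern"      filter: allow links matching this pattern
#   "-pattern"      filter: exclude links matching this pattern
# The +/- filters are what let you stay inside a given path or pull in
# external links, as described above.
CMD='httrack "https://example.com/" -O ./example-mirror "+*.example.com/*" -v'
echo "$CMD"
```

The filters are where the trial and error happens: start restrictive (one domain, one path) and widen the `+` patterns until the mirror contains what you need.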
@nev Found a page with general instructions and comments:
https://wptavern.com/how-to-archive-a-site-you-dont-have-access-to
@nev Official httrack / winhttrack manual:
@nev Good luck! 😉 👍
@rick_777 this looks handy, thanks!
@nev HTTRACK! That was it! I used that to back up old websites of mine.