I’m developing a site that is supposed to store a lot of important information per user.
Naturally, it would be useful to back this information up (separately for each user).
The idea is to connect with a CLI tool like wget/curl and download a user-specific dump by accessing a dedicated URL (an internal web-site API).
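For context, this is roughly what I have in mind. The host name, endpoint path, and token-based auth below are placeholders, not my real API:

```shell
#!/bin/sh
# Sketch of the intended backup flow. example.com, /api/backup and the
# token parameter are hypothetical; substitute your site's real endpoint
# and auth scheme.
build_backup_url() {
  # $1 = user id, $2 = API token
  printf 'https://example.com/api/backup?user=%s&token=%s' "$1" "$2"
}

# Intended usage (commented out so the sketch runs offline):
# curl -fsSL "$(build_backup_url 42 "$API_TOKEN")" -o user42-dump.json
build_backup_url 42 secret
```

This is the kind of one-liner-per-user automation I'd like to run from cron, and it is exactly what the provider's anti-bot page breaks.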
It appears that the hosting provider has put up a defense against automated crawlers: every attempt to access a URL on my site (no matter which page) returns an HTML page that redirects to the actual content using JS and browser storage. In a real browser this works seamlessly, but any headless tool receives the wrong content.
My question is: how could such automation be set up?
That is, how can I access a limited set of my site's pages using curl/wget or some other headless browsing tool?
Thanks in advance,