Accessing a site API with an automation tool

Hi All,

I’m developing a site that is supposed to store a lot of important information per user.
Naturally, it would be useful to back up this information (separately for each user).
The idea is to connect with a CLI tool like wget/curl and download a user-specific dump by accessing a dedicated URL (an internal web-site API).
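For example, something like this (the endpoint path and parameters below are just placeholders, not my real API):

```python
# Sketch of the intended per-user backup fetch; the endpoint and
# parameters are hypothetical placeholders.
import requests

resp = requests.get(
    "https://example.com/api/backup",             # hypothetical dump endpoint
    params={"user": "alice", "token": "SECRET"},  # hypothetical auth
    timeout=30,
)
resp.raise_for_status()
with open("alice-backup.json", "wb") as f:
    f.write(resp.content)
```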
It appears that the hosting provider has placed a smart defense against automated crawlers, so every attempt to access my site’s URL (no matter which page) returns some HTML code that redirects to the actual link using JS and browser storage. For a real browser it works seamlessly, but any headless tool receives the wrong content.

My question is: how could such automation be organized?
I mean, accessing a limited set of site pages using curl/wget or some other headless browsing tool.

Thanks in advance,

~FreeRSS_

Hey there.

InfinityFree is a hosting provider, so we only support access from browsers.

The only way to bypass it is to allow cookies and JavaScript in your ‘browser’.
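In practice that means driving a real browser engine instead of plain curl/wget. As a sketch (assuming the protection is a standard “run JS, set cookie, reload” challenge; the URL is a placeholder), a tool like Playwright can execute the challenge script, keep the resulting cookie, and then read the actual page:

```python
# Sketch: fetch a page behind a JS/cookie challenge with a headless browser.
# Assumes a standard JS cookie challenge; the URL is a placeholder.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

URL = "https://example.com/api/backup"  # hypothetical user-dump endpoint

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    # goto() runs the challenge JavaScript; waiting for network idle gives
    # the redirect time to complete before we read the final content.
    page.goto(URL, wait_until="networkidle")
    print(page.content())  # the real response body, after the challenge
    browser.close()
```

Once the challenge has been passed, the cookies from the browser context could in principle be reused for plain curl/requests calls until they expire, but the exact cookie names and lifetimes depend on the protection system, so treat that part as guesswork.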

