We want to copy the entire file that generates the data displayed on the page, then load it into a new server.
Imagine there are 900 valuable crawling instructions on http://zxc.asd.e.wt:8090/Table_API_p.html.
Opening a URL such as zxc.asd.e.wt:8090/Table_API_p.html displays the entire list.
These URLs, RSS feeds, and so on appear in the list entitled "Recorded Actions".
If possible, we want to prepare the list by stripping out most of the housekeeping entries, extracting only the crawl targets and the frequency of each crawling instruction.
The common technique is to copy all the instructions by hand, but this becomes impractical as the number of instructions grows, and the crawl frequencies become difficult to manage.
Doing this the "old" way is extremely manual and time-consuming: it assumes that each instruction and its frequency will be re-entered into the other server by hand.
Alternatively, you can paste the data into a spreadsheet and use filters to remove the housekeeping entries, but that is still very manual.
After removing all the server housekeeping entries, we want to write the pages to crawl and the RSS feeds back out in the same file format, but for a fresh server.
Then we will load the prepared file into the new server, replacing the generic file.
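For what it's worth, the extract-and-filter step can be scripted instead of done in a spreadsheet. Below is a rough Python sketch that reads a saved copy of the Table_API_p.html page, pulls the first two cells of each table row (assumed here to be the URL and the crawl frequency), and drops rows matching housekeeping keywords. The table layout, the column order, and the keyword list are all assumptions; adjust them to match the real page.

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the cell texts of each <tr> row in an HTML document."""
    def __init__(self):
        super().__init__()
        self.rows = []       # finished rows, each a list of cell strings
        self._row = None     # cells of the row currently being parsed
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr":
            if self._row:
                self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None and data.strip():
            self._row.append(data.strip())

# Keywords marking server-housekeeping entries to drop.
# These are guesses -- replace them with whatever actually
# distinguishes housekeeping rows in your "Recorded Actions" list.
HOUSEKEEPING = ("purge", "reindex", "status")

def extract_instructions(html):
    """Return (url, frequency) pairs, skipping housekeeping rows."""
    parser = TableExtractor()
    parser.feed(html)
    kept = []
    for row in parser.rows:
        if len(row) < 2:
            continue  # not a data row
        url, freq = row[0], row[1]
        if any(word in url.lower() for word in HOUSEKEEPING):
            continue
        kept.append((url, freq))
    return kept
```

Feeding it the saved HTML of the page would give a clean list of (URL, frequency) pairs to write back out in whatever file format the new server expects.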
Why do this?
Our experience indicates that exporting the URLs in RSS format does not capture all of them, for reasons we have not identified.
Where is the file that holds the data that generates /Table_API_p.html, please?