Creates a web crawler job whose objective is to crawl the provided URLs/sitemaps and generate corresponding webpages as artifacts.
The parts to which the webpages/articles created during this crawler job will be linked.
The list of regexes a URL must satisfy to be crawled.
The description of the job.
The list of allowed domain names to crawl.
Number of days between re-sync job runs. If 0, the job will run only once.
The maximum depth to crawl.
Whether to notify the user when the job is complete. Default is true.
The list of regexes that, if matched by a URL, cause the URL to be rejected. If a URL matches both an accept and a reject regex, it is rejected.
The list of sitemap index URLs to crawl.
The list of sitemap URLs to crawl.
The list of URLs to crawl.
The regex a URL must satisfy to be crawled.
The regex that, if matched by a URL, causes the URL to be rejected. If a URL matches both the accept and reject regexes, it is rejected.
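
A minimal sketch of the accept/reject precedence described above. The function name, parameters, and the treatment of an empty accept list as "accept all" are illustrative assumptions, not part of the API.

    import re

    def should_crawl(url, accept_regexes, reject_regexes):
        # A URL matching any reject regex is always rejected,
        # even if it also matches an accept regex.
        if any(re.search(pattern, url) for pattern in reject_regexes):
            return False
        # Otherwise the URL must match at least one accept regex;
        # an empty accept list is treated here as "accept all".
        return not accept_regexes or any(
            re.search(pattern, url) for pattern in accept_regexes
        )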
The response returned when a web crawler job is created.
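
For orientation, here is a hypothetical request that assembles the fields described above. The endpoint path, field names, and example values are assumptions for illustration only; consult the actual API schema for the real names and shapes.

    import requests

    payload = {
        "description": "Crawl product documentation",              # job description
        "domains": ["docs.example.com"],                           # allowed domains
        "urls": ["https://docs.example.com/"],                     # seed URLs to crawl
        "sitemap_urls": ["https://docs.example.com/sitemap.xml"],  # sitemaps to crawl
        "sitemap_index_urls": [],                                  # sitemap indexes to crawl
        "accept_url_regexes": [r"^https://docs\.example\.com/"],   # URL must match to be crawled
        "reject_url_regexes": [r"\.pdf$"],                         # reject wins if both match
        "max_depth": 3,                                            # maximum crawl depth
        "resync_interval_days": 7,                                 # 0 means run only once
        "notify_on_completion": True,                              # defaults to true
        "part_ids": ["kb-articles"],                               # parts to link created webpages to
    }

    # Hypothetical endpoint; substitute the real base URL and path.
    response = requests.post("https://api.example.com/v1/web-crawler-jobs", json=payload)
    response.raise_for_status()
    print(response.json())  # the web crawler job creation response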