Web Crawler API
Resume the Web Crawler
Resume a paused web crawler immediately, without waiting for the next cron tick. Continues from where the crawler left off.
Endpoint
GET https://opensolr.com/solr_manager/api/start_crawl
This uses the same start_crawl endpoint with clean=no to continue from where the crawler left off, rather than starting a fresh crawl.

Parameters
| Parameter | Status | Description |
|---|---|---|
| email | Required | Your Opensolr registration email address |
| api_key | Required | Your Opensolr API key |
| core_name | Required | The name of the index to resume crawling |
| clean | Required | Set to no to resume from where the crawler left off |
If the crawler has finished processing all URLs in its queue (todo is empty), resuming will have no effect. Check stats first using the Get LIVE Web Crawler Stats API to see if there are remaining pages to crawl.
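The check-then-resume flow described above can be sketched in Python. This is a minimal illustration, not the official client: the "todo" field name follows the wording of the stats description here, and you should confirm the exact payload shape against the Get LIVE Web Crawler Stats API before relying on it.

```python
from urllib.parse import urlencode

API_BASE = "https://opensolr.com/solr_manager/api"


def should_resume(stats: dict) -> bool:
    """Return True when the live stats report URLs still queued.

    Assumes the stats payload exposes the queue size under a "todo"
    key, as the docs describe; adjust to the actual field name.
    """
    return int(stats.get("todo", 0) or 0) > 0


def resume_url(email: str, api_key: str, core_name: str) -> str:
    """Build the start_crawl URL that resumes (clean=no) an existing crawl."""
    query = urlencode({
        "email": email,
        "api_key": api_key,
        "core_name": core_name,
        "clean": "no",  # continue from where the crawler left off
    })
    return f"{API_BASE}/start_crawl?{query}"
```

Fetch the stats first, pass the decoded JSON to should_resume, and only issue the GET built by resume_url when it returns True; resuming an empty queue has no effect.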
Code Examples
cURL
```shell
curl -s "https://opensolr.com/solr_manager/api/start_crawl?email=YOUR_EMAIL&api_key=YOUR_API_KEY&core_name=my_solr_core&clean=no"
```
PHP
```php
$params = http_build_query([
    'email'     => 'YOUR_EMAIL',
    'api_key'   => 'YOUR_API_KEY',
    'core_name' => 'my_solr_core',
    'clean'     => 'no',
]);
$response = file_get_contents("https://opensolr.com/solr_manager/api/start_crawl?{$params}");
$result = json_decode($response, true);
print_r($result);
```
Python
```python
import requests

response = requests.get(
    "https://opensolr.com/solr_manager/api/start_crawl",
    params={
        "email": "YOUR_EMAIL",
        "api_key": "YOUR_API_KEY",
        "core_name": "my_solr_core",
        "clean": "no",
    },
)
print(response.json())
```
Related Documentation
- Start Crawler: Launch the web crawler with full control over mode, threads, and rendering.
- Pause Crawler: Temporarily halt crawling while keeping the cron schedule.
- Resume Crawler: Continue crawling immediately from where it left off.
- Check Status: Check whether crawler processes are currently active.
- Get Live Stats: Real-time crawler statistics: pages crawled, queued, errors.
- Flush Buffer: Force pending documents from the crawler buffer into Solr.
Need help with the Opensolr Web Crawler? We are here to help.
Contact Support