Getting Started: Web Crawler Site Search
Getting Started with the Opensolr Web Crawler
What is the Opensolr Web Crawler?
The Opensolr Web Crawler is an AI-powered web crawling and indexing engine that automatically crawls your website, extracts content from every page, enriches it with NLP analysis (sentiment, language detection, named entities), generates vector embeddings for semantic search, and stores everything in a high-performance Apache Solr 9.x index, ready to search immediately.
Think of it as your own private Google, but for your website. You point it at your site, it crawls every page, and within minutes you have a fully searchable index with all the bells and whistles: full-text search, autocomplete, spell checking, AI-powered semantic search, and more.
Step 1: Create an Opensolr Account
Head over to opensolr.com and create a free account. No credit card required to get started.
Step 2: Add a New Index on a Web Crawler Server
Once logged in, go to Control Panel → Add New Index. You will see a list of available Solr servers across different regions.
Important: You need to select a Web Crawler server. These are the servers that have the crawling engine built in. You can easily spot them: they are marked with a small spider icon next to the server name.
You can also use the Crawler filter dropdown at the top of the server list to show only Web Crawler-enabled servers. Select "Yes" in that dropdown and you will only see the crawler-capable servers.
Currently available Web Crawler regions:
- EU-NORTH (Helsinki, Finland) → FINLAND9
- US-EAST (Chicago) → CHICAGO-96
- More regions may be added in the future
Pick the region closest to your website audience for the best performance, give your index a name, and click create.
Step 3: Start Crawling Your Website
Once your index is created, go into the Index Control Panel and click on "Web Crawler" in the left sidebar menu.
Here you can:
- Enter your starting URL: this is typically your homepage (e.g., https://yoursite.com)
- Click Start to begin the crawling process
- Monitor progress in real time: you will see pages being discovered, crawled, and indexed live
The crawler will automatically:
- Follow all internal links on your site
- Extract the page title, meta description, full body text, author info, and OG images
- Detect the language of each page
- Run sentiment analysis on the content
- Generate 1024-dimensional vector embeddings for AI/semantic search
- Store all extracted metadata (Open Graph tags, Twitter cards, icons, etc.)
- Deduplicate content using MD5 signatures
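The MD5 deduplication step in the list above can be sketched as follows. This is a minimal illustration of the idea, not the crawler's actual implementation: compute an MD5 signature over whitespace-normalized page text and skip any page whose signature has already been seen.

```python
import hashlib

def content_signature(text: str) -> str:
    """MD5 signature over whitespace-normalized, lowercased text."""
    normalized = " ".join(text.split()).lower()
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

seen: set[str] = set()

def should_index(text: str) -> bool:
    """Return True only the first time a given piece of content is seen."""
    sig = content_signature(text)
    if sig in seen:
        return False  # duplicate content: skip indexing
    seen.add(sig)
    return True
```

Because the text is normalized first, two pages whose bodies differ only in whitespace or letter case produce the same signature and the second one is skipped.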
How the Crawler Works
The Opensolr Web Crawler works similarly to how Google and other search engines crawl the web:
- It starts from the URL you provide and follows every link it finds on your pages
- It respects your robots.txt file: if you have blocked certain paths in robots.txt, the crawler will honor those rules
- It checks for valid URLs, proper HTTP status codes, and well-formed HTML
- It will not index pages that return errors (404, 500, etc.)
- It handles JavaScript-rendered pages: if your site uses React, Vue, Angular, Next.js, or similar frameworks, the crawler can render pages with a headless browser (Playwright/Chromium) to get the fully rendered content
- It extracts content from HTML pages, PDFs, DOCX, ODT, XLSX, and more
- It detects and handles Cloudflare and other anti-bot protections
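Since the crawler honors robots.txt, you can steer it away from paths you do not want indexed. A typical example (the paths and sitemap URL here are illustrative, and the wildcard user-agent applies the rules to all crawlers):

```text
# robots.txt served at https://yoursite.com/robots.txt
User-agent: *
Disallow: /admin/
Disallow: /cart/
Allow: /
Sitemap: https://yoursite.com/sitemap.xml
```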
Make Sure Your Site is Crawl-Friendly
Before starting the crawler, make sure that:
- The crawler is not blocked by your robots.txt or firewall rules
- Your pages have proper <title> tags and <meta name="description"> tags; these become the most important search fields
- Your site returns HTTP 200 status codes for pages you want indexed
- Internal links should be clean and working (no broken links)
- If your site is behind authentication (login required), the crawler will not be able to access those pages
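As an illustration of the <title> and <meta name="description"> requirement above, a crawl-friendly page head might look like this (values are examples only):

```html
<head>
  <title>Handmade Ceramic Mugs | Example Shop</title>
  <meta name="description" content="Browse our collection of handmade ceramic mugs, fired in small batches.">
  <meta property="og:image" content="https://yoursite.com/images/mugs.jpg">
</head>
```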
What Happens Next?
Once the crawling is complete, your Opensolr Index is fully populated and ready to use. You have two options:
Option A: Use the Built-in Search UI (Embed Code)
Opensolr provides a ready-made, responsive search interface that you can embed on your website with just two lines of HTML. See the next article: Embedding the Opensolr Search UI.
Option B: Build Your Own Custom Search UI
If you want full control over the look and feel, you can query the Solr index directly using the native Solr /select API and build your own frontend. This developer guide covers everything you need. See: Web Crawler Index Field Reference.
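As a minimal sketch of Option B, the snippet below builds and runs a standard Solr /select query using only Python's standard library. The base URL is a placeholder for your own Opensolr index endpoint, and the fields returned depend on your schema; consult the Web Crawler Index Field Reference for the actual field names.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Placeholder: replace with your own Opensolr index URL.
BASE = "https://your-server.opensolr.com/solr/your_index"

def build_select_url(query: str, rows: int = 10) -> str:
    """Build a standard Solr /select query URL."""
    params = {
        "q": query,    # the user's search terms
        "rows": rows,  # number of results to return
        "wt": "json",  # ask Solr for a JSON response
    }
    return f"{BASE}/select?{urlencode(params)}"

def search(query: str) -> list:
    """Run the query against the index and return the matching documents."""
    with urlopen(build_select_url(query)) as resp:
        data = json.load(resp)
    return data["response"]["docs"]
```

From the returned documents you can render titles, snippets, and links in whatever frontend framework you prefer.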
Your site's content is smart โ your search should be too.