Opensolr Web Crawler - Site Search Solution

What is the Opensolr Web Crawler?

The Opensolr Web Crawler indexes websites automatically, leveraging robust Natural Language Processing (NLP) and Named Entity Recognition (NER) capabilities. As it crawls each page, it extracts comprehensive meta-information and inserts it directly into the Solr index. The content becomes instantly searchable through a fully responsive, embeddable search engine UI, letting you create a powerful, tailored search experience within minutes.

Click here for an example Solr API for one of our Demo Web Crawl projects
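Once a site has been crawled, the index can be queried through the standard Solr select API. The sketch below builds such a request URL; the base URL and index name are placeholders, so substitute the Solr API endpoint shown in your own Opensolr control panel.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- replace with your own index's Solr API URL
# from the Opensolr control panel.
BASE_URL = "https://your-server.opensolr.com/solr/your_index/select"

def build_search_url(query: str, rows: int = 10) -> str:
    """Build a standard Solr select request URL for a crawled index."""
    params = {
        "q": query,    # full-text query against the indexed pages
        "rows": rows,  # number of results to return
        "wt": "json",  # ask Solr for a JSON response
    }
    return f"{BASE_URL}?{urlencode(params)}"

print(build_search_url("solr hosting"))
# -> https://your-server.opensolr.com/solr/your_index/select?q=solr+hosting&rows=10&wt=json
```

The resulting URL can be fetched with any HTTP client; the JSON response contains the matching documents under `response.docs`.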

Search Engine Demos:

Fresh News Search Engine

Tech News Search Engine

Romanian News Search Engine

German News Search Engine

Swedish News Search Engine

India News Search Engine

Opensolr Search Engine

Documents Search Engine

What's new:

  • Automatic content language detection via OpenNLP.
  • Automatic NER via integration with OpenNLP.
    • Implemented recognition for people, locations, and organizations.
  • Can be customised for any language analysis, with stopwords, synonyms, spellcheck, etc.
  • Fully responsive, embeddable Search Engine UI.
  • Automatic scheduled re-crawling of fresh content only.
  • HTTP Auth, so the crawler can follow your protected documents/pages.
  • Full support for spellcheck and autocomplete.
  • Follows and indexes the full content and metadata of rich text formats: doc, docx, xls, pdf, and most image file formats.
  • Adds a content sentiment score to each indexed page/document, to help identify potentially hateful content.
  • Extracts the GPS position from image file metadata, which can be used as location fields in Solr to perform geo-location radius search requests.
  • Full live crawling stats that also serve as an SEO tool while crawling.
  • Smartly collects the page/document creation date and includes it in the search scoring function to elevate fresh results.
  • Automate crawling and get live stats via the Opensolr Web Crawler UI, or via the Automation REST APIs.
  • Supports resume without losing any data: crawl parts of your website every day, or based on your own cron jobs, by taking advantage of the Automation REST API.
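The GPS metadata mentioned above can be queried with Solr's standard `{!geofilt}` spatial filter. The sketch below builds such a radius-search request; the endpoint and the `location` field name are assumptions, so check your index's schema.xml for the actual spatial field name.

```python
from urllib.parse import urlencode

# Hypothetical endpoint -- replace with your own index's Solr API URL.
BASE_URL = "https://your-server.opensolr.com/solr/your_index/select"

def build_radius_search(lat: float, lon: float, radius_km: float) -> str:
    """Build a Solr geofilt request for documents within radius_km of a point."""
    params = {
        "q": "*:*",
        "fq": "{!geofilt}",    # Solr's spatial filter query parser
        "sfield": "location",  # spatial field holding the GPS point (assumed name)
        "pt": f"{lat},{lon}",  # centre point as "lat,lon"
        "d": radius_km,        # search radius in kilometres
        "wt": "json",
    }
    return f"{BASE_URL}?{urlencode(params)}"

print(build_radius_search(45.75, 21.22, 5))
```

Solr returns only documents whose `location` value lies within the given distance of the centre point, which is how a "photos taken near here" style search can be built on top of the crawler's image metadata.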

To make sure crawling works correctly, only use our Web Crawler Enabled environments, and apply the Solr configuration archive below that corresponds to the Solr version you are using:
Solr 9 Config Zip Archive

To learn more about which fields are indexed, simply create a new Opensolr index, go to Config Files Editor, and select schema.xml.
To preserve your Web Crawler's functionality, please do not edit your schema.xml fields, or any other configuration files.

Quick Video Demo