What is the Opensolr Web Crawler?
The Opensolr Web Crawler offers a seamless solution, effortlessly indexing websites while leveraging robust Natural Language Processing (NLP) and Named Entity Recognition (NER) capabilities. By crawling every page, it automatically extracts and inserts comprehensive meta-information directly into the Solr index. This process ensures that the content is instantly searchable through a fully responsive, embeddable search engine UI, enabling users to create a powerful and tailored search experience within minutes.
Click here for an example Solr API for one of our Demo Web Crawl projects.
Search Engine Demos:
All the search engines below can be embedded into your website, or used as they are. If you embed, you also get the option to hide the top search bar and customize the search experience by adding the following parameters:
&topbar=off / block - show or hide the top search bar
&q=SEARCH_QUERY - enter a starting search query, or leave empty to get all results
&in=web / media / images - search only in web pages, documents, or images
&og=yes / no - whether to display the og image for each result
&source=WEBSITE - the domain to restrict the search to; if you have crawled and indexed multiple websites, use this parameter to restrict results to only one domain
&fresh=yes / no / hour / today / previous_week / previous_month / previous_3month / positive / negative - the bias applied to the search results
&lang=en - ISO code of the results language
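For example, assuming your search engine is served from a URL such as https://demo.opensolr.com/search (a hypothetical address; use the URL of your own search engine), the parameters above can be appended directly to the query string, or to the src of the iframe you embed:
https://demo.opensolr.com/search?topbar=off&q=solr&in=web&og=yes&source=opensolr.com&fresh=today&lang=en
<iframe src="https://demo.opensolr.com/search?topbar=off&q=solr&in=web&og=yes&source=opensolr.com&fresh=today&lang=en" width="100%" height="800" frameborder="0"></iframe>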
To make sure crawling works correctly, only use our Web Crawler Enabled environments, and make sure to apply the Solr configuration archive below, corresponding to the Solr version you are using: Solr 9 Config Zip Archive
To learn more about which fields are indexed, simply create a new Opensolr index, go to Config Files Editor, and select schema.xml. To preserve your Web Crawler's functionality, please do not edit your schema.xml fields or any other configuration files.
Quick Video Demo
1. The page has to respond within 5 seconds (that's not the page download time, it's the page / website response time), otherwise the page in question will be omitted from indexing.
2. Our web crawler will follow, but will never index, dynamic pages (pages with a ? query in the URL), such as: https://website.com?query=value
3. In order to be indexed, pages must never contain a meta tag of the form:
<meta name="robots" content="noindex" />
4. In order for their links to be followed, pages must never contain a meta tag of the form:
<meta name="robots" content="nofollow" />
5. Just as in #3 and #4, pages that should appear in search results must never include "noindex", "nofollow", or "none" in their robots meta tag.
6. Pages that should be crawled, indexed, and shown in the search results must never appear as restricted in the website's generic website.tld/robots.txt file, as illustrated below.
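As an illustration (the paths below are hypothetical), a robots.txt entry like this one would keep crawlers out of all matching pages, so make sure none of your indexable pages fall under such a Disallow rule:
User-agent: *
Disallow: /private/
Disallow: /tmp/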
7. Pages should have a clear, concise title, and duplicate titles should be avoided whenever possible. Pages without any title will always be omitted from indexing.
8. Article pages should present a creation date, via either one of the following meta tags:
article:published_time
or
og:updated_time
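For example (with an illustrative date value), either of these tags placed in the page's <head> will let the crawler pick up the publication date:
<meta property="article:published_time" content="2023-05-10T08:00:00+00:00" />
<meta property="og:updated_time" content="2023-05-10T08:00:00+00:00" />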
9. #8 also applies, as a best practice, to any other pages, so that fresh content can be correctly and consistently presented at the top of the search results for any given query.
10. The presence of an author, og:author, or article:creator meta tag is a best practice, even if the value is something generic such as "Admin", in order to provide better data structure for search in the future.
11. The presence of a category or og:category tag will also help with faceting and a more consistent data structure.
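For #10 and #11, the tags can be as simple as the following (illustrative values, mirroring the tag names listed above):
<meta name="author" content="Admin" />
<meta property="og:category" content="News" />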
12. If two or more different pages, residing at two or more different URLs, present the same actual content, they should each have a canonical tag indicating which one of the URLs should be indexed. Otherwise, the search API will return duplicates in the results. See the example below.
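The canonical tag is typically expressed as a <link> element in the page's <head>. For example (hypothetical URL), both duplicate pages would point to the preferred address like this:
<link rel="canonical" href="https://website.tld/articles/my-article" />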