Solr is a beast—it loves RAM like a dog loves a steak. If your Solr server is gobbling up memory and crashing, don’t panic! Here’s what you need to know, plus battle-tested ways to keep things lean, mean, and not out-of-memory.
Solr eats memory to build search results, cache data, and keep things fast.
But:
- Bad configuration or huge, inefficient requests can cause even the biggest server to choke and burn through RAM.
- Sometimes, small indexes on giant machines will still crash if your setup isn’t right.
- Good news: Opensolr has self-healing—if Solr crashes, it’ll be back in under a minute. Still, prevention is better than panic.
Want to save bandwidth and RAM? Read these tips.
Optimizing your queries is a win-win: less data in and out, and less stress on your server.
- **Limit `rows`:** Keep the `rows` parameter below 100 for most queries, e.g. `&rows=100`.
- **Avoid deep pagination:** If you page far into the result set (e.g. `&start=500000&rows=100`), Solr has to allocate a ton of memory for all those results. Keep `start` under 50,000 if possible.
- **Use `docValues`:** Set `docValues="true"` on fields you sort or facet on, in `schema.xml`. Example:

```xml
<field name="name" docValues="true" type="text_general" indexed="true" stored="true" />
```
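The two query-side limits above can be enforced before a request ever reaches Solr. A minimal sketch in Python (the endpoint URL and core name `mycore` are hypothetical placeholders):

```python
from urllib.parse import urlencode

SOLR_SELECT = "http://localhost:8983/solr/mycore/select"  # hypothetical endpoint

def build_query(q, start=0, rows=100):
    """Build a Solr select URL, capping rows and refusing deep pagination."""
    rows = min(rows, 100)  # keep rows at or below 100 per request
    if start > 50_000:
        # deep paging forces Solr to buffer start+rows results in memory;
        # Solr's cursorMark feature is the usual alternative
        raise ValueError("start > 50,000: page with cursorMark instead")
    return SOLR_SELECT + "?" + urlencode({"q": q, "start": start, "rows": rows})

print(build_query("title:solr", start=500, rows=250))
# rows is capped to 100 in the generated URL
```

Capping on the client side means a misbehaving consumer can never trigger a huge allocation on the server.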
For highlighting, you may want even more settings:
```xml
<field name="description" type="text_general" indexed="true" stored="true" docValues="true" termVectors="true" termPositions="true" termOffsets="true" storeOffsetsWithPositions="true" />
```
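Once a field is indexed with those options, asking Solr to highlight it uses the standard `hl` request parameters. A sketch of building such a request (the core name `mycore` is a made-up placeholder):

```python
from urllib.parse import urlencode

# Standard Solr highlighting parameters; "mycore" is a placeholder core name
params = urlencode({
    "q": "description:memory",
    "hl": "true",
    "hl.fl": "description",  # field(s) to return highlighted snippets for
    "rows": 10,
})
print("http://localhost:8983/solr/mycore/select?" + params)
```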
Solr caches are great… until they eat all your memory and leave nothing for real work.
The big four:

- `filterCache`: stores document ID lists for filter queries (`fq`)
- `queryResultCache`: stores doc IDs for search results
- `documentCache`: caches stored field values
- `fieldCache`: stores all values for a field in memory (dangerous for big fields!)

Solution: Tune these in `solrconfig.xml` and keep sizes low. Example:

```xml
<filterCache size="1" initialSize="1" autowarmCount="0"/>
```
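The same pattern applies to the other caches. A minimal low-memory sketch for the `<query>` section of `solrconfig.xml` (the sizes here are illustrative floor values, not tuned recommendations):

```xml
<query>
  <filterCache size="1" initialSize="1" autowarmCount="0"/>
  <queryResultCache size="1" initialSize="1" autowarmCount="0"/>
  <documentCache size="1" initialSize="1" autowarmCount="0"/>
</query>
```

Setting `autowarmCount="0"` also speeds up commits, since Solr won't repopulate caches from the old searcher on every new one.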
Questions? Want a config review or more tips? Contact the Opensolr team!