Background Merge Hit Exception
Understanding and resolving MergeAbortedException in Apache Solr
What Does This Error Look Like?
You will see one or both of these in your Solr logs:
ERROR RequestHandlerBase - Server exception java.io.IOException: background merge hit exception: _segmentName(9.x.x):C508909/4761 ... [maxNumSegments=1] [ABORTED]
Caused by: org.apache.lucene.index.MergePolicy$MergeAbortedException: Merge aborted.
at org.apache.lucene.index.MergeRateLimiter.maybePause(...)
The error is always accompanied by a long list of Lucene segment identifiers and ends with [ABORTED].
What Is Happening?
The sequence that produces this error:
- A commit request arrives with optimize=true
- Solr translates it into a Lucene forceMerge(maxNumSegments=1) to compact all segments into one

The key indicator is [maxNumSegments=1] at the end of the error — this means a forceMerge (optimize) was requested, not a normal background merge.
Why Does the Merge Get Aborted?
Lucene's ConcurrentMergeScheduler will abort an in-progress merge when:
- Another optimize/commit arrives while the first one is still running — the new commit triggers a new merge plan, aborting the old one
- The Solr core is being reloaded or closed — all running merges are immediately aborted
- A core swap or collection reload happens during the merge
- The index is very large (hundreds of thousands of documents across many segments) — the forceMerge takes a long time, increasing the chance of interruption
Is This Dangerous?
No. An aborted merge does not corrupt the index or lose documents: Lucene merges are transactional, and the merged segment only replaces the original segments once the merge completes successfully. The cost is wasted I/O and CPU, plus an index left with more segments than intended.
How to Fix It
1. Stop Sending optimize=true on Every Commit
This is the most common root cause. Many CMS integrations (Drupal Search API, WordPress, custom ETL pipelines) include optimize=true in their commit requests. This is almost never necessary and causes exactly this problem on large indexes.
Instead of:
/solr/mycore/update?commit=true&optimize=true
Use:
/solr/mycore/update?commit=true
Solr's built-in TieredMergePolicy handles segment merging automatically in the background — you do not need to manually optimize.
2. If Using Drupal Search API
In your Drupal Search API server configuration, ensure that the "Optimize on commit" checkbox is unchecked. This setting is found at:
Admin → Configuration → Search and metadata → Search API → [Your Server] → Edit
For drush indexing commands, make sure your indexing script does not call $index->getServerInstance()->getBackend()->optimizeIndex() after batch indexing.
3. If You Must Optimize
If you have a valid reason to optimize (e.g., after a massive bulk re-index):
- Run it during a maintenance window when no other indexing is happening
- Run it only once, not on every commit
- Expect it to take minutes to hours on large indexes (500K+ documents)
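A minimal sketch of such a one-off maintenance optimize, assuming a local core named mycore (the base URL, default timeout, and maxSegments value are example choices, not requirements):

```python
import urllib.request
from urllib.parse import urlencode

def build_optimize_url(base: str, max_segments: int = 1) -> str:
    """Build a one-off optimize request. A maxSegments value above 1
    merges down to a few segments instead of one, which is cheaper
    and usually gives most of the benefit."""
    return base + "?" + urlencode({
        "optimize": "true",
        "maxSegments": str(max_segments),
        "waitSearcher": "true",
    })

def optimize_index(base: str = "http://localhost:8983/solr/mycore/update",
                   max_segments: int = 1, timeout: int = 7200) -> bytes:
    """Run during a maintenance window only. The generous timeout matters:
    a forceMerge on a 500K+ document index can run for a long time."""
    url = build_optimize_url(base, max_segments)
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# Usage (maintenance window only):
# optimize_index(max_segments=2)
```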
4. Tune solrconfig.xml Merge Settings
You can adjust the merge policy to keep segment count reasonable without forceMerge:
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
  <double name="maxMergedSegmentMB">5120</double>
</mergePolicyFactory>
Quick Reference
| Detail | Value |
|---|---|
| Error class | java.io.IOException wrapping MergePolicy$MergeAbortedException |
| Trigger | commit with optimize=true or maxSegments=1 |
| Root cause | Concurrent commits/reloads abort in-progress forceMerge |
| Data loss? | No — merges are transactional |
| Fix | Remove optimize=true from commit requests |
| Solr versions | All versions (4.x through 9.x) |