ConcurrentModificationException — Two Operations Collided Inside Solr

Solr Error Guide

Two internal Solr operations tried to use the same piece of data at the exact same instant. This is a transient threading glitch — here's what it means and whether you need to worry.


What Happened?

The Error: java.util.ConcurrentModificationException: null (the trailing "null" just means the exception carried no detail message; it does not mean anything in your data was null)

Inside Solr, two threads (think of them as two workers) tried to use the same internal list or data structure at the same instant. One was reading through the list while the other was changing it. Java's fail-fast safety check caught the conflict and stopped the operation rather than return corrupted results.


The Simple Explanation

Imagine you're reading a guest list at a party, checking names one by one. While you're in the middle of reading, someone walks up and adds three new names to the list — right in the section you're currently reading. Now your count is off, you might skip names or read the same name twice. Instead of giving you a wrong answer, Java stops and says: "Whoa — someone changed the list while I was reading it. I can't trust my results anymore."
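The guest-list analogy maps directly onto Java code. This minimal, self-contained sketch (class and method names are ours, purely for illustration) mutates an ArrayList while a for-each loop is reading it, which triggers exactly the fail-fast check that fired inside Solr:

```java
import java.util.ArrayList;
import java.util.List;

public class GuestListDemo {
    // Iterating a plain ArrayList while it is mutated mid-loop trips
    // the same fail-fast modification check that Solr hit internally.
    public static String readWhileAdding() {
        List<String> guests = new ArrayList<>(List.of("Ann", "Bob", "Cat"));
        try {
            for (String g : guests) {
                // Simulates the "other worker" adding a name mid-read.
                guests.add("Dan");
            }
            return "no exception";
        } catch (java.util.ConcurrentModificationException e) {
            return "ConcurrentModificationException";
        }
    }

    public static void main(String[] args) {
        System.out.println(readWhileAdding()); // prints "ConcurrentModificationException"
    }
}
```

Note the iterator notices the change and aborts on the very next step, before any wrong answer can be returned, which is the behavior protecting your Solr results.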

What Happens Inside Solr (diagram): Thread A (query) is reading through a list of results (items 1, 2, 3) while Thread B (update) is adding new items to it (items 4, 5, 6). They collide on the same data at the same instant, and Java stops the operation to prevent corrupted results.


What Triggers It?

Heavy Indexing + Querying at the Same Time

The most common cause. Your application is sending a burst of updates/adds to the index while simultaneously running search queries. Internally, Solr's caches or facet structures briefly get caught between a read and a write.

AutoCommit During Active Queries

Solr's autoCommit triggers periodically to make new documents searchable. If a commit happens while a complex query (like a faceted search) is iterating over internal data, the two can collide.
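These commit windows are configured in solrconfig.xml. A typical shape, with illustrative values rather than recommendations, looks like:

```
<!-- solrconfig.xml: hard commits persist data; soft commits make new
     documents visible to queries. -->
<autoCommit>
  <maxTime>15000</maxTime>           <!-- hard-commit at most every 15 s -->
  <openSearcher>false</openSearcher> <!-- keep the current searcher open -->
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime>            <!-- new docs searchable within 5 s -->
</autoSoftCommit>
```

Keeping openSearcher=false on hard commits and letting soft commits control visibility reduces how often new searchers are opened while queries are in flight.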

Cache Warming or Searcher Opening

After a commit, Solr opens a new "searcher" and may warm caches. During this brief window, the old searcher is still serving queries while the new one is being built — and they might access shared structures simultaneously.


Is It Serious?

Almost never. This is a transient, harmless error.

The single request that hit this collision failed, but your data is perfectly safe. No documents were lost or corrupted. The next request (even milliseconds later) will work fine because the collision window has passed. Java threw this exception precisely to protect your data from corruption.

No data corruption

No documents lost

Self-resolving


What Should You Do?

If It Happened Once or Twice

Ignore it. This is completely normal under load. A single occurrence means two threads briefly collided and Java safely aborted one of them. Your next request will work fine.

If It Happens Frequently

If you see this many times per hour, your indexing pipeline may be sending too many rapid updates while simultaneously running heavy queries. Try batching your updates (send 100-500 documents per request instead of one at a time) and reducing commit frequency.
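As a sketch of the batching idea, the hypothetical helper below chunks a document list into fixed-size batches, so that each /update request carries hundreds of documents instead of one (the class name and batch size are ours, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class UpdateBatcher {
    // Splits a stream of documents into fixed-size batches so each
    // HTTP request to /update carries many docs instead of one.
    public static <T> List<List<T>> batches(List<T> docs, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            out.add(docs.subList(i, Math.min(i + batchSize, docs.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> docs = new ArrayList<>();
        for (int i = 0; i < 1200; i++) docs.add(i);
        // 1200 docs at batch size 500 -> 3 requests instead of 1200.
        System.out.println(batches(docs, 500).size()); // prints 3
    }
}
```

Fewer, larger requests shrink the number of read/write interleavings Solr has to survive, which is exactly what reduces the collision rate.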

Add Retry Logic

If your application needs 100% reliability, add a simple retry: if a query returns a 500 error, wait 1 second and try again. The collision is gone by then.
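A minimal retry wrapper might look like the sketch below. The helper name and the single-retry policy are our assumptions, not a Solr API; the one-second pause comfortably outlives the collision window described above:

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the given operation; on failure, waits delayMillis and tries
    // exactly once more. A second failure propagates to the caller.
    public static <T> T withOneRetry(Callable<T> op, long delayMillis) throws Exception {
        try {
            return op.call();
        } catch (Exception first) {
            Thread.sleep(delayMillis);
            return op.call();
        }
    }
}
```

In practice you would wrap your query call, e.g. Retry.withOneRetry(() -> solrClient.query(q), 1000), retrying only when the response was a 500-class error.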


How to Reduce These Errors

If you're seeing this repeatedly and want to minimize it:

Batch your updates: send 100-500 docs per request instead of one-by-one. Fewer requests mean fewer chances for a collision.

Reduce explicit commits: don't send commit=true with every update. Let Solr's autoCommit handle it; it's designed to batch commits efficiently.

Separate indexing and query time: if possible, do heavy bulk indexing during off-peak hours when fewer search queries are running.

Use commitWithin: instead of explicit commits, use commitWithin=10000 (milliseconds) to let Solr pick the optimal commit timing and reduce contention.
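For commitWithin specifically, the deadline rides along as a parameter on the update request. This hypothetical helper just shows where the parameter goes (the base URL and collection name are placeholders, not values from your setup):

```java
public class CommitWithinUrl {
    // Builds an update endpoint URL carrying a commitWithin deadline (ms),
    // telling Solr to commit on its own schedule within that window.
    public static String updateUrl(String baseUrl, String collection, long commitWithinMs) {
        return baseUrl + "/" + collection + "/update?commitWithin=" + commitWithinMs;
    }

    public static void main(String[] args) {
        System.out.println(updateUrl("http://localhost:8983/solr", "mycollection", 10000));
        // prints "http://localhost:8983/solr/mycollection/update?commitWithin=10000"
    }
}
```

Because Solr chooses the commit moment itself, it can coalesce many pending documents into one commit instead of being forced to commit mid-query.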

Quick Reference

Error Class: java.util.ConcurrentModificationException
Root Cause: two threads accessed the same internal collection simultaneously, one reading, one writing
Severity: low; transient
Data Loss Risk: none; Java threw the exception to prevent corruption
Fix: usually none needed. For frequent occurrences: batch updates, reduce commits, add client-side retry.

Seeing This Error Repeatedly?

Check your Error Audit to see how often it's happening and whether it clusters around indexing bursts.