ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 1


The Error

You're running a search query and Solr responds with this:

java.lang.ArrayIndexOutOfBoundsException: Index -1 out of bounds for length 1

The full stack trace typically starts at RequestHandlerBase and points into Lucene's internal index-reading code. Your query simply fails — no results, just an error.

This is not a bug in your query — it's Solr telling you that something is wrong with the underlying index data.
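If you want to detect this condition programmatically, a small sketch can scan a Solr error response for the exception signature. The JSON shape (`error.msg`, `error.code`) follows Solr's standard JSON error format; the sample payload below is illustrative, not a captured response.

```python
import json

SIGNATURE = "ArrayIndexOutOfBoundsException"

def is_index_corruption_error(body: str) -> bool:
    """Return True if a Solr error response mentions the AIOOBE signature."""
    try:
        payload = json.loads(body)
    except ValueError:
        return SIGNATURE in body  # fall back to a plain-text scan
    msg = payload.get("error", {}).get("msg", "")
    return SIGNATURE in msg

# Illustrative error body in Solr's standard JSON error format:
sample = json.dumps({
    "error": {
        "msg": "java.lang.ArrayIndexOutOfBoundsException: "
               "Index -1 out of bounds for length 1",
        "code": 500,
    }
})
print(is_index_corruption_error(sample))  # → True
```

A check like this is handy in monitoring: alert on this signature specifically, since it calls for an optimize rather than a query fix.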


What's Actually Happening

When Solr executes a search, it reads through index segments — compressed data structures that Lucene uses to store your documents. Each segment contains term dictionaries, postings lists, and position data.

The -1 index means Lucene tried to look up a term or document position that doesn't exist in the segment's internal array. This is like opening a book to page -1 — the page simply isn't there.
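As a toy model (not Lucene code), imagine a segment whose header claims more terms than were actually flushed to disk; a lookup that lands outside the real array produces exactly this kind of failure:

```python
# Illustrative toy model only: a segment whose header claims more terms
# than were actually written before a crash.

class ToySegment:
    def __init__(self, claimed_count, terms):
        self.claimed_count = claimed_count  # what the header says
        self.terms = terms                  # what's actually on disk

    def lookup(self, ordinal):
        if ordinal < 0 or ordinal >= len(self.terms):
            raise IndexError(
                f"Index {ordinal} out of bounds for length {len(self.terms)}"
            )
        return self.terms[ordinal]

# Header says 2 terms, but only 1 was flushed before the crash.
broken = ToySegment(claimed_count=2, terms=["apache"])
try:
    broken.lookup(-1)   # a failed internal lookup can hand back ordinal -1
except IndexError as e:
    print(e)            # → Index -1 out of bounds for length 1
```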

[Diagram: how Solr reads index segments during a query. Healthy segments _0, _1, _2 are read in order; a corrupted segment _3 triggers the Index -1 lookup and the query crashes. An optimize (force merge) rewrites all segments, eliminating the corrupted one.]


Common Causes

1. Interrupted Indexing (Most Common)

If the indexing process was killed mid-commit — due to a server restart, process crash, out-of-memory event, or network interruption — the last segment may have been written only partially. The term dictionary says "there are N entries" but only N-1 were actually flushed to disk.

2. Disk Full During a Commit

Solr writes new segment files during a commit. If the disk fills up mid-write, you get a segment where the header claims more data than what's actually on disk. Lucene tries to read position -1 because the data it expected simply isn't there.

3. Corrupted Segment After a Hard Server Crash

If the server lost power or the JVM was killed with kill -9 while Solr was writing, the filesystem may not have flushed all buffered writes. The segment file looks complete but contains zeroed-out or garbage bytes in critical positions.

4. Stale NFS / Network Storage

If your Solr data directory is on network-attached storage (NFS, CIFS), caching or network glitches can cause Solr to see a stale version of a segment file that doesn't match the segments metadata.


How to Fix It

Solution 1: Optimize (Force Merge) the Index

The safest and most common fix. An optimize operation rewrites all segments into a new, clean set — skipping any corrupted internal structures:

Via Solr Admin UI or API:

https://your-index.solrcluster.com/solr/your_core/update?optimize=true

Or in the Opensolr Control Panel, go to your Opensolr Index and click Optimize. This merges all segments into one clean segment, eliminating any corruption.
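The optimize call above can also be scripted. Here is a minimal sketch that builds the request with Python's standard library; the host and core name are placeholders from the examples above, and `maxSegments` is an optional standard Solr parameter:

```python
from urllib.parse import urlencode
from urllib.request import Request

SOLR_BASE = "https://your-index.solrcluster.com/solr"  # placeholder host
CORE = "your_core"                                     # placeholder core name

def optimize_request(max_segments=None):
    """Build the /update?optimize=true request for this core."""
    params = {"optimize": "true"}
    if max_segments is not None:
        # Optionally cap the final segment count instead of merging to one.
        params["maxSegments"] = str(max_segments)
    url = f"{SOLR_BASE}/{CORE}/update?{urlencode(params)}"
    return Request(url, method="GET")

req = optimize_request()
print(req.full_url)
# To actually send it: urllib.request.urlopen(req, timeout=600)
# (use a long timeout -- optimize on a large index can take a while)
```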

Important: Optimize can be resource-intensive on large indexes. It's safe to run, but expect temporarily higher CPU/memory usage while it processes.

Solution 2: Re-index Your Data

If optimize doesn't resolve it (rare), or if the corruption is too deep, a full re-index is the definitive fix:

  1. Delete all documents from the index:
https://your-index.solrcluster.com/solr/your_core/update?stream.body=<delete><query>*:*</query></delete>&commit=true
  2. Re-index your data from your application
  3. Commit to finalize

This gives you a completely fresh index with no legacy segment issues.
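The three steps can be sketched as HTTP requests. One caveat worth hedging: recent Solr versions disable `stream.body` by default for security reasons, so this sketch POSTs the same delete query as a JSON body instead. The host, core name, and sample document are placeholders:

```python
import json
from urllib.request import Request

SOLR_BASE = "https://your-index.solrcluster.com/solr"  # placeholder host
CORE = "your_core"                                     # placeholder core name
UPDATE = f"{SOLR_BASE}/{CORE}/update"
JSON_HDR = {"Content-Type": "application/json"}

def delete_all():
    # Step 1: delete everything. POSTing the body avoids stream.body,
    # which many Solr versions disable by default.
    body = json.dumps({"delete": {"query": "*:*"}}).encode()
    return Request(f"{UPDATE}?commit=true", data=body,
                   headers=JSON_HDR, method="POST")

def add_docs(docs):
    # Step 2: re-index documents from your application.
    body = json.dumps(docs).encode()
    return Request(UPDATE, data=body, headers=JSON_HDR, method="POST")

def commit():
    # Step 3: finalize with an explicit commit.
    return Request(f"{UPDATE}?commit=true", method="GET")

# Each Request can be sent with urllib.request.urlopen(...)
steps = [delete_all(), add_docs([{"id": "1", "title": "hello"}]), commit()]
print([s.get_method() for s in steps])  # → ['POST', 'POST', 'GET']
```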

Solution 3: Check Your Disk Space

Before anything else, verify that your Opensolr Index isn't running out of its allocated disk space. A full disk during indexing is one of the most common triggers for this corruption.

Check your disk usage in the Opensolr Control Panel under your index's dashboard. If you're at or near 100%, you'll need to either clean up old data or upgrade your plan.
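On Opensolr the Control Panel is the place to look, but if you also run a self-hosted Solr, a quick local check against the 20% free-space guideline can be sketched with the standard library (the data directory path is a placeholder):

```python
import shutil

def percent_free(path="."):
    """Return the percentage of free disk space on the volume holding path."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

# Replace with your Solr data dir, e.g. /var/solr/data (placeholder path)
pct = percent_free("/")
if pct < 20:
    print(f"WARNING: only {pct:.1f}% free - below the 20% safety margin")
else:
    print(f"OK: {pct:.1f}% free")
```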


How to Prevent It

  • Don't kill Solr mid-commit: always use graceful shutdown. Never kill -9 a Solr process during indexing.
  • Monitor disk space: keep at least 20% free disk space. Solr needs room for segment merges and commits.
  • Use soft commits wisely: soft commits (commitWithin) are less likely to cause corruption than frequent hard commits.
  • Enable auto-commit: let Solr manage commits with autoCommit in solrconfig.xml instead of forcing manual commits.
  • Avoid indexing during peak query load: heavy concurrent indexing plus querying increases the chance of segment issues.
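The commitWithin approach above can be sketched as an indexing call that passes the standard `commitWithin` parameter (in milliseconds) so Solr coalesces many updates into one commit instead of hard-committing per document. Host, core, and document values are placeholders:

```python
import json
from urllib.request import Request

UPDATE = "https://your-index.solrcluster.com/solr/your_core/update"  # placeholder

def add_with_commit_within(docs, within_ms=15000):
    # commitWithin=15000 asks Solr to make these docs searchable within
    # 15 seconds, letting it batch commits instead of one per document.
    body = json.dumps(docs).encode()
    return Request(f"{UPDATE}?commitWithin={within_ms}", data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = add_with_commit_within([{"id": "42", "title": "example"}])
print(req.full_url)
```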

Is This Error Dangerous?

Not to your data. The original documents in your application (database, CMS, files) are unaffected — Solr's index is just a search copy. The error means the search index has a corrupted segment, not that your source data is damaged.

However, queries that hit the corrupted segment will fail until you fix it. Not every query will fail; it depends on which segments a given query needs to read. You might see intermittent errors where some searches work fine and others throw this exception.


Quick Checklist

  • Check your disk space in the Opensolr Control Panel — is it near full?
  • Try an optimize (force merge) first — this fixes most cases
  • If optimize fails, re-index your data from scratch
  • Check if indexing was interrupted recently (server restart, timeout, crash)
  • Going forward, ensure your application uses commitWithin instead of hard commits after every document

This error is recoverable. An optimize usually clears it up in minutes. If you need help, reach out to us at support@opensolr.com — we can check your index health directly.