# Troubleshooting
Common issues and solutions when using RavenRustRAG.
## Ollama Connection Errors

### "Failed to connect to Ollama"

Symptoms: Indexing or querying fails with a connection error.

Solutions:

- Verify Ollama is running.
- Start Ollama if it is not running.
- If using a non-default URL, pass `--url`.
- Run diagnostics with `ravenrag doctor`.
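As a sketch of the steps above, assuming a default Ollama install listening on port 11434 (`/api/tags` is Ollama's standard model-list endpoint; the remote host in the `--url` example is a placeholder):

```shell
# Check whether Ollama is responding on its default port
curl -s http://localhost:11434/api/tags

# Start Ollama in the foreground if it is not running
ollama serve

# Point ravenrag at a non-default Ollama URL (placeholder host)
ravenrag query "test" --url http://ollama-host:11434

# Run automated diagnostics
ravenrag doctor
```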
### "Model not found"

Symptoms: Embedding fails with a model error.

Solutions:

- Pull the model with `ollama pull`.
- Verify it is available with `ollama list`.
- If using a different model, specify it with `--model`.
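For example, with the `nomic-embed-text` model used elsewhere on this page (`ollama pull` and `ollama list` are standard Ollama commands):

```shell
# Download the embedding model
ollama pull nomic-embed-text

# Confirm it appears in the local model list
ollama list

# Index with an explicit model name
ravenrag index ./docs --model nomic-embed-text
```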
## Database Errors

### "Database is locked"

Symptoms: Operations fail with SQLite busy/locked errors.

Solutions: This usually resolves itself within 5 seconds (the connection's `busy_timeout`). If the error persists:

- Check for zombie processes holding the lock.
- Remove stale WAL files (only if no process is using the database).
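A sketch of the checks above, assuming the database file is named `raven.db` as in the Docker example later on this page (`-wal` and `-shm` are SQLite's standard sidecar file names):

```shell
# See which processes have the database open
lsof raven.db

# Only if no process is using it: remove stale WAL sidecar files
rm -i raven.db-wal raven.db-shm
```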
### "Dimension mismatch"

Symptoms: A query fails because the embedding dimension doesn't match the stored vectors.

Cause: You indexed with one model (e.g., the 768-dimension nomic-embed-text) but are querying with a different one (e.g., the 1536-dimension text-embedding-3-small).

Solution: Use the same model for indexing and querying, or clear and re-index:

```shell
ravenrag clear
ravenrag index ./docs --model nomic-embed-text
ravenrag query "test" --model nomic-embed-text
```
## Indexing Issues

### "No documents found"

Symptoms: `ravenrag index` reports 0 documents.

Solutions:

- Check that your file extensions match; the default set is `txt,md`.
- Verify the path contains files.
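To see which files match the default `txt,md` extension set before indexing (plain `find`, independent of ravenrag):

```shell
# List the files the indexer would consider, then count them
find ./docs -type f \( -name '*.txt' -o -name '*.md' \)
find ./docs -type f \( -name '*.txt' -o -name '*.md' \) | wc -l
```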
### "Duplicate indexing / growing database"

Symptoms: The database grows on every re-index, even without content changes.

Solution: RavenRustRAG uses fingerprinting for incremental indexing, so re-indexing unchanged files should be a no-op. If the database grows unexpectedly:

- Check whether file modification times are changing (backup and syncing tools can rewrite them).
- Clear and re-index from scratch.
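For example (the mtime check uses plain `ls`; `ravenrag clear` is the same command shown in the dimension-mismatch fix above):

```shell
# Inspect modification times; sync or backup tools that rewrite mtimes
# can defeat fingerprint-based change detection
ls -lt ./docs | head

# Rebuild the index from scratch
ravenrag clear
ravenrag index ./docs
```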
## Server Issues

### "Address already in use"

Symptoms: `ravenrag serve` fails to bind.

Solutions:

- Use a different port.
- Find and kill the existing process.
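A sketch, assuming the server listens on port 8080 (the actual default, and the port flag spelling, may differ; check `ravenrag serve --help`):

```shell
# Find the process bound to the port, then stop it
lsof -i :8080
kill <pid-from-lsof>

# Or start the server on another port (--port is illustrative)
ravenrag serve --port 8081
```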
### "401 Unauthorized"

Symptoms: API calls return 401.

Solution: Include the Bearer token in the request's `Authorization` header.
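A hedged example: only the standard `Authorization: Bearer` header shape is assumed here; the endpoint path, port, and token variable are placeholders, not ravenrag's actual API surface.

```shell
# /query, port 8080, and $RAVENRAG_TOKEN are placeholders for your
# endpoint, port, and token
curl -H "Authorization: Bearer $RAVENRAG_TOKEN" \
  "http://localhost:8080/query?q=test"
```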
## Performance Issues

### Slow Indexing

- Check that Ollama is running on the same machine; a remote Ollama adds network latency to every embedding call.
- Consider a faster model (nomic-embed-text is fast; larger models are slower).
- Reduce chunk overlap to generate fewer chunks.
- Use SSD storage for the database.
### Slow Queries

- Check the database size; very large indexes benefit from mmap (enabled by default).
- Reduce `top_k` if you don't need many results.
- Avoid hybrid search if BM25 isn't needed for your use case.
- Run `ravenrag benchmark` to baseline your system.
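For example (`ravenrag benchmark` is documented in the list above; the `--top-k` flag spelling is illustrative and may differ from the actual CLI):

```shell
# Baseline embedding and search latency on this machine
ravenrag benchmark

# Request fewer results per query (flag name illustrative)
ravenrag query "test" --top-k 5
```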
## Docker Issues

### "Permission denied" on volume

Symptoms: The container can't write to the mounted volume.

Solution: The container runs as UID 65534. Ensure the host directory is writable by that user:

```shell
chmod 777 ./data   # or chown the directory to UID 65534
docker run -v ./data:/data ghcr.io/egkristi/ravenrustrag:latest info --db /data/raven.db
```
### Container can't reach Ollama

Solution: Use host networking (Linux) or the host's gateway address (Docker Desktop):

```shell
# Linux
docker run --network host ghcr.io/egkristi/ravenrustrag:latest \
  query "test" --url http://localhost:11434

# macOS/Windows (Docker Desktop)
docker run ghcr.io/egkristi/ravenrustrag:latest \
  query "test" --url http://host.docker.internal:11434
```
## Getting Help

- Run `ravenrag doctor` for automated diagnostics.
- Use the `--verbose` flag for detailed logging.
- Use `--json` for machine-parseable output.
- Check GitHub Issues.
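Combining the flags above (`--verbose` and `--json` are both listed here, and `info --db` appears in the Docker example; whether every flag combines with every subcommand is an assumption):

```shell
# Diagnostics with detailed logging
ravenrag doctor --verbose

# Machine-parseable database info
ravenrag info --db raven.db --json
```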