# Common Issues
Solutions to frequently encountered problems.
## Startup Issues

### Service Won't Start

Symptoms: Service fails to start or exits immediately.
Diagnosis:
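A quick way to narrow down the cause (the port, database path, and container name below are assumptions; adjust them to your deployment):

```shell
# Is something else already listening on the service port? (8081 assumed)
lsof -i :8081

# Can the process write to the database path? (path is illustrative)
touch ./cache.db && echo "writable"

# If running in Docker, read the exit logs (container name assumed)
docker logs embedcache
```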
Common Causes:

- Port already in use
- Invalid model name
- Database path not writable
### Model Download Fails
Symptoms: First startup hangs or fails during model download.
Solutions:
- Check internet connectivity
- Try a different model
- Manually pre-download:
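If the models come from the Hugging Face Hub (an assumption; the model name below is illustrative), they can be fetched into the local cache before starting the service:

```shell
# Download into the local Hugging Face cache so startup skips the fetch
pip install -U "huggingface_hub[cli]"
huggingface-cli download sentence-transformers/all-MiniLM-L6-v2
```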
## API Errors

### 400 Bad Request: Unsupported embedding model
Cause: Requested model not in ENABLED_MODELS.
Solution:
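One likely fix (the list syntax and model names are assumptions based on the `ENABLED_MODELS` setting named above):

```shell
# Add the requested model to the enabled list, then restart the service
ENABLED_MODELS=all-MiniLM-L6-v2,bge-small-en
```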
### 400 Bad Request: Unsupported chunking type
Cause: LLM chunking requested but LLM not configured.
Solution:
```shell
# Either configure LLM
LLM_PROVIDER=ollama
LLM_MODEL=llama3

# Or use word chunking
curl ... -d '{"config": {"chunking_type": "words"}}'
```
### 500 Internal Server Error
Diagnosis:
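A starting point for diagnosis (the container name and request body are assumptions; adapt them to your setup):

```shell
# Tail the service logs for a stack trace (container name assumed)
docker logs --tail 100 embedcache

# Reproduce with a verbose request to capture status and headers
curl -v -X POST http://localhost:8081/v1/embed \
  -H "Content-Type: application/json" \
  -d '{"text": ["test"]}'
```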
Common Causes:

- Database write failure
- Model loading error
- URL fetch failure
## Performance Issues

### Slow First Request
Cause: Model loading on first use.
Solution: Pre-warm after startup:
```shell
curl -X POST http://localhost:8081/v1/embed \
  -H "Content-Type: application/json" \
  -d '{"text": ["warmup"]}'
```
### High Memory Usage
Cause: Multiple large models loaded.
Solutions:
1. Enable fewer models
2. Use smaller models
3. Use quantized models (*Q variants)
### Slow URL Processing

Causes:

- Slow URL fetch
- Large content
- LLM chunking enabled

Solutions:

1. Use word chunking for speed
2. Reduce chunk size
3. Check network connectivity
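As a sketch, forcing word chunking on a URL request might look like this (the `url` field is an assumption; `chunking_type` is the same setting shown elsewhere in this guide):

```shell
curl -X POST http://localhost:8081/v1/embed \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/page", "config": {"chunking_type": "words"}}'
```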
## LLM Issues

### LLM Chunkers Not Available
Symptoms: Only "words" in chunking_types.
Diagnosis:
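First confirm the LLM settings actually reached the process:

```shell
# List the LLM-related environment variables the service sees
env | grep '^LLM_'
```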
Solution: Configure LLM provider:
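For example, with Ollama (the base URL matches the default port used elsewhere in this guide; the model name is illustrative):

```shell
LLM_PROVIDER=ollama
LLM_BASE_URL=http://localhost:11434
LLM_MODEL=llama3
```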
### LLM Chunking Falls Back to Words

Causes:

- LLM server not running
- Timeout exceeded
- Invalid response
Diagnosis:
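Two checks that usually isolate the cause (the container name and grep terms are assumptions):

```shell
# Is the LLM server answering? (Ollama default endpoint from this guide)
curl http://localhost:11434/api/tags

# Look for fallback or timeout warnings in the service logs
docker logs embedcache 2>&1 | grep -iE 'fallback|timeout'
```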
Solutions:
1. Start LLM server
2. Increase timeout: LLM_TIMEOUT=120
3. Check LLM server logs
### Ollama Connection Refused
Symptoms: "Connection refused" in logs.
Solutions:
```shell
# Start Ollama
ollama serve

# Check if running
curl http://localhost:11434/api/tags

# If the service runs in Docker, point it at the host's Ollama
LLM_BASE_URL=http://host.docker.internal:11434
```
## Cache Issues

### Cache Not Working
Diagnosis:
```shell
# Check cache file exists
ls -la cache.db

# Check cache contents
sqlite3 cache.db "SELECT COUNT(*) FROM cache;"
```
Solutions:

1. Check DB_PATH is writable
2. Check disk space
3. Verify the cache table exists
### Cache File Growing Large
Solution: Vacuum the database:
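`VACUUM` rewrites the SQLite file and reclaims space left by deleted rows; run it while the service is stopped:

```shell
sqlite3 cache.db "VACUUM;"
```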
### Clear Cache
```shell
# Delete all entries
sqlite3 cache.db "DELETE FROM cache;"

# Or delete the file (while the service is stopped)
rm cache.db
```
## Docker Issues

### Container Exits Immediately
Diagnosis:
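A first look at why the container died (the container name "embedcache" is an assumption):

```shell
# Read the last output before the exit
docker logs embedcache

# Check the recorded exit code
docker inspect embedcache --format '{{.State.ExitCode}}'
```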
Solutions:

1. Check environment variables
2. Verify volume mounts
3. Check resource limits
### Can't Connect to Host Ollama
Solution:
```shell
# Use host.docker.internal on macOS/Windows
LLM_BASE_URL=http://host.docker.internal:11434

# Or use host networking on Linux
docker run --network host embedcache
```
### Permission Denied for Volume
Solution:
```shell
# Fix ownership
sudo chown -R 1000:1000 /path/to/data

# Or run as root (not recommended)
docker run --user root embedcache
```
## Debug Mode
Enable detailed logging:
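The variable name below is an assumption; match it to your configuration reference:

```shell
# Turn on verbose logging, then restart the service
LOG_LEVEL=debug
```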