# Troubleshooting

Common issues and solutions when using Blogus.
## Installation Issues

### "Command not found: blogus"

**Symptoms:**

```bash
$ blogus
bash: blogus: command not found
```
**Solutions:**

1. **Check if installed:**

    ```bash
    pip show blogus
    ```

2. **Add to PATH (if using pip with `--user`):**

    ```bash
    # Add to ~/.bashrc or ~/.zshrc
    export PATH="$HOME/.local/bin:$PATH"
    source ~/.bashrc
    ```

3. **Use `python -m`:**

    ```bash
    python -m blogus --help
    ```

4. **Reinstall:**

    ```bash
    pip uninstall blogus
    pip install blogus
    ```
### "ModuleNotFoundError: No module named 'blogus'"

**Symptoms:**

```python
>>> from blogus import load_prompt
ModuleNotFoundError: No module named 'blogus'
```
**Solutions:**

1. **Check your virtual environment:**

    ```bash
    # Make sure you're in the right venv
    which python
    pip list | grep blogus
    ```

2. **Install in the current environment:**

    ```bash
    pip install blogus
    ```

3. **Check your Python version:**

    ```bash
    python --version  # Must be 3.9+
    ```
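If you need to assert the version requirement from code (for example in a setup script), a minimal sketch mirroring the 3.9+ requirement above (this helper is not part of Blogus):

```python
import sys

# Mirrors the "Must be 3.9+" requirement; not a Blogus API.
def meets_minimum(version=sys.version_info, minimum=(3, 9)):
    """True when the interpreter is at least the required version."""
    return tuple(version[:2]) >= minimum
```

Run it in the same interpreter that `which python` reports; otherwise you may be checking a different environment than the one pip installed into.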
### "Web dependencies not installed"

**Symptoms:**

```bash
$ blogus-web
Error: Web dependencies not installed. Run: pip install blogus[web]
```

**Solutions:**

```bash
pip install "blogus[web]"
# or
uv add "blogus[web]"
```
## API Key Issues

### "OPENAI_API_KEY not set"

**Symptoms:**

```text
Error: API key not found. Set OPENAI_API_KEY environment variable.
```
**Solutions:**

1. **Set the environment variable:**

    ```bash
    export OPENAI_API_KEY="sk-..."
    ```

2. **Use a .env file:**

    ```bash
    # .env
    OPENAI_API_KEY=sk-...
    ```

3. **Check it is set correctly:**

    ```bash
    echo $OPENAI_API_KEY
    # Should show your key (or at least sk-...)
    ```

4. **For other providers:**

    ```bash
    export ANTHROPIC_API_KEY="sk-ant-..."
    export GROQ_API_KEY="gsk_..."
    ```
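To see from Python which of these variables the current process can actually read, a generic `os.environ` sketch (not part of Blogus; the key names are the ones mentioned above):

```python
import os

# Hypothetical helper: report which provider keys are visible to the
# current process. The names mirror the variables discussed above.
PROVIDER_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GROQ_API_KEY"]

def missing_keys(env=os.environ):
    """Return the provider key names that are unset or empty."""
    return [name for name in PROVIDER_KEYS if not env.get(name)]
```

A subprocess only inherits variables that were exported in the parent shell, so this catches the common case where a key was set without `export`.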
### "Invalid API key" or "Authentication failed"

**Symptoms:**

```text
Error: Authentication failed. Check your API key.
litellm.AuthenticationError: Invalid API Key
```
**Solutions:**

1. **Verify the key is correct:**

    - Check for extra spaces or newlines
    - Ensure you copied the complete key
    - Try regenerating the key in your provider's dashboard

2. **Check key permissions:**

    - Some API keys have restricted permissions
    - Ensure your key has access to the models you're using

3. **Test the key directly:**

    ```bash
    curl https://api.openai.com/v1/models \
      -H "Authorization: Bearer $OPENAI_API_KEY"
    ```
### "Rate limit exceeded"

**Symptoms:**

```text
Error: Rate limit exceeded. Please retry after X seconds.
litellm.RateLimitError: Rate limit reached
```
**Solutions:**

1. **Wait and retry:**

    ```bash
    # Blogus will auto-retry with exponential backoff
    blogus analyze prompt.prompt --timeout 120
    ```

2. **Use a different model:**

    ```bash
    # Try a less popular model
    blogus analyze prompt.prompt --judge-model gpt-3.5-turbo
    ```

3. **Batch requests:**

    ```bash
    # Process fewer prompts at once
    blogus analyze prompts/chat.prompt
    # Wait, then
    blogus analyze prompts/summarize.prompt
    ```
4. **Upgrade your API tier:**

    - Check your provider's rate limits
    - Consider upgrading for higher limits

## Scan Issues

### "No LLM calls found"

**Symptoms:**

```bash
$ blogus scan
Found 0 LLM API calls.
```
**Solutions:**

1. **Check file patterns:**

    ```bash
    # Scan specific file types
    blogus scan --include "**/*.py" --include "**/*.js"
    ```

2. **Check exclusions:**

    ```bash
    # Don't exclude your source files
    blogus scan --exclude "nothing"
    ```

3. **Verify code patterns.** Blogus looks for standard SDK call patterns:

    ```python
    # These ARE detected:
    openai.chat.completions.create(...)
    anthropic.messages.create(...)
    litellm.completion(...)

    # These might NOT be detected:
    my_custom_wrapper(prompt)  # Custom wrappers
    client.invoke(...)         # Non-standard methods
    ```

4. **Try verbose mode:**

    ```bash
    blogus -vv scan
    ```
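As an illustration of the detection rule above, a hypothetical regex-based detector (not Blogus's real scanner) would flag the standard SDK calls and miss custom wrappers:

```python
import re

# Hypothetical sketch of SDK-call detection, not Blogus's actual scanner.
# It flags the standard patterns listed above and ignores custom wrappers.
SDK_CALL_PATTERNS = [
    re.compile(r"openai\s*\.\s*chat\s*\.\s*completions\s*\.\s*create\s*\("),
    re.compile(r"anthropic\s*\.\s*messages\s*\.\s*create\s*\("),
    re.compile(r"litellm\s*\.\s*completion\s*\("),
]

def find_llm_calls(source: str):
    """Return 1-based line numbers that contain a recognized SDK call."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SDK_CALL_PATTERNS):
            hits.append(lineno)
    return hits
```

This is why a call hidden behind `my_custom_wrapper(prompt)` yields "Found 0 LLM API calls": no textual pattern matches.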
### "Scan is slow"

**Solutions:**

1. **Exclude unnecessary directories:**

    ```bash
    blogus scan \
      --exclude "**/node_modules/**" \
      --exclude "**/venv/**" \
      --exclude "**/.git/**" \
      --exclude "**/dist/**"
    ```

2. **Scan specific directories:**

    ```bash
    blogus scan ./src ./lib
    ```

3. **Use non-recursive mode for a quick check:**

    ```bash
    blogus scan --no-recursive ./src
    ```
## Prompt File Issues

### "Invalid YAML in prompt file"

**Symptoms:**

```text
Error: Failed to parse prompts/my-prompt.prompt
yaml.scanner.ScannerError: mapping values are not allowed here
```
**Solutions:**

1. **Check YAML syntax:**

    ```yaml
    # Wrong - missing quotes around special characters
    description: This prompt: does stuff

    # Correct
    description: "This prompt: does stuff"
    ```

2. **Check indentation:**

    ```yaml
    # Wrong - inconsistent indentation
    model:
      id: gpt-4o
        temperature: 0.7  # Wrong indent!

    # Correct
    model:
      id: gpt-4o
      temperature: 0.7
    ```

3. **Validate the YAML:**

    ```bash
    python -c "import yaml; yaml.safe_load(open('prompts/my-prompt.prompt'))"
    ```

4. **Check frontmatter delimiters:**

    ```yaml
    ---
    name: my-prompt
    ---
    # Make sure there are exactly three dashes
    ```
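The delimiter rule can be sketched as a tiny splitter (assumed behaviour for illustration; Blogus's actual parser may differ): the file must start with `---`, and a second `---` line closes the YAML block.

```python
# Hypothetical frontmatter splitter, illustrating the delimiter rule above.
def split_frontmatter(text: str):
    """Split a prompt file into (yaml_source, template_body).

    Raises ValueError when the '---' delimiters are missing or unclosed.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("file must start with a '---' line")
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            return "\n".join(lines[1:i]), "\n".join(lines[i + 1:])
    raise ValueError("closing '---' delimiter not found")
```

A missing or mistyped delimiter (for example `--` or `----`) means the whole file gets handed to the YAML parser, which is a common source of the `ScannerError` above.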
### "Variable not found"

**Symptoms:**

```text
Error: Variable 'user_name' not found in prompt template
```
**Solutions:**

1. **Check that variable names match:**

    ```yaml
    variables:
      - name: userName  # camelCase
    ---
    Hello {{userName}}  # Must match exactly
    ```

2. **Provide all required variables:**

    ```bash
    blogus exec my-prompt \
      --var userName="John" \
      --var message="Hello"
    ```

3. **Check for typos:**

    ```yaml
    # Template uses {{user_name}}
    # but the variable is defined as 'userName'
    ```
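The mismatch is easy to reproduce outside Blogus; this hypothetical helper compares `{{placeholder}}` names found in a template against the variable names you supply:

```python
import re

# Hypothetical check mirroring the mismatch above: compare the
# {{placeholders}} used in a template against the supplied names.
def undeclared_variables(template: str, provided):
    """Return placeholder names in the template with no matching variable."""
    used = set(re.findall(r"\{\{\s*(\w+)\s*\}\}", template))
    return sorted(used - set(provided))
```

Note the comparison is exact and case-sensitive, so `user_name` vs `userName` is reported as missing.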
### "Hash mismatch" during verify

**Symptoms:**

```text
✗ assistant-summarize: Hash mismatch (file was modified)
```
**Solutions:**

1. **Update the lock file** (if the change was intentional):

    ```bash
    blogus lock
    git add prompts.lock
    git commit -m "Update prompt lock file"
    ```

2. **If you didn't intend to change the prompt:**

    ```bash
    # Restore from git
    git checkout -- prompts/assistant-summarize.prompt
    ```

3. **Check for whitespace changes:**

    ```bash
    git diff --ignore-space-change prompts/
    ```
## Execution Issues

### "Model not found"

**Symptoms:**

```text
Error: Model 'gpt-5' not found
litellm.NotFoundError: The model `gpt-5` does not exist
```
**Solutions:**

1. **Check the model name:**

    ```bash
    # Common model names:
    gpt-4o
    gpt-4-turbo
    gpt-3.5-turbo
    claude-3-opus-20240229
    claude-3-sonnet-20240229
    claude-3-haiku-20240307
    ```

2. **Check the provider prefix:**

    ```bash
    # For Groq models, use the groq/ prefix:
    groq/llama3-70b-8192
    groq/mixtral-8x7b-32768
    ```
3. **Check API access:**

    - Some models require special access
    - Verify your API plan includes the model

### "Context length exceeded"

**Symptoms:**

```text
Error: This model's maximum context length is 8192 tokens
```
**Solutions:**

1. **Use a model with a longer context window:**

    ```bash
    blogus exec my-prompt \
      --model claude-3-sonnet-20240229  # 200K context
    ```

2. **Reduce the input size:**

    ```bash
    # Truncate long inputs
    blogus exec summarize --var text="$(head -c 10000 long_file.txt)"
    ```

3. **Set max_tokens in the prompt:**

    ```yaml
    model:
      id: gpt-4o
      max_tokens: 1000  # Limit response size
    ```
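When truncating programmatically, a common rule of thumb is roughly 4 characters per token for English text. This sketch uses that heuristic; it is an approximation, not a real tokenizer count:

```python
# Rough character-based truncation; ~4 chars/token is only a heuristic
# for English text, not an exact tokenizer count.
CHARS_PER_TOKEN = 4

def truncate_to_token_budget(text: str, max_tokens: int) -> str:
    """Trim text to approximately fit within max_tokens."""
    limit = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= limit else text[:limit]
```

For anything precise, count tokens with your provider's tokenizer; the heuristic can be off by a large margin on code or non-English text.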
### "Timeout" errors

**Symptoms:**

```text
Error: Request timed out after 60 seconds
```
**Solutions:**

1. **Increase the timeout:**

    ```bash
    blogus exec my-prompt --timeout 120
    ```

2. **Use streaming:**

    ```bash
    blogus exec my-prompt --stream
    ```

3. **Check your network:**

    ```bash
    curl -I https://api.openai.com
    ```

4. **Use a faster model:**

    ```bash
    blogus exec my-prompt --model gpt-3.5-turbo
    ```
## Lock File Issues

### "Lock file not found"

**Symptoms:**

```text
Error: prompts.lock not found
```
**Solutions:**

1. **Generate the lock file:**

    ```bash
    blogus lock
    ```

2. **Check the path:**

    ```bash
    blogus verify --lock ./path/to/prompts.lock
    ```
### "Prompt not in lock file"

**Symptoms:**

```text
Warning: new-prompt: Not in lock file
```
**Solutions:**

1. **Add it to the lock file:**

    ```bash
    blogus lock
    ```

2. **If the prompt should be ignored:**

    ```bash
    blogus lock --exclude "new-prompt.prompt"
    ```
## CI/CD Issues

### "Verify fails in CI but works locally"

**Common causes:**
1. **Lock file not committed:**

    ```bash
    git add prompts.lock
    git commit -m "Update lock file"
    git push
    ```

2. **Different line endings:**

    ```text
    # In .gitattributes
    *.prompt text eol=lf
    prompts.lock text eol=lf
    ```

3. **File permissions:**

    ```bash
    chmod 644 prompts/*.prompt
    chmod 644 prompts.lock
    ```

4. **Missing prompts directory:**

    ```yaml
    # In CI, ensure prompts are checked out
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0
    ```
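The line-endings cause is worth seeing concretely: the same text hashed with CRLF vs LF produces different digests, which is exactly how a Windows checkout can fail verify in CI. (Whether Blogus normalizes line endings itself is not specified here; the `.gitattributes` fix sidesteps the question.)

```python
import hashlib

# Demonstrates the line-endings pitfall: identical text with CRLF vs LF
# endings hashes differently unless the verifier normalizes first.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def normalized_digest(data: bytes) -> str:
    """Digest after converting CRLF to LF."""
    return digest(data.replace(b"\r\n", b"\n"))
```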
### "API key not available in CI"

**Solutions:**
1. **GitHub Actions:**

    ```yaml
    - name: Verify prompts
      run: blogus verify
      env:
        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
    ```

2. **GitLab CI:**

    ```yaml
    variables:
      OPENAI_API_KEY: $OPENAI_API_KEY
    ```

3. **For verify alone, no API key is needed:**

    ```bash
    # blogus verify doesn't need API keys
    # It only checks file hashes
    blogus verify  # Works without API key
    ```
## Performance Issues

### "Analysis is slow"

**Solutions:**
1. **Use faster models for testing:**

    ```bash
    blogus analyze prompt.prompt --judge-model gpt-3.5-turbo
    ```

2. **Skip detailed analysis:**

    ```bash
    # Don't use --detailed for quick checks
    blogus analyze prompt.prompt
    ```

3. **Batch operations:**

    ```bash
    # Analyze multiple prompts in sequence
    for f in prompts/*.prompt; do
      blogus analyze "$f" --json >> results.json
    done
    ```
### "Memory usage is high"

**Solutions:**

1. **Process fewer files:**

    ```bash
    blogus scan ./src/specific_module
    ```

2. **Use streaming for large outputs:**

    ```bash
    blogus exec large-prompt --stream
    ```
## Getting More Help

### Enable Debug Logging

```bash
blogus -vv scan
# or
BLOGUS_DEBUG=1 blogus scan
```
### Check Version

```bash
blogus --version
pip show blogus
```
### Report a Bug

1. Search existing issues
2. Create a new issue with:
    - Blogus version (`blogus --version`)
    - Python version (`python --version`)
    - OS and version
    - Complete error message
    - Steps to reproduce
    - Expected vs actual behavior