Blogus¶
package.lock for AI prompts.
Extract prompts from your codebase, version them like dependencies, and keep everything in sync.
The Problem¶
Managing AI prompts in production applications is challenging:
- Scattered Prompts: Prompts are embedded as strings throughout your codebase, making them hard to find and manage
- No Version Control: When prompts change, there's no way to track what changed, when, or why
- Drift: The prompt in your code diverges from what's actually running in production
- No Testing: Prompts are rarely tested systematically before deployment
- Collaboration Friction: Multiple developers editing the same prompts leads to merge conflicts and inconsistencies
The Current State of Prompt Management¶
# This is how most teams manage prompts today
import openai

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Be concise."},
        {"role": "user", "content": user_input},
    ],
)
# Where did this prompt come from? Who wrote it? When did it change?
# Is this the same prompt running in production?
The Solution: Blogus¶
Blogus brings the same rigor to prompt management that package.lock brought to dependency management:
| Feature | Without Blogus | With Blogus |
|---|---|---|
| Discovery | Manually search codebase | blogus scan finds all prompts |
| Versioning | Hope git history helps | Content-addressed hashes |
| Testing | Manual testing | Automated test generation |
| Sync | Manual copy-paste | blogus fix auto-syncs |
| CI/CD | No verification | blogus verify in pipelines |
| Collaboration | Merge conflicts | Structured .prompt files |
How Blogus Works¶
┌─────────────────────────────────────────────────────────────────────────────┐
│ BLOGUS WORKFLOW │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ SCAN │───▶│ VERSION │───▶│ LOCK │───▶│ SYNC │ │
│ │ │ │ │ │ │ │ │ │
│ │ Find LLM │ │ .prompt │ │ prompts │ │ Update │ │
│ │ calls in │ │ files │ │ .lock │ │ source │ │
│ │ codebase │ │ │ │ │ │ code │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Detects │ │ Creates │ │ Tracks │ │ Updates │ │
│ │ OpenAI, │ │ YAML + │ │ hashes, │ │ imports │ │
│ │ Anthropic│ │ template │ │ commits, │ │ and refs │ │
│ │ LangChain│ │ format │ │ metadata │ │ in code │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Step 1: Scan - Discover Prompts¶
Blogus scans your codebase to find all LLM API calls:
$ blogus scan
Scanning /home/user/myproject...
Found 5 LLM API calls:

Location              Provider    Status        Prompt Preview
─────────────────────────────────────────────────────────────────────
src/chat.py:15        OpenAI      unversioned   "You are a helpful..."
src/summarize.py:42   OpenAI      unversioned   "Summarize the foll..."
src/translate.py:28   Anthropic   unversioned   "Translate the foll..."
lib/sentiment.py:56   OpenAI      unversioned   "Analyze the sentim..."
api/generate.py:103   LangChain   unversioned   "Generate a product..."

Summary:
  Total calls found: 5
  Unversioned: 5
  Versioned: 0
Supported Providers:
- OpenAI SDK (openai.chat.completions.create)
- Anthropic SDK (anthropic.messages.create)
- LiteLLM (litellm.completion)
- LangChain (ChatPromptTemplate, PromptTemplate)
- Azure OpenAI
- Google Vertex AI / Gemini
- Cohere
- Custom patterns (configurable)
Step 2: Version - Create Prompt Files¶
Initialize structured .prompt files:
$ blogus init
Creating prompts directory...
Created 5 prompt files:
prompts/chat.prompt
prompts/summarize.prompt
prompts/translate.prompt
prompts/sentiment.prompt
prompts/generate.prompt
Next steps:
1. Review and edit the generated .prompt files
2. Run 'blogus lock' to generate the lock file
3. Run 'blogus fix' to update your source code
Each .prompt file has a structured format:
# prompts/summarize.prompt
---
name: summarize
description: Summarize long-form content into key bullet points
version: 1.0.0
author: engineering-team
created: 2024-01-15
model:
  id: gpt-4o
  temperature: 0.3
  max_tokens: 500
variables:
  - name: content
    type: string
    required: true
    description: The content to summarize
  - name: num_points
    type: integer
    default: 5
    description: Number of bullet points to generate
  - name: style
    type: string
    default: professional
    enum: [professional, casual, technical]
    description: Writing style for the summary
tags:
  - summarization
  - content-processing
  - production
---
You are an expert content summarizer. Your task is to distill complex information into clear, actionable bullet points.
Guidelines:
- Extract the {{num_points}} most important points
- Use {{style}} language appropriate for business communication
- Each bullet should be self-contained and meaningful
- Prioritize actionable insights over general observations
Content to summarize:
{{content}}
Provide exactly {{num_points}} bullet points, each on its own line starting with "•".
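At runtime the versioned prompt is loaded by name instead of being hard-coded. A minimal sketch, assuming the load_prompt helper shown later on this page fills in the template and returns plain text:

from blogus import load_prompt
import openai

article_text = open("article.txt").read()

# Render the "summarize" prompt with its declared variables.
# Assumption: load_prompt returns the rendered template as a string.
prompt = load_prompt("summarize", content=article_text, num_points=3, style="technical")

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)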
Step 3: Lock - Track Versions¶
Generate a lock file that tracks exact versions:
$ blogus lock
Generating lock file...
Prompt       Hash                   Status
────────────────────────────────────────────────
summarize    sha256:a1b2c3d4...     locked
translate    sha256:b2c3d4e5...     locked
chat         sha256:c3d4e5f6...     locked
sentiment    sha256:d4e5f6a7...     locked
generate     sha256:e5f6a7b8...     locked
Lock file written to: prompts.lock
The lock file (prompts.lock) provides a complete audit trail:
# prompts.lock - DO NOT EDIT MANUALLY
version: 1
generated: 2024-01-15T10:30:00Z
generator: blogus v1.2.0
prompts:
  summarize:
    file: prompts/summarize.prompt
    hash: sha256:a1b2c3d4e5f6789012345678901234567890abcdef
    content_hash: sha256:1234567890abcdef1234567890abcdef12345678
    commit: 4903f76
    author: jane@company.com
    modified: 2024-01-15T10:30:00Z
    variables:
      - content
      - num_points
      - style
    model: gpt-4o
  translate:
    file: prompts/translate.prompt
    hash: sha256:b2c3d4e5f6a7890123456789012345678901bcdef0
    content_hash: sha256:234567890abcdef1234567890abcdef123456789
    commit: 4903f76
    author: john@company.com
    modified: 2024-01-14T15:22:00Z
    variables:
      - text
      - source_language
      - target_language
    model: gpt-4o
  # ... more prompts
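Content addressing means any edit to a prompt's text produces a new hash, which is what makes drift detectable. A rough sketch of the idea, assuming the content hash covers the raw prompt file (Blogus's exact hashing scheme may differ):

import hashlib
from pathlib import Path

# Recompute a hash for a prompt file and compare it against prompts.lock.
# Assumption: the hash is the SHA-256 of the file's bytes.
data = Path("prompts/summarize.prompt").read_bytes()
print("sha256:" + hashlib.sha256(data).hexdigest())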
Step 4: Sync - Update Source Code¶
Automatically update your source code to use managed prompts:
$ blogus fix
Analyzing source files...
Changes to apply:

  src/summarize.py:42
    - content = "Summarize the following: " + text
    + # @blogus:summarize sha256:a1b2c3d4
    + content = load_prompt("summarize", content=text)

  src/translate.py:28
    - prompt = f"Translate from {src} to {tgt}: {text}"
    + # @blogus:translate sha256:b2c3d4e5
    + prompt = load_prompt("translate", text=text, source_language=src, target_language=tgt)
Apply changes? [y/N]: y
Updated 2 files.
Backup files created in .blogus/backups/
Why Blogus?¶
Comparison with Other Approaches¶
| Approach | Discover | Version | Test | Sync | Lock | Collaborate |
|---|---|---|---|---|---|---|
| Inline strings | - | - | - | - | - | - |
| Manual .txt files | - | Git | - | Manual | - | - |
| LangChain Hub | - | Yes | - | Manual | - | Yes |
| PromptLayer | - | Yes | Yes | Manual | - | Yes |
| Humanloop | - | Yes | Yes | Manual | - | Yes |
| Blogus | Auto | Git | Yes | Auto | Yes | Yes |
Key Differentiators¶
- Zero Migration: Works with your existing code - no need to rewrite anything
- Content-Addressed: Prompts are tracked by content hash, not arbitrary version numbers
- Git-Native: Integrates seamlessly with your existing git workflow
- Bi-directional Sync: Changes flow from code → prompts AND prompts → code
- CI/CD Ready: Built-in verification for continuous integration
Quick Start¶
Installation¶
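Assuming Blogus is published on PyPI under the name blogus (consistent with the pip install blogus[web] command in the Web Interface section below):

pip install blogus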
Your First Workflow¶
# 1. Scan your codebase
blogus scan
# 2. Initialize prompt files
blogus init
# 3. Edit the generated .prompt files as needed
# (use your favorite editor)
# 4. Lock the versions
blogus lock
# 5. Update your source code
blogus fix
# 6. Commit everything
git add prompts/ prompts.lock src/
git commit -m "Add prompt versioning with Blogus"
Verify in CI¶
Add to your CI pipeline:
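A minimal sketch of a pipeline step, assuming blogus verify exits non-zero when source code and prompts.lock have drifted apart:

# CI step (illustrative): fail the build when prompts are out of sync
pip install blogus
blogus verify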
Web Interface¶
Blogus includes a visual interface for managing prompts:
# Install with web extras
pip install blogus[web]
# Start the server
blogus-web
# Or with uvx
uvx --with blogus[web] blogus-web
Open http://localhost:8000 to:
- Browse all prompts with syntax highlighting
- Search across prompt content and metadata
- Edit prompts with live preview
- Test prompts with sample inputs
- Analyze prompt effectiveness with AI feedback
- Compare versions side-by-side
- Scan projects for new LLM calls
Use Cases¶
1. Enterprise Prompt Governance¶
Large organizations need to track and audit AI prompts for compliance:
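For example, because every change lands as a diff to prompts/ and prompts.lock, standard git tooling combined with blogus verify gives auditors a complete record (illustrative commands):

# Who changed which prompt, and when
git log --oneline -- prompts/ prompts.lock
# Confirm the deployed code matches the locked prompt versions
blogus verify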
2. A/B Testing Prompts¶
Test different prompt versions in production:
from blogus import load_prompt
# Load specific version for A/B test
prompt_a = load_prompt("summarize", version="sha256:a1b2c3d4")
prompt_b = load_prompt("summarize", version="sha256:e5f6a7b8")
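The split itself is up to your application; a simple illustrative assignment (not part of Blogus):

import random

# Route roughly half of requests to each prompt version
prompt = prompt_a if random.random() < 0.5 else prompt_b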
3. Prompt Development Workflow¶
Develop prompts like code with review and testing:
# Create feature branch
git checkout -b improve-summarize-prompt
# Edit prompt
vim prompts/summarize.prompt
# Test changes
blogus exec summarize --var content="Test content here"
# Analyze quality
blogus analyze prompts/summarize.prompt
# Lock and commit
blogus lock
git add -A && git commit -m "Improve summarize prompt clarity"
git push -u origin improve-summarize-prompt
# Create PR for review
4. Multi-Environment Deployment¶
Manage prompts across dev/staging/production:
# Different lock files per environment
blogus lock --output prompts.dev.lock
blogus lock --output prompts.staging.lock
blogus lock --output prompts.prod.lock
Documentation¶
- Complete setup guide with step-by-step instructions
- Comprehensive documentation for all CLI commands
- Use Blogus programmatically in your applications
- Complete .prompt file format specification
- Guide to choosing the right models for analysis and execution
- Power user features and integration patterns
- Configure Blogus for your project
- Real-world examples and use cases
- Common issues and solutions
Community & Support¶
- GitHub Issues: Report bugs and request features
- Discussions: Ask questions and share ideas
- Contributing: See CONTRIBUTING.md
License¶
Blogus is open source under the MIT License.