# Quick Start
Get EmbedCache running in 5 minutes.
## Starting the Service
- Create a configuration file (optional).
- Start the server.

You should see startup log output confirming the server is listening on its configured port (8081 in the examples below).
## Making Your First API Call

### Generate Embeddings
```bash
curl -X POST http://localhost:8081/v1/embed \
  -H "Content-Type: application/json" \
  -d '{
    "text": ["Hello, world!", "This is a test."],
    "config": {
      "chunking_type": "words",
      "chunking_size": 512,
      "embedding_model": "AllMiniLML6V2"
    }
  }'
```
### Process a URL
```bash
curl -X POST http://localhost:8081/v1/process \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "config": {
      "chunking_type": "words",
      "chunking_size": 256,
      "embedding_model": "AllMiniLML6V2"
    }
  }'
```
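The `chunking_type` and `chunking_size` fields in both requests control how input text is split before embedding. As an illustration only (this is a sketch of the idea, not EmbedCache's actual implementation), `words` chunking with size *n* can be pictured as grouping every *n* whitespace-separated words into one chunk:

```rust
/// Illustrative only: group every `size` whitespace-separated words into
/// one chunk. A sketch of the concept, not EmbedCache's actual code.
fn chunk_by_words(text: &str, size: usize) -> Vec<String> {
    text.split_whitespace()
        .collect::<Vec<_>>()
        .chunks(size)
        .map(|words| words.join(" "))
        .collect()
}

fn main() {
    let chunks = chunk_by_words("one two three four five", 2);
    // Five words with size 2 yield chunks of 2, 2, and 1 words.
    assert_eq!(chunks, vec!["one two", "three four", "five"]);
}
```

Smaller chunk sizes produce more, finer-grained embeddings per document; larger sizes keep more context in each vector.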
## List Supported Features
## Using as a Library
```rust
use embedcache::{FastEmbedder, Embedder};
use fastembed::{InitOptions, EmbeddingModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create an embedder
    let embedder = FastEmbedder {
        options: InitOptions::new(EmbeddingModel::BGESmallENV15),
    };

    // Texts to embed
    let texts = vec![
        "Machine learning is fascinating.".to_string(),
        "Natural language processing enables computers to understand text.".to_string(),
    ];

    // Generate embeddings
    let embeddings = embedder.embed(&texts).await?;

    // Use the embeddings
    for (i, embedding) in embeddings.iter().enumerate() {
        println!("Text {}: {} dimensions", i, embedding.len());
    }

    Ok(())
}
```
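A common next step after generating embeddings is comparing them. Since each embedding is a plain vector of `f32` values, you can score similarity yourself; here is a minimal cosine-similarity helper (a standalone sketch, not part of EmbedCache's API):

```rust
/// Cosine similarity between two equal-length embedding vectors.
/// Plain-Rust sketch; EmbedCache itself does not ship this helper.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
    assert!((cosine_similarity(&[1.0, 0.0], &[2.0, 0.0]) - 1.0).abs() < 1e-6);
    assert!(cosine_similarity(&[1.0, 0.0], &[0.0, 1.0]).abs() < 1e-6);
}
```

Scores near 1.0 indicate semantically similar texts, which is the basis for search and clustering on top of the embeddings.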
## API Documentation
EmbedCache comes with built-in API documentation. Once the server is running, visit:
- Swagger UI: http://localhost:8081/swagger
- ReDoc: http://localhost:8081/redoc
- RapiDoc: http://localhost:8081/rapidoc
- Scalar: http://localhost:8081/scalar
- OpenAPI JSON: http://localhost:8081/openapi.json
## Next Steps
- Configuration - Customize EmbedCache for your needs
- Chunking Strategies - Learn about different chunking options
- Embedding Models - Explore available models