# Infinity
Infinity is a high-throughput, low-latency REST API for serving vector embeddings, supporting all sentence-transformers models and frameworks. Infinity is developed under the MIT License. Infinity powers inference behind Gradient.ai and other embedding API providers.
## Why Infinity
Infinity provides the following features:
- Deploy any model from MTEB: deploy any embedding model you know from SentenceTransformers.
- Fast inference backends: the inference server is built on top of torch, optimum (ONNX/TensorRT) and CTranslate2, and uses FlashAttention to get the most out of your CUDA, ROCm, CPU or MPS device.
- Dynamic batching: new embedding requests are queued while the GPU is busy with the previous batch, then squeezed onto your device as soon as it is ready. Maximum throughput on GPU is comparable to text-embeddings-inference.
- Correct and tested implementation: unit and end-to-end tested. Embeddings via Infinity are identical to SentenceTransformers (up to numerical precision). Lets API users create embeddings till infinity and beyond.
- Easy to use: the API is built on top of FastAPI and fully documented via Swagger. The API is aligned with OpenAI's Embedding specs; see the example request after this list and the getting-started section below.
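Because the routes follow OpenAI's Embedding specs, a plain HTTP request is enough to get embeddings. A minimal sketch, assuming a server running locally on port 7997 with the `michaelfeil/bge-small-en-v1.5` model from the Docker example below:

```bash
# Sketch: request embeddings from a local Infinity server.
# Route and payload follow the OpenAI Embedding spec.
curl http://localhost:7997/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "michaelfeil/bge-small-en-v1.5",
    "input": ["A sentence to embed."]
  }'
```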
## Getting started
### Install `infinity_emb` via pip
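A typical install command, assuming the PyPI package name `infinity-emb` with its `[all]` extra for the full server dependencies (a sketch; check the project docs for the current extras):

```bash
# Install the server with all optional dependencies.
pip install infinity-emb[all]
```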
Advanced: to install from source with Poetry, use Poetry 1.8.4 and Python 3.11 on Ubuntu 22.04.

### Launch the CLI using a pre-built Docker container (recommended)
```bash
port=7997
model1=michaelfeil/bge-small-en-v1.5
model2=mixedbread-ai/mxbai-rerank-xsmall-v1
volume=$PWD/data

docker run -it --gpus all \
  -v $volume:/app/.cache \
  -p $port:$port \
  michaelf34/infinity:latest \
  v2 \
  --model-id $model1 \
  --model-id $model2 \
  --port $port
```
The cache path inside the Docker container is set by the environment variable `HF_HOME`; mounting `$volume` to `/app/.cache` keeps downloaded models on the host across container restarts.
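The second model in the example is a reranker; a request sketch against it, assuming Infinity's `/rerank` route with `query` and `documents` payload fields:

```bash
# Sketch: score candidate documents against a query with the reranker.
curl http://localhost:7997/rerank \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mixedbread-ai/mxbai-rerank-xsmall-v1",
    "query": "Where is Munich?",
    "documents": ["Munich is in Germany.", "The sky is blue."]
  }'
```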
### Or launch the CLI after the pip install
After your pip install, with your venv activated, you can run the CLI directly.
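For example, to serve the same embedding model as in the Docker example (a sketch; substitute your own model id and port):

```bash
# Sketch: serve one embedding model directly from the activated venv.
infinity_emb v2 --model-id michaelfeil/bge-small-en-v1.5 --port 7997
```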
Check the `--help` command to get a description of all parameters.