A ProSA processor for the Ollama API. It leverages the [ollama-rs](https://crates.io/crates/ollama-rs) crate to make calls to an Ollama server.
With this processor, you can:
- Download Ollama models
- List available Ollama models
- Get detailed information about a specific model
- Make AI requests
- Request AI embeddings
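
Each capability above corresponds to one operation on the Ollama API. As an illustrative sketch only (the variant and field names here are hypothetical, not the processor's actual message types), the requests a caller might send to the processor's service could be modeled as:

```rust
// Hypothetical request type sketching the five capabilities; the real
// processor defines its own request/response messages on top of ollama-rs.
#[derive(Debug)]
enum OllamaRequest {
    PullModel { name: String },                 // download a model
    ListModels,                                 // list available models
    ShowModel { name: String },                 // detailed model information
    Generate { model: String, prompt: String }, // AI completion request
    Embed { model: String, input: String },     // embedding request
}

fn main() {
    let req = OllamaRequest::Generate {
        model: "mistral".into(),
        prompt: "Why is the sky blue?".into(),
    };
    println!("{req:?}");
}
```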
To configure the processor, set the following parameters:
```yaml
ollama:
  url: "http://localhost:11434"
  models:
    - "mistral"
    - "devstral"
  service: "PROC_SERVICE_NAME"
```
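
At runtime, each request is translated into an HTTP call against the configured `url`. As a rough sketch of what goes over the wire (the endpoint and field names follow the public Ollama REST API for `POST /api/generate`; the processor itself builds and sends these through ollama-rs, not by hand), a non-streaming generation request body looks like:

```rust
// Sketch: the JSON body of a generation request sent to POST /api/generate
// on the configured Ollama URL. Field names follow the public Ollama API;
// this helper is for illustration only.
fn generate_body(model: &str, prompt: &str) -> String {
    format!(
        r#"{{"model":"{}","prompt":"{}","stream":false}}"#,
        model, prompt
    )
}

fn main() {
    // "mistral" is one of the models listed in the configuration above.
    let body = generate_body("mistral", "Why is the sky blue?");
    println!("{body}");
}
```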