Overview
In today’s data-driven world, security and privacy are paramount, especially for companies running their own large language model (LLM) instances in the cloud. Previously, Vespper used OpenAI by default; recognizing the need for customization, it now offers the ability to configure your own LLM provider. This much-anticipated feature is made possible through the integration of the LiteLLM package.
LiteLLM Integration
LiteLLM is a versatile project that simplifies interaction with over 100 LLM providers through a single interface that follows the OpenAI protocol. For instance, chat completions are served at /chat/completions and expect an array of messages; the same structure applies to the embeddings and audio endpoints. We have integrated the LiteLLM Proxy Server, which receives requests from our code and forwards them to the specified provider.
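To make this concrete, here is a minimal sketch of how a client could talk to a LiteLLM proxy through the standard OpenAI SDK. The base URL, port, API key, and model name below are placeholder assumptions for illustration; adjust them to match your own deployment.

```python
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM proxy.
# URL and key are placeholders; substitute your deployment's values.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

# A standard OpenAI-style chat completion: a model name plus an array
# of messages. The proxy forwards the request to whichever provider
# is mapped to this model name in its configuration.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Because the proxy speaks the OpenAI protocol, swapping providers requires no changes to the calling code, only to the proxy configuration.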
Configuring Your LLM Provider
Configuring your LLM provider with LiteLLM is straightforward: simply modify the LiteLLM configuration file. We’ve provided an example file at config/litellm/config.example.yaml, which you can copy and rename to config.yaml for customization. Detailed instructions can be found in our quickstart guide and documentation.
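As an illustration, here is a minimal sketch of what a config.yaml might contain, following LiteLLM’s documented model_list format. The model names, environment variable, and local endpoint below are assumptions chosen for the example, not values from the shipped config.example.yaml.

```yaml
model_list:
  # Route the name "gpt-4o" to OpenAI, reading the key from the environment.
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  # Route "local-llama" to a self-hosted Ollama server (hypothetical URL),
  # keeping inference entirely inside your own infrastructure.
  - model_name: local-llama
    litellm_params:
      model: ollama/llama3
      api_base: http://localhost:11434
```

Clients then request a model by its model_name, and the proxy handles the provider-specific details.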
Conclusion
With the integration of the LiteLLM Proxy Server, Vespper users can now configure and utilize over 100 different LLM vendors, including private models. This enhancement significantly elevates Vespper’s capabilities, making it a more valuable tool for organizations focused on security and privacy.