Asynchronous LLM Calling Using the OpenAI SDK and Gemini 2.0 Flash
April 13, 2025
Last Updated on April 14, 2025 by Editorial Team
Author(s): Hadi Rouhani
Originally published on Towards AI.
– A Universal Approach to Calling LLMs from Various Providers
Access the GitHub repository: Link
I worked on a production application that uses GPT-4o to generate summaries from a RAG system. Stakeholders asked us to test Gemini 2.0 Flash so we could compare responses and gather user feedback. That request made me realize the need for a universal approach in an era of many LLM APIs. It turns out the OpenAI Python SDK works with any provider that exposes an OpenAI-compatible API: you simply point the client at that provider's base URL. This flexibility lets you swap the underlying model without touching your application's core logic.
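To make this concrete, here is a minimal sketch of what that looks like for Gemini. It assumes a GEMINI_API_KEY environment variable and uses Google's documented OpenAI-compatible endpoint for the Gemini API; since the topic here is asynchronous calling, it uses the SDK's AsyncOpenAI client.

```python
import asyncio
import os

from openai import AsyncOpenAI

# Sketch: the regular OpenAI SDK client, pointed at Google's
# OpenAI-compatible endpoint for the Gemini API.
# Assumes GEMINI_API_KEY is set in the environment.
client = AsyncOpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

async def main() -> None:
    response = await client.chat.completions.create(
        model="gemini-2.0-flash",
        messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```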
For my use case, the backend service relied entirely on Azure OpenAI and other Azure resources, such as Azure AI Search and Azure Cosmos DB. We needed a way to switch the generation model smoothly from Azure OpenAI to Google's Gemini API (or the Vertex AI API for enterprise use).
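For a setup like that, one possible sketch (with hypothetical environment-variable names) is a small factory that returns either an Azure OpenAI client or a Gemini-backed client, both from the same openai package:

```python
import os

from openai import AsyncAzureOpenAI, AsyncOpenAI

def make_client(provider: str):
    """Return an async client for the requested provider.

    Environment-variable names here are illustrative assumptions.
    """
    if provider == "azure":
        # AsyncAzureOpenAI handles Azure-specific authentication and routing.
        return AsyncAzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-06-01",
        )
    # Everything else goes through the plain client with a custom base_url.
    return AsyncOpenAI(
        api_key=os.environ["GEMINI_API_KEY"],
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
    )
```

The rest of the code path (chat.completions.create and response parsing) stays identical regardless of which client comes back.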
In this article, I walk you step by step through how to switch seamlessly between models using only the openai library in Python.
A few benefits of this approach:
- Fewer dependency libraries to install (think of that lightweight Docker container app!)
- Switching between models… (see the sketch below)
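Since the goal was to compare the two models' responses, a natural use of the async clients is to fire both requests concurrently. This sketch reuses the hypothetical make_client factory from above; note that on Azure, the model argument is the deployment name.

```python
import asyncio

async def ask(client, model: str, prompt: str) -> str:
    # Identical call shape for both providers, thanks to the shared SDK.
    response = await client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def compare(prompt: str) -> list[str]:
    azure_client = make_client("azure")
    gemini_client = make_client("gemini")
    # asyncio.gather issues both requests concurrently rather than in sequence.
    return await asyncio.gather(
        ask(azure_client, "gpt-4o", prompt),  # Azure deployment name
        ask(gemini_client, "gemini-2.0-flash", prompt),
    )

gpt_answer, gemini_answer = asyncio.run(compare("Summarize RAG in one sentence."))
```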