How to Build a Powerful and Intelligent Question-Answering System by Using Tavily Search API, Chroma, Google Gemini LLMs, and the LangChain Framework
In this tutorial, we demonstrate how to build a powerful and intelligent question-answering system by combining the strengths of Tavily Search API, Chroma, Google Gemini LLMs, and the LangChain framework. The pipeline leverages real-time web search using Tavily, semantic document caching with the Chroma vector store, and contextual response generation through the Gemini model. These tools are integrated through LangChain’s modular components, such as RunnableLambda, ChatPromptTemplate, ConversationBufferMemory, and GoogleGenerativeAIEmbeddings. The pipeline goes beyond simple Q&A by introducing a hybrid retrieval mechanism that checks for cached embeddings before invoking fresh web searches. The retrieved documents are intelligently formatted, summarized, and passed through a structured LLM prompt, with attention to source attribution, user history, and confidence scoring. Features such as advanced prompt engineering, sentiment and entity analysis, and dynamic vector store updates make this pipeline suitable for advanced use cases like research assistance, domain-specific summarization, and intelligent agents.
!pip install -qU langchain-community tavily-python langchain-google-genai streamlit matplotlib pandas tiktoken chromadb langchain_core pydantic langchain
We install and upgrade a comprehensive set of libraries required to build an advanced AI search assistant. It includes tools for retrieval, LLM integration, data handling, visualization, and tokenization. These components form the core foundation for constructing a real-time, context-aware QA system.
import os
import getpass
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import json
import time
from typing import List, Dict, Any, Optional
from datetime import datetime
We import essential Python libraries used throughout the notebook. It includes standard libraries for environment variables, secure input, time tracking, and data types. Additionally, it brings in core data science tools like pandas, matplotlib, and numpy for data handling, visualization, and numerical computations, as well as json for parsing structured data.
if "TAVILY_API_KEY" not in os.environ:
os.environ= getpass.getpassif "GOOGLE_API_KEY" not in os.environ:
os.environ= getpass.getpassimport logging
logging.basicConfigs - %s - %s - %s')
logger = logging.getLoggerWe securely initialize API keys for Tavily and Google Gemini by prompting users only if they’re not already set in the environment, ensuring safe and repeatable access to external services. It also configures a standardized logging setup using Python’s logging module, which helps monitor execution flow and capture debug or error messages throughout the notebook.
from langchain_community.retrievers import TavilySearchAPIRetriever
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser, JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableLambda
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from langchain.memory import ConversationBufferMemory
We import key components from the LangChain ecosystem and its integrations. It brings in the TavilySearchAPIRetriever for real-time web search, Chroma for vector storage, and GoogleGenerativeAI modules for chat and embedding models. Core LangChain modules like ChatPromptTemplate, RunnableLambda, ConversationBufferMemory, and output parsers enable flexible prompt construction, memory handling, and pipeline execution.
class SearchQueryError(Exception):
    """Exception raised for errors in the search query."""
    pass


def format_docs(docs):
    formatted_content = []
    for i, doc in enumerate(docs):
        metadata = doc.metadata
        # Default fallback values below are illustrative
        source = metadata.get("source", "Unknown source")
        title = metadata.get("title", "Untitled")
        score = metadata.get("score", 0)
        formatted_content.append(
            f"Document {i + 1} [Source: {source}, Title: {title}, Relevance: {score}]\n{doc.page_content}\n"
        )
    return "\n\n".join(formatted_content)
We define two essential components for search and document handling. The SearchQueryError class defines a custom exception to manage invalid or failed search queries gracefully. The format_docs function processes a list of retrieved documents by extracting metadata such as title, source, and relevance score and formatting them into a clean, readable string.
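As a quick sanity check, format_docs can be exercised on a hand-made Document; the metadata values below are illustrative, not real Tavily results:

sample_docs = [
    Document(
        page_content="The Legend of Zelda: Breath of the Wild launched in March 2017.",
        metadata={"source": "example.com/zelda", "title": "Zelda Release", "score": 0.92},
    )
]
print(format_docs(sample_docs))  # prints a numbered, source-attributed block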
class SearchResultsParser:
    def parse(self, text):
        try:
            if isinstance(text, str):
                import re
                import json
                # Representative pattern for pulling a JSON object out of free text
                json_match = re.search(r'\{.*\}', text, re.DOTALL)
                if json_match:
                    json_str = json_match.group(0)
                    return json.loads(json_str)
                return {"answer": text, "sources": [], "confidence": 0.5}
            elif hasattr(text, "content"):
                return {"answer": text.content, "sources": [], "confidence": 0.5}
            else:
                return {"answer": str(text), "sources": [], "confidence": 0.5}
        except Exception as e:
            logger.warning(f"Failed to parse LLM response: {e}")
            return {"answer": str(text), "sources": [], "confidence": 0.5}
The SearchResultsParser class provides a robust method for extracting structured information from LLM responses. It attempts to parse a JSON-like string from the model output, falling back to a plain-text response format if parsing fails. It gracefully handles both string outputs and message objects, ensuring consistent downstream processing. In case of errors, it logs a warning and returns a fallback response containing the raw answer, empty sources, and a default confidence score, enhancing the system’s fault tolerance.
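A small, locally runnable check illustrates both branches of the parser (the sample strings are made up):

parser = SearchResultsParser()

structured = parser.parse('{"answer": "2017", "sources": ["doc1"], "confidence": 0.9}')
plain = parser.parse("Breath of the Wild was released in 2017.")

print(structured["confidence"])  # 0.9, taken from the embedded JSON
print(plain["confidence"])       # 0.5, the default fallback confidence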
class EnhancedTavilyRetriever:
    def __init__(self, api_key=None, max_results=5, search_depth="advanced",
                 include_domains=None, exclude_domains=None):
        self.api_key = api_key
        self.max_results = max_results
        self.search_depth = search_depth
        self.include_domains = include_domains or []
        self.exclude_domains = exclude_domains or []
        self.retriever = self._create_retriever()
        self.previous_searches = []

    def _create_retriever(self):
        try:
            return TavilySearchAPIRetriever(
                k=self.max_results,
                search_depth=self.search_depth,
                include_domains=self.include_domains,
                exclude_domains=self.exclude_domains,
            )
        except Exception as e:
            logger.error(f"Failed to create Tavily retriever: {e}")
            raise

    def invoke(self, query):
        if not query or not query.strip():
            raise SearchQueryError("Search query cannot be empty")
        try:
            start_time = time.time()
            results = self.retriever.invoke(query)
            end_time = time.time()
            search_record = {
                "timestamp": datetime.now().isoformat(),
                "query": query,
                "num_results": len(results),
                "response_time": end_time - start_time,
            }
            self.previous_searches.append(search_record)
            return results
        except Exception as e:
            logger.error(f"Search failed: {e}")
            raise SearchQueryError(f"Failed to perform search: {str(e)}")

    def get_search_history(self):
        return self.previous_searches
The EnhancedTavilyRetriever class is a custom wrapper around the TavilySearchAPIRetriever, adding greater flexibility, control, and traceability to search operations. It supports advanced features like limiting search depth, domain inclusion/exclusion filters, and configurable result counts. The invoke method performs web searches and tracks each query’s metadata, storing it for later analysis.
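A hypothetical usage sketch (requires a valid TAVILY_API_KEY in the environment; the domain filter and query are only examples):

docs_retriever = EnhancedTavilyRetriever(max_results=3, include_domains=["wikipedia.org"])
docs = docs_retriever.invoke("Breath of the Wild release date")
for record in docs_retriever.get_search_history():
    print(record["query"], record["num_results"], f"{record['response_time']:.2f}s")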
class SearchCache:
    def __init__(self):
        # Embedding model name and splitter sizes are representative defaults
        self.embedding_function = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
        self.vector_store = None
        self.text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)

    def add_documents(self, documents):
        if not documents:
            return
        try:
            if self.vector_store is None:
                self.vector_store = Chroma.from_documents(documents, self.embedding_function)
            else:
                self.vector_store.add_documents(documents)
        except Exception as e:
            logger.error(f"Error adding documents to cache: {e}")

    def search(self, query, k=3):
        if self.vector_store is None:
            return []
        try:
            return self.vector_store.similarity_search(query, k=k)
        except Exception as e:
            logger.error(f"Error searching cache: {e}")
            return []
The SearchCache class implements a semantic caching layer that stores and retrieves documents using vector embeddings for efficient similarity search. It uses GoogleGenerativeAIEmbeddings to convert documents into dense vectors and stores them in a Chroma vector database. The add_documents method initializes or updates the vector store, while the search method enables fast retrieval of the most relevant cached documents based on semantic similarity. This reduces redundant API calls and improves response times for repeated or related queries, serving as a lightweight hybrid memory layer in the AI assistant pipeline.
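An illustrative cache round trip looks like the following (needs GOOGLE_API_KEY for the embeddings; the sample document is made up):

cache = SearchCache()
cache.add_documents([
    Document(page_content="Breath of the Wild received universal acclaim.",
             metadata={"source": "example.com/review"})
])
hits = cache.search("How was Breath of the Wild received?", k=1)
print(hits[0].page_content if hits else "No cached match")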
search_cache = SearchCache()
enhanced_retriever = EnhancedTavilyRetriever(max_results=5, search_depth="advanced")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

system_template = """You are a research assistant that provides accurate answers based on the search results provided.
Follow these guidelines:
1. Only use the context provided to answer the question
2. If the context doesn't contain the answer, say "I don't have sufficient information to answer this question."
3. Cite your sources by referencing the document numbers
4. Don't make up information
5. Keep the answer concise but complete

Context: {context}
Chat History: {chat_history}
"""

system_message = SystemMessagePromptTemplate.from_template(system_template)
human_template = "Question: {question}"
human_message = HumanMessagePromptTemplate.from_template(human_template)
prompt = ChatPromptTemplate.from_messages([system_message, human_message])
We initialize the core components of the AI assistant: a semantic SearchCache, the EnhancedTavilyRetriever for web-based querying, and a ConversationBufferMemory to retain chat history across turns. We also define a structured prompt using ChatPromptTemplate, guiding the LLM to act as a research assistant. The prompt enforces strict rules for factual accuracy, context usage, source citation, and concise answering, ensuring reliable and grounded responses.
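To see exactly what the model will receive, the assembled prompt can be previewed offline with placeholder values (the sample context and question below are made up):

preview = prompt.invoke({
    "context": "Document 1 [Source: example.com]: sample context text",
    "chat_history": [],
    "question": "What is an example question?",
})
print(preview.to_messages())  # one system message with guidelines, one human message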
def get_llm(model_name="gemini-2.0-flash-lite", temperature=0.2):
    # Default model name and sampling settings are representative; adjust as needed
    try:
        return ChatGoogleGenerativeAI(
            model=model_name,
            temperature=temperature,
            max_output_tokens=2048,
            top_p=0.95,
        )
    except Exception as e:
        logger.error(f"Error creating LLM: {e}")
        raise


output_parser = SearchResultsParser()
We define the get_llm function, which initializes a Google Gemini language model with configurable parameters such as model name, temperature, and decoding settings. It ensures robustness with error handling for failed model initialization. An instance of SearchResultsParser is also created to standardize and structure the LLM’s raw responses, enabling consistent downstream processing of answers and metadata.
def plot_search_metrics(search_history):
    if not search_history:
        print("No search history available")
        return
    df = pd.DataFrame(search_history)

    plt.figure(figsize=(12, 5))

    plt.subplot(1, 2, 1)
    plt.plot(range(len(df)), df["response_time"], marker='o')
    plt.title("Response Time per Search")
    plt.xlabel("Search Index")
    plt.ylabel("Time (seconds)")
    plt.grid(True)

    plt.subplot(1, 2, 2)
    plt.bar(range(len(df)), df["num_results"])
    plt.title("Number of Results per Search")
    plt.xlabel("Search Index")
    plt.ylabel("Results")
    plt.grid(True)

    plt.tight_layout()
    plt.show()
The plot_search_metrics function visualizes performance trends from past queries using Matplotlib. It converts the search history into a DataFrame and plots two subplots: one showing response time per search and the other displaying the number of results returned. This aids in analyzing the system’s efficiency and search quality over time, helping developers fine-tune the retriever or identify bottlenecks in real-world usage.
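The plotting helper can be previewed without any API calls by feeding it synthetic history entries (the values below are made up):

fake_history = [
    {"timestamp": "2024-01-01T00:00:00", "query": "q1", "num_results": 5, "response_time": 1.2},
    {"timestamp": "2024-01-01T00:01:00", "query": "q2", "num_results": 3, "response_time": 0.8},
]
plot_search_metrics(fake_history)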
def retrieve_with_fallback(query):
    cached_results = search_cache.search(query)
    if cached_results:
        logger.info(f"Retrieved {len(cached_results)} documents from cache")
        return cached_results
    logger.info("No cache hit, performing web search")
    search_results = enhanced_retriever.invoke(query)
    search_cache.add_documents(search_results)
    return search_results


def summarize_documents(documents, query):
    llm = get_llm(temperature=0)
    # The summary prompt wording here is representative of the original intent
    summarize_prompt = ChatPromptTemplate.from_template(
        """Create a concise summary of the following documents as they relate to the query: {query}

{documents}

Summary:"""
    )
    chain = (
        {"documents": lambda docs: format_docs(docs), "query": lambda _: query}
        | summarize_prompt
        | llm
        | StrOutputParser()
    )
    return chain.invoke(documents)
These two functions enhance the assistant’s intelligence and efficiency. The retrieve_with_fallback function implements a hybrid retrieval mechanism: it first attempts to fetch semantically relevant documents from the local Chroma cache and, if unsuccessful, falls back to a real-time Tavily web search, caching the new results for future use. Meanwhile, summarize_documents leverages a Gemini LLM to generate concise summaries from retrieved documents, guided by a structured prompt that ensures relevance to the query. Together, they enable low-latency, informative, and context-aware responses.
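A hypothetical back-to-back call shows the fallback behavior in practice (requires both API keys; the query string is only an example):

first = retrieve_with_fallback("breath of the wild reception")   # cache miss, triggers a Tavily search
second = retrieve_with_fallback("breath of the wild reception")  # served from the Chroma cache
print(len(first), len(second))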
def advanced_chain(query_engine="enhanced", model="gemini-2.0-flash-lite", include_history=True):
    llm = get_llm(model_name=model)

    if query_engine == "enhanced":
        retriever = lambda query: retrieve_with_fallback(query)
    else:
        retriever = enhanced_retriever.invoke

    def chain_with_history(input_dict):
        query = input_dict["question"]
        chat_history = memory.load_memory_variables({})["chat_history"] if include_history else []
        docs = retriever(query)
        context = format_docs(docs)
        result = prompt.invoke({"context": context, "question": query, "chat_history": chat_history})
        response = llm.invoke(result)
        # Store the turn in memory so later questions can reference it
        memory.save_context({"input": query}, {"output": getattr(response, "content", str(response))})
        return response

    return RunnableLambda(chain_with_history) | StrOutputParser()
The advanced_chain function defines a modular, end-to-end reasoning workflow for answering user queries using cached or real-time search. It initializes the specified Gemini model, selects the retrieval strategy, constructs a response pipeline incorporating chat history, formats documents into context, and prompts the LLM using a system-guided template. The chain also logs the interaction in memory and returns the final answer, parsed into clean text. This design enables flexible experimentation with models and retrieval strategies while maintaining conversation coherence.
qa_chain = advanced_chain()


def analyze_query(query):
    llm = get_llm(temperature=0)
    analysis_prompt = ChatPromptTemplate.from_template(
        """Analyze the following query and identify:
1. The main topic
2. The sentiment expressed
3. Key entities mentioned
4. Query type

Query: {query}

Return the analysis in JSON format with the following structure:
{{
    "topic": "main topic",
    "sentiment": "sentiment",
    "entities": ["entity1", "entity2"],
    "type": "query type"
}}
"""
    )
    # Wrap the parser method in RunnableLambda so it can be piped inside the chain
    chain = analysis_prompt | llm | RunnableLambda(output_parser.parse)
    return chain.invoke({"query": query})


print("Advanced Tavily-Gemini Search Assistant")
print("=" * 50)
query = "what year was breath of the wild released and what was its reception?"
print(f"Query: {query}")
We initialize the final components of the intelligent assistant. qa_chain is the assembled reasoning pipeline ready to process user queries using retrieval, memory, and Gemini-based response generation. The analyze_query function performs a lightweight semantic analysis on a query, extracting the main topic, sentiment, entities, and query type using the Gemini model and a structured JSON prompt. The example query, about Breath of the Wild’s release and reception, showcases how the assistant is triggered and prepared for full-stack inference and semantic interpretation. The printed heading marks the start of interactive execution.
try:
    print("Searching for answer...")
    answer = qa_chain.invoke({"question": query})
    print("\nAnswer:")
    print(answer)

    try:
        query_analysis = analyze_query(query)
        print("\nQuery Analysis:")
        print(json.dumps(query_analysis, indent=2))
    except Exception as e:
        print(f"Query analysis error (non-critical): {e}")
except Exception as e:
    print(f"Error in search: {e}")

history = enhanced_retriever.get_search_history()
print("\nSearch History:")
for i, h in enumerate(history):
    print(f"{i + 1}. Query: {h['query']} - Results: {h['num_results']} - Time: {h['response_time']:.2f}s")

print("\nAdvanced search with domain filtering:")
# The domain filters and follow-up query below are illustrative choices
specialized_retriever = EnhancedTavilyRetriever(
    max_results=3,
    include_domains=["nintendo.com", "zelda.com"],
)
try:
    specialized_results = specialized_retriever.invoke("breath of the wild sales and reception")
    print(f"Found {len(specialized_results)} specialized results")

    summary = summarize_documents(specialized_results, "breath of the wild sales and reception")
    print("\nSummary:")
    print(summary)
except Exception as e:
    print(f"Specialized search error: {e}")

print("\nSearch Metrics:")
plot_search_metrics(history)
We demonstrate the complete pipeline in action. It performs a search using the qa_chain, displays the generated answer, and then analyzes the query for sentiment, topic, entities, and type. It also retrieves and prints each query’s search history, response time, and result count. Finally, it runs a domain-filtered search focused on Nintendo-related sites, summarizes the results, and visualizes search performance using plot_search_metrics, offering a comprehensive view of the assistant’s capabilities in real-time use.
In conclusion, following this tutorial gives users a comprehensive blueprint for creating a highly capable, context-aware, and scalable RAG system that bridges real-time web intelligence with conversational AI. The Tavily Search API lets users directly pull fresh and relevant content from the web. The Gemini LLM adds robust reasoning and summarization capabilities, while LangChain’s abstraction layer allows seamless orchestration between memory, embeddings, and model outputs. The implementation includes advanced features such as domain-specific filtering, query analysis, and fallback strategies using a semantic vector cache built with Chroma and GoogleGenerativeAIEmbeddings. Also, structured logging, error handling, and analytics dashboards provide transparency and diagnostics for real-world deployment.