
A Coding Implementation of Extracting Structured Data Using LangSmith, Pydantic, LangChain, and Claude 3.7 Sonnet
Unlock the power of structured data extraction with LangChain and Claude 3.7 Sonnet, transforming raw text into actionable insights. This tutorial focuses on tracing LLM tool calling with LangSmith, enabling real-time debugging and performance monitoring of your extraction system. We use Pydantic schemas for precise data formatting and LangChain's flexible prompting to guide Claude, refining results with examples rather than complex training. This is a glimpse into LangSmith's capabilities, showing how to build robust extraction pipelines for diverse applications, from document processing to automated data entry.

First, we need to install the necessary packages. We'll use langchain-core and langchain_anthropic to interface with the Claude model.

```python
!pip install --upgrade langchain-core
!pip install langchain_anthropic
```

If you're using LangSmith for tracing and debugging, you can set the following environment variables:

```
LANGSMITH_TRACING=True
LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
LANGSMITH_API_KEY="Your API KEY"
LANGSMITH_PROJECT="extraction_api"
```

Next, we define the schema for the information we want to extract. We'll use a Pydantic model to create a structured representation of a person.

```python
from typing import Optional

from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    name: Optional[str] = Field(default=None, description="The name of the person")
    hair_color: Optional[str] = Field(
        default=None, description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(
        default=None, description="Height measured in meters"
    )
```

Now, we'll define a prompt template that instructs Claude on how to perform the extraction task:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt_template = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert extraction algorithm. "
            "Only extract relevant information from the text. "
            "If you do not know the value of an attribute asked to extract, "
            "return null for the attribute's value.",
        ),
        ("human", "{text}"),
    ]
)
```

This template provides clear instructions to the model about its task and how to handle missing information.
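The code above imports MessagesPlaceholder without using it, and the introduction promises example-driven refinement. As a minimal, hypothetical sketch of how reference examples could be threaded in, the variant below leaves an optional slot for example messages; prompt_with_examples, the "Maria" input/output pair, and its expected JSON are illustrative additions, not part of the original post.

```python
# Hypothetical variant of the tutorial's prompt that leaves room for
# few-shot reference examples via MessagesPlaceholder.
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt_with_examples = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert extraction algorithm. "
            "Only extract relevant information from the text. "
            "If you do not know the value of an attribute asked to extract, "
            "return null for the attribute's value.",
        ),
        # Reference examples are injected here when provided.
        MessagesPlaceholder(variable_name="examples", optional=True),
        ("human", "{text}"),
    ]
)

# Hypothetical reference example: an input text paired with the JSON
# we would want the model to produce for it.
example_messages = [
    HumanMessage(content="Maria is 1.65 meters tall with red hair."),
    AIMessage(
        content='{"name": "Maria", "hair_color": "red", "height_in_meters": "1.65"}'
    ),
]

# Build the final message list; once structured_llm is defined below,
# it can be invoked on this prompt just like the plain template.
prompt = prompt_with_examples.invoke(
    {
        "text": "Alan Smith is 6 feet tall and has blond hair.",
        "examples": example_messages,
    }
)
print(prompt.to_messages())
```

Because the placeholder is marked optional, the same template also works with no examples at all; the official LangChain extraction guide goes further and encodes reference examples as tool-call messages, which mirrors how with_structured_output actually calls the model.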
Next, we'll initialize the Claude model that will perform our information extraction:

```python
import getpass
import os

if not os.environ.get("ANTHROPIC_API_KEY"):
    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter API key for Anthropic: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("claude-3-7-sonnet-20250219", model_provider="anthropic")
```

Now, we'll configure our LLM to return structured output according to our schema:

```python
structured_llm = llm.with_structured_output(schema=Person)
```

This key step tells the model to format its responses according to our Person schema.

Let's test our extraction system with a simple example:

```python
text = "Alan Smith is 6 feet tall and has blond hair."
prompt = prompt_template.invoke({"text": text})
result = structured_llm.invoke(prompt)
print(result)
```

Note that the schema stores height_in_meters, so the model has to convert the six-foot figure rather than copy it verbatim.

Now, let's try a more complex example that extracts multiple people from one passage:

```python
from typing import List


class Data(BaseModel):
    """Container for extracted information about people."""

    people: List[Person] = Field(
        default_factory=list, description="List of people mentioned in the text"
    )


structured_llm = llm.with_structured_output(schema=Data)

text = (
    "My name is Jeff, my hair is black and I am 6 feet tall. "
    "Anna has the same color hair as me."
)
prompt = prompt_template.invoke({"text": text})
result = structured_llm.invoke(prompt)
print(result)

# Next example
text = "The solar system is large, (it was discovered by Nicolaus Copernicus), but earth has only 1 moon."
prompt = prompt_template.invoke({"text": text})
result = structured_llm.invoke(prompt)
print(result)
```

In conclusion, this tutorial demonstrates how to build a structured information extraction system with LangChain and Claude that transforms unstructured text into organized data about people. The approach relies on Pydantic schemas, custom prompts, and example-driven improvement rather than specialized training pipelines, and the system's power comes from its flexibility, domain adaptability, and use of advanced LLM reasoning capabilities.
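As a brief follow-on sketch: since the introduction frames use cases like automated data entry, the extracted Data object can be flattened into rows for downstream storage. The save_people helper below is hypothetical, assumes Pydantic v2 (for model_dump), and reuses the result returned by the multi-person example above.

```python
# Hypothetical downstream step: turn the extracted Data object into plain
# dictionaries that could be written to a database or spreadsheet.
# Assumes Pydantic v2; `result` is the Data instance returned by
# structured_llm.invoke(prompt) in the multi-person example.

def save_people(data: Data) -> list[dict]:
    rows = []
    for person in data.people:
        row = person.model_dump()  # {"name": ..., "hair_color": ..., "height_in_meters": ...}
        rows.append(row)
        print(f"Would insert row: {row}")  # stand-in for a real DB or API call
    return rows


rows = save_people(result)
```

Swapping the print call for a real database insert or API request would be the natural next step in a production pipeline.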