DSPy: Machine Learning Attitude Towards LLM Prompting
Author(s): Serj Smorodinsky. November 8, 2024. Originally published on Towards AI.

Transition from prompt-string manipulation to a PyTorch-like framework.

Link to the official tutorial
Full code at your one-stop LLM classification project
Here's a link to a short YouTube video with the code rundown

My goal is to showcase complex technologies through non-trivial use cases. This time I have chosen to focus on the DSPy framework. Its raison d'être (reason for being) is to abstract, encapsulate, and optimize the logic needed to get useful outputs from LLMs. DSPy allows coders to specify the inputs and outputs of an LLM task and lets the framework deal with composing the best possible prompt.

Why should you care?
- You can brag about it during lunch
- Improve code readability
- Improve LLM task outputs

This is the first part of a series, in which we will focus on implementing an LLM-based classifier. In the next instalment we go deeper into actual optimization.

- What is DSPy?
- Why DSPy?
- Use case: an LLM intent classifier for customer service

DSPy is a framework created by Stanford researchers. I love the way the official docs explain it, so I'm attaching it here:

"DSPy emphasises programming over prompting. It unifies techniques for prompting and fine-tuning LMs as well as improving them with reasoning and tool/retrieval augmentation, all expressed through a …"
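To make the "specify inputs and outputs, let the framework compose the prompt" idea concrete, here is a minimal plain-Python sketch of that pattern. It is not DSPy's actual API: the class, field names, intent labels, and prompt template below are all illustrative assumptions, standing in for what a framework like DSPy derives (and later optimizes) from a declared signature.

```python
# Hypothetical sketch of the declare-inputs-and-outputs idea behind DSPy.
# Everything here (names, labels, template) is illustrative, not DSPy's API.
from dataclasses import dataclass, field


@dataclass
class IntentSignature:
    """Classify a customer-service message into one intent label."""
    input_field: str = "message"
    output_field: str = "intent"
    labels: tuple = ("refund", "shipping", "billing", "other")


def compose_prompt(sig: IntentSignature, message: str) -> str:
    # The programmer only supplies the signature above; a framework
    # would assemble a prompt like this one, and could rewrite the
    # template later during optimization without touching caller code.
    return (
        f"{sig.__doc__}\n"
        f"Allowed labels: {', '.join(sig.labels)}\n"
        f"{sig.input_field}: {message}\n"
        f"{sig.output_field}:"
    )


prompt = compose_prompt(IntentSignature(), "Where is my package?")
print(prompt)
```

In DSPy itself, the equivalent declaration is far terser: as far as I know you would write something like `classify = dspy.Predict("message -> intent")` (or subclass `dspy.Signature` with `dspy.InputField`/`dspy.OutputField`) and call `classify(message=...)`, with the framework owning the prompt text end to end.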