LAI #74: Smarter Prompts, Context-Aware Agents, and the Math Behind KANs
May 8, 2025
Author(s): Towards AI Editorial Team
Originally published on Towards AI.
Good morning, AI enthusiasts!
We want to start this week’s issue with a quick announcement: enrollment for the next “From Beginner to Advanced LLM Developer” cohort is open right now, and the cohort will kick off June 1st with a call with our CEO, Louie Peters!
Reserve Your Seat — at a 75% Discount!
Now, back to this week’s issue. We start with a breakdown of dynamic and customized prompting techniques, moving beyond static instructions toward more adaptive, conversational flows. Then we dive into orthogonal polynomials in Kolmogorov-Arnold Networks — yes, it’s technical, but worth it if you care about what’s under the hood of interpretability and efficiency.
There’s more: we cover the Model Context Protocol and CrewAI for scaling enterprise agents, a hybrid attention method for forecasting binary sequences, and the evolution of Bayesian Networks for real-time, probabilistic reasoning.
Enjoy the read!
— Louis-François Bouchard, Towards AI Co-founder & Head of Community
Learn AI Together Community section!
AI poll of the week!
If most research doesn’t stick, what kind of breakthrough actually changes how you build or think? Every now and then, something shifts our perspective. Was it a technique, a paper, or a tool that rewired your approach to AI? Tell us in the thread!
Collaboration Opportunities
The Learn AI Together Discord community is flooded with collaboration opportunities. If you are excited to dive into applied AI, want a study partner, or even want to find a partner for your passion project, join the collaboration channel! Keep an eye on this section, too — we share cool opportunities every week!
1. Benjaminlee9472_18198 is looking for a partner to work on AI projects, such as training a model. If this is your focus as well, reach out in the thread!
2. Lokomatic is looking to collaborate with partners focusing on AI literacy/training, DS literacy, GenAI upskilling, and other learning areas. If this sounds relevant to you, connect in the thread!
Meme of the week!
Meme shared by ghost_in_the_machine
TAI Curated Section
Article of the week
Designing Customized and Dynamic Prompts for Large Language Models By Shenggang Li
The author reviewed techniques for creating customized and dynamic prompts for Large Language Models. Customized prompts provide fixed, task-specific instructions, whereas dynamic prompts adjust based on conversational context for more adaptive responses. The discussion covered practical implementation through manual construction, DSPy for structured workflows, the dynamic-prompting library for real-time adjustments, Jinja2 for template-based composition, and LangChain for building comprehensive LLM applications.
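The template-based composition the article describes can be sketched in a few lines. This is a minimal illustration using Python's stdlib `string.Template` (the article itself uses Jinja2 and other libraries); the role, history, and question fields are hypothetical examples of dynamic slots filled from conversational context.

```python
from string import Template

# A customized prompt: fixed task instructions plus dynamic slots
# ($role, $history, $question) filled from the current conversation.
PROMPT = Template(
    "You are a $role. Answer concisely.\n"
    "Conversation so far:\n$history\n"
    "User question: $question"
)

def build_prompt(role, history, question):
    """Render a dynamic prompt from the current conversation state."""
    return PROMPT.substitute(
        role=role,
        history="\n".join(history),
        question=question,
    )

prompt = build_prompt(
    role="data-science tutor",
    history=["User: What is overfitting?", "Assistant: Fitting noise, not signal."],
    question="How do I detect it?",
)
print(prompt)
```

The same idea scales up in Jinja2 with loops and conditionals inside the template, which is what makes template-driven prompts "dynamic" rather than hard-coded strings.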
Our must-read articles
1. Orthogonal Polynomials in Kolmogorov-Arnold Networks: Use Case and Scenarios By Fabio Yáñez Romero
This blog reviewed Kolmogorov-Arnold Networks (KANs), noting challenges with B-Spline implementations, particularly regarding parallelization and memory. It then explored Orthogonal Polynomials (OPs) as an alternative, highlighting their advantages in computational efficiency, reduced memory requirements, and effective capture of global patterns due to properties like linear independence and recurrence relations. While OPs offer these benefits, they necessitate input normalization and may be less adept with local variations. It also discussed implementations such as Chebyshev-KAN and Legendre-KAN.
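The recurrence relations mentioned above are what make orthogonal polynomials cheap to evaluate. As a sketch (not the Chebyshev-KAN implementation itself), here is the Chebyshev basis built via the three-term recurrence, with the input normalization the article notes is required:

```python
import numpy as np

def chebyshev_basis(x, degree):
    """Evaluate Chebyshev polynomials T_0..T_degree at x using the
    three-term recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x).
    Inputs must already be normalized to [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: degree + 1], axis=-1)

# Normalize raw inputs into [-1, 1] before evaluating the basis.
raw = np.array([0.0, 2.5, 5.0])
x = 2 * (raw - raw.min()) / (raw.max() - raw.min()) - 1
basis = chebyshev_basis(x, degree=3)
print(basis.shape)  # → (3, 4)
```

Because each basis function is a fixed polynomial of the whole input (rather than a locally supported spline segment), the evaluation is a single vectorized pass, which is the parallelization advantage over B-splines.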
2. Model Context Protocol and CrewAI: Scaling Enterprise AI with Standardized Context By Samvardhan Singh
This article explored Model Context Protocol (MCP) and CrewAI, technologies designed to enhance enterprise AI. MCP standardizes secure access to business data for AI agents, acting as a universal translator. CrewAI orchestrates these AI agents into collaborative teams to tackle complex tasks. Together, they enable scalable, context-aware AI systems by providing a structured approach to data access and agent coordination. The article also detailed the technical workflow from query to completion.
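To make the "universal translator" idea concrete: MCP is built on JSON-RPC 2.0, and a tool invocation looks roughly like the messages below. This is a simplified sketch, not the full specification; the tool name and arguments are hypothetical.

```python
import json

# Simplified MCP-style tool call: an agent asks a server to run a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_sales_db",          # hypothetical enterprise tool
        "arguments": {"region": "EMEA", "quarter": "Q1"},
    },
}

# The server runs the tool and replies with a result keyed to the same
# id, so the agent can correlate the answer with its request.
response = {
    "jsonrpc": "2.0",
    "id": request["id"],
    "result": {"content": [{"type": "text", "text": "Revenue: 1.2M"}]},
}

print(json.dumps(request, indent=2))
```

Standardizing on one request/response shape is what lets any MCP-aware agent (including a CrewAI crew member) talk to any MCP server without bespoke connectors.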
3. Hybrid Attention for Binary Sequence Forecasting By Shenggang Li
This article introduced BinaryTrendFormer, a method for binary sequence forecasting that integrates n-gram embeddings, count-aware self-attention, and recency-weighted statistics. The model predicts the next binary outcome and the K-step count distribution by transforming raw data into signals fused via attention, and it employs multi-task optimization and time-series cross-validation. A code experiment demonstrated its application and performance, including uncertainty interval comparisons, while noting areas for improvement. The article also discussed real-world uses in finance, retail, genomics, and industrial IoT.
4. From Static to Dynamic: Evolving Bayesian Network Thinking for Real-World Applications By Shenggang Li
This article explored Bayesian Networks, distinguishing between static and dynamic models. Static BNs utilize current data and conditional probabilities for immediate risk assessment, as shown in a medical pneumonia diagnosis. Dynamic BNs extend this by adding a time dimension to forecast evolving trends, demonstrated with stock market predictions. Both approaches leverage historical data and domain knowledge. The provided Python code illustrated a practical implementation for each network type.
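The static-BN diagnosis step reduces to Bayes' rule over conditional probability tables. Here is a minimal sketch with made-up numbers (not the article's figures): the posterior probability of pneumonia given an observed cough, from a prior and two likelihoods.

```python
# Hypothetical conditional probabilities for a two-node network
# (Pneumonia -> Cough); all numbers are illustrative, not clinical.
p_pneumonia = 0.01                  # prior P(pneumonia)
p_cough_given_pneumonia = 0.9       # likelihood P(cough | pneumonia)
p_cough_given_healthy = 0.1         # likelihood P(cough | healthy)

# Marginalize to get P(cough), then apply Bayes' rule.
p_cough = (p_cough_given_pneumonia * p_pneumonia
           + p_cough_given_healthy * (1 - p_pneumonia))
posterior = p_cough_given_pneumonia * p_pneumonia / p_cough
print(round(posterior, 3))  # → 0.083
```

A dynamic BN repeats exactly this update at each time step, carrying the posterior forward as the next step's prior, which is how it forecasts evolving trends.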
5. Understanding LLM Agents: Concepts, Patterns & Frameworks By Allohvk
This article explored LLM agents, defining them as entities that use LLMs to solve complex tasks by interacting with environments and tools, distinguishing them from static workflows. It covered their evolution from ReAct (Reason+Act) principles, memory integration, tool use, and agentic RAG. The discussion extended to single versus multi-agent systems, key collaborative patterns, and frameworks like LangGraph. Significant developments like the Model Context Protocol (MCP) for data connectivity and Agent-to-Agent (A2A) protocol for inter-agent communication were highlighted, along with the complexities of agent evaluation.
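The ReAct pattern the article starts from can be reduced to a short loop: the model reasons, chooses an action, observes the result, and repeats until it can answer. Below is a toy sketch with a scripted stand-in for the LLM and a single calculator tool; everything here is hypothetical, not a real framework's API.

```python
def calculator(expr):
    """The single tool the agent can act with."""
    return str(eval(expr, {"__builtins__": {}}))

def stub_llm(observation):
    """Scripted stand-in for the model: act first, then answer."""
    if observation is None:
        return ("act", "17 * 3")   # Thought: need arithmetic -> use the tool
    return ("answer", f"The result is {observation}.")

def react_loop(max_steps=3):
    """Reason + Act: alternate tool calls and observations until done."""
    observation = None
    for _ in range(max_steps):
        kind, payload = stub_llm(observation)
        if kind == "act":
            observation = calculator(payload)   # Action -> Observation
        else:
            return payload                      # Final answer
    return "gave up"

print(react_loop())  # → The result is 51.
```

Frameworks like LangGraph generalize this loop into a graph of such steps, and multi-agent systems wire several loops together, but the core Reason/Act/Observe cycle is the same.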
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Join over 80,000 data leaders and subscribers on the AI newsletter to keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI