• The 16 best gifts for dads

    Finding the right gift for any parent can be a struggle, especially a dad who may not give you any indication of what they’d like as a gift. But if you know your dad is a tinkerer, DIY master, tech lover or general hobbyist, we can help. We pulled together a list of gadgets and gear we’ve tested that would make a great gift for nearly any dad out there, from cameras to keyboards, pizza ovens to VR headsets, smartwatches to multitools and much more. Best of all, these recommendations will fit any gift giver’s budget.
    Best gifts for dads in 2025

Check out the rest of our gift ideas here. This article originally appeared on Engadget at https://www.engadget.com/best-gifts-for-dads-170014057.html?src=rss
  • Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation

In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installations to ensure effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, enabling both beginners and advanced developers to deploy custom multi-functional AI agents rapidly.
import subprocess
import sys

def install_packages():
    packages = [
        "langgraph", "langchain", "langchain-anthropic", "langchain-community",
        "requests", "python-dotenv", "duckduckgo-search"
    ]
    for package in packages:
        try:
            subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
            print(f"✓ Installed {package}")
        except subprocess.CalledProcessError:
            print(f"✗ Failed to install {package}")

print("Installing required packages...")
install_packages()
print("Installation complete!\n")

We automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from LangChain and LangGraph components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly.
    import os
    import json
    import math
    import requests
    from typing import Dict, List, Any, Annotated, TypedDict
    from datetime import datetime
    import operator

    from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
    from langchain_core.tools import tool
    from langchain_anthropic import ChatAnthropic
    from langgraph.graph import StateGraph, START, END
    from langgraph.prebuilt import ToolNode
    from langgraph.checkpoint.memory import MemorySaver
    from duckduckgo_search import DDGS
    We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
    os.environ= "Use Your API Key Here"

    ANTHROPIC_API_KEY = os.getenvWe set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key, while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times.
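If you would rather not paste the key directly into the notebook, a minimal alternative sketch is to load it from a local .env file with python-dotenv (already included in the install list above); the file contents and the fallback message below are assumptions, not part of the original tutorial.

from dotenv import load_dotenv

# Load variables from a local .env file containing a line like:
# ANTHROPIC_API_KEY=sk-ant-...
load_dotenv()

ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
if not ANTHROPIC_API_KEY:
    print("ANTHROPIC_API_KEY not set; the agent will fall back to the mock LLM below.")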
from typing import TypedDict

class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], operator.add]

@tool
def calculator(expression: str) -> str:
    """
    Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.

    Args:
        expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)")

    Returns:
        Result of the calculation as a string
    """
    try:
        allowed_names = {
            'abs': abs, 'round': round, 'min': min, 'max': max,
            'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
            'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
            'log': math.log, 'log10': math.log10, 'exp': math.exp,
            'pi': math.pi, 'e': math.e
        }

        expression = expression.replace('^', '**')
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error in calculation: {str(e)}"
    We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution.
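Because @tool produces a standard LangChain tool, you can sanity-check it directly before wiring it into the graph. A minimal sketch follows; the expected output assumes the implementation above.

# Ad-hoc test of the calculator tool, bypassing the agent entirely
print(calculator.invoke({"expression": "sqrt(16) + 2^3"}))
# Expected with the tool above: Result: 12.0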
@tool
def web_search(query: str, num_results: int = 3) -> str:
    """
    Search the web for information using DuckDuckGo.

    Args:
        query: Search query string
        num_results: Number of results to return (default: 3, max: 10)

    Returns:
        Search results as formatted string
    """
    try:
        num_results = min(max(num_results, 1), 10)

        with DDGS() as ddgs:
            results = list(ddgs.text(query, max_results=num_results))

        if not results:
            return f"No search results found for: {query}"

        formatted_results = f"Search results for '{query}':\n\n"
        for i, result in enumerate(results, 1):
            formatted_results += f"{i}. **{result['title']}**\n"
            formatted_results += f"   {result['body']}\n"
            formatted_results += f"   Source: {result['href']}\n\n"

        return formatted_results
    except Exception as e:
        return f"Error performing web search: {str(e)}"
    We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility.
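The same direct-invocation pattern works for the search tool; the query below is only an example, and the output depends on live DuckDuckGo results and network access.

# Ad-hoc test of the web search tool (requires internet access)
print(web_search.invoke({"query": "LangGraph multi-tool agent", "num_results": 2}))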
@tool
def weather_info(city: str) -> str:
    """
    Get current weather information for a city using OpenWeatherMap API.
    Note: This is a mock implementation for demo purposes.

    Args:
        city: Name of the city

    Returns:
        Weather information as a string
    """
    mock_weather = {
        "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
        "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
        "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
        "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
    }

    city_lower = city.lower()
    if city_lower in mock_weather:
        weather = mock_weather[city_lower]
        return f"Weather in {city}:\n" \
               f"Temperature: {weather['temp']}°C\n" \
               f"Condition: {weather['condition']}\n" \
               f"Humidity: {weather['humidity']}%"
    else:
        return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)"
    We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API.
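For reference, one possible live upgrade is sketched below. It assumes you supply your own OpenWeatherMap key in an OPENWEATHER_API_KEY environment variable and uses the requests import already in place; this is an illustrative sketch, not part of the original tutorial.

@tool
def live_weather_info(city: str) -> str:
    """Fetch current weather for a city from OpenWeatherMap (requires an API key)."""
    api_key = os.getenv("OPENWEATHER_API_KEY")  # assumed to be set by you
    if not api_key:
        return "OPENWEATHER_API_KEY is not set."
    url = "https://api.openweathermap.org/data/2.5/weather"
    try:
        resp = requests.get(url, params={"q": city, "appid": api_key, "units": "metric"}, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        return (f"Weather in {city}:\n"
                f"Temperature: {data['main']['temp']}°C\n"
                f"Condition: {data['weather'][0]['description']}\n"
                f"Humidity: {data['main']['humidity']}%")
    except Exception as e:
        return f"Error fetching weather: {str(e)}"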
@tool
def text_analyzer(text: str) -> str:
    """
    Analyze text and provide statistics like word count, character count, etc.

    Args:
        text: Text to analyze

    Returns:
        Text analysis results
    """
    if not text.strip():
        return "Please provide text to analyze."

    words = text.split()
    sentences = text.split('.') + text.split('!') + text.split('?')
    sentences = [s.strip() for s in sentences if s.strip()]

    analysis = f"Text Analysis Results:\n"
    analysis += f"• Characters (with spaces): {len(text)}\n"
    analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
    analysis += f"• Words: {len(words)}\n"
    analysis += f"• Sentences: {len(sentences)}\n"
    analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
    analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"

    return analysis
    The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count, word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit.
@tool
def current_time() -> str:
    """
    Get the current date and time.

    Returns:
        Current date and time as a formatted string
    """
    now = datetime.now()
    return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
    The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow.
tools = [calculator, web_search, weather_info, text_analyzer, current_time]

def create_llm():
    if ANTHROPIC_API_KEY:
        return ChatAnthropic(
            model="claude-3-haiku-20240307",
            temperature=0.1,
            max_tokens=1024
        )
    else:
        class MockLLM:
            def invoke(self, messages):
                last_message = messages[-1].content if messages else ""

                if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                    import re
                    numbers = re.findall(r'[\d\+\-\*/\.\(\)\s\w]+', last_message)
                    expr = numbers[0] if numbers else "2+2"
                    return AIMessage(content="I'll help you with that calculation.", tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                    query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                    if not query or len(query) < 3:
                        query = "python programming"
                    return AIMessage(content="I'll search for that information.", tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                    city = "New York"
                    words = last_message.lower().split()
                    for i, word in enumerate(words):
                        if word == 'in' and i + 1 < len(words):
                            city = words[i + 1].title()
                            break
                    return AIMessage(content="I'll get the weather information.", tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                elif any(word in last_message.lower() for word in ['time', 'date']):
                    return AIMessage(content="I'll get the current time.", tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                    text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                    if not text:
                        text = "Sample text for analysis"
                    return AIMessage(content="I'll analyze that text for you.", tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                else:
                    return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")

            def bind_tools(self, tools):
                return self

        print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
        return MockLLM()

llm = create_llm()
llm_with_tools = llm.bind_tools(tools)

We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
def agent_node(state: AgentState) -> Dict[str, Any]:
    """Main agent node that processes messages and decides on tool usage."""
    messages = state["messages"]
    response = llm_with_tools.invoke(messages)
    return {"messages": [response]}

def should_continue(state: AgentState) -> str:
    """Determine whether to continue with tool calls or end."""
    last_message = state["messages"][-1]
    if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
        return "tools"
    return END
    We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model, and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow.
def create_agent_graph():
    tool_node = ToolNode(tools)

    workflow = StateGraph(AgentState)

    workflow.add_node("agent", agent_node)
    workflow.add_node("tools", tool_node)

    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
    workflow.add_edge("tools", "agent")

    memory = MemorySaver()
    app = workflow.compile(checkpointer=memory)

    return app

print("Creating LangGraph Multi-Tool Agent...")
agent = create_agent_graph()
print("✓ Agent created successfully!\n")

We construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application, enabling a structured, memory-aware multi-tool agent ready for deployment.
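Once compiled, it can be useful to inspect the graph’s topology. Recent LangGraph versions expose a drawing helper on the compiled app; the snippet below is a hedged sketch and is wrapped in a try/except in case your installed version differs.

# Optional: print a Mermaid diagram of the agent -> tools -> agent loop
try:
    print(agent.get_graph().draw_mermaid())
except Exception as e:
    print(f"Graph drawing not available: {e}")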
def test_agent():
    """Test the agent with various queries."""
    config = {"configurable": {"thread_id": "test-thread"}}

    test_queries = [
        "What's 15 * 7 + 23?",
        "Search for information about Python programming",
        "What's the weather like in Tokyo?",
        "What time is it?",
        "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
    ]

    print("🧪 Testing the agent with sample queries...\n")
    for i, query in enumerate(test_queries, 1):
        print(f"Query {i}: {query}")
        print("-" * 50)
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            last_message = response["messages"][-1]
            print(f"Response: {last_message.content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")
The test_agent function is a validation utility that ensures the LangGraph agent responds correctly across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints the agent’s responses. Using a consistent thread_id for configuration, it invokes the agent with each query and neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use.
def chat_with_agent():
    """Interactive chat function."""
    config = {"configurable": {"thread_id": "interactive-thread"}}

    print("🤖 Multi-Tool Agent Chat")
    print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
    print("Type 'quit' to exit, 'help' for available commands\n")

    while True:
        try:
            user_input = input("You: ").strip()

            if user_input.lower() in ['quit', 'exit', 'q']:
                print("Goodbye!")
                break
            elif user_input.lower() == 'help':
                print("\nAvailable commands:")
                print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                print("• Text Analysis: 'Analyze this text: [your text]'")
                print("• Current Time: 'What time is it?' or 'Current date'")
                print("• quit: Exit the chat\n")
                continue
            elif not user_input:
                continue

            response = agent.invoke(
                {"messages": [HumanMessage(content=user_input)]},
                config=config
            )

            last_message = response["messages"][-1]
            print(f"Agent: {last_message.content}\n")
        except KeyboardInterrupt:
            print("\nGoodbye!")
            break
        except Exception as e:
            print(f"Error: {str(e)}\n")
    The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval.
    if __name__ == "__main__":
    test_agentprintprintprintchat_with_agentdef quick_demo:
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}

    demos =printfor category, query in demos:
    printtry:
    response = agent.invoke]},
    config=config
    )
    printexcept Exception as e:
    print}\n")

    printprintprintprintprintfor a quick demonstration")
    printfor interactive chat")
    printprintprintFinally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it initiates test_agentto validate functionality with sample queries, followed by launching the interactive chat_with_agentmode for real-time interaction. The quick_demofunction also briefly showcases the agent’s capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent’s functionality.
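To make “extend the agent’s functionality” concrete, here is a hedged sketch of registering one additional tool; the unit_converter name and its conversion table are illustrative, not part of the original tutorial. Any new @tool simply needs to be added to the tools list and rebound to the model before the graph is rebuilt.

@tool
def unit_converter(value: float, from_unit: str, to_unit: str) -> str:
    """Convert between a few common length units (illustrative example)."""
    to_meters = {"m": 1.0, "km": 1000.0, "mi": 1609.344, "ft": 0.3048}
    if from_unit not in to_meters or to_unit not in to_meters:
        return f"Unsupported units: {from_unit} -> {to_unit}"
    result = value * to_meters[from_unit] / to_meters[to_unit]
    return f"{value} {from_unit} = {result:.4f} {to_unit}"

# Register the new tool, rebind the tool set to the model, and rebuild the graph
tools.append(unit_converter)
llm_with_tools = llm.bind_tools(tools)
agent = create_agent_graph()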
    In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. Also, the inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents with this foundational knowledge.

Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project.
    #stepbystep #guide #build #customizable #multitool
    Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation
    In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installations to ensure effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, facilitating both beginners and advanced developers to deploy custom multi-functional AI agents rapidly. import subprocess import sys def install_packages: packages =for package in packages: try: subprocess.check_callprintexcept subprocess.CalledProcessError: printprintinstall_packagesprintWe automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from long-chain components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly. import os import json import math import requests from typing import Dict, List, Any, Annotated, TypedDict from datetime import datetime import operator from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage from langchain_core.tools import tool from langchain_anthropic import ChatAnthropic from langgraph.graph import StateGraph, START, END from langgraph.prebuilt import ToolNode from langgraph.checkpoint.memory import MemorySaver from duckduckgo_search import DDGS We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions. os.environ= "Use Your API Key Here" ANTHROPIC_API_KEY = os.getenvWe set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key, while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times. from typing import TypedDict class AgentState: messages: Annotated, operator.add] @tool def calculator-> str: """ Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more. 
Args: expression: Mathematical expression as a string") Returns: Result of the calculation as a string """ try: allowed_names = { 'abs': abs, 'round': round, 'min': min, 'max': max, 'sum': sum, 'pow': pow, 'sqrt': math.sqrt, 'sin': math.sin, 'cos': math.cos, 'tan': math.tan, 'log': math.log, 'log10': math.log10, 'exp': math.exp, 'pi': math.pi, 'e': math.e } expression = expression.replaceresult = evalreturn f"Result: {result}" except Exception as e: return f"Error in calculation: {str}" We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution. @tool def web_search-> str: """ Search the web for information using DuckDuckGo. Args: query: Search query string num_results: Number of results to returnReturns: Search results as formatted string """ try: num_results = min, 10) with DDGSas ddgs: results = list) if not results: return f"No search results found for: {query}" formatted_results = f"Search results for '{query}':\n\n" for i, result in enumerate: formatted_results += f"{i}. **{result}**\n" formatted_results += f" {result}\n" formatted_results += f" Source: {result}\n\n" return formatted_results except Exception as e: return f"Error performing web search: {str}" We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility. @tool def weather_info-> str: """ Get current weather information for a city using OpenWeatherMap API. Note: This is a mock implementation for demo purposes. Args: city: Name of the city Returns: Weather information as a string """ mock_weather = { "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65}, "london": {"temp": 15, "condition": "Rainy", "humidity": 80}, "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70}, "paris": {"temp": 18, "condition": "Overcast", "humidity": 75} } city_lower = city.lowerif city_lower in mock_weather: weather = mock_weatherreturn f"Weather in {city}:\n" \ f"Temperature: {weather}°C\n" \ f"Condition: {weather}\n" \ f"Humidity: {weather}%" else: return f"Weather data not available for {city}." We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. 
It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API. @tool def text_analyzer-> str: """ Analyze text and provide statistics like word count, character count, etc. Args: text: Text to analyze Returns: Text analysis results """ if not text.strip: return "Please provide text to analyze." words = text.splitsentences = text.split+ text.split+ text.splitsentences =analysis = f"Text Analysis Results:\n" analysis += f"• Characters: {len}\n" analysis += f"• Characters: {len)}\n" analysis += f"• Words: {len}\n" analysis += f"• Sentences: {len}\n" analysis += f"• Average words per sentence: {len/ max, 1):.1f}\n" analysis += f"• Most common word: {max, key=words.count) if words else 'N/A'}" return analysis The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count, word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit. @tool def current_time-> str: """ Get the current date and time. Returns: Current date and time as a formatted string """ now = datetime.nowreturn f"Current date and time: {now.strftime}" The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow. tools =def create_llm: if ANTHROPIC_API_KEY: return ChatAnthropicelse: class MockLLM: def invoke: last_message = messages.content if messages else "" if anyfor word in): import re numbers = re.findall\s\w]+', last_message) expr = numbersif numbers else "2+2" return AIMessage}, "id": "calc1"}]) elif anyfor word in): query = last_message.replace.replace.replace.stripif not query or len< 3: query = "python programming" return AIMessageelif anyfor word in): city = "New York" words = last_message.lower.splitfor i, word in enumerate: if word == 'in' and i + 1 < len: city = words.titlebreak return AIMessageelif anyfor word in): return AIMessageelif anyfor word in): text = last_message.replace.replace.stripif not text: text = "Sample text for analysis" return AIMessageelse: return AIMessagedef bind_tools: return self printreturn MockLLMllm = create_llmllm_with_tools = llm.bind_toolsWe initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed. 
def agent_node-> Dict: """Main agent node that processes messages and decides on tool usage.""" messages = stateresponse = llm_with_tools.invokereturn {"messages":} def should_continue-> str: """Determine whether to continue with tool calls or end.""" last_message = stateif hasattrand last_message.tool_calls: return "tools" return END We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model, and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow. def create_agent_graph: tool_node = ToolNodeworkflow = StateGraphworkflow.add_nodeworkflow.add_nodeworkflow.add_edgeworkflow.add_conditional_edgesworkflow.add_edgememory = MemorySaverapp = workflow.compilereturn app printagent = create_agent_graphprintWe construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application, enabling a structured, memory-aware multi-tool agent ready for deployment. def test_agent: """Test the agent with various queries.""" config = {"configurable": {"thread_id": "test-thread"}} test_queries =printfor i, query in enumerate: printprinttry: response = agent.invoke]}, config=config ) last_message = responseprintexcept Exception as e: print}\n") The test_agent function is a validation utility that ensures that the LangGraph agent responds correctly across different use cases. It runs predefined queries, arithmetic, web search, weather, time, and text analysis, and prints the agent’s responses. Using a consistent thread_id for configuration, it invokes the agent with each query. It neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use. def chat_with_agent: """Interactive chat function.""" config = {"configurable": {"thread_id": "interactive-thread"}} printprintprintwhile True: try: user_input = input.stripif user_input.lowerin: printbreak elif user_input.lower== 'help': printprint?'") printprintprintprintprintcontinue elif not user_input: continue response = agent.invoke]}, config=config ) last_message = responseprintexcept KeyboardInterrupt: printbreak except Exception as e: print}\n") The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval. 
if __name__ == "__main__": test_agentprintprintprintchat_with_agentdef quick_demo: """Quick demonstration of agent capabilities.""" config = {"configurable": {"thread_id": "demo"}} demos =printfor category, query in demos: printtry: response = agent.invoke]}, config=config ) printexcept Exception as e: print}\n") printprintprintprintprintfor a quick demonstration") printfor interactive chat") printprintprintFinally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it initiates test_agentto validate functionality with sample queries, followed by launching the interactive chat_with_agentmode for real-time interaction. The quick_demofunction also briefly showcases the agent’s capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent’s functionality. In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. Also, the inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents with this foundational knowledge. Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter. Asif RazzaqWebsite |  + postsBioAsif Razzaq is the CEO of Marktechpost Media Inc.. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.Asif Razzaqhttps://www.marktechpost.com/author/6flvq/A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGenAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Microsoft AI Introduces Magentic-UI: An Open-Source Agent Prototype that Works with People to Complete Complex Tasks that Require Multi-Step Planning and Browser UseAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Anthropic Releases Claude Opus 4 and Claude Sonnet 4: A Technical Leap in Reasoning, Coding, and AI Agent DesignAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Technology Innovation Institute TII Releases Falcon-H1: Hybrid Transformer-SSM Language Models for Scalable, Multilingual, and Long-Context Understanding #stepbystep #guide #build #customizable #multitool
    WWW.MARKTECHPOST.COM
    Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation
    In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installations to ensure effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, facilitating both beginners and advanced developers to deploy custom multi-functional AI agents rapidly. import subprocess import sys def install_packages(): packages = [ "langgraph", "langchain", "langchain-anthropic", "langchain-community", "requests", "python-dotenv", "duckduckgo-search" ] for package in packages: try: subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"]) print(f"✓ Installed {package}") except subprocess.CalledProcessError: print(f"✗ Failed to install {package}") print("Installing required packages...") install_packages() print("Installation complete!\n") We automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from long-chain components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly. import os import json import math import requests from typing import Dict, List, Any, Annotated, TypedDict from datetime import datetime import operator from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage from langchain_core.tools import tool from langchain_anthropic import ChatAnthropic from langgraph.graph import StateGraph, START, END from langgraph.prebuilt import ToolNode from langgraph.checkpoint.memory import MemorySaver from duckduckgo_search import DDGS We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions. os.environ["ANTHROPIC_API_KEY"] = "Use Your API Key Here" ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY") We set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key (which you should replace with a valid key), while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times. 
from typing import TypedDict class AgentState(TypedDict): messages: Annotated[List[BaseMessage], operator.add] @tool def calculator(expression: str) -> str: """ Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more. Args: expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sin(3.14159/2)") Returns: Result of the calculation as a string """ try: allowed_names = { 'abs': abs, 'round': round, 'min': min, 'max': max, 'sum': sum, 'pow': pow, 'sqrt': math.sqrt, 'sin': math.sin, 'cos': math.cos, 'tan': math.tan, 'log': math.log, 'log10': math.log10, 'exp': math.exp, 'pi': math.pi, 'e': math.e } expression = expression.replace('^', '**') result = eval(expression, {"__builtins__": {}}, allowed_names) return f"Result: {result}" except Exception as e: return f"Error in calculation: {str(e)}" We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution. @tool def web_search(query: str, num_results: int = 3) -> str: """ Search the web for information using DuckDuckGo. Args: query: Search query string num_results: Number of results to return (default: 3, max: 10) Returns: Search results as formatted string """ try: num_results = min(max(num_results, 1), 10) with DDGS() as ddgs: results = list(ddgs.text(query, max_results=num_results)) if not results: return f"No search results found for: {query}" formatted_results = f"Search results for '{query}':\n\n" for i, result in enumerate(results, 1): formatted_results += f"{i}. **{result['title']}**\n" formatted_results += f" {result['body']}\n" formatted_results += f" Source: {result['href']}\n\n" return formatted_results except Exception as e: return f"Error performing web search: {str(e)}" We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility. @tool def weather_info(city: str) -> str: """ Get current weather information for a city using OpenWeatherMap API. Note: This is a mock implementation for demo purposes. 
Args: city: Name of the city Returns: Weather information as a string """ mock_weather = { "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65}, "london": {"temp": 15, "condition": "Rainy", "humidity": 80}, "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70}, "paris": {"temp": 18, "condition": "Overcast", "humidity": 75} } city_lower = city.lower() if city_lower in mock_weather: weather = mock_weather[city_lower] return f"Weather in {city}:\n" \ f"Temperature: {weather['temp']}°C\n" \ f"Condition: {weather['condition']}\n" \ f"Humidity: {weather['humidity']}%" else: return f"Weather data not available for {city}. (This is a demo with limited cities: New York, London, Tokyo, Paris)" We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API. @tool def text_analyzer(text: str) -> str: """ Analyze text and provide statistics like word count, character count, etc. Args: text: Text to analyze Returns: Text analysis results """ if not text.strip(): return "Please provide text to analyze." words = text.split() sentences = text.split('.') + text.split('!') + text.split('?') sentences = [s.strip() for s in sentences if s.strip()] analysis = f"Text Analysis Results:\n" analysis += f"• Characters (with spaces): {len(text)}\n" analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n" analysis += f"• Words: {len(words)}\n" analysis += f"• Sentences: {len(sentences)}\n" analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n" analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}" return analysis The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count (with and without spaces), word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit. @tool def current_time() -> str: """ Get the current date and time. Returns: Current date and time as a formatted string """ now = datetime.now() return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}" The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow. 
tools = [calculator, web_search, weather_info, text_analyzer, current_time] def create_llm(): if ANTHROPIC_API_KEY: return ChatAnthropic( model="claude-3-haiku-20240307", temperature=0.1, max_tokens=1024 ) else: class MockLLM: def invoke(self, messages): last_message = messages[-1].content if messages else "" if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']): import re numbers = re.findall(r'[\d\+\-\*/\.\(\)\s\w]+', last_message) expr = numbers[0] if numbers else "2+2" return AIMessage(content="I'll help you with that calculation.", tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}]) elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']): query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip() if not query or len(query) < 3: query = "python programming" return AIMessage(content="I'll search for that information.", tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}]) elif any(word in last_message.lower() for word in ['weather', 'temperature']): city = "New York" words = last_message.lower().split() for i, word in enumerate(words): if word == 'in' and i + 1 < len(words): city = words[i + 1].title() break return AIMessage(content="I'll get the weather information.", tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}]) elif any(word in last_message.lower() for word in ['time', 'date']): return AIMessage(content="I'll get the current time.", tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}]) elif any(word in last_message.lower() for word in ['analyze', 'analysis']): text = last_message.replace('analyze this text:', '').replace('analyze', '').strip() if not text: text = "Sample text for analysis" return AIMessage(content="I'll analyze that text for you.", tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}]) else: return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?") def bind_tools(self, tools): return self print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.") return MockLLM() llm = create_llm() llm_with_tools = llm.bind_tools(tools) We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed. def agent_node(state: AgentState) -> Dict[str, Any]: """Main agent node that processes messages and decides on tool usage.""" messages = state["messages"] response = llm_with_tools.invoke(messages) return {"messages": [response]} def should_continue(state: AgentState) -> str: """Determine whether to continue with tool calls or end.""" last_message = state["messages"][-1] if hasattr(last_message, 'tool_calls') and last_message.tool_calls: return "tools" return END We define the agent’s core decision-making logic. 
The agent_node function handles incoming messages, invokes the language model (with tools), and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow. def create_agent_graph(): tool_node = ToolNode(tools) workflow = StateGraph(AgentState) workflow.add_node("agent", agent_node) workflow.add_node("tools", tool_node) workflow.add_edge(START, "agent") workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END}) workflow.add_edge("tools", "agent") memory = MemorySaver() app = workflow.compile(checkpointer=memory) return app print("Creating LangGraph Multi-Tool Agent...") agent = create_agent_graph() print("✓ Agent created successfully!\n") We construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application (app), enabling a structured, memory-aware multi-tool agent ready for deployment. def test_agent(): """Test the agent with various queries.""" config = {"configurable": {"thread_id": "test-thread"}} test_queries = [ "What's 15 * 7 + 23?", "Search for information about Python programming", "What's the weather like in Tokyo?", "What time is it?", "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'" ] print("🧪 Testing the agent with sample queries...\n") for i, query in enumerate(test_queries, 1): print(f"Query {i}: {query}") print("-" * 50) try: response = agent.invoke( {"messages": [HumanMessage(content=query)]}, config=config ) last_message = response["messages"][-1] print(f"Response: {last_message.content}\n") except Exception as e: print(f"Error: {str(e)}\n") The test_agent function is a validation utility that ensures that the LangGraph agent responds correctly across different use cases. It runs predefined queries, arithmetic, web search, weather, time, and text analysis, and prints the agent’s responses. Using a consistent thread_id for configuration, it invokes the agent with each query. It neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use. def chat_with_agent(): """Interactive chat function.""" config = {"configurable": {"thread_id": "interactive-thread"}} print("🤖 Multi-Tool Agent Chat") print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time") print("Type 'quit' to exit, 'help' for available commands\n") while True: try: user_input = input("You: ").strip() if user_input.lower() in ['quit', 'exit', 'q']: print("Goodbye!") break elif user_input.lower() == 'help': print("\nAvailable commands:") print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'") print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'") print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'") print("• Text Analysis: 'Analyze this text: [your text]'") print("• Current Time: 'What time is it?' 
or 'Current date'") print("• quit: Exit the chat\n") continue elif not user_input: continue response = agent.invoke( {"messages": [HumanMessage(content=user_input)]}, config=config ) last_message = response["messages"][-1] print(f"Agent: {last_message.content}\n") except KeyboardInterrupt: print("\nGoodbye!") break except Exception as e: print(f"Error: {str(e)}\n") The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval. if __name__ == "__main__": test_agent() print("=" * 60) print("🎉 LangGraph Multi-Tool Agent is ready!") print("=" * 60) chat_with_agent() def quick_demo(): """Quick demonstration of agent capabilities.""" config = {"configurable": {"thread_id": "demo"}} demos = [ ("Math", "Calculate the square root of 144 plus 5 times 3"), ("Search", "Find recent news about artificial intelligence"), ("Time", "What's the current date and time?") ] print("🚀 Quick Demo of Agent Capabilities\n") for category, query in demos: print(f"[{category}] Query: {query}") try: response = agent.invoke( {"messages": [HumanMessage(content=query)]}, config=config ) print(f"Response: {response['messages'][-1].content}\n") except Exception as e: print(f"Error: {str(e)}\n") print("\n" + "="*60) print("🔧 Usage Instructions:") print("1. Add your ANTHROPIC_API_KEY to use Claude model") print(" os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'") print("2. Run quick_demo() for a quick demonstration") print("3. Run chat_with_agent() for interactive chat") print("4. The agent supports: calculations, web search, weather, text analysis, and time") print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'") print("="*60) Finally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it initiates test_agent() to validate functionality with sample queries, followed by launching the interactive chat_with_agent() mode for real-time interaction. The quick_demo() function also briefly showcases the agent’s capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent’s functionality. In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. Also, the inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. 
In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. The inclusion of user-friendly functions for both testing and interactive chat also enhances practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents with this foundational knowledge; a brief sketch of one possible extension follows below.

Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project.
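As a closing illustration of the extension point mentioned above, the sketch below shows one way a new capability could be registered with the same agent. It is a hypothetical example, not code from the notebook: tip_calculator is an invented tool, and it assumes the earlier cells bind the model to the shared tools list when create_agent_graph() runs; if the model was bound to the tools once at import time, that binding cell would need to be re-run as well.

from langchain_core.tools import tool

@tool
def tip_calculator(bill: float, percent: float = 20.0) -> str:
    """Calculate the tip and total for a restaurant bill."""
    tip = bill * percent / 100
    return f"Tip: ${tip:.2f}, Total: ${bill + tip:.2f}"

# Hypothetical extension: add the new tool and rebuild the graph so the ToolNode sees it.
# (Assumes the model's tool binding also reads from this `tools` list at graph-build time.)
tools.append(tip_calculator)
agent = create_agent_graph()

response = agent.invoke(
    {"messages": [HumanMessage(content="What's a 20% tip on an $85 bill?")]},
    config={"configurable": {"thread_id": "extension-demo"}}
)
print(response["messages"][-1].content)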
  • After testing 200+ men’s grooming products, the Panasonic MultiShape is my favorite

    The author using the MultiShape at home.
    Credit: Timothy Beck Werth / Mashable Photo Composite

    Deal pricing and availability subject to change after time of publication.
    Learn more about how we select deals.

    Back in 2022, Panasonic released a men’s grooming tool called the MultiShape. It had a simple value proposition: Built with swappable heads, this multi-groomer lets you combine a beard trimmer, body groomer, electric razor, nose hair trimmer, and electric toothbrush into one gadget. Three years later, I’m still using that same MultiShape on a near-daily basis. In that time, Panasonic has also introduced new accessories for this multi-groomer, including hair clipping tools, a facial brush, and an electric foot scrubber. Is it weird to brush your teeth and scrub your feet with the same device? Yup, and that's why I recommend choosing your accessories wisely. We saw this men's multigroomer hit a new record-low price of $147 recently, and at the time of writing, you can get it for $170. Obviously, it's not as steep a price cut, but it's still $40 off full price.

    Credit: Panasonic

    Panasonic MultiShape Ultimate All-In-One Kit

    $170 at Panasonic (list price $210, save $40)

    What makes this grooming gadget unique?

    After the MultiShape came out, I expected Philips-Norelco, Braun, and other big brands to launch similar multitools. However, it's still the only gadget that can go from electric toothbrush to beard trimmer to electric shaver. The MultiShape is also waterproof, so you can clean it pretty easily.

    I use the MultiShape for daily shaving.
    Credit: Timothy Beck Werth / Mashable

    It's also one of the only beard trimmers that can trim long hair.
    Credit: Timothy Beck Werth / Mashable

    When this product first came out, GQ called it a “god-tier” grooming tool. Whenever I write about the MultiShape, I reference that line because it pretty much sums up my experience. I’ve tested hundreds of grooming products. So. Many. Grooming. Products. I’m probably one of the few men in the world who has more beauty products than his female partner. A lot more. As I write this, I have a locker next to my desk filled to the brim with beard trimmers, beard oils, face moisturizers, retinol serums, face masks — you name it. The vast majority of these products I never recommend; I'm only looking for the rare exceptions.

    I’ve written about the MultiShape a few times over the years, and it’s the grooming product I use more than any other. As someone with a longer beard, it’s one of the few beard trimmers that can handle longer hair. Popular trimmers from Philips-Norelco just don’t work for long facial hair, unfortunately. It’s also my everyday shaving tool. When I travel, I bring the electric toothbrush head. I use the other tools on more sensitive areas — like my ears — so I'll leave it at that. Right now, you can grab a MultiShape all-in-one kit from Panasonic on sale for $170. You can also buy the MultiShape with a handful of pre-selected accessories at Amazon or build your own kit at Panasonic.

    Best Panasonic MultiShape Kits

    All-in-one kit
    Panasonic MultiShape All-in-One Kit, $150 (save $40)

    For Amazon shoppers
    Panasonic MultiShape Trim and Shave Kit, $87.09 (save $42.90)

    For guys without beards
    Panasonic MultiShape Clean Cut Shaver Kit, $149.99

    For beard trimming
    Panasonic MultiShape Beard Trimmer Kit, $79.99

    Build your own kit
    Panasonic MultiShape and accessories, starting at $85

    Topics: Beauty

    Timothy Beck Werth
    Tech Editor

    Timothy Beck Werth is the Tech Editor at Mashable, where he leads coverage and assignments for the Tech and Shopping verticals. Tim has over 15 years of experience as a journalist and editor, and he has particular experience covering and testing consumer technology, smart home gadgets, and men’s grooming and style products. Previously, he was the Managing Editor and then Site Director of SPY.com, a men's product review and lifestyle website. As a writer for GQ, he covered everything from bull-riding competitions to the best Legos for adults, and he’s also contributed to publications such as The Daily Beast, Gear Patrol, and The Awl. Tim studied print journalism at the University of Southern California. He splits his time between Brooklyn, NY, and Charleston, SC, and is currently working on his second novel, a science-fiction book.