• Scientists in Canada have developed a bio-ink that simulates lung tissue. The work is part of ongoing research in 3D bioprinting, a technique steadily evolving as a tool for personalized medicine. A team from McMaster University in Ontario developed the material, aiming to replicate the tissue's mechanical properties.

  • Four Strategies for Getting Better Sleep Away From Home

    Sleep can be a mysterious process even under ideal conditions, but when you’re in a completely alien environment like a hotel room or other temporary lodging it can become seemingly impossible. But if you take a little control over your environment, you can get more—and better—sleep no matter where you find yourself at night.

    Make the space feel more like home

    Studies have shown that aspects of our home environment like sound and smell can help us be more relaxed and happy when we’re away, so replicating those aspects of your life in an unfamiliar spot can help you sleep:

    Sound. If you normally sleep with a white noise machine, bring it with you when you travel, or find a travel-size model or phone app that simulates it.

    Smell. Everyone’s home has a unique scent map. Bringing those scents with you can trick your brain into feeling “at home” in a strange place. Using the same lotions, shampoos, and soaps on the road can recreate that scent matrix. Bringing an item of clothing that smells like the dryer sheets or detergent you use at home into bed with you can also help make an unfamiliar bed seem inviting.

    Routine. Another way to make an unfamiliar place seem more like home is to keep to your usual routine. However you approach bedtime at home—whether it’s reading a book, meditating for a few moments, or watching a little mindless television—do it as much as possible in your temporary digs. Try to hit the sack around the same time as usual, if you can, and keep to the same bathroom routine as well.

    Control the environment

    As much as possible, you want to control the physical environment that you’re sleeping in. If you’re used to sleeping in a pitch-black room, block light sources as much as possible by clipping curtains shut (binder clips work well for this), putting tape or Post-It notes over incidental light sources like alarms and thermostats, and blocking gaps under doors that allow light to leak in.

    If you prefer some light while you’re sleeping, bring a nightlight with you that you can plug in to make sure even the darkest room is illuminated. And adjust the temperature, if you can—most people sleep better when the room is a little on the cool side, about 60 to 65 degrees Fahrenheit. But if you’re used to sleeping in a warmer or even colder environment, try to get as close to that as you can.

    Select a strategic location

    If you have control over the location of your room (when staying at a hotel, for example), use that control to select a spot that’s conducive to a good night’s sleep. That starts with the location of the building itself—if you have a choice of guest rooms or hotels to spend the night, choose one far away from busy streets or other sources of noise. Then look for a spot that’s far from common areas like elevators or lobbies—or your friend’s living room where everyone stays up all night chatting.

    Get out of bed (for a little while)

    Finally, if you’re struggling to fall asleep in a strange place despite all of these efforts, give up and get out of bed. Forcing yourself to lie there and count the minutes as they slip past you just reinforces the connection between stress and anxiety and that bed, making it even less likely that you’ll fall asleep. Instead, after about 20 minutes it’s best to get up and do something relaxing for a short period of time. This resets your body and mind and breaks the association between frustration and the bed, making it easier to relax when you try again.
  • Try the new UI Toolkit sample – now available on the Asset Store

    In Unity 2021 LTS, UI Toolkit offers a collection of features, resources, and tools to help you build and debug adaptive runtime UIs on a wide range of game applications and Editor extensions. Its intuitive workflow enables Unity creators in different roles – artists, programmers, and designers alike – to get started with UI development as quickly as possible.

    See our earlier blog post for an explanation of UI Toolkit’s main benefits, such as enhanced scalability and performance, already being leveraged by studios like Mechanistry for their game, Timberborn.

    While Unity UI remains the go-to solution for positioning and lighting UI in a 3D world or integrating with other Unity systems, UI Toolkit for runtime UI can already benefit game productions seeking performance and scalability as of Unity 2021 LTS. It’s particularly effective for Screen Space – Overlay UI, and scales well on a variety of screen resolutions.

    That’s why we’re excited to announce two new learning resources to better support UI development with UI Toolkit:

    UI Toolkit sample – Dragon Crashers: The demo is now available to download for free from the Asset Store.
    User interface design and implementation in Unity: This free e-book can be downloaded here.

    Read on to learn about some key features that are part of the UI Toolkit sample project.

    The UI Toolkit sample demonstrates how you can leverage UI Toolkit for your own applications. This demo involves a full-featured interface over a slice of the 2D project Dragon Crashers, a mini RPG, using the Unity 2021 LTS UI Toolkit workflow at runtime.

    Some of the actions illustrated in the sample project show you how to:

    Style with selectors in Unity style sheet (USS) files and use UXML templates
    Create custom controls, such as a circular meter or tabbed views
    Customize the appearance of elements like sliders and toggle buttons
    Use Render Texture for UI overlay effects, USS animations, seasonal themes, and more

    To try out the project after adding it to your assets, enter Play mode. Please note that UI Toolkit interfaces do not appear in the Scene view. Instead, you can view them in the Game view or UI Builder.

    The menu on the left helps you navigate the modal main menu screens. This vertical column of buttons provides access to the five modal screens that comprise the main menu (they stay active while switching between screens). While some interactivity is possible, such as healing the characters by dragging available potions in the scene, gameplay has been kept to a minimum to ensure continued focus on the UI examples.

    Let’s take a closer look at the UIs in the menu bar:

    The home screen serves as a landing pad when launching the application. You can use this screen to play the game or receive simulated chat messages.
    The character screen involves a mix of GameObjects and UI elements. This is where you can explore each of the four Dragon Crashers characters. Use the stats, skills, and bio tabs to read the specific character details, and click on the inventory slots to add or remove items. The preview area shows a 2D lit and rigged character over a tiled background.
    The resources screen links to documentation, the forum, and other resources for making the most of UI Toolkit.
    The shop screen simulates an in-game store where you can purchase hard and soft currency, such as gold or gems, as well as virtual goods like healing potions. Each item in the shop screen is a separate VisualTreeAsset. UI Toolkit instantiates these assets at runtime, one for each ScriptableObject in Resources/GameData.
    The mail screen is a front-end reader of fictitious messages that uses a tabbed menu to separate the inbox and deleted messages.
    The game screen is a mini version of the Dragon Crashers project that starts playing automatically. In this version, you’ll notice a few elements revised with UI Toolkit, such as a pause button, health bars, and the capacity to drag a healing potion element to your characters when they take damage.

    UI Toolkit enables you to build stable and consistent UIs for your entire project. At the same time, it provides flexible tools for adding your own design flourishes and details to further flesh out the game’s theme and style. Let’s go over some of the features used to refine the UI designs in the sample:

    Render Textures: UI Toolkit interfaces are rendered last in the render queue, meaning you can’t overlay other game graphics on top of a UI Toolkit UI. Render Textures provide a workaround to this limitation, making it possible to integrate in-game effects into UI Toolkit UIs. While these effects based on Render Textures should be used sparingly, you’ll still be able to afford sharp effects within the context of a fullscreen UI, without running gameplay.

    Themes with Theme style sheets (TSS): TSS files are Asset files that are similar to regular USS files. They serve as a starting point for defining your own custom theme via USS selectors as well as property and variable settings. In the demo, we duplicated the default theme files and modified the copies to offer seasonal variations.

    Custom UI elements: Since designers are trained to think outside the box, UI Toolkit gives you plenty of room to customize or extend the standard library. The demo project highlights a few custom-built elements in the tabbed menus, slide toggles, and drop-down lists, plus a radial counter to demonstrate what UI artists are capable of alongside developers.

    USS transitions for animated UI state changes: Adding transitions to the menu screens can polish and smooth out your visuals. UI Toolkit makes this more straightforward with the Transition Animations property, part of the UI Builder’s Inspector. Adjust the Property, Duration, Easing, and Delay properties to set up the animation. Then simply change styles for UI Toolkit to apply the animated transition at runtime.

    Post-processing volume for a background blur: A popular effect in games is to blur a crowded gameplay scene to draw the player’s attention to a particular pop-up message or dialog window. You can achieve this effect by enabling Depth of Field in the Volume framework (available in the Universal Render Pipeline).

    We made sure that efficient workflows were used to fortify the UI. Here are a few recommendations for keeping the project well-organized:

    Consistent naming conventions: It’s important to adopt naming conventions that align with your visual elements and style sheets. Clear naming conventions not only maintain the hierarchy’s organization in UI Builder, they make it more accessible to your teammates and keep the code clean and readable. More specifically, we suggest the Block Element Modifier (BEM) naming convention for visual elements and style sheets. Just at a glance, an element’s BEM naming can tell you what it does, how it appears, and how it relates to the other elements around it. See the following BEM naming examples:
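    (The original post showed these examples as an image. The snippet below is a representative sketch in USS/CSS form, with hypothetical class names following the BEM block__element--modifier pattern.)

    .menu {}                          /* Block: the menu as a whole */
    .menu__shop-button {}             /* Element: a button inside the menu block */
    .menu__shop-button--selected {}   /* Modifier: the button's selected state */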
    Responsive UI layout: Similar to web technologies, UI Toolkit offers the possibility of creating layouts where “child” visual elements adapt to the currently available size inside their “parent” visual elements, and others where each element has an absolute position anchored to a reference point, akin to the Unity UI system. The sample uses both options as needed throughout the visual elements of the UI.

    PSD Importer: One of the most effective tools for creating the demo, PSD Importer allows artists to work in a master document without having to manually export every sprite separately. When changes are needed, they can be made in the original PSD file and updated automatically in Unity.

    ScriptableObjects: In order to focus on UI design and implementation, the sample project simulates backend data, such as in-app purchases and mail messages, using ScriptableObjects. You can conveniently customize this stand-in data in the Resources/GameData folder and use the example to create similar data assets, like inventory items and character or dialog data, in UI Toolkit.

    Remember that with UI Toolkit, UI layouts and styles are decoupled from code. This means that rewriting the backend data can occur independently from the UI design. If your development team replaces those systems, the interface should continue to work.

    Additional tools used in the demo include particle systems created with the Built-in Particle System for special effects, and the 2D toolset, among others. Feel free to review the project via the Inspector to see how these different elements come into play.

    You can find reference art made by the UI artists under UI/Reference, as replicated in UI Builder. The whole process, from mockups to wireframes, is also documented in the e-book. Finally, all of the content in the sample can be added to your own Unity project.

    You can download the UI Toolkit sample – Dragon Crashers from the Asset Store. Once you’ve explored its different UI designs, please provide your feedback on the forum. Then be sure to check out our e-book, User interface design and implementation in Unity.
  • Reliably Detecting Third-Party Cookie Blocking In 2025

    The web is beginning to part ways with third-party cookies, a technology it once heavily relied on. Introduced in 1994 by Netscape to support features like virtual shopping carts, cookies have long been a staple of web functionality. However, concerns over privacy and security have led to a concerted effort to eliminate them. The World Wide Web Consortium’s Technical Architecture Group (TAG) has been vocal in advocating for the complete removal of third-party cookies from the web platform.
    Major browsers are responding by phasing them out, though the transition is gradual. While this shift enhances user privacy, it also disrupts legitimate functionalities that rely on third-party cookies, such as single sign-on, fraud prevention, and embedded services. And because there is still no universal ban in place and many essential web features continue to depend on these cookies, developers must detect when third-party cookies are blocked so that applications can respond gracefully.
    Don’t Let Silent Failures Win: Why Cookie Detection Still Matters
    Yes, the ideal solution is to move away from third-party cookies altogether and redesign our integrations using privacy-first, purpose-built alternatives as soon as possible. But in reality, that migration can take months or even years, especially for legacy systems or third-party vendors. Meanwhile, users are already browsing with third-party cookies disabled and often have no idea that anything is missing.
    Imagine a travel booking platform that embeds an iframe from a third-party partner to display live train or flight schedules. This embedded service uses a cookie on its own domain to authenticate the user and personalize content, like showing saved trips or loyalty rewards. But when the browser blocks third-party cookies, the iframe cannot access that data. Instead of a seamless experience, the user sees an error, a blank screen, or a login prompt that doesn’t work.
    And while your team is still planning a long-term integration overhaul, this is already happening to real users. They don’t see a cookie policy; they just see a broken booking flow.
    Detecting third-party cookie blocking isn’t just good technical hygiene but a frontline defense for user experience.
    Why It’s Hard To Tell If Third-Party Cookies Are Blocked
    Detecting whether third-party cookies are supported isn’t as simple as calling navigator.cookieEnabled. Even a well-intentioned check like this one may look safe, but it still won’t tell you what you actually need to know:

    // DOES NOT detect third-party cookie blocking
    function areCookiesEnabled() {
      if (!navigator.cookieEnabled) {
        return false;
      }

      try {
        document.cookie = "test_cookie=1; SameSite=None; Secure";
        const hasCookie = document.cookie.includes("test_cookie=1");
        document.cookie = "test_cookie=; Max-Age=0; SameSite=None; Secure";

        return hasCookie;
      } catch (e) {
        return false;
      }
    }

    This function only confirms that cookies work in the current, first-party context. It says nothing about third-party scenarios, like an iframe on another domain. Worse, it’s misleading: in some browsers, navigator.cookieEnabled may still return true inside a third-party iframe even when cookies are blocked. Others might behave differently, leading to inconsistent and unreliable detection.
    These cross-browser inconsistencies — combined with the limitations of document.cookie — make it clear that there is no shortcut for detection. To truly detect third-party cookie blocking, we need to understand how different browsers actually behave in embedded third-party contexts.
    How Modern Browsers Handle Third-Party Cookies
    The behavior of modern browsers directly affects which detection methods will work and which ones silently fail.
    Safari: Full Third-Party Cookie Blocking
    Since version 13.1, Safari blocks all third-party cookies by default, with no exceptions, even if the user previously interacted with the embedded domain. This policy is part of Intelligent Tracking Prevention (ITP).
    For embedded content that requires cookie access, Safari exposes the Storage Access API, which requires a user gesture to grant storage permission. As a result, a test for third-party cookie support will nearly always fail in Safari unless the iframe explicitly requests access via this API.
    Firefox: Cookie Partitioning By Design
    Firefox’s Total Cookie Protection isolates cookies on a per-site basis. Third-party cookies can still be set and read, but they are partitioned by the top-level site, meaning a cookie set by the same third-party on siteA.com and siteB.com is stored separately and cannot be shared.
    As of Firefox 102, this behavior is enabled by default in the Standard mode of Enhanced Tracking Protection. Unlike the Strict mode — which blocks third-party cookies entirely, similar to Safari — the Standard mode does not block them outright. Instead, it neutralizes their tracking capability by isolating them per site.
    As a result, even if a test shows that a third-party cookie was successfully set, it may be useless for cross-site logins or shared sessions due to this partitioning. Detection logic needs to account for that.
    Chrome: From Deprecation Plans To Privacy Sandbox
    Chromium-based browsers still allow third-party cookies by default — but the story is changing. Starting with Chrome 80, third-party cookies must be explicitly marked with SameSite=None; Secure, or they will be rejected.
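    For instance, a third-party cookie intended to work in cross-site contexts has to carry both attributes. An illustrative Set-Cookie response header (hypothetical cookie name):

    Set-Cookie: session_id=abc123; SameSite=None; Secure; Path=/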
    In January 2020, Google announced their intention to phase out third-party cookies by 2022. However, the timeline was updated multiple times, first in June 2021 when the company pushed the rollout to begin in mid-2023 and conclude by the end of that year. Additional postponements followed in July 2022, December 2023, and April 2024.
    In July 2024, Google clarified that there is no plan to unilaterally deprecate third-party cookies or force users into a new model without consent. Instead, Chrome is shifting to a user-choice interface that will allow individuals to decide whether to block or allow third-party cookies globally.
    This change was influenced in part by substantial pushback from the advertising industry, as well as ongoing regulatory oversight, including scrutiny by the UK Competition and Markets Authority (CMA) into Google’s Privacy Sandbox initiative. The CMA confirmed in a 2025 update that there is no intention to force a deprecation or trigger automatic prompts for cookie blocking.
    As of now, third-party cookies remain enabled by default in Chrome. The new user-facing controls and the broader Privacy Sandbox ecosystem are still in various stages of experimentation and limited rollout.
    Edge: Tracker-Focused Blocking With User Configurability
    Edge (also Chromium-based) shares Chrome’s handling of third-party cookies, including the SameSite=None; Secure requirement. Additionally, Edge introduces Tracking Prevention modes: Basic, Balanced (the default), and Strict. In Balanced mode, it blocks known third-party trackers using Microsoft’s maintained list but allows many third-party cookies that are not classified as trackers. Strict mode blocks more resource loads than Balanced, which may result in some websites not behaving as expected.
    Other Browsers: What About Them?
    Privacy-focused browsers, like Brave, block third-party cookies by default as part of their strong anti-tracking stance.
    Internet Explorer (IE) 11 allowed third-party cookies depending on user privacy settings and the presence of Platform for Privacy Preferences (P3P) headers. However, IE usage is now negligible. Notably, the default “Medium” privacy setting in IE could block third-party cookies unless a valid P3P policy was present.
    Older versions of Safari had partial third-party cookie restrictions, but, as mentioned before, this was replaced with full blocking via ITP.
    As of 2025, all major browsers either block or isolate third-party cookies by default, with the exception of Chrome, which still allows them in standard browsing mode pending the rollout of its new user-choice model.
    To account for these variations, your detection strategy must be grounded in real-world testing — specifically by reproducing a genuine third-party context such as loading your script within an iframe on a cross-origin domain — rather than relying on browser names or versions.
    Overview Of Detection Techniques
    Over the years, many techniques have been used to detect third-party cookie blocking. Most are unreliable or obsolete. Here’s a quick walkthrough of what doesn’t work and what does.
    Basic JavaScript API Checks
    As mentioned earlier, checking navigator.cookieEnabled or setting document.cookie on the main page doesn’t reflect cross-site cookie status:

    In third-party iframes, navigator.cookieEnabled often returns true even when cookies are blocked.
    Setting document.cookie in the parent doesn’t test the third-party context.

    These checks are first-party only. Avoid using them for detection.
    Storage Hacks Via localStorage
    Previously, some developers inferred cookie support by checking if window.localStorage worked inside a third-party iframe — a trick that was especially useful against older Safari versions that blocked all third-party storage.
    Modern browsers often allow localStorage even when cookies are blocked. This leads to false positives and is no longer reliable.
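    For reference, the old heuristic looked roughly like this (a sketch of the pattern described above; not recommended today):

    // Runs inside a third-party iframe. Unreliable: modern browsers may
    // allow localStorage here even while third-party cookies are blocked.
    function inferCookiesViaLocalStorage() {
      try {
        localStorage.setItem("probe", "1");
        localStorage.removeItem("probe");
        return true; // only proves storage access, NOT cookie access
      } catch (e) {
        return false;
      }
    }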
    Server-Assisted Cookie Probe
    One classic method involves setting a cookie from a third-party domain via HTTP and then checking if it comes back:

    Load a script/image from a third-party server that sets a cookie.
    Immediately load another resource, and the server checks whether the cookie was sent.
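    As a sketch, the client half of such a probe could look like the following, assuming a hypothetical third-party server that exposes /set and /check endpoints (and allows CORS with credentials):

    // Sketch only: both endpoints are hypothetical.
    async function probeThirdPartyCookieViaServer() {
      // 1. The third-party server responds with a Set-Cookie header.
      await fetch("https://thirdparty.example.com/set", { credentials: "include" });

      // 2. On a second request, the server reports whether the cookie came back.
      const res = await fetch("https://thirdparty.example.com/check", { credentials: "include" });
      const { cookieReceived } = await res.json();
      return cookieReceived;
    }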

    This works, but it:

    Requires custom server-side logic,
    Depends on HTTP caching, response headers, and cookie attributes, and
    Adds development and infrastructure complexity.

    While this is technically valid, it is not suitable for a front-end-only approach, which is our focus here.
    Storage Access API
    The document.hasStorageAccess() method allows embedded third-party content to check if it has access to unpartitioned cookies:

    Chrome: Supports hasStorageAccess() and requestStorageAccess() starting from version 119. Additionally, hasUnpartitionedCookieAccess() is available as an alias for hasStorageAccess() from version 125 onwards.
    Firefox: Supports both hasStorageAccess() and requestStorageAccess() methods.
    Safari: Supports the Storage Access API. However, access must always be triggered by a user interaction. For example, even calling requestStorageAccess() without a direct user gesture (such as a click) is ignored.

    In Chrome and Firefox, the API may also work automatically, based on browser heuristics or site engagement, rather than requiring an explicit gesture.
    This API is particularly useful for detecting scenarios where cookies are present but partitioned, as it helps determine if the iframe has unrestricted cookie access. But for now, it’s still best used as a supplemental signal, rather than a standalone check.
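    A minimal sketch of reading that signal from inside an embedded iframe (assuming the method exists in the current browser):

    // Inside a third-party iframe: resolves true only if this context
    // currently has access to unpartitioned (cross-site) cookies.
    async function queryStorageAccessSignal() {
      if (typeof document.hasStorageAccess !== "function") {
        return null; // API unavailable; rely on other signals
      }
      try {
        return await document.hasStorageAccess();
      } catch (e) {
        return false;
      }
    }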
    iFrame + postMessage
    Despite the existence of the Storage Access API, at the time of writing, this remains the most reliable and browser-compatible method:

    Embed a hidden iframe from a third-party domain.
    Inside the iframe, attempt to set a test cookie.
    Use window.postMessage to report success or failure to the parent.

    This approach works across all major browsers, requires no server, and simulates a real-world third-party scenario.
    We’ll implement this step-by-step next.
    Bonus: Sec-Fetch-Storage-Access
    Chrome is introducing Sec-Fetch-Storage-Access, an HTTP request header sent with cross-site requests to indicate whether the iframe has access to unpartitioned cookies. This header is only visible to servers and cannot be accessed via JavaScript. It’s useful for back-end analytics but not applicable for client-side cookie detection.
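    For illustration only, a server receiving a cross-site iframe request might see something like the following; the value set (none, inactive, active) reflects the current proposal and may change:

    GET /embed HTTP/1.1
    Host: third-party.example
    Sec-Fetch-Storage-Access: inactive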
    As of May 2025, this feature is only implemented in Chrome and is not supported by other browsers. However, it’s still good to know that it’s part of the evolving ecosystem.
    Step-by-Step: Detecting Third-Party Cookies Via iFrame
    So, what did I mean when I said that the last method we looked at “requires no server”? While this method doesn’t require any back-end logic, it does require access to a separate domain — or at least a cross-site subdomain — to simulate a third-party environment. This means the following:

    You must serve the test page from a different domain or public subdomain, e.g., example.com and cookietest.example.com,
    The domain needs HTTPS, and
    You’ll need to host a simple static file, even if no server code is involved.

    Once that’s set up, the rest of the logic is fully client-side.
    Step 1: Create A Cookie Test Page
    Minimal version:

    <!DOCTYPE html>
    <html>
    <body>
    <script>
      // Attempt to set a test cookie in this (third-party) context.
      document.cookie = "thirdparty_test=1; SameSite=None; Secure; Path=/;";
      const cookieFound = document.cookie.includes("thirdparty_test=1");

      // Report the result to the embedding page.
      const sendResult = (status) => window.parent?.postMessage(status, "*");

      if (typeof document.hasStorageAccess === "function") {
        document.hasStorageAccess().then((hasAccess) => {
          sendResult(hasAccess && cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
        }).catch(() => sendResult("TP_COOKIE_BLOCKED"));
      } else {
        sendResult(cookieFound ? "TP_COOKIE_SUPPORTED" : "TP_COOKIE_BLOCKED");
      }
    </script>
    </body>
    </html>

    Make sure the page is served over HTTPS, and the cookie uses SameSite=None; Secure. Without these attributes, modern browsers will silently reject it.
    Step 2: Embed The iFrame And Listen For The Result
    On your main page:

    function checkThirdPartyCookies() {
      return new Promise((resolve) => {
        const iframe = document.createElement("iframe");
        iframe.style.display = "none";
        iframe.src = "https://cookietest.example.com/cookie-test.html"; // your subdomain
        document.body.appendChild(iframe);

        let resolved = false;
        const cleanup = (result) => {
          if (resolved) return;
          resolved = true;
          window.removeEventListener("message", onMessage);
          iframe.remove();
          resolve(result);
        };

        const onMessage = (event) => {
          if (["TP_COOKIE_SUPPORTED", "TP_COOKIE_BLOCKED"].includes(event.data)) {
            cleanup(event.data === "TP_COOKIE_SUPPORTED");
          }
        };

        window.addEventListener("message", onMessage);
        setTimeout(() => cleanup(null), 1000); // no response after 1s
      });
    }

    Example usage:

    checkThirdPartyCookies().then((supported) => {
      if (!supported) {
        someCookiesBlockedCallback(); // Third-party cookies are blocked.
        if (supported === null) {
          // No response received (timed out).
          // Optional fallback UX goes here.
          someCookiesBlockedTimeoutCallback();
        }
      }
    });

    Step 3: Enhance Detection With The Storage Access API
    In Safari, even when third-party cookies are blocked, users can manually grant access through the Storage Access API — but only in response to a user gesture.
    Here’s how you could implement that in your iframe test page:

    <button id="enable-cookies">This embedded content requires cookie access. Click below to continue.</button>

    <script>
      document.getElementById("enable-cookies")?.addEventListener("click", async () => {
        if (typeof document.requestStorageAccess === "function") {
          try {
            // Must run inside a user gesture handler.
            const granted = await document.requestStorageAccess();
            if (granted !== false) {
              window.parent.postMessage("STORAGE_ACCESS_GRANTED", "*");
            } else {
              window.parent.postMessage("STORAGE_ACCESS_DENIED", "*");
            }
          } catch (e) {
            window.parent.postMessage("STORAGE_ACCESS_FAILED", "*");
          }
        }
      });
    </script>

    Then, on the parent page, you can listen for this message and retry detection if needed:

    // Inside the same onMessage listener from before:
    if (event.data === "STORAGE_ACCESS_GRANTED") {
      // Optionally: retry the cookie test, or reload iframe logic
      checkThirdPartyCookies().then(handleRetryResult); // handleRetryResult: your own handler
    }

    A Purely Client-Side Fallback
    In some situations, you might not have access to a second domain or can’t host third-party content under your control. That makes the iframe method unfeasible.
    When that’s the case, your best option is to combine multiple signals — basic cookie checks, hasStorageAccess, localStorage fallbacks, and maybe even passive indicators like load failures or timeouts — to infer whether third-party cookies are likely blocked.
    The important caveat: This will never be 100% accurate. But, in constrained environments, “better something than nothing” may still improve the UX.
    Here’s a basic example:

    async function inferCookieSupportFallback() {
      let hasCookieAPI = navigator.cookieEnabled;
      let canSetCookie = false;
      let hasStorageAccess = false;

      try {
        document.cookie = "test_fallback=1; SameSite=None; Secure; Path=/;";
        canSetCookie = document.cookie.includes("test_fallback=1");

        document.cookie = "test_fallback=; Max-Age=0; Path=/;";
      } catch (e) {
        canSetCookie = false;
      }

      if (typeof document.hasStorageAccess === "function") {
        try {
          hasStorageAccess = await document.hasStorageAccess();
        } catch (e) {}
      }

      return {
        inferredThirdPartyCookies: hasCookieAPI && canSetCookie && hasStorageAccess,
        raw: { hasCookieAPI, canSetCookie, hasStorageAccess }
      };
    }

    Example usage:

    inferCookieSupportFallback().then(({ inferredThirdPartyCookies, raw }) => {
      if (inferredThirdPartyCookies) {
        console.log("Third-party cookies are likely supported.", raw);
      } else {
        console.warn("Third-party cookies may be blocked or partitioned.", raw);
        // You could inform the user or adjust behavior accordingly
      }
    });

    Use this fallback when:

    You’re building a JavaScript-only widget embedded on unknown sites,
    You don’t control a second domain, or
    You just need some visibility into user-side behavior.

    Don’t rely on it for security-critical logic! But it may help tailor the user experience, surface warnings, or decide whether to attempt a fallback SSO flow. Again, it’s better to have something rather than nothing.
    Fallback Strategies When Third-Party Cookies Are Blocked
    Detecting blocked cookies is only half the battle. Once you know they’re unavailable, what can you do? Here are some practical options that might be useful for you:
    Redirect-Based Flows
    For auth-related flows, switch from embedded iframes to top-level redirects. Let the user authenticate directly on the identity provider's site, then redirect back. It works in all browsers, but the UX might be less seamless.
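    A minimal sketch of such a top-level redirect, with a placeholder identity provider URL:

    // Instead of authenticating inside an iframe, send the user to the
    // identity provider at the top level. idp.example.com is a placeholder.
    function redirectToLogin() {
      const returnUrl = encodeURIComponent(window.location.href);
      window.location.href = "https://idp.example.com/login?redirect_uri=" + returnUrl;
    }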
    Request Storage Access
    Prompt the user using requestStorageAccess() after a clear UI gesture. Use this to re-enable cookies without leaving the page.
    Token-Based Communication
    Pass session info directly from parent to iframe via:

    postMessage (with required origin);
    Query params (e.g., signed JWT in iframe URL).

    This avoids reliance on cookies entirely but requires coordination between both sides:

    // Parent
    const iframe = document.getElementById('my-iframe');

    iframe.onload = () => {
      const token = getAccessTokenSomehow(); // JWT or anything else
      iframe.contentWindow.postMessage(
        { type: 'AUTH_TOKEN', token },
        'https://iframe.example.com' // Set the correct origin!
      );
    };

    // iframe
    window.addEventListener('message', (event) => {
      if (event.origin !== 'https://parent.example.com') return;

      const { type, token } = event.data;

      if (type === 'AUTH_TOKEN') {
        validateAndUseToken(token); // process JWT, init session, etc
      }
    });

    Partitioned Cookies (CHIPS)
    Chrome (since version 114) and other Chromium-based browsers now support cookies with the Partitioned attribute (known as CHIPS), allowing per-top-site cookie isolation. This is useful for widgets like chat or embedded forms where cross-site identity isn’t needed.
    Note: Firefox and Safari don’t support the Partitioned cookie attribute. Firefox enforces cookie partitioning by default using a different mechanism, while Safari blocks third-party cookies entirely.

    But be careful: basic detection treats partitioned cookies as “blocked.” Refine your logic if needed.
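    If you want to experiment with CHIPS, a partitioned cookie is set like any other, with one extra attribute. A hedged sketch (the cookie name and value are arbitrary):

    Server-side (HTTP response header):
    Set-Cookie: widget_session=abc123; Path=/; SameSite=None; Secure; Partitioned;

    // Or from script inside the embedded frame (Chromium-based browsers):
    document.cookie = "widget_session=abc123; Path=/; SameSite=None; Secure; Partitioned;";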
    Final Thought: Transparency, Transition, And The Path Forward
    Third-party cookies are disappearing, albeit gradually and unevenly. Until the transition is complete, your job as a developer is to bridge the gap between technical limitations and real-world user experience. That means:

    Keep an eye on the standards. APIs like FedCM and Privacy Sandbox features (Topics, Attribution Reporting, Fenced Frames) are reshaping how we handle identity and analytics without relying on cross-site cookies.
    Combine detection with graceful fallback. Whether it’s offering a redirect flow, using requestStorageAccess(), or falling back to token-based messaging — every small UX improvement adds up.
    Inform your users. Users shouldn't be left wondering why something worked in one browser but silently broke in another. Don’t let them feel like they did something wrong — just help them move forward. A clear, friendly message can prevent this confusion.

    The good news? You don’t need a perfect solution today, just a resilient one. By detecting issues early and handling them thoughtfully, you protect both your users and your future architecture, one cookie-less browser at a time.
    And as seen with Chrome’s pivot away from automatic deprecation, the transition is not always linear. Industry feedback, regulatory oversight, and evolving technical realities continue to shape the timeline and the solutions.
    And don’t forget: having something is better than nothing.
    SMASHINGMAGAZINE.COM
    Reliably Detecting Third-Party Cookie Blocking In 2025
  • Godot 3D Audio – Audio Occlusion

    Audio is such an important component of any game, and 3D audio in 3D games adds massively to immersion. Today we are going to look at two different Godot add-ons that add Audio Occlusion capabilities to the Godot game engine. The first solution is VASTLY easier to use: the aptly named Godot Audio Occlusion Plugin that’s part of the Audio Arsenal bundle by Ovani Sounds. The other is part of the free and open-source Giga Audio plugin, which also provides two different 3D area audio zones.
    The Ovani Godot Audio Occlusion Plugin is described as:

    The Audio Occlusion Plugin for Godot makes sounds in your game behave more realistically. When a sound is behind a wall, door, or obstacle, this plugin will automatically make it sound muffled or filtered—just like it would in real life.
    It works by attaching an AudioOccluder (a Node3D) to any AudioStreamPlayer3D in your scene. The plugin calculates how sound would travel through the environment and adjusts the audio in real time, depending on what’s between the source and the listener.
    To do this, it simplifies your world into a voxel grid—a 3D block-based map—and simulates how sound waves move through it. You can even preview how the plugin sees your world by enabling Voxel Preview in the Inspector (after activating the included plugin.gd).
    You can easily customize settings like:

    Range
    Voxel resolution
    Collision mask
    Detection margin

    The Giga Audio plugin is described as:

    Audio Occlusion, Audio Areas, and Audio Depth Areas for your project.

    …yeah, slightly less verbose description there. You will find that as a general trend, the Giga Audio plugin has less documentation and no samples to get you up and going. Don’t worry though, we have that process mostly covered in the video below.
    Key Links
    Audio Arsenal bundle by Ovani Sounds
    Ovani Godot Audio Plugin
    Giga Audio GitHub Repository
    Giga Audio YouTube Video
    Using the links on this page to purchase the bundle helps support GFS and thanks so much if you do! You can learn more about using both of the Godot 4.x audio occlusion add-ons in the video below.
    #godot #audio #occlusion
    GAMEFROMSCRATCH.COM
    Godot 3D Audio – Audio Occlusion
  • Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation

    In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installations to ensure effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, enabling both beginners and advanced developers to deploy custom multi-functional AI agents rapidly.
    import subprocess
    import sys

    def install_packages():
        # Package list inferred from the imports used later in this tutorial.
        packages = [
            "langgraph",
            "langchain",
            "langchain-core",
            "langchain-anthropic",
            "duckduckgo-search",
            "requests",
            "python-dotenv"
        ]
        for package in packages:
            try:
                subprocess.check_call(
                    [sys.executable, "-m", "pip", "install", package, "-q"]
                )
                print(f"Installed {package}")
            except subprocess.CalledProcessError:
                print(f"Failed to install {package}")

    install_packages()
    print("Setup complete!")

    We automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from LangChain components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly.
    import os
    import json
    import math
    import requests
    from typing import Dict, List, Any, Annotated, TypedDict
    from datetime import datetime
    import operator

    from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
    from langchain_core.tools import tool
    from langchain_anthropic import ChatAnthropic
    from langgraph.graph import StateGraph, START, END
    from langgraph.prebuilt import ToolNode
    from langgraph.checkpoint.memory import MemorySaver
    from duckduckgo_search import DDGS
    We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
    os.environ= "Use Your API Key Here"

    ANTHROPIC_API_KEY = os.getenvWe set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key, while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times.
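    If you prefer not to hardcode the key in the notebook at all, a small standard-library-only sketch like this prompts for it at runtime instead (an optional alternative, not part of the original setup):

    import os
    import getpass

    # Ask for the key only when it is not already present in the environment.
    if not os.getenv("ANTHROPIC_API_KEY"):
        os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")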
    from typing import TypedDict

    class AgentState(TypedDict):
        messages: Annotated[List[BaseMessage], operator.add]

    @tool
    def calculator(expression: str) -> str:
        """
        Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.

        Args:
            expression: Mathematical expression as a string (e.g., "2 + 3 * 4", "sqrt(16)")

        Returns:
            Result of the calculation as a string
        """
        try:
            allowed_names = {
                'abs': abs, 'round': round, 'min': min, 'max': max,
                'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
                'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
                'log': math.log, 'log10': math.log10, 'exp': math.exp,
                'pi': math.pi, 'e': math.e
            }

            expression = expression.replace('^', '**')
            # Evaluate with builtins disabled and only the whitelisted names available.
            result = eval(expression, {"__builtins__": {}}, allowed_names)
            return f"Result: {result}"
        except Exception as e:
            return f"Error in calculation: {str(e)}"
    We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution.
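    Because @tool wraps the function in a LangChain structured tool, you can sanity-check it on its own before wiring it into the graph. A quick test (the expressions are just illustrative):

    # Standalone checks of the calculator tool, outside the agent graph.
    print(calculator.invoke({"expression": "sqrt(16) + 2^3"}))  # -> Result: 12.0
    print(calculator.invoke({"expression": "1/0"}))             # -> Error in calculation: ...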
    @tool
    def web_search(query: str, num_results: int = 5) -> str:
        """
        Search the web for information using DuckDuckGo.

        Args:
            query: Search query string
            num_results: Number of results to return (clamped between 1 and 10)

        Returns:
            Search results as formatted string
        """
        try:
            num_results = min(max(num_results, 1), 10)

            with DDGS() as ddgs:
                results = list(ddgs.text(query, max_results=num_results))

            if not results:
                return f"No search results found for: {query}"

            formatted_results = f"Search results for '{query}':\n\n"
            for i, result in enumerate(results, 1):
                formatted_results += f"{i}. **{result['title']}**\n"
                formatted_results += f"   {result['body']}\n"
                formatted_results += f"   Source: {result['href']}\n\n"

            return formatted_results
        except Exception as e:
            return f"Error performing web search: {str(e)}"
    We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility.
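    A quick standalone check (this performs a live network call, so it requires internet access and results will vary over time):

    # Live test of the web_search tool; needs network access.
    print(web_search.invoke({"query": "LangGraph multi-tool agent", "num_results": 3}))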
    @tool
    def weather_info(city: str) -> str:
        """
        Get current weather information for a city using OpenWeatherMap API.
        Note: This is a mock implementation for demo purposes.

        Args:
            city: Name of the city

        Returns:
            Weather information as a string
        """
        mock_weather = {
            "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
            "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
            "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
            "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
        }

        city_lower = city.lower()
        if city_lower in mock_weather:
            weather = mock_weather[city_lower]
            return f"Weather in {city}:\n" \
                   f"Temperature: {weather['temp']}°C\n" \
                   f"Condition: {weather['condition']}\n" \
                   f"Humidity: {weather['humidity']}%"
        else:
            return f"Weather data not available for {city}."
    We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API.
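    For readers who want to make that upgrade, here is a minimal sketch of a live variant built on OpenWeatherMap's public current-weather endpoint. The tool name weather_info_live and the OPENWEATHER_API_KEY environment variable are our own choices for illustration, not part of the tutorial, and you would need your own OpenWeatherMap key for it to work.

    @tool
    def weather_info_live(city: str) -> str:
        """Fetch current weather for a city from OpenWeatherMap (illustrative live variant)."""
        api_key = os.getenv("OPENWEATHER_API_KEY")  # assumed env var; set it yourself
        if not api_key:
            return "OPENWEATHER_API_KEY is not set."
        try:
            resp = requests.get(
                "https://api.openweathermap.org/data/2.5/weather",
                params={"q": city, "appid": api_key, "units": "metric"},
                timeout=10,
            )
            resp.raise_for_status()
            data = resp.json()
            return (f"Weather in {city}:\n"
                    f"Temperature: {data['main']['temp']}°C\n"
                    f"Condition: {data['weather'][0]['description'].title()}\n"
                    f"Humidity: {data['main']['humidity']}%")
        except Exception as e:
            return f"Error fetching live weather for {city}: {str(e)}"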
    @tool
    def text_analyzer(text: str) -> str:
        """
        Analyze text and provide statistics like word count, character count, etc.

        Args:
            text: Text to analyze

        Returns:
            Text analysis results
        """
        if not text.strip():
            return "Please provide text to analyze."

        words = text.split()
        sentences = text.split('.') + text.split('!') + text.split('?')
        sentences = [s.strip() for s in sentences if s.strip()]

        analysis = f"Text Analysis Results:\n"
        analysis += f"• Characters (with spaces): {len(text)}\n"
        analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
        analysis += f"• Words: {len(words)}\n"
        analysis += f"• Sentences: {len(sentences)}\n"
        analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
        analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"

        return analysis
    The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count, word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit.
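    One small refinement worth knowing: max(set(words), key=words.count) rescans the word list once per unique word, which is quadratic on long inputs. collections.Counter from the standard library gives the same answer in a single pass; a minimal sketch (the helper name is ours):

    from collections import Counter

    def most_common_word(words: list) -> str:
        """Single-pass alternative to max(set(words), key=words.count)."""
        return Counter(words).most_common(1)[0][0] if words else "N/A"

    print(most_common_word("the quick brown fox jumps over the lazy dog".split()))  # "the"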
    @tool
    def current_time() -> str:
        """
        Get the current date and time.

        Returns:
            Current date and time as a formatted string
        """
        now = datetime.now()
        return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
    The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow.
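    Keep in mind that datetime.now() returns naive local time; if the agent runs on a server in another timezone, a timezone-aware variant may be preferable. A sketch, assuming Python 3.9+ (for zoneinfo) and using "America/New_York" purely as an example zone:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # Python 3.9+

    # Illustrative timezone-aware variant of the same formatting.
    now_utc = datetime.now(ZoneInfo("UTC"))
    now_ny = now_utc.astimezone(ZoneInfo("America/New_York"))
    print(now_ny.strftime("%Y-%m-%d %H:%M:%S %Z"))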
    tools = [calculator, web_search, weather_info, text_analyzer, current_time]

    def create_llm():
        if ANTHROPIC_API_KEY:
            return ChatAnthropic(
                model="claude-3-haiku-20240307",
                temperature=0.1,
                max_tokens=1024
            )
        else:
            class MockLLM:
                def invoke(self, messages):
                    last_message = messages[-1].content if messages else ""

                    if any(word in last_message.lower() for word in ['calculate', 'math', '+', '-', '*', '/', 'sqrt', 'sin', 'cos']):
                        import re
                        numbers = re.findall(r'[\d\+\-\*/\.\(\)\s\w]+', last_message)
                        expr = numbers[0] if numbers else "2+2"
                        return AIMessage(content="I'll help you with that calculation.", tool_calls=[{"name": "calculator", "args": {"expression": expr.strip()}, "id": "calc1"}])
                    elif any(word in last_message.lower() for word in ['search', 'find', 'look up', 'information about']):
                        query = last_message.replace('search for', '').replace('find', '').replace('look up', '').strip()
                        if not query or len(query) < 3:
                            query = "python programming"
                        return AIMessage(content="I'll search for that information.", tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                    elif any(word in last_message.lower() for word in ['weather', 'temperature']):
                        city = "New York"
                        words = last_message.lower().split()
                        for i, word in enumerate(words):
                            if word == 'in' and i + 1 < len(words):
                                city = words[i + 1].title()
                                break
                        return AIMessage(content="I'll get the weather information.", tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                    elif any(word in last_message.lower() for word in ['time', 'date']):
                        return AIMessage(content="I'll get the current time.", tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                    elif any(word in last_message.lower() for word in ['analyze', 'analysis']):
                        text = last_message.replace('analyze this text:', '').replace('analyze', '').strip()
                        if not text:
                            text = "Sample text for analysis"
                        return AIMessage(content="I'll analyze that text for you.", tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                    else:
                        return AIMessage(content="Hello! I'm a multi-tool agent powered by Claude. I can help with:\n• Mathematical calculations\n• Web searches\n• Weather information\n• Text analysis\n• Current time/date\n\nWhat would you like me to help you with?")

                def bind_tools(self, tools):
                    return self

            print("⚠️ Note: Using mock LLM for demo. Add your ANTHROPIC_API_KEY for full functionality.")
            return MockLLM()

    llm = create_llm()
    llm_with_tools = llm.bind_tools(tools)

    We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
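    A minimal smoke test of the binding, before any graph is built; the prompt is arbitrary. With a real key you should see a calculator tool call in the response, while the MockLLM routes on the keyword "calculate".

    # Sanity check that the (real or mock) model responds and emits tool calls.
    reply = llm_with_tools.invoke([HumanMessage(content="Please calculate 6 * 7")])
    print(reply.content, getattr(reply, "tool_calls", None))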
    def agent_node(state: AgentState) -> Dict[str, Any]:
        """Main agent node that processes messages and decides on tool usage."""
        messages = state["messages"]
        response = llm_with_tools.invoke(messages)
        return {"messages": [response]}

    def should_continue(state: AgentState) -> str:
        """Determine whether to continue with tool calls or end."""
        last_message = state["messages"][-1]
        if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
            return "tools"
        return END
    We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model, and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow.
    def create_agent_graph():
        tool_node = ToolNode(tools)

        workflow = StateGraph(AgentState)
        workflow.add_node("agent", agent_node)
        workflow.add_node("tools", tool_node)
        workflow.add_edge(START, "agent")
        workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
        workflow.add_edge("tools", "agent")

        memory = MemorySaver()
        app = workflow.compile(checkpointer=memory)
        return app

    print("Creating LangGraph Multi-Tool Agent...")
    agent = create_agent_graph()
    print("✓ Agent created successfully!\n")

    We construct the LangGraph-powered workflow that defines the AI agent's operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application (app), enabling a structured, memory-aware multi-tool agent ready for deployment.
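    To see the MemorySaver at work, invoke the compiled graph twice with the same thread_id: because the state's messages field is annotated with operator.add, the second call sees the accumulated history. A small sketch; the thread id and prompts are examples, and meaningful recall requires the real Claude model rather than the MockLLM.

    # Two turns on one thread: the checkpointer replays prior messages.
    cfg = {"configurable": {"thread_id": "memory-demo"}}  # example thread id
    agent.invoke({"messages": [HumanMessage(content="My name is Ada.")]}, config=cfg)
    out = agent.invoke({"messages": [HumanMessage(content="What did I just tell you?")]}, config=cfg)
    print(out["messages"][-1].content)  # with a real API key, the model can recall the name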
    def test_agent():
        """Test the agent with various queries."""
        config = {"configurable": {"thread_id": "test-thread"}}

        test_queries = [
            "What's 15 * 7 + 23?",
            "Search for information about Python programming",
            "What's the weather like in Tokyo?",
            "What time is it?",
            "Analyze this text: 'LangGraph is an amazing framework for building AI agents.'"
        ]

        print("🧪 Testing the agent with sample queries...\n")
        for i, query in enumerate(test_queries, 1):
            print(f"Query {i}: {query}")
            print("-" * 50)
            try:
                response = agent.invoke(
                    {"messages": [HumanMessage(content=query)]},
                    config=config
                )
                last_message = response["messages"][-1]
                print(f"Response: {last_message.content}\n")
            except Exception as e:
                print(f"Error: {str(e)}\n")
    The test_agent function is a validation utility that checks that the LangGraph agent responds correctly across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints the agent's responses. Using a consistent thread_id for configuration, it invokes the agent with each query and neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use.
    def chat_with_agent():
        """Interactive chat function."""
        config = {"configurable": {"thread_id": "interactive-thread"}}

        print("🤖 Multi-Tool Agent Chat")
        print("Available tools: Calculator, Web Search, Weather Info, Text Analyzer, Current Time")
        print("Type 'quit' to exit, 'help' for available commands\n")

        while True:
            try:
                user_input = input("You: ").strip()
                if user_input.lower() in ['quit', 'exit', 'q']:
                    print("Goodbye!")
                    break
                elif user_input.lower() == 'help':
                    print("\nAvailable commands:")
                    print("• Calculator: 'Calculate 15 * 7 + 23' or 'What's sin(pi/2)?'")
                    print("• Web Search: 'Search for Python tutorials' or 'Find information about AI'")
                    print("• Weather: 'Weather in Tokyo' or 'What's the temperature in London?'")
                    print("• Text Analysis: 'Analyze this text: [your text]'")
                    print("• Current Time: 'What time is it?' or 'Current date'")
                    print("• quit: Exit the chat\n")
                    continue
                elif not user_input:
                    continue

                response = agent.invoke(
                    {"messages": [HumanMessage(content=user_input)]},
                    config=config
                )
                last_message = response["messages"][-1]
                print(f"Agent: {last_message.content}\n")
            except KeyboardInterrupt:
                print("\nGoodbye!")
                break
            except Exception as e:
                print(f"Error: {str(e)}\n")
    The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval.
    if __name__ == "__main__":
        test_agent()
        print("=" * 60)
        print("🎉 LangGraph Multi-Tool Agent is ready!")
        print("=" * 60)
        chat_with_agent()

    def quick_demo():
        """Quick demonstration of agent capabilities."""
        config = {"configurable": {"thread_id": "demo"}}

        demos = [
            ("Math", "Calculate the square root of 144 plus 5 times 3"),
            ("Search", "Find recent news about artificial intelligence"),
            ("Time", "What's the current date and time?")
        ]

        print("🚀 Quick Demo of Agent Capabilities\n")
        for category, query in demos:
            print(f"[{category}] Query: {query}")
            try:
                response = agent.invoke(
                    {"messages": [HumanMessage(content=query)]},
                    config=config
                )
                print(f"Response: {response['messages'][-1].content}\n")
            except Exception as e:
                print(f"Error: {str(e)}\n")

    print("\n" + "="*60)
    print("🔧 Usage Instructions:")
    print("1. Add your ANTHROPIC_API_KEY to use Claude model")
    print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
    print("2. Run quick_demo() for a quick demonstration")
    print("3. Run chat_with_agent() for interactive chat")
    print("4. The agent supports: calculations, web search, weather, text analysis, and time")
    print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
    print("="*60)

    Finally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it initiates test_agent() to validate functionality with sample queries, followed by launching the interactive chat_with_agent() mode for real-time interaction. The quick_demo() function also briefly showcases the agent's capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent's functionality.
    In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. Also, the inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents with this foundational knowledge.

    Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter, and don't forget to join our 95k+ ML SubReddit and subscribe to our Newsletter.
    Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
  • Campfire device concept gives safer, smarter spark for outdoor experiences

    I do not live in a country with a camping culture, although I can see its appeal if you live somewhere you can actually go camping. But seeing all the wildfire disasters in the US and even in South Korea makes me a bit concerned about the dangers of lighting fires out in open fields or even by the beach. While of course none of these incidents have been linked to campfires, plenty of dangers can come from them, even if they seem fun and romantic.
    The Magic Campfire, a concept designed by students from Shenzhen Technology University, offers a safer and more environmentally friendly alternative to open flames. Its advanced electronic illumination simulates the warm, flickering glow of a real fire without the associated risks, making it suitable for various outdoor settings. It is constructed from durable, recyclable materials and embraces eco-conscious principles without compromising the device's performance.
    Designers: Bowen Deng and Yihan Peng from Shenzhen Technology University

    The device resembles a bundle of elongated flashlights or torches arranged vertically, mimicking the shape and glow of a real campfire. This clever design not only creates a familiar visual effect but also enhances the immersive outdoor atmosphere without the need for open flames. The structure is supported by collapsible legs, making it incredibly easy to set up or pack away, whether you’re at a campsite, on the beach, or in your backyard. Each torch-like unit is detachable, allowing users to remove one and use it as a standalone flashlight — perfect for late-night walks, finding your way in the dark, or navigating uneven terrain.

    To further elevate the campfire experience, the device includes an integrated outdoor speaker that delivers ambient sound or music, enriching the social atmosphere around the “fire.” In addition to its visual and audio features, it functions as a practical power bank, equipped with a USB Type-C charging port to keep your phone, lanterns, or other essential gadgets powered throughout your outdoor adventure.

    By merging technology with timeless design, the Magic Campfire offers an engaging outdoor experience that meets modern needs while preserving the nostalgic allure of a real campfire. It allows people to enjoy the warmth, light, and ambiance of a traditional campfire without endangering the environment or public safety. Whether you’re gathering in your backyard, camping in a controlled site, or simply creating a cozy atmosphere, this student-designed device proves that reimagining tradition through innovation can light the way to a safer future.

    The post Campfire device concept gives safer, smarter spark for outdoor experiences first appeared on Yanko Design.
  • PSA: Nintendo Switch 2 GameChat Requires Your Phone Number for Verification Purposes

    Nintendo Switch 2's GameChat will require a phone number when you set up the feature. Nintendo's video calling software comes baked into every Nintendo Switch 2 console and is being promoted as a key feature of the new system. But it's worth knowing that anyone who wants to set up GameChat will first need to verify their identity by providing Nintendo with a phone number (or, if you've already linked one, the number associated with your Nintendo Account). Nintendo will then send that number a text message, tying your GameChat activity to that phone number. So behave!

    If you're under the age of 16, GameChat will be blocked until a parent or guardian allows the feature via the Parental Controls smart device app. They will then be required to add their own phone number for text message verification.

    Nintendo's website, where Eurogamer spotted the above information, appears to suggest that every user with a Nintendo Account will need to do this when playing on a Switch 2, even if the device is shared. IGN has contacted Nintendo for confirmation.

    GameChat can be accessed at any point while playing Switch 2 by pressing the console's new 'C' button, found on its various controllers. Up to four people can video chat together, or 24 can join a group audio call. Within a video call, players can broadcast themselves using a camera peripheral (sold separately), as well as stream whatever they're currently playing. It's the first time the family-friendly Nintendo has offered this kind of service, after years of lagging behind other console makers' online offerings.

    Last week, the tech experts at Digital Foundry revealed the final specs for Nintendo Switch 2 and claimed that the GameChat feature has a "significant impact" on system resources, to the point where developers are said to be concerned. Digital Foundry said Nintendo provides developers with a GameChat testing tool that simulates the API latency and L3 cache misses an active GameChat session incurs on the system, so developers can test for the overhead without running real chat sessions.

    DF was left wondering whether game performance for the end user is affected by having GameChat on or off. If GameChat's resources fall within the system allocation, it shouldn't make any difference. However, given that Nintendo supplies GameChat emulation tools, the suggestion is that there is a hit of some description that developers need to test for. As Digital Foundry put it: "We'll be interested to see how GameChat may (or may not) impact game performance as this does seem to be an area of developer concern." We won't know for sure until Switch 2 comes out on June 5.

    As (another) reminder, GameChat will be free to use for the Switch 2's first 10 months on sale. After March 31, 2026, GameChat will require a Nintendo Switch Online membership.

    Earlier this week, we got our first proper look at a Switch 2 game cartridge, and also heard word that Samsung was reportedly keen to provide OLED screens for a Switch 2 upgrade.

    Tom Phillips is IGN's News Editor. You can reach Tom at tom_phillips@ign.com or find him on Bluesky @tomphillipseg.bsky.social
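    Nintendo's actual testing tool isn't public, and it reportedly models cache pressure as well as latency, but the general shape of such a shim is easy to sketch: wrap the calls you care about so each one pays a synthetic delay, then compare timings with and without the wrapper. Here is a minimal, purely illustrative Python sketch; the decorator, the millisecond range, and `update_frame` are all invented for the example and have nothing to do with Nintendo's SDK.

    ```python
    import functools
    import random
    import time

    def with_simulated_overhead(extra_ms=(2.0, 6.0)):
        """Wrap a function so each call pays an artificial latency tax,
        standing in for the background cost of an active chat session.
        The millisecond range is made up for illustration."""
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                time.sleep(random.uniform(*extra_ms) / 1000.0)  # synthetic delay
                return fn(*args, **kwargs)
            return wrapper
        return decorate

    @with_simulated_overhead()
    def update_frame():
        pass  # stand-in for a game's per-frame work

    if __name__ == "__main__":
        start = time.perf_counter()
        for _ in range(100):
            update_frame()
        print(f"100 frames with simulated chat overhead: {time.perf_counter() - start:.3f}s")
    ```

    Running the same loop with the decorator removed gives the baseline; the difference is the budget a game would have to absorb.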
    WWW.IGN.COM
    PSA: Nintendo Switch 2 GameChat Requires Your Phone Number for Verification Purposes
  • Google's Veo 3 Is Already Deepfaking All of YouTube's Most Smooth-Brained Content

    By James Pero | Published May 22, 2025

    Google Veo 3 man-on-the-street video generation. © Screenshot by Gizmodo

    Wake up, babe, new viral AI video generator dropped. This time, it’s not OpenAI’s Sora model in the spotlight but Google’s Veo 3, which was announced on Tuesday during the company’s annual I/O keynote. Naturally, people are eager to see what chaos Veo 3 can wreak, and the results have been, well, chaotic. We’ve got disjointed Michael Bay fodder, talking muffins, self-aware AI sims, puppy-centric pharmaceutical ads—the list goes on. One thing that I keep seeing over and over, however, is—to put it bluntly—AI slop, and a very specific variety. For whatever reason, all of you seem to be absolutely hellbent on getting Veo to conjure up a torrent of smooth-brain YouTube content. The worst part is that this thing is actually kind of good at cranking it out, too. Don’t believe me? Here are the receipts. Is this 100% convincing? No. No, it is not. At a glance, though, most people wouldn’t be able to tell the difference while mindlessly scrolling their social feed, as one does on literally any social media site or app. Unboxing not cutting it for you? Well, don’t worry, we’ve got some man-on-the-street slop for your viewing pleasure. Sorry, hawk-tuah girl, it’s the singularity’s turn to capitalize on viral fame.

    Again, Veo’s generation is not perfect by any means, but it’s not exactly unconvincing, either. And there’s more bad news: Your Twitch-like smooth-brain content isn’t safe either. Here’s one: a picture-in-picture-style “Fortnite” stream that simulates gameplay and everything. I say “Fortnite” in scare quotes because this is just an AI representation of what Fortnite looks like, not the real thing. Either way, the only thing worse than mindless game streams is arguably mindless game streams that never even happened. And to be honest, the idea of simulating a simulation makes my brain feel achy, so for that reason alone, I’m going to hard pass. Listen, I’m not trying to be an alarmist here. In the grand scheme of things, AI-generated YouTube, Twitch, or TikTok chum isn’t going to hurt anyone, exactly, but it also doesn’t paint a rosy portrait of our AI-generated future. If there’s one thing we don’t need more of, it’s filler. Social media, without AI entering the equation, is already mostly junk, and it does make one wonder what the results of widespread generative video will really be in the end. Maybe I’ll wind up with AI-generated egg on my face, and video generators like Flow, Google’s “AI filmmaker,” will turn out to be watershed products for real creators, but I have my doubts.

    At the very least, I’d like to see some safeguards if video generation is going to go mainstream. As harmless as AI slop might be, the ability to generate fairly convincing video isn’t one that should be taken lightly. There’s obviously huge potential for misinformation and propaganda, and if all it takes to help mitigate that is watermarking videos created in Veo 3, then it feels like an easy first step. For now, we’ll just have to take the explosion of Veo 3-enabled content with a spoonful of molasses, because there’s a lot of slop to get to, and this might be just the first course.
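    For what it’s worth, Google has said Veo’s output is tagged with SynthID, its imperceptible watermark, which is embedded at generation time and designed to survive edits. That is far more robust than anything bolted on after the fact, but the basic idea of an invisible tag is easy to make concrete. The toy sketch below hides a few bytes in the least-significant bits of fake pixel data; it illustrates the concept only, and every name in it is mine, not Google’s.

    ```python
    def embed_watermark(pixels, tag):
        """Hide `tag` (bytes) in the least-significant bits of `pixels`.
        A toy LSB scheme for illustration; production watermarks work
        at the model level and survive re-encoding, unlike this."""
        bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("not enough pixels to hold the tag")
        stamped = list(pixels)
        for i, bit in enumerate(bits):
            stamped[i] = (stamped[i] & ~1) | bit  # overwrite the lowest bit
        return stamped

    def extract_watermark(pixels, n_bytes):
        """Read the tag back out of the low bits."""
        out = bytearray()
        for b in range(n_bytes):
            byte = 0
            for i in range(8):
                byte |= (pixels[b * 8 + i] & 1) << i
            out.append(byte)
        return bytes(out)

    if __name__ == "__main__":
        frame = [128] * 256  # fake 8-bit pixel data
        stamped = embed_watermark(frame, b"veo3")
        assert extract_watermark(stamped, 4) == b"veo3"
        print("tag survives a lossless round trip")
    ```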

    GIZMODO.COM
    Google's Veo 3 Is Already Deepfaking All of YouTube's Most Smooth-Brained Content