• Hanging Art In the Bathroom Is Not As Gross As It Seems—Here's Why Designers LOVE It

    There are a few things an interior designer wouldn’t dare put in a bathroom. Carpet? Definitely not. Only overhead lighting? Design blasphemy. But there is one feature that finds its way into the bathroom all the time—rarely questioned, though maybe it should be—and that’s artwork. We get it: who doesn’t want to add a little personality to a space that is otherwise purely functional? Still, design fans are often split on the addition, especially when it comes to certain types of art.

    An oil painting resting above a clawfoot bathtub or a framed graphic print next to a mirror infuses your bathroom with warmth and storytelling, a very necessary addition to a space that’s often centered on pure function. “In a bathroom, where surfaces tend to be hard and the layout driven by function, a thoughtful piece can shift the entire ambience,” shares interior designer Linette Dai. “It brings dimension to the everyday.” According to designer Ali Milch, art can transform the entire experience from “routine to restorative.”

    But is the bathroom the best (read: most hygienic) place to put a favorite photo or heirloom painting? With moisture in the mix and the potential for it to end up in the “splash zone” (sorry, but it’s true), you need to be considerate of the art you bring in and where it’s placed. To help guide your curation, we chatted with interior designers and experts on how to integrate art into your space in a way that is both beautiful and bathroom-appropriate.

    Be Wary of Humidity

    Maybe this one is obvious, but when placing art in the bathroom, be sure to look for materials that aren’t prone to water damage. “We recommend framing art with a sealed backing and UV-protective acrylic instead of glass, which is both lighter and more resistant to moisture—an important consideration in steamy bathrooms,” shares Cathy Glazer, founder of Artfully Walls. “Plus, acrylic is much safer than glass if dropped, especially on hard tile floors, as it won’t shatter.”

    Dai agrees that acrylic is the way to go for framed works in the bathroom: “I usually recommend acrylic glazing to avoid moisture damage. For humid environments, prints or photography mounted directly on aluminum or face-mounted under acrylic are durable and beautiful.”

    Make It Your Creative Canvas

    Courtesy of Ali Milch

    Unless you have a sprawling space, chances are your bathroom’s square footage is limited. Rather than viewing this as a constraint, think of it as an opportunity to get creative. “Because they’re smaller and more self-contained, [bathrooms] invite experimentation—think unexpected pieces, playful themes, or striking colors,” shares Glazer. “Art helps turn the bathroom into a moment of surprise and style.”

    “It doesn’t have to feel stuffy or overly formal,” Milch adds. “In a recent Tribeca project, we installed a kitschy iMessage bubble with the text ‘I love you too’ on the wall facing the entry. It’s a lighthearted, personal touch.”

    While it’s fun to get whimsical with your bathroom art (pro tip: secondhand stores can be a great place for unique finds), Dai also suggests approaching it with a curated eye and saving anything precious or too high-maintenance for the powder room. “In full baths, I tend to be more selective based on how the space is ventilated and used day-to-day,” she shares. “Powder rooms, on the other hand, offer more freedom. That’s where I love incorporating oil paintings. They bring soul and a sense of history, and can make even the smallest space feel elevated.”

    Keep Materials And Size In Mind

    Another material worth considering? Ceramics. “Ceramic pieces also work beautifully, especially when there’s open shelving or decorative niches to display them,” shares Milch. Be wary of larger-scale sculptures, though, as they can disrupt the space. “Any type of artwork can work in a bathroom depending on the spatial allowances, but the typical bathroom is suited to wall hangings versus sculptures,” says Sarah Latham of L Interiors.

    And don’t forget to be mindful of scale. “As for size, I always opt for larger pieces in smaller spaces. It may feel counterintuitive, but it makes a tight space feel larger,” shares Anastasia Casey of The Interior Collective. “I look for works that complement the finishes and palette without overwhelming it.”

    Let It Set The Tone

    Courtesy of Annie Sloan

    Artwork in the bathroom doesn’t just decorate it; it can define it. “In bathrooms, there’s often less visual competition—no bold furniture or patterned textiles—so the art naturally becomes more of a focal point,” Dai adds. “That’s why the mood it sets matters so much. I think more intentionally about subject matter—what someone will see up close, often in moments of solitude.”

    Whether it’s a serene landscape photo or a storied painting, don’t underestimate what a piece of art can do for the most utilitarian room in the house. With the right materials and placement, it can hold its own—moisture and all—while adding a design moment that feels considered and unexpected.

  • WWDC 2025: What to expect from this year’s conference

    WWDC 2025, Apple’s annual developers conference, starts Monday at 10 a.m. PT / 1 p.m. ET. Last year’s event was notable for its focus on AI, and this year there is considerable pressure on the company to build on its promises and to change the narrative after months of largely negative headlines.
    As in previous years, the company will focus on software updates and new technologies, including the next version of iOS, which is rumored to have the most significant design changes since the introduction of iOS 7. But iOS 19 (or 26, if other rumors about the new naming system are true) isn’t the only thing the company will announce at WWDC 2025.
    Here’s how you can watch the keynote livestream.
    iOS is getting the most dramatic design change in over a decade
    When Apple introduced a major overhaul to iOS back in 2013 with the launch of iOS 7, the shift from the prior skeuomorphic design, with its gradients and real-world textures, to a more colorful but flat style reflecting then chief design officer Jony Ive’s taste for minimalism felt jarring to many users.
    Now, new reports suggest that an upcoming redesign could provoke a similar level of reaction.
    Reports suggest the new design may have elements referencing visionOS, the software powering Apple’s spatial computing headset, the Apple Vision Pro. If true, that means the new OS could feature a transparent interface and more circular app icons that break away from today’s traditional square format.
    This visual redesign could be implemented across all of Apple’s ecosystem (including even CarPlay), according to Bloomberg, providing a more seamless experience for consumers moving between their different devices.

    iOS will change its naming system
    According to Bloomberg, Apple will announce a change in the naming system for iOS at this year’s WWDC. Instead of announcing the next version of iOS as iOS 19, Apple’s operating systems will shift to being named by year. That means we could see the launch of iOS 26 instead, alongside the OSes for other products, including iPadOS 26, macOS 26, watchOS 26, tvOS 26, and visionOS 26.
    Apple may keep the AI news light this year
    While it might be challenging to top the news related to Apple Intelligence at WWDC 2024, the company is expected to share a few updates on the AI front.
    The company has seemingly been caught flat-footed in the AI race, making announcements about AI capabilities that had yet to ship, leading even some Apple pundits to accuse the company of touting vaporware. While Apple has launched several AI tools like Image Playground, Genmoji, Writing Tools, Photos Clean Up, and more, its promise of an improved Siri, personalized to the end user and able to take action across your apps, has been delayed.
    Meanwhile, Apple has turned to outside companies like OpenAI to boost the iPhone’s AI capabilities. At WWDC, it may announce support for other AI chatbots as well. With Jony Ive now working with Sam Altman on an AI hardware device, Apple is under pressure to catch up in AI.
    Image Credits: Nikolas Kokovlis / NurPhoto / Getty Images
    In addition, reports suggest that Apple’s Health app could soon incorporate AI technology, including a health chatbot and generative AI insights that provide personalized health suggestions based on user data. Other apps, such as Messages, may also get AI enhancements, including a translation feature and polls with AI-generated suggestions, per 9to5Mac.
    Apple will likely make the most of a number of smaller OS updates that involve AI, given its underwhelming progress. Reports suggest that these updates could include AI-powered battery management features and an AI-powered Shortcuts app, for instance.
    iPhone users may get a dedicated gaming app
    Bloomberg confirmed a 9to5Mac report that said Apple is developing a dedicated gaming app that will replace the aging Game Center app. The app could include access to Apple Arcade’s subscription-based game store, plus other gaming features like leaderboards, recommendations, and ways to challenge your friends. It could also integrate with iMessage or FaceTime for remote gaming.
    Image Credits: Gabby Jones / Bloomberg / Getty Images
    Updates to Mac, Watch, TV, and more
    Along with the new design, reports suggest that Apple’s other operating systems will get some polish, too. For instance, macOS may also see the new gaming app and benefit from the new AirPods features. It’s also expected to be named macOS Tahoe, in keeping with Apple’s naming convention that references California landmarks.
    Apple TV may get not only a visual overhaul but also changes to its user interface, the new gaming app, and other features.
    AirPods to get new features
    In addition to Messages getting a translation feature, Bloomberg reported that Apple could also bring a live-translate language feature to its AirPods wireless Bluetooth earbuds, allowing real-time translation during conversations. The iPhone will translate spoken words from another language for the user and will also translate the user’s response back into that language.
    A new report from 9to5Mac also suggests that AirPods may get new head gestures to complement today’s ability to either nod or shake your head to respond to incoming calls or messages. Plus, AirPods may get features to auto-pause music after you fall asleep, a way to trigger the camera via Camera Control with a touch, a studio-quality mic mode, and an improved pairing experience in shared AirPods.
    Image Credits: Darrell Etherington
    Apple Pencil upgrade
    According to reports, the Apple Pencil is also receiving an update, one that will benefit users who write in Arabic script. In an effort to cater to customers in the UAE, Saudi Arabia, and India, Apple is reportedly launching a new virtual calligraphy feature in iPadOS 19. The company may also introduce a bidirectional keyboard so users can switch between Arabic and English on iPhones and iPads.
    No hardware announcements?
    There haven’t been any rumors about new devices because no hardware is ready for release yet, according to Bloomberg. Although it’s always possible the company will surprise us with a new Mac Pro announcement, most reports say that is highly unlikely at this point.
    Some reports indicate that Apple may also announce support for a new input device for its Vision Pro: spatial controllers. The devices would be motion-aware and designed with interaction in a 3D environment in mind, 9to5Mac says. In addition, Vision Pro could get eye-scrolling support, enabling users to scroll through documents on both native and third-party apps.
    Bloomberg reported in November that Apple was expected to announce a smart home tablet in March 2025, featuring a 6-inch touchscreen and voice-activated controls. The device was said to include support for Home Control, Siri, and video calls, but it has yet to launch. After PMC’s Parker Ortolani discovered a filing for “HomeOS,” speculation arose that Apple may unveil the device’s software at WWDC.

  • A new movie taking on the tech bros

    Hi, friends! Welcome to Installer No. 85, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, sorry in advance that this week is a tiny bit politics-y, and also you can read all the old editions at the Installer homepage.)

    This week, I’ve been reading about Sean Evans and music fraud and ayahuasca, playing with the new Obsidian Bases feature, obsessing over “Cliche” more times than I’m proud of, installing some Elgato Key Lights to improve my WFH camera look, digging the latest beta of Artifacts, and downloading every podcast I can find because I have 20 hours of driving to do this weekend.

    I also have for you a very funny new movie about tech CEOs, a new place to WhatsApp, a great new accessory for your phone, a helpful crypto politics explainer, and much more. Short week this week, but still lots going on. Let’s do it.

    (As always, the best part of Installer is your ideas and tips. What are you reading / playing / watching / listening to / shopping for / doing with a Raspberry Pi this week? Tell me everything: installer@theverge.com. And if you know someone else who might enjoy Installer, tell them to subscribe here. And if you haven’t subscribed, you should! You’ll get every issue for free, a day early, in your inbox.)

    The Drop

    Mountainhead. I mean, is there a more me-coded pitch than “Succession vibes, but about tech bros?” It’s about a bunch of (pretty recognizable) billionaires who more or less run the world and are also more or less ruining it. You’ll either find this hilarious, way too close to home, or both.

    WhatsApp for iPad. I will never, ever understand why Meta hates building iPad apps. But it finally launched the most important one! The app itself is extremely fine and exactly what you’d think it would be, but whatever. It exists! DO INSTAGRAM NEXT.

    Post Games. A new podcast from Polygon, all about video games. It’s only a couple episodes deep, but so far I love the format: it’s really smart and extremely thoughtful, but it’s also very silly in spots. Big fan.

    The Popsockets Kick-Out Grip. I am a longtime, die-hard Popsockets user and evangelist, and the new model fixes my one gripe with the thing by working as both a landscape and portrait kickstand. $40 is highway robbery for a phone holder, but this is exactly the thing I wanted.

    “Dance with Sabrina.” A new, real-time competitive rhythm game inside of Fortnite, in which you try to do well enough to earn the right to actually help create the show itself. Super fun concept, though all these games are better with pads, guitars, or really anything but a normal controller.

    Lazy 2.0. Lazy is a stealthy but fascinating note-taking tool, and it does an unusually good job of integrating with files and apps. The new version is very AI-forward, basically bringing a personalized chatbot and all your notes to your whole computer. Neat!

    Elden Ring Nightreign. A multiplayer-heavy spinoff of the game that I cannot get my gamer friends to shut up about, even years after it came out. I’ve seen a few people call the game a bit small and repetitive, but next to Elden Ring I suppose most things are.

    The Tapo DL100 Smart Deadbolt Door Lock. A $70 door lock with, as far as I can tell, every feature I want in a smart lock: a keypad, physical keys, super long battery life, and lots of assistant integrations. It does look… huge? But it’s pretty bland-looking, which is a good thing.

    Implosion: The Titanic Sub Disaster. One of a few Titan-related documentaries coming this summer, meant to try and explain what led to the awful events of a couple years ago. I haven’t seen this one yet, but the reviews are solid — and the story seems even sadder and more infuriating than we thought.

    “The growing scandal of $TRUMP.” I love a good Zeke Faux take on crypto, whether it’s a book or a Search Engine episode. This interview with Ezra Klein is a great explainer of how the Trump family got so into crypto and how it’s being used to move money in deeply confusing and clearly corrupt ways.

    Cameron Faulkner isn’t technically new to The Verge; he’s just newly back at The Verge. In addition to being a commerce editor on our team, he also wrote one of the deepest dives into webcams you’ll ever find, plays a lot of games, has more thoughts about monitors than any reasonable person should, and is extremely my kind of person. Since he’s now so very back, I asked Cam to share his homescreen with us, as I always try to do with new people here. Here it is, plus some info on the apps he uses and why:

    The phone: Pixel 9 Pro.

    The wallpaper: It’s an “Emoji Workshop” creation, a feature built into Android 14 and more recent updates that mashes together emoji into the patterns and colors of your choosing. I picked this one because I like sushi, and I love melon / coral color tones.

    The apps: Google Keep, Settings, Clock, Phone, Chrome, Pocket Casts, Messages, Spotify.

    I haven’t downloaded a new app in ages. What’s shown on my homescreen has been there, unmoved, for longer than I can remember. I have digital light switches, a to-do list with the great (but paid) Stuff widget, a simple Google Fit widget to show me how much I moved today, and a couple Google Photos widgets of my lovely wife and son. I could probably function just fine if every app shuffled its location on my homescreen, except for the bottom row. That’s set in stone, never to be fiddled with.

    I also asked Cameron to share a few things he’s into right now. Here’s what he sent back:

    Righteous Gemstones on HBO Max. It’s a much smarter comedy than I had assumed (but it’s still dumb in the best ways), and I’m delighted to have four seasons to catch up on. I’m really digging Clair Obscur: Expedition 33, which achieves the feat of breakneck pacing (the game equivalent of a page-turner) and a style that rivals Persona 5, which is high praise. I have accrued well over a dozen Switch 2 accessories, and I’m excited to put them to the test once I get a console on launch day.

    Crowdsourced

    Here’s what the Installer community is into this week. I want to know what you’re into right now, as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.

    “The Devil’s Plan. This Netflix original South Korean reality show locks 14 contestants in a windowless living space that’s part mansion, part prison, part room escape, and challenges them to eliminate each other in a series of complicated tabletop games.” — Travis

    “If you’re a fan of Drive to Survive, I’m happy to report that the latest season of Netflix’s series on NASCAR is finally good, and a reasonable substitute for that show once you’ve finished it.” — Christopher

    “I switched to a Pixel 9 Pro XL and Pixel Watch 3 from an iPhone and Apple Watch about 6 months ago and found Open Bubbles, an open source alternative to BlueBubbles that does need a Mac but doesn’t need that Mac to remain on. You just need a one-time hardware identifier from it, then it gives you full iMessage, Find My, FaceTime, and iCloud shared albums on Android and Windows using an email address. So long as you can get your contacts to iMessage your email instead of your number, it works great.” — Tim

    “Playing Mario Kart 8 Deluxe for the last time before Mario Kart World arrives next week and takes over my life!” — Ravi

    “With Pocket being killed off I’ve started using my RSS reader — which is Inoreader — instead as a suitable replacement. I only switched over to Pocket after Omnivore shut down.” — James

    “I just got a Boox Go 10.3 for my birthday and love it. The lack of front lighting is the biggest downfall. It is also only on Android 12 so I cannot load a corporate profile. It feels good to write on, almost as good as my cheaper fountain pen and paper. It is helping me organize multiple notebooks and scraps of paper.” — Sean

    “Giving Tweek a bit of a go, and for a lightweight weekly planner it’s beautiful. I also currently use Motion for project management of personal tasks and when I was doing my Master’s. I really like the Gantt view to map out long term personal and study projects.” — Astrid

    “Might I suggest Elle Griffin’s work at The Elysian? How she’s thinking through speculative futures and a cooperative media system is fascinating.” — Zach

    “GeForce Now on Steam Deck!” — Steve

    Signing off

    One of the reasons I like making this newsletter with all of you is that it’s a weekly reminder that, hey, actually, there’s a lot of awesome people doing awesome stuff out there on the internet. I spend a lot of my time talking to people who say AI is going to change everything, and we’re all going to just AI ourselves into oblivion and be thrilled about it — a theory I increasingly think is both wrong and horrifying.

    And then this week I read a blog post from the great Dan Sinker, who called this moment “the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.” You should read the whole thing, but here’s a bit I really loved:

    “Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous. If you do, the illusion falls apart pretty quickly. The fact that the userbase for AI chatbots has exploded exponentially demonstrates that good enough is, in fact, good enough for most people. Because most people don’t care.”

    I don’t think this describes everything and everyone, and neither does Sinker, but I do think it’s more true than it should be. And I increasingly think our job, maybe our method of rebellion, is to be people who care, who have taste, who like and share and look for good things, who read and watch and look at those things on purpose instead of just staring slackjawed at whatever slop is placed between the ads they hope we won’t really notice. I think there are a lot of fascinating ways that AI can be useful, but we can’t let it train us to accept slop just because it’s there.

    Sorry, this got more existential than I anticipated. But I’ve been thinking about it a lot, and I’m going to try and point Installer even more at the stuff that matters, made by people who care. I hope you’ll hold me to that.

    See you next week!
    #new #movie #taking #tech #bros
    A new movie taking on the tech bros
    Hi, friends! Welcome to Installer No. 85, your guide to the best and Verge-iest stuff in the world.This week, I’ve been reading about Sean Evans and music fraud and ayahuasca, playing with the new Obsidian Bases feature, obsessing over every Cliche” more times than I’m proud of, installing some Elgato Key Lights to improve my WFH camera look, digging the latest beta of Artifacts, and downloading every podcast I can find because I have 20 hours of driving to do this weekend.I also have for you a very funny new movie about tech CEOs, a new place to WhatsApp, a great new accessory for your phone, a helpful crypto politics explainer, and much more. Short week this week, but still lots going on. Let’s do it.The DropMountainhead. I mean, is there a more me-coded pitch than “Succession vibes, but about tech bros?” It’s about a bunch ofbillionaires who more or less run the world and are also more or less ruining it. You’ll either find this hilarious, way too close to home, or both. WhatsApp for iPad. I will never, ever understand why Meta hates building iPad apps. But it finally launched the most important one! The app itself is extremely fine and exactly what you’d think it would be, but whatever. It exists! DO INSTAGRAM NEXT.Post Games.Polygon, all about video games. It’s only a couple episodes deep, but so far I love the format: it’s really smart and extremely thoughtful, but it’s also very silly in spots. Big fan.The Popsockets Kick-Out Grip. I am a longtime, die-hard Popsockets user and evangelist, and the new model fixes my one gripe with the thing by working as both a landscape and portrait kickstand. is highway robbery for a phone holder, but this is exactly the thing I wanted.“Dance with Sabrina.” A new, real-time competitive rhythm game inside of Fortnite, in which you try to do well enough to earn the right to actually help create the show itself. Super fun concept, though all these games are better with pads, guitars, or really anything but a normal controller.Lazy 2.0. Lazy is a stealthy but fascinating note-taking tool, and it does an unusually good job of integrating with files and apps. The new version is very AI-forward, basically bringing a personalized chatbot and all your notes to your whole computer. Neat!Elden Ring Nightreign. A multiplayer-heavy spinoff of the game that I cannot get my gamer friends to shut up about, even years after it came out. I’ve seen a few people call the game a bit small and repetitive, but next to Elden Ring I suppose most things are.The Tapo DL100 Smart Deadbolt Door Lock. A door lock with, as far as I can tell, every feature I want in a smart lock: a keypad, physical keys, super long battery life, and lots of assistant integrations. It does look… huge? But it’s pretty bland-looking, which is a good thing.Implosion: The Titanic Sub Disaster. One of a few Titan-related documentaries coming this summer, meant to try and explain what led to the awful events of a couple years ago. I haven’t seen this one yet, but the reviews are solid — and the story seems even sadder and more infuriating than we thought.“The growing scandal of $TRUMP.” I love a good Zeke Faux take on crypto, whether it’s a book or a Search Engine episode. This interview with Ezra Klein is a great explainer of how the Trump family got so into crypto and how it’s being used to move money in deeply confusing and clearly corrupt ways. Cameron Faulkner isn’t technically new to The Verge, he’s just newly back at The Verge. 
In addition to being a commerce editor on our team, he also wrote one of the deepest dives into webcams you’ll ever find, plays a lot of games, has more thoughts about monitors than any reasonable person should, and is extremely my kind of person. Since he’s now so very back, I asked Cam to share his homescreen with us, as I always try to do with new people here. Here it is, plus some info on the apps he uses and why:The phone: Pixel 9 Pro.The wallpaper: It’s an “Emoji Workshop” creation, which is a feature that’s built into Android 14 and more recent updates. It mashes together emoji into the patterns and colors of your choosing. I picked this one because I like sushi, and I love melon / coral color tones.The apps: Google Keep, Settings, Clock, Phone, Chrome, Pocket Casts, Messages, Spotify.I haven’t downloaded a new app in ages. What’s shown on my homescreen has been there, unmoved, for longer than I can remember. I have digital light switches, a to-do list with the greatStuff widget, a simple Google Fit widget to show me how much I moved today, and a couple Google Photos widgets of my lovely wife and son. I could probably function just fine if every app shuffled its location on my homescreen, except for the bottom row. That’s set in stone, never to be fiddled with.I also asked Cameron to share a few things he’s into right now. Here’s what he sent back:Righteous Gemstones on HBO Max. It’s a much smarter comedy than I had assumed, and I’m delighted to have four seasons to catch up on. I’m really digging Clair Obscur: Expedition 33, which achieves the feat of breakneck pacingand a style that rivals Persona 5, which is high praise. I have accrued well over a dozen Switch 2 accessories, and I’m excited to put them to the test once I get a console on launch day.CrowdsourcedHere’s what the Installer community is into this week. I want to know what you’re into right now, as well! Email installer@theverge.com or message me on Signal — @davidpierce.11 — with your recommendations for anything and everything, and we’ll feature some of our favorites here every week. For even more great recommendations, check out the replies to this post on Threads and this post on Bluesky.“The Devil’s Plan. This Netflix original South Korean reality show locks 14 contestants in a windowless living space that’s part mansion, part prison, part room escape, and challenges them to eliminate each other in a series of complicated tabletop games.” — Travis“If you’re a fan of Drive to Survive, I’m happy to report that the latest season of Netflix’s series on NASCAR is finally good, and a reasonable substitute for that show once you’ve finished it.” — Christopher“I switched to a Pixel 9 Pro XL and Pixel Watch 3 from an iPhone and Apple Watch about 6 months ago and found Open Bubbles, an open source alternative to BlueBubbles that does need a Mac but doesn’t need that Mac to remain on, You just need a one-time hardware identifier from it, then it gives you full iMessage, Find My, FaceTime, and iCloud shared albums on Android and Windows using an email address. So long as you can get your contacts to iMessage your email instead of your number, it works great.” — Tim“Playing Mario Kart 8 Deluxe for the last time before Mario Kart World arrives next week and takes over my life!” — Ravi“With Pocket being killed off I’ve started using my RSS reader — which is Inoreader — instead as a suitable replacement. I only switched over to Pocket after Omnivore shut down.” — James“I just got a Boox Go 10.3 for my birthday and love it. 
The lack of front lighting is the biggest downfall. It is also only on Android 12 so I cannot load a corporate profile. It feels good to write on just, almost as good as my cheaper fountain pen and paper. It is helping me organize multiple notebooks and scraps of paper.” — Sean“Giving Tweek a bit of a go, and for a lightweight weekly planner it’s beautiful. I also currently use Motion for project management of personal tasks and when I was doing my Master’s. I really like the Gantt view to map out long term personal and study projects.” — Astrid“Might I suggest Elle Griffin’s work at The Elysian? How she’s thinking through speculative futures and a cooperative media system is fascinating.” — Zach“GeForce Now on Steam Deck!” — SteveSigning offOne of the reasons I like making this newsletter with all of you is that it’s a weekly reminder that, hey, actually, there’s a lot of awesome people doing awesome stuff out there on the internet. I spend a lot of my time talking to people who say AI is going to change everything, and we’re all going to just AI ourselves into oblivion and be thrilled about it — a theory I increasingly think is both wrong and horrifying.And then this week I read a blog post from the great Dan Sinker, who called this moment “the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.” You should read the whole thing, but here’s a bit I really loved:“Using extraordinary amounts of resources, it has the ability to create something good enough, a squint-and-it-looks-right simulacrum of normality. If you don’t care, it’s miraculous. If you do, the illusion falls apart pretty quickly. The fact that the userbase for AI chatbots has exploded exponentially demonstrates that good enough is, in fact, good enough for most people. Because most people don’t care.”I don’t think this describes everything and everyone, and neither does Sinker, but I do think it’s more true than it should be. And I increasingly think our job, maybe our method of rebellion, is to be people who care, who have taste, who like and share and look for good things, who read and watch and look at those things on purpose instead of just staring slackjawed at whatever slop is placed between the ads they hope we won’t really notice. I think there are a lot of fascinating ways that AI can be useful, but we can’t let it train us to accept slop just because it’s there. Sorry, this got more existential than I anticipated. But I’ve been thinking about it a lot, and I’m going to try and point Installer even more at the stuff that matters, made by people who care. I hope you’ll hold me to that.See you next week!See More: #new #movie #taking #tech #bros
  • How to Take Full Control of Notifications on macOS

    App notifications need careful balancing across Android, iOS, Windows, and of course macOS. Too many of them and you're in a state of constant distraction; too few of them, and you risk missing out on something important that one of your programs is trying to tell you. macOS has you covered when it comes to finding the right balance for you, providing a full suite of controls for managing interruptions and staying up to date with your notifications. It doesn't take long to set up these notification options, and once they're configured, you can use Focus modes for specific scenarios (like working or watching movies).

    Note that some individual apps come with their own notification settings, in addition to the options macOS offers you (and the tricks you can do with third-party tools). Discord, for example, gives you a bunch of options for choosing what you get notified about.

    App notification settings

    Every macOS app should ask for permission to show notifications when you first install it. However you respond to this first request, you can configure notifications for all your installed programs at any time by opening the Apple menu from the macOS menu bar (top left), then choosing System Settings and Notifications. First, you get the chance to set the notification settings for the operating system as a whole: You can choose whether or not pop-ups appear when your Mac is locked, when the display is asleep, and when you're mirroring or sharing the screen. You're able to control whether previews are shown with notifications by default, and you can also give Apple Intelligence permission to summarize your notifications.

    Setting individual app notifications.
    Credit: Lifehacker

    Next, you can control notifications for individual apps. As with iOS, you get a bit of nuance here: As well as turning notifications on or off completely for each of your apps, you can also change the type of alert that gets shown (temporary banner or permanent pop-up), and set an app's notification to come with a sound, or not. There's also a toggle switch for showing this app's alerts in the macOS Notification Center, which appears when you click the time and date (top right). There are also settings for notification previews and notification grouping for each individual app. The idea is that you can give your most important programs plenty of prominence, while keeping alerts from more minor apps hidden away in the background (or turned off altogether). If you set notifications as Alerts, they'll stay in the top right corner of the display until you dismiss them.

    Focus modes

    When you've got your individual app notification settings configured the way you want them, you can use Focus modes to override these settings. When it comes to macOS, this is most likely going to involve the traditional Do Not Disturb mode, which will block out any alerts for a set period of time. Switch from Notifications to Focus in System Settings on macOS to see the modes available: You can make use of these preset ones, or click Add Focus to create your own. There's also a Share across devices toggle switch just underneath the Focus modes list, which you can turn off if you don't want all your iPhone modes showing up on macOS as well (and vice versa).

    Choosing a Focus mode on a Mac.
    Credit: Lifehacker

    The settings screen for each mode comes with specific settings for which notifications are allowed to show up. Click Allowed People to pick contacts who can trigger notifications when the mode is enabled, and Allowed Apps to do the same for programs. There's also an Intelligent Breakthrough and Silencing toggle switch you can enable to give macOS permission to show notifications it thinks are important. Easy access to your modes is provided via the Control Center on the menu bar, in the top-right corner of the screen (the icon that looks like two toggle switches), as well as from the System Settings panel. Click this icon, then Focus, and you can pick from your list of modes or head to the Focus Settings screen again.

    Syncing with iPhones and iPads

    Thanks to Apple's suite of Continuity features, you can sync call and message notifications across iPhones, iPads, and Macs, if they're all signed into the same Apple account. This is handy if you want to be able to pick up iPhone calls on your laptop, but not so helpful if you're trying to get some work done on macOS without interruption. You've got a few options for extricating yourself from this—including just turning off Messages and FaceTime notifications on macOS. On your iPhone, open Settings then choose Cellular, and you get a Calls on Other Devices menu, where you can turn the toggle switches off for any Macs you've got connected. There's a similar setting for text messages, which you can find from iOS Settings by tapping Apps > Messages > Text Message Forwarding.

    Turning off Mac access to iPhone calls.
    Credit: Lifehacker

    That will stop cellular phone calls and SMS/RCS messages from showing up on your Mac—though because of the way Apple Continuity works, you're still going to see FaceTime calls and iMessage updates on macOS. To change this, you need to head into FaceTime > Settings and Messages > Settings on your Mac. In both cases, you get the option to disable connections via your cell number, or to sign out of your Apple account completely, for these specific apps. You can pick and choose the options you want, depending on how much connectivity you want on your Mac (including the ability to send messages and initiate calls from the desktop).
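    One extra trick: if you would rather flip a Focus mode from a script or a launcher instead of clicking through Control Center, macOS (Monterey and later) includes a shortcuts command-line tool that can run any automation built in the Shortcuts app. As a rough sketch, assuming you have already created a shortcut named "Toggle Work Focus" that uses the Set Focus action (the name here is hypothetical), you could trigger it like this:

    import subprocess

    # Runs a Shortcuts automation by name using macOS's built-in `shortcuts` CLI.
    # The shortcut itself ("Toggle Work Focus" is a hypothetical name) must already
    # exist in the Shortcuts app and use the Set Focus action.
    subprocess.run(["shortcuts", "run", "Toggle Work Focus"], check=True)

    The same one-liner works from a plain terminal (shortcuts run "Toggle Work Focus"), which makes it easy to bind a Focus toggle to a hotkey utility of your choice.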
  • Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation

    In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. The tutorial begins by simplifying dependency installation to ensure an effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates how these tools integrate into a sophisticated agent architecture built with LangGraph, illustrating practical usage through interactive examples and clear explanations, so that both beginners and advanced developers can rapidly deploy custom multi-functional AI agents.
    import subprocess
    import sys

    def install_packages():
        packages = [
            "langgraph",
            "langchain",
            "langchain-anthropic",
            "langchain-community",
            "requests",
            "python-dotenv",
            "duckduckgo-search"
        ]
        for package in packages:
            try:
                # Install quietly, using the same interpreter that runs the notebook
                subprocess.check_call([sys.executable, "-m", "pip", "install", package, "-q"])
                print(f"✓ Installed {package}")
            except subprocess.CalledProcessError:
                print(f"✗ Failed to install {package}")

    print("Installing required packages...")
    install_packages()
    print("Installation complete!\n")
    We automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from LangChain components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly.
    import os
    import json
    import math
    import requests
    from typing import Dict, List, Any, Annotated, TypedDict
    from datetime import datetime
    import operator

    from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage
    from langchain_core.tools import tool
    from langchain_anthropic import ChatAnthropic
    from langgraph.graph import StateGraph, START, END
    from langgraph.prebuilt import ToolNode
    from langgraph.checkpoint.memory import MemorySaver
    from duckduckgo_search import DDGS
    We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions.
    os.environ= "Use Your API Key Here"

    ANTHROPIC_API_KEY = os.getenvWe set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key, while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times.
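    Since python-dotenv is already on the install list above, a slightly safer pattern is to keep the key out of the notebook entirely and load it from a local .env file. Here is a minimal sketch, assuming a .env file in the working directory containing a line such as ANTHROPIC_API_KEY=your-key:

    from dotenv import load_dotenv
    import os

    # Copies key=value pairs from a local .env file into os.environ;
    # keep .env out of version control instead of pasting keys into cells.
    load_dotenv()

    ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
    if not ANTHROPIC_API_KEY:
        print("No ANTHROPIC_API_KEY set; the mock LLM fallback will be used.")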
    from typing import TypedDict

    class AgentState(TypedDict):
        messages: Annotated[List[BaseMessage], operator.add]

    @tool
    def calculator(expression: str) -> str:
        """
        Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more.

        Args:
            expression: Mathematical expression as a string (e.g. "2 + 2" or "sqrt(16)")

        Returns:
            Result of the calculation as a string
        """
        try:
            # Whitelist of the only names the expression may reference
            allowed_names = {
                'abs': abs, 'round': round, 'min': min, 'max': max,
                'sum': sum, 'pow': pow, 'sqrt': math.sqrt,
                'sin': math.sin, 'cos': math.cos, 'tan': math.tan,
                'log': math.log, 'log10': math.log10, 'exp': math.exp,
                'pi': math.pi, 'e': math.e
            }

            # Accept "^" as exponentiation, then evaluate with builtins disabled
            expression = expression.replace('^', '**')
            result = eval(expression, {"__builtins__": {}}, allowed_names)
            return f"Result: {result}"
        except Exception as e:
            return f"Error in calculation: {str(e)}"
    We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution.
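    Because @tool wraps calculator in a standard LangChain tool, you can smoke-test it directly with .invoke, passing the arguments as a dict, before any LLM is involved. The expression below is just an illustration:

    # Direct tool invocation, no LLM involved
    print(calculator.invoke({"expression": "sqrt(16) + 2^3"}))
    # -> "Result: 12.0"  (sqrt(16) = 4.0, and ^ is rewritten to ** so 2^3 = 8)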
    @tool
    def web_search(query: str, num_results: int = 5) -> str:
        """
        Search the web for information using DuckDuckGo.

        Args:
            query: Search query string
            num_results: Number of results to return (default 5 here; clamped to 1-10)

        Returns:
            Search results as formatted string
        """
        try:
            # Keep the requested result count within a sane 1-10 range
            num_results = min(max(num_results, 1), 10)

            with DDGS() as ddgs:
                results = list(ddgs.text(query, max_results=num_results))

            if not results:
                return f"No search results found for: {query}"

            formatted_results = f"Search results for '{query}':\n\n"
            for i, result in enumerate(results, 1):
                formatted_results += f"{i}. **{result['title']}**\n"
                formatted_results += f"   {result['body']}\n"
                formatted_results += f"   Source: {result['href']}\n\n"

            return formatted_results
        except Exception as e:
            return f"Error performing web search: {str(e)}"
    We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility.
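    The same .invoke pattern works for a quick check of this tool. Note that it hits the live DuckDuckGo backend, so it needs network access and the results will vary:

    # Live smoke test of the search tool (requires network access)
    print(web_search.invoke({"query": "LangGraph tutorial", "num_results": 3}))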
    @tool
    def weather_info(city: str) -> str:
        """
        Get current weather information for a city using OpenWeatherMap API.
        Note: This is a mock implementation for demo purposes.

        Args:
            city: Name of the city

        Returns:
            Weather information as a string
        """
        mock_weather = {
            "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65},
            "london": {"temp": 15, "condition": "Rainy", "humidity": 80},
            "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70},
            "paris": {"temp": 18, "condition": "Overcast", "humidity": 75}
        }

        city_lower = city.lower()
        if city_lower in mock_weather:
            weather = mock_weather[city_lower]
            return f"Weather in {city}:\n" \
                   f"Temperature: {weather['temp']}°C\n" \
                   f"Condition: {weather['condition']}\n" \
                   f"Humidity: {weather['humidity']}%"
        else:
            return f"Weather data not available for {city}."
    We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API.
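    As the note above says, the mock can later be swapped for a real weather API. A minimal sketch of that upgrade, assuming you have signed up for an OpenWeatherMap API key (OWM_API_KEY below is a placeholder), could look like this; the requests import from earlier covers the HTTP call:

    import requests

    OWM_API_KEY = "your-openweathermap-key"  # placeholder, replace with a real key

    def live_weather_info(city: str) -> str:
        """Fetch current weather from OpenWeatherMap instead of the mock dict."""
        resp = requests.get(
            "https://api.openweathermap.org/data/2.5/weather",
            params={"q": city, "appid": OWM_API_KEY, "units": "metric"},
            timeout=10,
        )
        if resp.status_code != 200:
            return f"Weather data not available for {city}."
        data = resp.json()
        return (f"Weather in {city}:\n"
                f"Temperature: {data['main']['temp']}°C\n"
                f"Condition: {data['weather'][0]['description'].title()}\n"
                f"Humidity: {data['main']['humidity']}%")

    Keeping the same return format means the live version can replace the mock behind the same @tool decorator without touching the rest of the agent.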
    @tool
    def text_analyzer(text: str) -> str:
        """
        Analyze text and provide statistics like word count, character count, etc.

        Args:
            text: Text to analyze

        Returns:
            Text analysis results
        """
        if not text.strip():
            return "Please provide text to analyze."

        import re
        words = text.split()
        # Split on sentence-ending punctuation and drop empty fragments
        sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]

        analysis = f"Text Analysis Results:\n"
        analysis += f"• Characters (with spaces): {len(text)}\n"
        analysis += f"• Characters (without spaces): {len(text.replace(' ', ''))}\n"
        analysis += f"• Words: {len(words)}\n"
        analysis += f"• Sentences: {len(sentences)}\n"
        analysis += f"• Average words per sentence: {len(words) / max(len(sentences), 1):.1f}\n"
        analysis += f"• Most common word: {max(set(words), key=words.count) if words else 'N/A'}"

        return analysis
    The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count, word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit.
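    One design note: max(set(words), key=words.count) rescans the word list once per unique word, which gets slow on long inputs. The standard library's collections.Counter computes the same result in a single pass, as this small comparison shows:

    from collections import Counter

    words = "the quick brown fox jumps over the lazy dog the end".split()

    # Single pass over the list; most_common(1) returns e.g. [("the", 3)]
    fast = Counter(words).most_common(1)[0][0] if words else "N/A"

    # Equivalent to the tool's expression, but O(unique words x total words)
    slow = max(set(words), key=words.count) if words else "N/A"

    assert fast == slow
    print(fast)  # -> "the"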
    @tool
    def current_time() -> str:
        """
        Get the current date and time.

        Returns:
            Current date and time as a formatted string
        """
        now = datetime.now()
        return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
    The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow.
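    Keep in mind that datetime.now() returns the machine's local time with no timezone attached, so if the agent runs on a server in another region, answers about the current time can mislead. A timezone-aware variant is a small change on Python 3.9+ using the standard zoneinfo module:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library on Python 3.9+

    # Anchor the timestamp to an explicit timezone rather than the host's clock
    now_utc = datetime.now(ZoneInfo("UTC"))
    print(f"Current date and time (UTC): {now_utc.strftime('%Y-%m-%d %H:%M:%S')}")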
    tools = [calculator, web_search, weather_info, text_analyzer, current_time]

    def create_llm():
        if ANTHROPIC_API_KEY:
            return ChatAnthropic(model="claude-3-haiku-20240307")
        else:
            class MockLLM:
                def invoke(self, messages):
                    last_message = messages[-1].content if messages else ""
                    lowered = last_message.lower()

                    # Keyword routing below is a rough stand-in for real tool
                    # selection; the trigger words and fallbacks are illustrative.
                    if any(word in lowered for word in ['calculate', 'math', 'sqrt', '+', '*', '/']):
                        import re
                        numbers = re.findall(r'[\d+\-*/^.()\s\w]+', last_message)
                        expr = numbers[0].strip() if numbers else "2+2"
                        return AIMessage(content="", tool_calls=[{"name": "calculator", "args": {"expression": expr}, "id": "calc1"}])
                    elif any(word in lowered for word in ['search', 'find', 'news']):
                        query = last_message.replace('search', '').replace('find', '').replace('for', '').strip()
                        if not query or len(query) < 3:
                            query = "python programming"
                        return AIMessage(content="", tool_calls=[{"name": "web_search", "args": {"query": query}, "id": "search1"}])
                    elif any(word in lowered for word in ['weather', 'temperature']):
                        city = "New York"
                        words = lowered.split()
                        for i, word in enumerate(words):
                            if word == 'in' and i + 1 < len(words):
                                city = words[i + 1].title()
                                break
                        return AIMessage(content="", tool_calls=[{"name": "weather_info", "args": {"city": city}, "id": "weather1"}])
                    elif any(word in lowered for word in ['time', 'date']):
                        return AIMessage(content="", tool_calls=[{"name": "current_time", "args": {}, "id": "time1"}])
                    elif any(word in lowered for word in ['analyze', 'analysis']):
                        text = last_message.replace('analyze', '').replace('text:', '').strip()
                        if not text:
                            text = "Sample text for analysis"
                        return AIMessage(content="", tool_calls=[{"name": "text_analyzer", "args": {"text": text}, "id": "analyze1"}])
                    else:
                        return AIMessage(content="I can help with calculations, web search, weather, text analysis, and the current time.")

                def bind_tools(self, tools):
                    return self

            print("No ANTHROPIC_API_KEY found - falling back to a mock LLM with keyword routing.")
            return MockLLM()

    llm = create_llm()
    llm_with_tools = llm.bind_tools(tools)
    We initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed.
    def agent_node(state: AgentState) -> Dict[str, Any]:
        """Main agent node that processes messages and decides on tool usage."""
        messages = state["messages"]
        response = llm_with_tools.invoke(messages)
        return {"messages": [response]}

    def should_continue(state: AgentState) -> str:
        """Determine whether to continue with tool calls or end."""
        last_message = state["messages"][-1]
        if hasattr(last_message, 'tool_calls') and last_message.tool_calls:
            return "tools"
        return END
    We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model, and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow.
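    Recent LangGraph releases also ship a prebuilt router, tools_condition, that implements the same contract as should_continue: route to the node named "tools" when the last message carries tool calls, otherwise end. If your installed version exports it, the hand-rolled function can be swapped out like this (sketch):

    from langgraph.prebuilt import tools_condition

    # Inside create_agent_graph (defined next), this single line would replace
    # the custom router wired up with should_continue:
    # workflow.add_conditional_edges("agent", tools_condition)

    Writing should_continue by hand, as the tutorial does, keeps the routing logic visible, which is useful when you later want custom conditions (for example, a cap on tool-call loops).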
    def create_agent_graph():
        tool_node = ToolNode(tools)

        workflow = StateGraph(AgentState)
        workflow.add_node("agent", agent_node)
        workflow.add_node("tools", tool_node)
        workflow.add_edge(START, "agent")
        workflow.add_conditional_edges("agent", should_continue, {"tools": "tools", END: END})
        workflow.add_edge("tools", "agent")

        memory = MemorySaver()
        app = workflow.compile(checkpointer=memory)
        return app

    print("Creating agent graph...")
    agent = create_agent_graph()
    print("Agent graph created successfully!")
    We construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application, enabling a structured, memory-aware multi-tool agent ready for deployment.
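    To double-check the wiring (the agent-to-tools loop) without re-reading the code, recent LangGraph versions can render the compiled graph as Mermaid markup; whether this helper is available depends on your installed version:

    # Emit a Mermaid diagram of the compiled workflow; paste the output into a
    # Mermaid viewer (e.g. mermaid.live) to see the START -> agent -> tools loop.
    print(agent.get_graph().draw_mermaid())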
    def test_agent():
        """Test the agent with various queries."""
        config = {"configurable": {"thread_id": "test-thread"}}

        # Sample queries, one per tool (illustrative - swap in your own)
        test_queries = [
            "What is 25 * 4 + sqrt(16)?",
            "Search for recent news about artificial intelligence",
            "What's the weather in Tokyo?",
            "What time is it right now?",
            "Analyze this text: LangGraph makes it easy to wire tools into agents."
        ]

        print("Testing the agent with sample queries...")
        for i, query in enumerate(test_queries, 1):
            print(f"\nQuery {i}: {query}")
            print("-" * 50)
            try:
                response = agent.invoke(
                    {"messages": [HumanMessage(content=query)]},
                    config=config
                )
                last_message = response["messages"][-1]
                print(f"Response: {last_message.content}")
            except Exception as e:
                print(f"Error: {str(e)}\n")
    The test_agent function is a validation utility that ensures that the LangGraph agent responds correctly across different use cases. It runs predefined queries covering arithmetic, web search, weather, time, and text analysis, and prints the agent’s responses. Using a consistent thread_id for configuration, it invokes the agent with each query. It neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use.
    def chat_with_agent():
        """Interactive chat function."""
        config = {"configurable": {"thread_id": "interactive-thread"}}

        print("=" * 50)
        print("Interactive chat with the multi-tool agent")
        print("Type 'help' for examples or 'quit' to exit.")
        while True:
            try:
                user_input = input("\nYou: ").strip()
                if user_input.lower() in ['quit', 'exit', 'bye']:
                    print("Goodbye!")
                    break
                elif user_input.lower() == 'help':
                    # Example prompts (illustrative) that exercise each tool
                    print("Try asking, for example:")
                    print("  'Calculate 15 * 8 + sqrt(25)'")
                    print("  'Search for news about AI'")
                    print("  'What's the weather in Paris?'")
                    print("  'What time is it?'")
                    print("  'Analyze this text: ...'")
                    continue
                elif not user_input:
                    continue

                response = agent.invoke(
                    {"messages": [HumanMessage(content=user_input)]},
                    config=config
                )
                last_message = response["messages"][-1]
                print(f"\nAgent: {last_message.content}")
            except KeyboardInterrupt:
                print("\nGoodbye!")
                break
            except Exception as e:
                print(f"Error: {str(e)}\n")
    The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval.
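    Waiting for a full invoke round-trip can feel slow once tool calls are involved. The compiled graph also exposes stream, which yields intermediate states as the agent and tool nodes run; with stream_mode="values", each step yields the full message list. A sketch of a single streaming turn:

    from langchain_core.messages import HumanMessage  # already imported above

    # Streaming variant of one chat turn: print each new message (agent
    # decisions, tool results, final answer) as the graph produces it.
    for state in agent.stream(
        {"messages": [HumanMessage(content="What is 12 * 7?")]},
        config={"configurable": {"thread_id": "stream-demo"}},
        stream_mode="values",
    ):
        state["messages"][-1].pretty_print()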
    if __name__ == "__main__":
    test_agentprintprintprintchat_with_agentdef quick_demo:
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}

    demos =printfor category, query in demos:
    printtry:
    response = agent.invoke]},
    config=config
    )
    printexcept Exception as e:
    print}\n")

    printprintprintprintprintfor a quick demonstration")
    printfor interactive chat")
    printprintprintFinally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it initiates test_agentto validate functionality with sample queries, followed by launching the interactive chat_with_agentmode for real-time interaction. The quick_demofunction also briefly showcases the agent’s capabilities in math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent’s functionality.
    In conclusion, this step-by-step tutorial gives valuable insights into building an effective multi-tool AI agent leveraging LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks. Also, the inclusion of user-friendly functions for both testing and interactive chat enhances practical understanding, enabling immediate application in various contexts. Developers can confidently extend and customize their AI agents with this foundational knowledge.

    Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.
    Asif RazzaqWebsite |  + postsBioAsif Razzaq is the CEO of Marktechpost Media Inc.. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts of over 2 million monthly views, illustrating its popularity among audiences.Asif Razzaqhttps://www.marktechpost.com/author/6flvq/A Comprehensive Coding Guide to Crafting Advanced Round-Robin Multi-Agent Workflows with Microsoft AutoGenAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Microsoft AI Introduces Magentic-UI: An Open-Source Agent Prototype that Works with People to Complete Complex Tasks that Require Multi-Step Planning and Browser UseAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Anthropic Releases Claude Opus 4 and Claude Sonnet 4: A Technical Leap in Reasoning, Coding, and AI Agent DesignAsif Razzaqhttps://www.marktechpost.com/author/6flvq/Technology Innovation Institute TII Releases Falcon-H1: Hybrid Transformer-SSM Language Models for Scalable, Multilingual, and Long-Context Understanding
    #stepbystep #guide #build #customizable #multitool
    Step-by-Step Guide to Build a Customizable Multi-Tool AI Agent with LangGraph and Claude for Dynamic Agent Creation
    In this comprehensive tutorial, we guide users through creating a powerful multi-tool AI agent using LangGraph and Claude, optimized for diverse tasks including mathematical computations, web searches, weather inquiries, text analysis, and real-time information retrieval. It begins by simplifying dependency installations to ensure effortless setup, even for beginners. Users are then introduced to structured implementations of specialized tools, such as a safe calculator, an efficient web-search utility leveraging DuckDuckGo, a mock weather information provider, a detailed text analyzer, and a time-fetching function. The tutorial also clearly delineates the integration of these tools within a sophisticated agent architecture built using LangGraph, illustrating practical usage through interactive examples and clear explanations, facilitating both beginners and advanced developers to deploy custom multi-functional AI agents rapidly. import subprocess import sys def install_packages: packages =for package in packages: try: subprocess.check_callprintexcept subprocess.CalledProcessError: printprintinstall_packagesprintWe automate the installation of essential Python packages required for building a LangGraph-based multi-tool AI agent. It leverages a subprocess to run pip commands silently and ensures each package, ranging from long-chain components to web search and environment handling tools, is installed successfully. This setup streamlines the environment preparation process, making the notebook portable and beginner-friendly. import os import json import math import requests from typing import Dict, List, Any, Annotated, TypedDict from datetime import datetime import operator from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, ToolMessage from langchain_core.tools import tool from langchain_anthropic import ChatAnthropic from langgraph.graph import StateGraph, START, END from langgraph.prebuilt import ToolNode from langgraph.checkpoint.memory import MemorySaver from duckduckgo_search import DDGS We import all the necessary libraries and modules for constructing the multi-tool AI agent. It includes Python standard libraries such as os, json, math, and datetime for general-purpose functionality and external libraries like requests for HTTP calls and duckduckgo_search for implementing web search. The LangChain and LangGraph ecosystems bring in message types, tool decorators, state graph components, and checkpointing utilities, while ChatAnthropic enables integration with the Claude model for conversational intelligence. These imports form the foundational building blocks for defining tools, agent workflows, and interactions. os.environ= "Use Your API Key Here" ANTHROPIC_API_KEY = os.getenvWe set and retrieve the Anthropic API key required to authenticate and interact with Claude models. The os.environ line assigns your API key, while os.getenv securely retrieves it for later use in model initialization. This approach ensures the key is accessible throughout the script without hardcoding it multiple times. from typing import TypedDict class AgentState: messages: Annotated, operator.add] @tool def calculator-> str: """ Perform mathematical calculations. Supports basic arithmetic, trigonometry, and more. 
Args: expression: Mathematical expression as a string") Returns: Result of the calculation as a string """ try: allowed_names = { 'abs': abs, 'round': round, 'min': min, 'max': max, 'sum': sum, 'pow': pow, 'sqrt': math.sqrt, 'sin': math.sin, 'cos': math.cos, 'tan': math.tan, 'log': math.log, 'log10': math.log10, 'exp': math.exp, 'pi': math.pi, 'e': math.e } expression = expression.replaceresult = evalreturn f"Result: {result}" except Exception as e: return f"Error in calculation: {str}" We define the agent’s internal state and implement a robust calculator tool. The AgentState class uses TypedDict to structure agent memory, specifically tracking messages exchanged during the conversation. The calculator function, decorated with @tool to register it as an AI-usable utility, securely evaluates mathematical expressions. It allows for safe computation by limiting available functions to a predefined set from the math module and replacing common syntax like ^ with Python’s exponentiation operator. This ensures the tool can handle simple arithmetic and advanced functions like trigonometry or logarithms while preventing unsafe code execution. @tool def web_search-> str: """ Search the web for information using DuckDuckGo. Args: query: Search query string num_results: Number of results to returnReturns: Search results as formatted string """ try: num_results = min, 10) with DDGSas ddgs: results = list) if not results: return f"No search results found for: {query}" formatted_results = f"Search results for '{query}':\n\n" for i, result in enumerate: formatted_results += f"{i}. **{result}**\n" formatted_results += f" {result}\n" formatted_results += f" Source: {result}\n\n" return formatted_results except Exception as e: return f"Error performing web search: {str}" We define a web_search tool that enables the agent to fetch real-time information from the internet using the DuckDuckGo Search API via the duckduckgo_search Python package. The tool accepts a search query and an optional num_results parameter, ensuring that the number of results returned is between 1 and 10. It opens a DuckDuckGo search session, retrieves the results, and formats them neatly for user-friendly display. If no results are found or an error occurs, the function handles it gracefully by returning an informative message. This tool equips the agent with real-time search capabilities, enhancing responsiveness and utility. @tool def weather_info-> str: """ Get current weather information for a city using OpenWeatherMap API. Note: This is a mock implementation for demo purposes. Args: city: Name of the city Returns: Weather information as a string """ mock_weather = { "new york": {"temp": 22, "condition": "Partly Cloudy", "humidity": 65}, "london": {"temp": 15, "condition": "Rainy", "humidity": 80}, "tokyo": {"temp": 28, "condition": "Sunny", "humidity": 70}, "paris": {"temp": 18, "condition": "Overcast", "humidity": 75} } city_lower = city.lowerif city_lower in mock_weather: weather = mock_weatherreturn f"Weather in {city}:\n" \ f"Temperature: {weather}°C\n" \ f"Condition: {weather}\n" \ f"Humidity: {weather}%" else: return f"Weather data not available for {city}." We define a weather_info tool that simulates retrieving current weather data for a given city. While it does not connect to a live weather API, it uses a predefined dictionary of mock data for major cities like New York, London, Tokyo, and Paris. Upon receiving a city name, the function normalizes it to lowercase and checks for its presence in the mock dataset. 
It returns temperature, weather condition, and humidity in a readable format if found. Otherwise, it notifies the user that weather data is unavailable. This tool serves as a placeholder and can later be upgraded to fetch live data from an actual weather API. @tool def text_analyzer-> str: """ Analyze text and provide statistics like word count, character count, etc. Args: text: Text to analyze Returns: Text analysis results """ if not text.strip: return "Please provide text to analyze." words = text.splitsentences = text.split+ text.split+ text.splitsentences =analysis = f"Text Analysis Results:\n" analysis += f"• Characters: {len}\n" analysis += f"• Characters: {len)}\n" analysis += f"• Words: {len}\n" analysis += f"• Sentences: {len}\n" analysis += f"• Average words per sentence: {len/ max, 1):.1f}\n" analysis += f"• Most common word: {max, key=words.count) if words else 'N/A'}" return analysis The text_analyzer tool provides a detailed statistical analysis of a given text input. It calculates metrics such as character count, word count, sentence count, and average words per sentence, and it identifies the most frequently occurring word. The tool handles empty input gracefully by prompting the user to provide valid text. It uses simple string operations and Python’s set and max functions to extract meaningful insights. It is a valuable utility for language analysis or content quality checks in the AI agent’s toolkit. @tool def current_time-> str: """ Get the current date and time. Returns: Current date and time as a formatted string """ now = datetime.nowreturn f"Current date and time: {now.strftime}" The current_time tool provides a straightforward way to retrieve the current system date and time in a human-readable format. Using Python’s datetime module, it captures the present moment and formats it as YYYY-MM-DD HH:MM:SS. This utility is particularly useful for time-stamping responses or answering user queries about the current date and time within the AI agent’s interaction flow. tools =def create_llm: if ANTHROPIC_API_KEY: return ChatAnthropicelse: class MockLLM: def invoke: last_message = messages.content if messages else "" if anyfor word in): import re numbers = re.findall\s\w]+', last_message) expr = numbersif numbers else "2+2" return AIMessage}, "id": "calc1"}]) elif anyfor word in): query = last_message.replace.replace.replace.stripif not query or len< 3: query = "python programming" return AIMessageelif anyfor word in): city = "New York" words = last_message.lower.splitfor i, word in enumerate: if word == 'in' and i + 1 < len: city = words.titlebreak return AIMessageelif anyfor word in): return AIMessageelif anyfor word in): text = last_message.replace.replace.stripif not text: text = "Sample text for analysis" return AIMessageelse: return AIMessagedef bind_tools: return self printreturn MockLLMllm = create_llmllm_with_tools = llm.bind_toolsWe initialize the language model that powers the AI agent. If a valid Anthropic API key is available, it uses the Claude 3 Haiku model for high-quality responses. Without an API key, a MockLLM is defined to simulate basic tool-routing behavior based on keyword matching, allowing the agent to function offline with limited capabilities. The bind_tools method links the defined tools to the model, enabling it to invoke them as needed. 
def agent_node-> Dict: """Main agent node that processes messages and decides on tool usage.""" messages = stateresponse = llm_with_tools.invokereturn {"messages":} def should_continue-> str: """Determine whether to continue with tool calls or end.""" last_message = stateif hasattrand last_message.tool_calls: return "tools" return END We define the agent’s core decision-making logic. The agent_node function handles incoming messages, invokes the language model, and returns the model’s response. The should_continue function then evaluates whether the model’s response includes tool calls. If so, it routes control to the tool execution node; otherwise, it directs the flow to end the interaction. These functions enable dynamic and conditional transitions within the agent’s workflow. def create_agent_graph: tool_node = ToolNodeworkflow = StateGraphworkflow.add_nodeworkflow.add_nodeworkflow.add_edgeworkflow.add_conditional_edgesworkflow.add_edgememory = MemorySaverapp = workflow.compilereturn app printagent = create_agent_graphprintWe construct the LangGraph-powered workflow that defines the AI agent’s operational structure. It initializes a ToolNode to handle tool executions and uses a StateGraph to organize the flow between agent decisions and tool usage. Nodes and edges are added to manage transitions: starting with the agent, conditionally routing to tools, and looping back as needed. A MemorySaver is integrated for persistent state tracking across turns. The graph is compiled into an executable application, enabling a structured, memory-aware multi-tool agent ready for deployment. def test_agent: """Test the agent with various queries.""" config = {"configurable": {"thread_id": "test-thread"}} test_queries =printfor i, query in enumerate: printprinttry: response = agent.invoke]}, config=config ) last_message = responseprintexcept Exception as e: print}\n") The test_agent function is a validation utility that ensures that the LangGraph agent responds correctly across different use cases. It runs predefined queries, arithmetic, web search, weather, time, and text analysis, and prints the agent’s responses. Using a consistent thread_id for configuration, it invokes the agent with each query. It neatly displays the results, helping developers verify tool integration and conversational logic before moving to interactive or production use. def chat_with_agent: """Interactive chat function.""" config = {"configurable": {"thread_id": "interactive-thread"}} printprintprintwhile True: try: user_input = input.stripif user_input.lowerin: printbreak elif user_input.lower== 'help': printprint?'") printprintprintprintprintcontinue elif not user_input: continue response = agent.invoke]}, config=config ) last_message = responseprintexcept KeyboardInterrupt: printbreak except Exception as e: print}\n") The chat_with_agent function provides an interactive command-line interface for real-time conversations with the LangGraph multi-tool agent. It supports natural language queries and recognizes commands like “help” for usage guidance and “quit” to exit. Each user input is processed through the agent, which dynamically selects and invokes appropriate response tools. The function enhances user engagement by simulating a conversational experience and showcasing the agent’s capabilities in handling various queries, from math and web search to weather, text analysis, and time retrieval. 
if __name__ == "__main__":
    test_agent()
    print("=" * 60)
    print("🎉 LangGraph Multi-Tool Agent is ready!")
    print("=" * 60)
    chat_with_agent()

def quick_demo():
    """Quick demonstration of agent capabilities."""
    config = {"configurable": {"thread_id": "demo"}}
    demos = [
        ("Math", "Calculate the square root of 144 plus 5 times 3"),
        ("Search", "Find recent news about artificial intelligence"),
        ("Time", "What's the current date and time?")
    ]
    print("🚀 Quick Demo of Agent Capabilities\n")
    for category, query in demos:
        print(f"[{category}] Query: {query}")
        try:
            response = agent.invoke(
                {"messages": [HumanMessage(content=query)]},
                config=config
            )
            print(f"Response: {response['messages'][-1].content}\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

print("\n" + "="*60)
print("🔧 Usage Instructions:")
print("1. Add your ANTHROPIC_API_KEY to use Claude model")
print("   os.environ['ANTHROPIC_API_KEY'] = 'your-anthropic-api-key'")
print("2. Run quick_demo() for a quick demonstration")
print("3. Run chat_with_agent() for interactive chat")
print("4. The agent supports: calculations, web search, weather, text analysis, and time")
print("5. Example: 'Calculate 15*7+23' or 'Search for Python tutorials'")
print("="*60)

Finally, we orchestrate the execution of the LangGraph multi-tool agent. If the script is run directly, it first calls test_agent() to validate functionality with sample queries, then launches the interactive chat_with_agent() mode for real-time conversation. The quick_demo() function briefly showcases the agent’s capabilities on math, search, and time queries. Clear usage instructions are printed at the end, guiding users on configuring the API key, running demonstrations, and interacting with the agent. This provides a smooth onboarding experience for users to explore and extend the agent’s functionality.

In conclusion, this step-by-step tutorial shows how to build an effective multi-tool AI agent that leverages LangGraph and Claude’s generative capabilities. With straightforward explanations and hands-on demonstrations, the guide empowers users to integrate diverse utilities into a cohesive and interactive system. The agent’s flexibility in performing tasks, from complex calculations to dynamic information retrieval, showcases the versatility of modern AI development frameworks, and the inclusion of user-friendly functions for both testing and interactive chat enables immediate application in various contexts. Developers can confidently extend and customize their AI agents from this foundation. Check out the Notebook on GitHub.
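As a concrete starting point for such extensions, any new capability can be registered with the same @tool pattern and wired in without touching the graph definition. A minimal sketch (the unit_converter tool and its conversion table are illustrative inventions, not part of the original notebook):

from langchain_core.tools import tool

@tool
def unit_converter(value: float, from_unit: str, to_unit: str) -> str:
    """Convert between common length units (m, km, mi, ft)."""
    to_meters = {"m": 1.0, "km": 1000.0, "mi": 1609.344, "ft": 0.3048}
    if from_unit not in to_meters or to_unit not in to_meters:
        return f"Unsupported unit; choose from {sorted(to_meters)}."
    converted = value * to_meters[from_unit] / to_meters[to_unit]
    return f"{value} {from_unit} = {converted:.4f} {to_unit}"

# Register the tool, re-bind the model, and rebuild the graph so both
# the LLM and the ToolNode can see the new entry.
tools.append(unit_converter)
llm_with_tools = llm.bind_tools(tools)
agent = create_agent_graph()

Because agent_node reads the module-level llm_with_tools and create_agent_graph reads the module-level tools list, rebuilding after registration is all that is needed for a query like "Convert 5 km to mi" to route to the new tool.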
  • Why a new anti-revenge porn law has free speech experts alarmed 

    Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. 
    The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim’s takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance. 
    “Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at Electronic Frontier Foundation, a digital rights organization, told TechCrunch.
Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires that takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.
    “I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it’s gonna be consensual porn,” McKinney said. 
Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that “keeping trans content away from children is protecting kids.”
    Because of the liability that platforms face if they don’t take down an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” said McKinney.

    Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch’s requests for more information about how they’ll verify whether the person requesting a takedown is a victim. 
    Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim. 
    Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t “reasonably comply” with takedown demands as committing an “unfair or deceptive act or practice” – even if the host isn’t a commercial entity.
    “This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement. 
    Proactive monitoring
    McKinney predicts that platforms will start moderating content before it’s disseminated so they have fewer problematic posts to take down in the future. 
    Platforms are already using AI to monitor for harmful content.
    Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material. Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal. 
    “We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.” 
    Hive’s model is a software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload to monitor before anything is sent out to the community. 
    A Reddit spokesperson told TechCrunch the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim. 
McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.
    Meta, Signal, and Apple have not responded to TechCrunch’s request for more information on their plans for encrypted messaging.
    Broader free speech implications
    On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 
    “And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.” 
    While the audience laughed at the comment, not everyone took it as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.
    On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump’s demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status. 
“At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicit about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,” McKinney said.
  • Google has a massive mobile opportunity, and it's partly thanks to Apple

An Android presentation at Google I/O 2025. (Image: Google)
    Google's announcements at its I/O developer conference this week had analysts bullish on its AI.
    AI features could be a "Trojan horse" for Google's Android products, Bank of America analysts wrote.
    Apple's AI mess has given Google a major mobile opportunity.

Google's phones, tablets, and, yes, XR glasses are all about to be supercharged by AI.
    Google needs to seize this moment. Bank of America analysts this week even called Google's slew of new AI announcements a "Trojan horse" for its device business.
    For years, Apple's iOS and Google's Android have battled it out. Apple leads in the US in phone sales, though it still trails Android globally. The two have also gradually converged; iOS has become more customizable, while Android has become cleaner and easier to use. As hardware upgrades have slowed in recent years, the focus has shifted to the smarts inside the device.
    That could be a big problem for Apple. Its AI rollouts have proven lackluster with users, while more enticing promised features have been delayed. The company is reportedly trying to rebuild Siri entirely using large language models. Right now, it's still behind Google and OpenAI, and that gap continues to widen.
    During Google's I/O conference this week, the search giant bombarded us with new AI features. Perhaps the best example was a particularly grabby demo of Google's "Project Astra" assistant helping someone fix their bike by searching through the bike manual, pulling up a YouTube video, and calling a bike shop to see if certain supplies were in stock.
    It was, of course, a highly polished promotional video, but it made Siri look generations behind.
    "It has long been the case that the best way to bring products to the consumer market is via devices, and that seems truer than ever," wrote Ben Thompson, analyst and Stratechery author, in an I/O dispatch this week.
    "Android is probably going to be the most important canvas for shipping a lot of these capabilities," he added.
Google's golden opportunity
    Apple has done a good job of locking users into its ecosystem with iMessage blue bubbles, features like FaceTime, and peripherals like the Apple Watch that require an iPhone to use.
    Google's Pixel phone line, meanwhile, remains a rounding error when compared to global smartphone shipments. That's less of a problem when Google has huge partners like Samsung that bring all of its AI features to billions of Android users globally.
    While iPhone users will get some of these new features through Google's iOS apps, it's clear that the "universal assistant" the company is building will only see its full potential on Android. Perhaps this could finally get iOS users to make the switch.
    "We're seeing diminishing returns on a hardware upgrade cycle, which means we're now really focused on the software upgrade cycle," Bernstein senior analyst Mark Shmulik told Business Insider.
    Without major changes by Apple, Shmulik said he sees the gap in capabilities between Android and iOS only widening.
    "If it widens to the point where someone with an iPhone says, 'Well my phone can't do that,' does it finally cause that switching event from what everyone has always considered this incredible lock-in from Apple?" Shmulik said.
    Beyond smartphones
    Internally, Google has been preparing for this moment. The company merged its Pixel, Chrome, and Android teams last year to capitalize on the AI opportunity.
    "We are going to be very fast-moving to not miss this opportunity," Google's Android chief Sameer Samat told BI at last year's I/O. "It's a once-in-a-generation moment to reinvent what phones can do. We are going to seize that moment."
    A year on, Google appears to be doing just that. Much of what the company demoed this week is either rolling out to devices imminently or in the coming weeks.
    Google still faces the challenge that its relationships with partners like Samsung have come with the express promise that Google won't give its home-grown devices preferential treatment. So, if Google decides to double down on its Pixel phones at the expense of its partners, it could step into a business land mine.
    Of course, Google needs to think about more than smartphones. Its renewed bet on XR glasses is a bet on what might be the next-generation computing platform. Meta is already selling its own augmented reality glasses, and Apple is now doubling down on its efforts to get its own smart glasses out by the end of 2026, Bloomberg reported.
    Google this week demoed glasses that have a visual overlay to instantly provide information to wearers, which Meta's glasses lack and Apple's first version will reportedly also not have.
    The success of Meta's glasses so far is no doubt encouraging news for Google, as a new era of AI devices is ushered in. Now it's poised to get ahead by leveraging its AI chops, and Apple might give it the exact opening it's waited more than a decade for.
    "I don't know about an open goal," said Shmulik of Apple, "but it does feel like they've earned themselves a penalty kick."
    Have something to share? Contact this reporter via email at hlangley@businessinsider.com or Signal at 628-228-1836. Use a personal email address and a nonwork device; here's our guide to sharing information securely.
  • Apple Intelligence summaries are imperfect, but this one tweak could go a long way

Among all of the Apple Intelligence features announced at WWDC24 last summer, notification summaries are likely one of the more controversial. Users have noticed a number of inaccurate summaries, which has led Apple to tweak the design of notification summaries and disable them entirely for news apps.
While these summaries will never be absolutely perfect, there is one way Apple could improve their quality and accuracy, and I’d like to see Apple consider this idea for iOS 19.

    How Notification Summaries work
Notification summaries aim to help you read through your notifications faster. They work by reading through every notification in a notification stack and summarizing it, all on device. That sounds great in theory, but Apple’s implementation has one fatal flaw:
Apple Intelligence can only summarize what’s actually presented in a notification. That may sound like a “well, duh” statement, but hear me out.
A lot of the time, notifications are already written to be brief, so that they’re easy for users to read within a small notification bubble. The model already has to be small enough to run locally on a chip as slow as an A17 Pro, so ideally you don’t want to leave it much room for guessing.
We can already see an example of this at work. In the Mail app, summaries are based on the contents of the entire email, and they are far more accurate than what can appear in Notification Center.

    Summaries currently lack context
    Group chats on iMessage have reply threads, so it’s easy for someone to concurrently discuss something else in a busy chat. That’s great, but Apple Intelligence has no way of understanding that context. This often results in differing details all merging into one inaccurate summary.
    The model is summarizing already-short notifications in the order they appear, and that isn’t a perfect approach a lot of the time.
That leads me to my suggestion: allow developers to provide additional background context to Apple’s on-device model. In my iMessage example, Apple could give the model context about what a new text is in response to.
    What I’d like to see Apple do
    If apps could provide additional background context to Apple’s model, many summary inaccuracies could be alleviated. This background context would otherwise be invisible to users, but it’d help guide Apple’s models.
    Back in December, Apple Intelligence inaccurately summarized a BBC News story regarding Luigi Mangione. It was a large misrepresentation of the actual story, to say the least. Apple has since disabled notification summaries for news apps.
    However, in a world where developers could provide background context, it would be possible for the BBC, for example, to provide the lead paragraph of a story. This would give Apple Intelligence more info to work with, allowing for summaries to be more accurate.
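To make the idea concrete, here is a purely hypothetical sketch of what a context-enriched notification payload could look like, written as a plain Ruby-style hash; the summary_context field, and the idea that publishers could populate it, are my speculation rather than anything Apple has announced:

news_notification = {
  title: "BBC News",
  body: "Markets rally after surprise rate decision",
  # Hypothetical field, not a real Apple API: extra background the on-device
  # summarizer could read but the user would never see.
  summary_context: "Publisher-supplied lead paragraph of the story."
}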
Ultimately, large language models will still do large language model things, especially when the model needs to be small enough to run on a smartphone with 8GB of RAM.
At the end of the day, Apple needs a solution to its notification summaries problem. It certainly can’t leave news summaries disabled forever. Bloomberg recently reported that Apple will allow third-party developers to work with Apple Intelligence models in iOS 19, which is possibly a sign of what’s to come.

  • Building AI Applications in Ruby

    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here.

    Table of Contents

    Introduction

    I thought spas were supposed to be relaxing?

    Microservices are for Macrocompanies

    Ruby and Python: Two Sides of the Same Coin

Recent AI-based Gems

    Summary

    Introduction

    It’s not often that you hear the Ruby language mentioned when discussing AI.

    Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python.

    However, when it comes to building web applications with AI integration, I believe Ruby is the better language.

    As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things:

    Applications take too long to build

    Developers are quoting insane prices to build custom web apps

    These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost?

    I thought spas were supposed to be relaxing?

One big piece of the puzzle is the recent rise of single-page applications (SPAs). The most popular stack used today in building modern SPAs is MERN (MongoDB, Express.js, React.js, Node.js). The stack is popular for a few reasons:

It is a JavaScript-only stack, across both front-end and back-end. Having to code in only one language is pretty nice!

    SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance.

There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.

React was created by Meta.

Lots of money and effort have been thrown at the library, helping to polish and promote the product.

    Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity.

Traditional web development was done using the Model-View-Controller (MVC) paradigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”.

    In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs.

    To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend.

    MVCs vs. SPAs. Image generated by ChatGPT.

Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexity (and yelling, if The Bear is to be believed) in the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food.

SPAs are like eating at a buffet-style restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide what foods to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc.

Andrej Karpathy recently tweeted about his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space.

The reality of building web apps in 2025 is that it's a bit like assembling IKEA furniture. There's no "full-stack" product with batteries included, you have to piece together and configure many individual services:
– frontend / backend (e.g. React, Next.js, APIs)
– hosting
…
— Andrej Karpathy (@karpathy) March 27, 2025

In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Rails (often denoted as Rails) to be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see.

Django is the most popular Python web framework, and it also has a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so its absence was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to easily self-host their apps instead of going through a provider like Heroku. They also recently released a stack of tools meant to replace external services like Redis.
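To show what “built in” buys you, here is a minimal sketch of a Rails background job using Active Job; the job name, the Article model, and the Summarizer call are placeholders of my own, not from any real app:

class SummarizeArticleJob < ApplicationJob
  queue_as :default

  def perform(article_id)
    article = Article.find(article_id)
    # Slow LLM work happens here, safely off the web request path.
    article.update!(summary: Summarizer.call(article.body))
  end
end

# Enqueue from a controller or model; the configured queue backend runs it later.
SummarizeArticleJob.perform_later(article.id)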

    “But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles tools like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times where you do need to include custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake, and eat it too!
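As a small taste of Hotwire, here is a minimal Turbo Frame sketch, assuming the turbo-rails gem and a hypothetical messages list; clicking the link fetches the next page and swaps in only the matching frame, with no custom JavaScript:

<%# app/views/messages/index.html.erb %>
<%= turbo_frame_tag "messages" do %>
  <%= render @messages %>
  <%= link_to "Load more", messages_path(page: 2) %>
<% end %>

As long as the response for page 2 renders a frame with the same "messages" id, Turbo replaces just that fragment of the page.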

    SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture.

    Microservices are for Macrocompanies

    Once again, we find ourselves comparing the simple past with the complexity of today.

    In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app — such as the user interface, business logic, and data handling — are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo.

    Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. You are working with a single database schema containing all of your tables, making queries and joins straightforward. Deployment is simple, since you just have one container to look at and modify.

    However, once your company scales to the size of a Google or Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application!

    To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code.
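In Ruby terms, the trade-off looks roughly like this; OpenaiService and the internal service URL are hypothetical stand-ins:

# Monolith: an in-process method call
summary = OpenaiService.summarize(text)

# Microservices: the same request becomes a network hop
require "net/http"
require "json"

uri = URI("http://openai-service.internal/summarize")  # hypothetical internal URL
response = Net::HTTP.post(uri, { text: text }.to_json,
                          "Content-Type" => "application/json")
summary = JSON.parse(response.body)["summary"]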

    Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue. When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant:

    Using Ruby on Rails for web development. It’s been battle-tested in this area for decades.

    Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe.

    Two different languages, each focused on its area of specialty. What could go wrong?

Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker Compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked.

    “Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try.

    And I’m glad I did.

    I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day.

    If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages.

    Ruby and Python: Two Sides of the Same Coin

    I consider Python and Ruby to be like cousins. Both languages incorporate:

    High-level Interpretation: This means they abstract away a lot of the complexity of low-level programming details, such as memory management.

    Dynamic Typing: Neither language requires you to specify if a variable is an int, float, string, etc. The types are checked at runtime.

Object-Oriented Programming: Both languages are object-oriented. Both support classes, inheritance, polymorphism, etc. Ruby is more “pure”, in the sense that literally everything is an object, whereas in Python a few things are not objects (see the sketch after this list).

    Readable and Concise Syntax: Both are considered easy to learn. Either is great for a first-time learner.

    Wide Ecosystem of Packages: Packages to do all sorts of cool things are available in both languages. In Python they are called libraries, and in Ruby they are called gems.
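Here is that “everything is an object” point made concrete; each line is plain Ruby you can run in irb:

1.class                 # => Integer -- even integer literals are objects
nil.respond_to?(:to_s)  # => true -- nil is an object too
(-7).abs                # => 7 -- methods hang directly off numbers
"hi".upcase             # => "HI" -- strings carry their own behavior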

    The primary difference between the two languages lies in their philosophy and design principles. Python’s core philosophy can be described as:

    There should be one — and preferably only one — obvious way to do something.

    In theory, this should emphasize simplicity, readability, and clarity. Ruby’s philosophy can be described as:

    There’s always more than one way to do something. Maximize developer happiness.

    This was a shock to me when I switched over from Python. Check out this simple example emphasizing this philosophical difference:

# A fight over philosophy: iterating over an array

# Pythonic way
for i in range(1, 6):
    print(i)

# Ruby way, option 1
(1..5).each do |i|
  puts i
end

# Ruby way, option 2
for i in 1..5
  puts i
end

# Ruby way, option 3
5.times do |i|
  puts i + 1
end

# Ruby way, option 4
(1..5).each { |i| puts i }

Another difference between the two is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most developers indent inside Ruby blocks anyway, but this is entirely optional. Examples of these syntactic differences can be seen in the code shown above.

There are a lot of other little differences to learn. For example, in Python string interpolation is done using f-strings: f"Hello, {name}!", while in Ruby it is done with the #{} syntax inside double-quoted strings: "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency over to Ruby.

    Recent AI-based Gems

Despite rarely coming up in conversations about AI, Ruby has seen some recent advances in the world of gems. I will highlight some of the most impressive recent releases that we have been using at our agency to build AI apps:

    RubyLLM — Any GitHub repo that gets more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, let’s take a look at a tutorial demonstrating multi-turn conversations:

    require 'ruby_llm'

    # Create a model and give it instructions
    chat = RubyLLM.chat
    chat.with_instructions "You are a friendly Ruby expert who loves to help beginners."

    # Multi-turn conversation
    chat.ask "Hi! What does attr_reader do in Ruby?"
    # => "Ruby creates a getter method for each symbol...

    # Stream responses in real time
    chat.ask "Could you give me a short example?" do |chunk|
    print chunk.content
    end
    # => "Sure!
    # ```ruby
    # class Person
    # attr...

    Simply amazing. Multi-turn conversations are handled automatically for you. Streaming is a breeze. Compare this to a similar implementation in LangChain:

from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage

SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners."
chat = ChatOpenAI(model="gpt-4o-mini", streaming=True)  # model name illustrative

history = [SystemMessage(content=SYSTEM_PROMPT)]

def ask(question: str) -> None:
    """Stream the answer token-by-token and keep the context in memory."""
    history.append(HumanMessage(content=question))
    answer = None
    # .stream yields message chunks as they arrive
    for chunk in chat.stream(history):
        print(chunk.content, end="", flush=True)
        answer = chunk if answer is None else answer + chunk
    print()  # newline after the answer
    # the accumulated final chunk has the full message content
    history.append(AIMessage(content=answer.content))

ask("Hi! What does attr_reader do in Ruby?")
ask("Could you give me a short example?")

Yikes. And it’s important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick.

    Neighbors — This is an excellent library to use for nearest-neighbors search in a Rails application. Very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same guy who wrote the pgvector extension that allows Postgres to behave as a vector database.
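A minimal sketch of the gem’s documented usage pattern; the Document model, its vector column, and the embedding client are assumptions of mine:

class Document < ApplicationRecord
  has_neighbors :embedding  # backed by a vector column, e.g. via pgvector
end

# Find the five stored documents closest to a query embedding (RAG retrieval)
query_embedding = EmbeddingClient.embed("How do I reset my password?")  # hypothetical client
Document.nearest_neighbors(:embedding, query_embedding, distance: "cosine").first(5)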

    Async — This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. Fibers are like mini-threads that each have their own mini call stack. While not strictly a gem for AI, it has helped us create features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs.
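A small sketch of that concurrent-scraper pattern, assuming Ruby 3.x (where blocking I/O inside an Async task yields to the scheduler) and an illustrative URL list:

require "async"
require "net/http"

urls = ["https://example.com/a", "https://example.com/b"]  # illustrative

Async do |task|
  pages = urls.map { |url|
    # Each fetch runs in its own fiber; the I/O waits overlap instead of stacking.
    task.async { Net::HTTP.get(URI(url)) }
  }.map(&:wait)
  puts pages.map(&:length).inspect
end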

    Torch.rb — If you are interested in training deep learning models, then surely you have heard of PyTorch. Well, PyTorch is built on LibTorch, which essentially has a lot of C/C++ code under the hood to perform ML operations quickly. Andrew Kane took LibTorch and made a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby.

    Summary

In short: building a web application with AI integration quickly and cheaply calls for a monolithic architecture. A monolith more or less demands a monolingual application, and a single language is what you need if your end goal is quality apps delivered with speed. Your main options are either Python or Ruby. If you go with Python, you will probably use Django for your web framework. If you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django’s lack of features disappointing. Rails has impressed us with its feature set and emphasis on simplicity. We were thrilled to find almost no issues on the AI side.

Of course, there are times when you will not want to use Ruby. If you are conducting research in AI or training machine learning models from scratch, then you will likely want to stick with Python. Research almost never involves building web applications. At most you’ll build a simple interface or dashboard in a notebook, but nothing production-ready. You’ll likely want the latest PyTorch updates to ensure your training runs quickly. You may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you’ll even try your hand at Mojo.

    But if your goal is to integrate the latest LLMs — either open or closed source — into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves!

    In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.

     If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com

    The post Building AI Applications in Ruby appeared first on Towards Data Science.
    #building #applications #ruby
    Building AI Applications in Ruby
    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here. Table of Contents Introduction I thought spas were supposed to be relaxing? Microservices are for Macrocompanies Ruby and Python: Two Sides of the Same Coin Recent AI Based Gems Summary Introduction It’s not often that you hear the Ruby language mentioned when discussing AI. Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python. However, when it comes to building web applications with AI integration, I believe Ruby is the better language. As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things: Applications take too long to build Developers are quoting insane prices to build custom web apps These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost? I thought spas were supposed to be relaxing? One big piece of the puzzle is the recent rise of single-page applications. The most popular stack used today in building modern SPAs is MERN . The stack is popular for a few reasons: It is a JavaScript-only stack, across both front-end and back-end. Having to only code in only one language is pretty nice! SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance. There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.React was created by Meta. Lots of money and effort has been thrown at the library, helping to polish and promote the product. Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity. Traditional web development was done using the Model-View-Controllerparadigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”. In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs. To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend. MVCs vs. SPAs. Image generated by ChatGPT. 
Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexityin the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food. SPAs are like eating at a buffet-style dining restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide what food to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc. Andrej Karpathy had a tweet recently discussing his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space. The reality of building web apps in 2025 is that it's a bit like assembling IKEA furniture. There's no "full-stack" product with batteries included, you have to piece together and configure many individual services:– frontend / backend– hosting…— Andrej KarpathyMarch 27, 2025 In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Railsto be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see. Django is the most popular Python web framework, and also has a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so to not include this was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to easily self-host their apps instead of going through a provider like Heroku. They also recently released a stack of tools meant to replace external services like Redis. “But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles tools like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times where you do need to include custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake, and eat it too! SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture. Microservices are for Macrocompanies Once again, we find ourselves comparing the simple past with the complexity of today. In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app — such as the user interface, business logic, and data handling — are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo. Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. You are working with a single database schema containing all of your tables, making queries and joins straightforward. 
Deployment is simple, since you just have one container to look at and modify. However, once your company scales to the size of a Google or Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application! To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code. Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue. When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant: Using Ruby on Rails for web development. It’s been battle-tested in this area for decades. Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe. Two different languages, each focused on its area of specialty. What could go wrong? Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked. “Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try. And I’m glad I did. I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day. If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages. Ruby and Python: Two Sides of the Same Coin I consider Python and Ruby to be like cousins. Both languages incorporate: High-level Interpretation: This means they abstract away a lot of the complexity of low-level programming details, such as memory management. Dynamic Typing: Neither language requires you to specify if a variable is an int, float, string, etc. The types are checked at runtime. Object-Oriented Programming: Both languages are object-oriented. Both support classes, inheritance, polymorphism, etc. 
Ruby is more “pure”, in the sense that literally everything is an object, whereas in Python a few thingsare not objects. Readable and Concise Syntax: Both are considered easy to learn. Either is great for a first-time learner. Wide Ecosystem of Packages: Packages to do all sorts of cool things are available in both languages. In Python they are called libraries, and in Ruby they are called gems. The primary difference between the two languages lies in their philosophy and design principles. Python’s core philosophy can be described as: There should be one — and preferably only one — obvious way to do something. In theory, this should emphasize simplicity, readability, and clarity. Ruby’s philosophy can be described as: There’s always more than one way to do something. Maximize developer happiness. This was a shock to me when I switched over from Python. Check out this simple example emphasizing this philosophical difference: # A fight over philosophy: iterating over an array # Pythonic way for i in range: print# Ruby way, option 1.each do |i| puts i end # Ruby way, option 2 for i in 1..5 puts i end # Ruby way, option 3 5.times do |i| puts i + 1 end # Ruby way, option 4.each { |i| puts i } Another difference between the two is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most include indentation inside Ruby blocks, but this is entirely optional. Examples of these syntactic differences can be seen in the code shown above. There are a lot of other little differences to learn. For example, in Python string interpolation is done using f-strings: f"Hello, {name}!", while in Ruby they are done using hashtags: "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency over to Ruby. Recent AI-based Gems Despite not being in the conversation when discussing AI, Ruby has had some recent advancements in the world of gems. I will highlight some of the most impressive recent releases that we have been using in our agency to build AI apps: RubyLLM — Any GitHub repo that gets more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, let’s take a look at a tutorial demonstrating multi-turn conversations: require 'ruby_llm' # Create a model and give it instructions chat = RubyLLM.chat chat.with_instructions "You are a friendly Ruby expert who loves to help beginners." # Multi-turn conversation chat.ask "Hi! What does attr_reader do in Ruby?" # => "Ruby creates a getter method for each symbol... # Stream responses in real time chat.ask "Could you give me a short example?" do |chunk| print chunk.content end # => "Sure! # ```ruby # class Person # attr... Simply amazing. Multi-turn conversations are handled automatically for you. Streaming is a breeze. Compare this to a similar implementation in LangChain: from langchain_openai import ChatOpenAI from langchain_core.schema import SystemMessage, HumanMessage, AIMessage from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners." 
chat = ChatOpenAI]) history =def ask-> None: """Stream the answer token-by-token and keep the context in memory.""" history.append) # .stream yields message chunks as they arrive for chunk in chat.stream: printprint# newline after the answer # the final chunk has the full message content history.append) askaskYikes. And it’s important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick. Neighbors — This is an excellent library to use for nearest-neighbors search in a Rails application. Very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same guy who wrote the pgvector extension that allows Postgres to behave as a vector database. Async — This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. Fibers are like mini-threads that each have their own mini call stack. While not strictly a gem for AI, it has helped us create features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs. Torch.rb — If you are interested in training deep learning models, then surely you have heard of PyTorch. Well, PyTorch is built on LibTorch, which essentially has a lot of C/C++ code under the hood to perform ML operations quickly. Andrew Kane took LibTorch and made a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby. Summary In short: building a web application with AI integration quickly and cheaply requires a monolithic architecture. A monolith demands a monolingual application, which is necessary if your end goal is quality apps delivered with speed. Your main options are either Python or Ruby. If you go with Python, you will probably use Django for your web framework. If you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django’s lack of features disappointing. Rails has impressed us with its feature set and emphasis on simplicity. We were thrilled to find almost no issues on the AI side. Of course, there are times where you will not want to use Ruby. If you are conducting research in AI or training machine learning models from scratch, then you will likely want to stick with Python. Research almost never involves building Web Applications. At most you’ll build a simple interface or dashboard in a notebook, but nothing production-ready. You’ll likely want the latest PyTorch updates to ensure your training runs quickly. You may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you’ll even try your hand at Mojo. But if your goal is to integrate the latest LLMs — either open or closed source — into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves! In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.  If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com The post Building AI Applications in Ruby appeared first on Towards Data Science. #building #applications #ruby
    TOWARDSDATASCIENCE.COM
    Building AI Applications in Ruby
    This is the second in a multi-part series on creating web applications with generative AI integration. Part 1 focused on explaining the AI stack and why the application layer is the best place in the stack to be. Check it out here. Table of Contents Introduction I thought spas were supposed to be relaxing? Microservices are for Macrocompanies Ruby and Python: Two Sides of the Same Coin Recent AI Based Gems Summary Introduction It’s not often that you hear the Ruby language mentioned when discussing AI. Python, of course, is the king in this world, and for good reason. The community has coalesced around the language. Most model training is done in PyTorch or TensorFlow these days. Scikit-learn and Keras are also very popular. RAG frameworks such as LangChain and LlamaIndex cater primarily to Python. However, when it comes to building web applications with AI integration, I believe Ruby is the better language. As the co-founder of an agency dedicated to building MVPs with generative AI integration, I frequently hear potential clients complaining about two things: Applications take too long to build Developers are quoting insane prices to build custom web apps These complaints have a common source: complexity. Modern web apps have a lot more complexity in them than in the good ol’ days. But why is this? Are the benefits brought by complexity worth the cost? I thought spas were supposed to be relaxing? One big piece of the puzzle is the recent rise of single-page applications (SPAs). The most popular stack used today in building modern SPAs is MERN (MongoDB, Express.js, React.js, Node.js). The stack is popular for a few reasons: It is a JavaScript-only stack, across both front-end and back-end. Having to only code in only one language is pretty nice! SPAs can offer dynamic designs and a “smooth” user experience. Smooth here means that when some piece of data changes, only a part of the site is updated, as opposed to having to reload the whole page. Of course, if you don’t have a modern smartphone, SPAs won’t feel so smooth, as they tend to be pretty heavy. All that JavaScript starts to drag down the performance. There is a large ecosystem of libraries and developers with experience in this stack. This is pretty circular logic: is the stack popular because of the ecosystem, or is there an ecosystem because of the popularity? Either way, this point stands.React was created by Meta. Lots of money and effort has been thrown at the library, helping to polish and promote the product. Unfortunately, there are some downsides of working in the MERN stack, the most critical being the sheer complexity. Traditional web development was done using the Model-View-Controller (MVC) paradigm. In MVC, all of the logic managing a user’s session is handled in the backend, on the server. Something like fetching a user’s data was done via function calls and SQL statements in the backend. The backend then serves fully built HTML and CSS to the browser, which just has to display it. Hence the name “server”. In a SPA, this logic is handled on the user’s browser, in the frontend. SPAs have to handle UI state, application state, and sometimes even server state all in the browser. API calls have to be made to the backend to fetch user data. There is still quite a bit of logic on the backend, mainly exposing data and functionality through APIs. To illustrate the difference, let me use the analogy of a commercial kitchen. The customer will be the frontend and the kitchen will be the backend. MVCs vs. SPAs. 
Traditional MVC apps are like dining at a full-service restaurant. Yes, there is a lot of complexity (and yelling, if The Bear is to be believed) in the backend. But the frontend experience is simple and satisfying: all the customer has to do is pick up a fork and eat their food.

SPAs are like eating at a buffet-style restaurant. There is still quite a bit of complexity in the kitchen. But now the customer also has to decide which foods to grab, how to combine them, how to arrange them on the plate, where to put the plate when finished, etc.

Andrej Karpathy had a tweet recently discussing his frustration with attempting to build web apps in 2025. It can be overwhelming for those new to the space.

“The reality of building web apps in 2025 is that it’s a bit like assembling IKEA furniture. There’s no ‘full-stack’ product with batteries included, you have to piece together and configure many individual services: frontend / backend (e.g. React, Next.js, APIs), hosting…” — Andrej Karpathy (@karpathy), March 27, 2025

In order to build MVPs with AI integration rapidly, our agency has decided to forgo the SPA and instead go with the traditional MVC approach. In particular, we have found Ruby on Rails (often shortened to Rails) to be the framework best suited to quickly developing and deploying quality apps with AI integration. Ruby on Rails was developed by David Heinemeier Hansson in 2004 and has long been known as a great web framework, but I would argue it has recently made leaps in its ability to incorporate AI into apps, as we will see.

Django is the most popular Python web framework, and it also follows a more traditional pattern of development. Unfortunately, in our testing we found Django was simply not as full-featured or “batteries included” as Rails is. As a simple example, Django has no built-in background job system. Nearly all of our apps incorporate background jobs, so this omission was disappointing. We also prefer how Rails emphasizes simplicity, with Rails 8 encouraging developers to self-host their apps instead of going through a provider like Heroku. The Rails team also recently released a stack of tools (Solid Queue, Solid Cache, and Solid Cable) meant to replace external services like Redis.

“But what about the smooth user experience?” you might ask. The truth is that modern Rails includes several ways of crafting SPA-like experiences without all of the heavy JavaScript. The primary tool is Hotwire, which bundles libraries like Turbo and Stimulus. Turbo lets you dynamically change pieces of HTML on your webpage without writing custom JavaScript. For the times when you do need custom JavaScript, Stimulus is a minimal JavaScript framework that lets you do just that. Even if you want to use React, you can do so with the react-rails gem. So you can have your cake and eat it too!
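To give a sense of how Turbo achieves this, here is a minimal sketch of a Turbo Stream response in a Rails controller. The Message model, partial, and route are hypothetical stand-ins for illustration, not taken from the article:

```ruby
# Hypothetical controller: on create, Turbo swaps just the new message
# into the page; no custom JavaScript is written on the frontend.
class MessagesController < ApplicationController
  def create
    @message = Message.create!(params.require(:message).permit(:body))

    respond_to do |format|
      # Turbo intercepts the form submission and applies this stream,
      # appending the rendered partial to the element with id "messages".
      format.turbo_stream do
        render turbo_stream: turbo_stream.append(
          "messages",
          partial: "messages/message",
          locals: { message: @message }
        )
      end
      format.html { redirect_to messages_path } # graceful non-Turbo fallback
    end
  end
end
```

The form that triggers this is an ordinary Rails form_with; with the turbo-rails gem installed, Turbo upgrades the submission automatically.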
SPAs are not the only reason for the increase in complexity, however. Another has to do with the advent of the microservices architecture.

Microservices are for Macrocompanies

Once again, we find ourselves comparing the simple past with the complexity of today. In the past, software was primarily developed as monoliths. A monolithic application means that all the different parts of your app (such as the user interface, business logic, and data handling) are developed, tested, and deployed as one single unit. The code is all typically housed in a single repo.

Working with a monolith is simple and satisfying. Running a development setup for testing purposes is easy. You are working with a single database schema containing all of your tables, making queries and joins straightforward. Deployment is simple, since you just have one container to look at and modify.

However, once your company scales to the size of a Google or an Amazon, real problems begin to emerge. With hundreds or thousands of developers contributing simultaneously to a single codebase, coordinating changes and managing merge conflicts becomes increasingly difficult. Deployments also become more complex and risky, since even minor changes can blow up the entire application!

To manage these issues, large companies began to coalesce around the microservices architecture. This is a style of programming where you design your codebase as a set of small, autonomous services. Each service owns its own codebase, data storage, and deployment pipelines. As a simple example, instead of stuffing all of your logic regarding an OpenAI client into your main app, you can move that logic into its own service. To call that service, you would then typically make REST calls, as opposed to function calls. This ups the complexity, but resolves the merge conflict and deployment issues, since each team in the organization gets to work on their own island of code.
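To make that trade-off concrete, here is a minimal hypothetical sketch contrasting the two call styles in Ruby. The LlmClient class, the service URL, and the JSON payload shape are all illustrative assumptions:

```ruby
require "net/http"
require "json"
require "uri"

article_text = "Some long article..." # placeholder input

# Monolith: the "call" is a plain in-process method (LlmClient is hypothetical).
summary = LlmClient.summarize(article_text)

# Microservices: the same operation now crosses the network to a separate
# Python service. URL, endpoint, and payload shape are assumptions.
uri = URI("http://ai-service.internal:8000/summarize")
response = Net::HTTP.post(uri, { text: article_text }.to_json,
                          "Content-Type" => "application/json")
raise "AI service error: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
summary = JSON.parse(response.body).fetch("summary")
```

Every one of those extra lines is also an extra failure mode: serialization, routing, timeouts, and service discovery all replace what used to be a single stack frame.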
Another benefit to using microservices is that they allow for a polyglot tech stack. This means that each team can code up their service using whatever language they prefer. If one team prefers JavaScript while another likes Python, this is no issue.

When we first began our agency, this idea of a polyglot stack pushed us to use a microservices architecture. Not because we had a large team, but because we each wanted to use the “best” language for each functionality. This meant:
- Using Ruby on Rails for web development. It’s been battle-tested in this area for decades.
- Using Python for the AI integration, perhaps deployed with something like FastAPI. Serious AI work requires Python, I was led to believe.

Two different languages, each focused on its area of specialty. What could go wrong? Unfortunately, we found the process of development frustrating. Just setting up our dev environment was time-consuming. Having to wrangle Docker Compose files and manage inter-service communication made us wish we could go back to the beauty and simplicity of the monolith. Having to make a REST call and set up the appropriate routing in FastAPI instead of making a simple function call sucked.

“Surely we can’t develop AI apps in pure Ruby,” I thought. And then I gave it a try. And I’m glad I did. I found the process of developing an MVP with AI integration in Ruby very satisfying. We were able to sprint where before we were jogging. I loved the emphasis on beauty, simplicity, and developer happiness in the Ruby community. And I found the state of the AI ecosystem in Ruby to be surprisingly mature and getting better every day.

If you are a Python programmer and are scared off by learning a new language like I was, let me comfort you by discussing the similarities between the Ruby and Python languages.

Ruby and Python: Two Sides of the Same Coin

I consider Python and Ruby to be like cousins. Both languages incorporate:
- High-level interpretation: both abstract away a lot of the complexity of low-level programming details, such as memory management.
- Dynamic typing: neither language requires you to specify whether a variable is an int, float, string, etc. The types are checked at runtime.
- Object-oriented programming: both support classes, inheritance, polymorphism, etc. Ruby is more “pure”, in the sense that literally everything is an object, whereas in Python a few things (such as if and for statements) are not objects.
- Readable and concise syntax: both are considered easy to learn, and either is great for a first-time learner.
- A wide ecosystem of packages: packages to do all sorts of cool things are available in both languages. In Python they are called libraries, and in Ruby they are called gems.

The primary difference between the two languages lies in their philosophy and design principles. Python’s core philosophy can be described as: there should be one (and preferably only one) obvious way to do something. In theory, this should emphasize simplicity, readability, and clarity. Ruby’s philosophy can be described as: there’s always more than one way to do something; maximize developer happiness. This was a shock to me when I switched over from Python. Check out this simple example emphasizing this philosophical difference:

```python
# A fight over philosophy: iterating over an array
# Pythonic way
for i in range(1, 6):
    print(i)
```

```ruby
# Ruby way, option 1
(1..5).each do |i|
  puts i
end

# Ruby way, option 2
for i in 1..5
  puts i
end

# Ruby way, option 3
5.times do |i|
  puts i + 1
end

# Ruby way, option 4
(1..5).each { |i| puts i }
```

Another difference between the two is syntax style. Python primarily uses indentation to denote code blocks, while Ruby uses do…end or {…} blocks. Most developers still indent inside Ruby blocks, but this is entirely optional. Examples of these syntactic differences can be seen in the code shown above. There are a lot of other little differences to learn. For example, in Python string interpolation is done using f-strings: f"Hello, {name}!", while in Ruby it is done with the #{} syntax: "Hello, #{name}!". Within a few months, I think any competent Python programmer can transfer their proficiency over to Ruby.

Recent AI-Based Gems

Despite not being in the conversation when discussing AI, Ruby has had some recent advancements in the world of gems. I will highlight some of the most impressive recent releases that we have been using in our agency to build AI apps:

RubyLLM (link) — Any GitHub repo that gets more than 2k stars within a few weeks of release deserves a mention, and RubyLLM is definitely worthy. I have used many clunky implementations of LLM providers from libraries like LangChain and LlamaIndex, so using RubyLLM was like a breath of fresh air. As a simple example, let’s take a look at a tutorial demonstrating multi-turn conversations:

```ruby
require 'ruby_llm'

# Create a model and give it instructions
chat = RubyLLM.chat
chat.with_instructions "You are a friendly Ruby expert who loves to help beginners."

# Multi-turn conversation
chat.ask "Hi! What does attr_reader do in Ruby?"
# => "Ruby creates a getter method for each symbol...

# Stream responses in real time
chat.ask "Could you give me a short example?" do |chunk|
  print chunk.content
end
# => "Sure!
# ```ruby
# class Person
#   attr...
```

Simply amazing. Multi-turn conversations are handled automatically for you. Streaming is a breeze. Compare this to a similar implementation in LangChain:

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_core.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

SYSTEM_PROMPT = "You are a friendly Ruby expert who loves to help beginners."

chat = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
history = [SystemMessage(content=SYSTEM_PROMPT)]

def ask(user_text: str) -> None:
    """Stream the answer token-by-token and keep the context in memory."""
    history.append(HumanMessage(content=user_text))
    answer = ""
    # .stream yields message chunks (deltas) as they arrive
    for chunk in chat.stream(history):
        print(chunk.content, end="", flush=True)
        answer += chunk.content
    print()  # newline after the answer
    # accumulate the chunks so the next turn has the full answer in context
    history.append(AIMessage(content=answer))

ask("Hi! What does attr_reader do in Ruby?")
ask("Great - could you show a short example with attr_accessor?")
```

Yikes. And it’s important to note that this is a grug implementation. Want to know how LangChain really expects you to manage memory? Check out these links, but grab a bucket first; you may get sick.

Neighbors (link) — This is an excellent library to use for nearest-neighbors search in a Rails application, and very useful in a RAG setup. It integrates with Postgres, SQLite, MySQL, MariaDB, and more. It was written by Andrew Kane, the same developer who wrote the pgvector extension that allows Postgres to behave as a vector database.
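To give a feel for it, here is a minimal sketch of a RAG-style lookup with Neighbors; the Document model, its pgvector-backed embedding column, and the Embedder helper are hypothetical stand-ins:

```ruby
# Hypothetical model; assumes a pgvector "embedding" column on documents.
class Document < ApplicationRecord
  has_neighbors :embedding
end

# Embed the user's question (Embedder is a hypothetical helper), then pull
# the five closest documents to use as RAG context.
query_embedding = Embedder.embed("How do I deploy a Rails 8 app?")
context_docs = Document.nearest_neighbors(:embedding, query_embedding, distance: "cosine").first(5)
```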
Async (link) — This gem had its first official release back in December 2024, and it has been making waves in the Ruby community. Async is a fiber-based framework for Ruby that runs non-blocking I/O tasks concurrently while letting you write simple, sequential code. Fibers are like mini-threads that each have their own mini call stack. While not strictly a gem for AI, it has helped us create features like web scrapers that run blazingly fast across thousands of pages. We have also used it to handle streaming of chunks from LLMs.
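Here is a minimal sketch of the kind of concurrent fetching we use it for; the URL list is a placeholder, and error handling is omitted for brevity:

```ruby
require "async"
require "net/http"
require "uri"

# Placeholder URLs; in a real scraper this would be thousands of pages.
urls = ["https://example.com/a", "https://example.com/b"]

Async do |parent|
  # Each task starts immediately; Async installs a fiber scheduler, so the
  # blocking Net::HTTP call in one task yields while others wait on the network.
  tasks = urls.map do |url|
    parent.async { Net::HTTP.get(URI(url)) }
  end

  # Wait for all fetches and collect the response bodies.
  bodies = tasks.map(&:wait)
  puts "Fetched #{bodies.sum(&:bytesize)} bytes total"
end
```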
Torch.rb (link) — If you are interested in training deep learning models, then surely you have heard of PyTorch. Well, PyTorch is built on LibTorch, which essentially has a lot of C/C++ code under the hood to perform ML operations quickly. Andrew Kane took LibTorch and made a Ruby adapter over it to create Torch.rb, essentially a Ruby version of PyTorch. Andrew Kane has been a hero in the Ruby AI world, authoring dozens of ML gems for Ruby.

Summary

In short: building a web application with AI integration quickly and cheaply calls for a monolithic architecture. A monolith, in turn, is best built in a single language, which is exactly what you want if your end goal is quality apps delivered with speed. Your main options are either Python or Ruby. If you go with Python, you will probably use Django for your web framework. If you go with Ruby, you will be using Ruby on Rails. At our agency, we found Django’s lack of features disappointing. Rails has impressed us with its feature set and emphasis on simplicity. We were thrilled to find almost no issues on the AI side.

Of course, there are times when you will not want to use Ruby. If you are conducting research in AI or training machine learning models from scratch, then you will likely want to stick with Python. Research almost never involves building web applications. At most you’ll build a simple interface or dashboard in a notebook, but nothing production-ready. You’ll likely want the latest PyTorch updates to ensure your training runs quickly. You may even dive into low-level C/C++ programming to squeeze as much performance as you can out of your hardware. Maybe you’ll even try your hand at Mojo.

But if your goal is to integrate the latest LLMs, open or closed source, into web applications, then we believe Ruby to be the far superior option. Give it a shot yourselves!

In part three of this series, I will dive into a fun experiment: just how simple can we make a web application with AI integration? Stay tuned.

If you’d like a custom web application with generative AI integration, visit losangelesaiapps.com.
  • Why governments keep losing the ‘war on encryption’

    Reports that prominent American national security officials used a freely available encrypted messaging app, coupled with the rise of authoritarian policies around the world, have led to a surge in interest in encrypted apps like Signal and WhatsApp. These apps prevent anyone, including the government and the app companies themselves, from reading messages they intercept.

    The spotlight on encrypted apps is also a reminder of the complex debate pitting government interests against individual liberties. Governments desire to monitor everyday communications for law enforcement, national security and sometimes darker purposes. On the other hand, citizens and businesses claim the right to enjoy private digital discussions in today’s online world.

    The positions governments take often are framed as a “war on encryption” by technology policy experts and civil liberties advocates. As a cybersecurity researcher, I’ve followed the debate for nearly 30 years and remain convinced that this is not a fight that governments can easily win.

    Understanding the ‘golden key’

Traditionally, strong encryption capabilities were considered military technologies crucial to national security and not available to the public. However, in 1991, computer scientist Phil Zimmermann released a new type of encryption software called Pretty Good Privacy (PGP). It was free, open-source software available on the internet that anyone could download. PGP allowed people to exchange email and files securely, accessible only to those with the shared decryption key, in ways similar to highly secured government systems.

    Following an investigation into Zimmermann, the U.S. government came to realize that technology develops faster than law and began to explore remedies. It also began to understand that once something is placed on the internet, neither laws nor policy can control its global availability.

Fearing that terrorists or criminals might use such technology to plan attacks, arrange financing or recruit members, the Clinton administration advocated a system called the Clipper Chip, based on a concept of key escrow. The idea was to give a trusted third party access to the encryption system, access the government could then use when it demonstrated a law enforcement or national security need.

    Clipper was based on the idea of a “golden key,” namely, a way for those with good intentions – intelligence services, police – to access encrypted data, while keeping people with bad intentions – criminals, terrorists – out.

Clipper Chip devices never gained traction outside the U.S. government, in part because the chip’s encryption algorithm was classified and couldn’t be publicly peer-reviewed. However, in the years since, governments around the world have continued to embrace the golden key concept as they grapple with the constant stream of technology developments reshaping how people access and share information.

    Following Edward Snowden’s disclosures about global surveillance of digital communications in 2013, Google and Apple took steps to make it virtually impossible for anyone but an authorized user to access data on a smartphone. Even a court order was ineffective, much to the chagrin of law enforcement. In Apple’s case, the company’s approach to privacy and security was tested in 2016 when the company refused to build a mechanism to help the FBI break into an encrypted iPhone owned by a suspect in the San Bernardino terrorist attack.

At its core, encryption is, fundamentally, very complicated math. And while the golden key concept continues to hold allure for governments, it is mathematically difficult to achieve with an acceptable degree of trust. And even if it were viable, implementing it in practice would make the internet less safe. Security experts agree that any backdoor access, even if hidden or controlled by a trusted entity, is vulnerable to hacking.

    Competing justifications and tech realities

    Governments around the world continue to wrestle with the proliferation of strong encryption in messaging tools, social media and virtual private networks.

    For example, rather than embrace a technical golden key, a recent proposal in France would have provided the government the ability to add a hidden “ghost” participant to any encrypted chat for surveillance purposes. However, legislators removed this from the final proposal after civil liberties and cybersecurity experts warned that such an approach would undermine basic cybersecurity practices and trust in secure systems.

    In 2025, the U.K. government secretly ordered Apple to add a backdoor to its encryption services worldwide. Rather than comply, Apple removed the ability for its iPhone and iCloud customers in the U.K. to use its Advanced Data Protection encryption features. In this case, Apple chose to defend its users’ security in the face of government mandates, which ironically now means that users in the U.K. may be less secure.

    In the United States, provisions removed from the 2020 EARN IT bill would have forced companies to scan online messages and photos to guard against child exploitation by creating a golden-key-type hidden backdoor. Opponents viewed this as a stealth way of bypassing end-to-end encryption. The bill did not advance to a full vote when it was last reintroduced in the 2023-2024 legislative session.

Scanning for child sexual abuse material is a contentious issue when encryption is involved: Although Apple received significant public backlash over its plans to scan user devices for such material in ways that users claimed violated Apple’s privacy stance, victims of child abuse have sued the company for not doing more to protect children.

    Even privacy-centric Switzerland and the European Union are exploring ways of dealing with digital surveillance and privacy in an encrypted world.

    The laws of math and physics, not politics

Governments usually claim that weakening encryption is necessary to fight crime and protect the nation – and there is a valid concern there. However, when that argument fails to win the day, they often turn to claiming that backdoors are needed to protect children from exploitation.

    From a cybersecurity perspective, it is nearly impossible to create a backdoor to a communications product that is only accessible for certain purposes or under certain conditions. If a passageway exists, it’s only a matter of time before it is exploited for nefarious purposes. In other words, creating what is essentially a software vulnerability to help the good guys will inevitably end up helping the bad guys, too.

    Often overlooked in this debate is that if encryption is weakened to improve surveillance for governmental purposes, it will drive criminals and terrorists further underground. Using different or homegrown technologies, they will still be able to exchange information in ways that governments can’t readily access. But everyone else’s digital security will be needlessly diminished.

    This lack of online privacy and security is especially dangerous for journalists, activists, domestic violence survivors and other at-risk communities around the world.

Encryption obeys the laws of math and physics, not politics. Once invented, it can’t be un-invented, even if it frustrates governments. Along those lines, if governments are struggling with strong encryption now, how will they contend with a world where everyone is using significantly more complex techniques like quantum cryptography?

    Governments remain in an unenviable position regarding strong encryption. Ironically, one of the countermeasures the government recommended in response to China’s hacking of global telephone systems in the Salt Typhoon attacks was to use strong encryption in messaging apps such as Signal or iMessage.

    Reconciling that with their ongoing quest to weaken or restrict strong encryption for their own surveillance interests will be a difficult challenge to overcome.

    Richard Forno is a teaching professor of computer science and electrical engineering, and assistant director of the UMBC Cybersecurity Institute at the University of Maryland, Baltimore County.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.