• NVIDIA Scores Consecutive Win for End-to-End Autonomous Driving Grand Challenge at CVPR

    NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference, held this week in Nashville, Tennessee. The announcement was made at the Embodied Intelligence for Autonomous Systems on the Horizon Workshop.
    This marks the second consecutive year that NVIDIA has topped the leaderboard in the End-to-End Driving at Scale category and the third year in a row winning an Autonomous Grand Challenge award at CVPR.
    The theme of this year’s challenge was “Towards Generalizable Embodied Systems” — based on NAVSIM v2, a data-driven, nonreactive autonomous vehicle (AV) simulation framework.
    The challenge offered researchers the opportunity to explore ways to handle unexpected situations, beyond using only real-world human driving data, to accelerate the development of smarter, safer AVs.
    Generating Safe and Adaptive Driving Trajectories
    Participants of the challenge were tasked with generating driving trajectories from multi-sensor data in a semi-reactive simulation, where the ego vehicle’s plan is fixed at the start, but background traffic changes dynamically.
    Submissions were evaluated using the Extended Predictive Driver Model Score, which measures safety, comfort, compliance and generalization across real-world and synthetic scenarios — pushing the boundaries of robust and generalizable autonomous driving research.
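    A score of this kind typically gates candidates on hard safety requirements before averaging softer quality metrics. The sketch below illustrates that structure only; the metric names and weights are hypothetical placeholders, not the official NAVSIM v2 Extended Predictive Driver Model Score formula:

    ```python
    # Illustrative composite driving score: hard checks gate the result to zero,
    # soft metrics (each normalized to 0..1) are combined by a weighted average.
    from dataclasses import dataclass

    @dataclass
    class TrajectoryMetrics:
        no_collision: bool      # hard requirement
        drivable_area: bool     # hard requirement
        comfort: float          # soft metric, 0..1
        progress: float         # soft metric, 0..1
        rule_compliance: float  # soft metric, 0..1

    def composite_score(m: TrajectoryMetrics,
                        weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
        """Return 0 if any hard check fails, else the weighted soft-metric average."""
        if not (m.no_collision and m.drivable_area):
            return 0.0
        w_comfort, w_progress, w_rules = weights
        return (w_comfort * m.comfort
                + w_progress * m.progress
                + w_rules * m.rule_compliance)
    ```

    Gating on hard checks first mirrors how such benchmarks make any unsafe trajectory score zero regardless of how comfortable or rule-compliant it otherwise is.
    
    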
    The NVIDIA AV Applied Research Team’s key innovation was the Generalized Trajectory Scoring (GTRS) method, which generates a variety of trajectories and progressively filters them down to the best one.
    GTRS model architecture showing a unified system for generating and scoring diverse driving trajectories using diffusion- and vocabulary-based trajectories.
    GTRS introduces a combination of coarse trajectory sets covering a wide range of situations and fine-grained trajectories for safety-critical situations, created using a diffusion policy conditioned on the environment. GTRS then uses a transformer decoder distilled from perception-dependent metrics, focusing on safety, comfort and traffic rule compliance. This decoder progressively filters down to the most promising trajectory candidates by capturing subtle but critical differences between similar trajectories.
    This system has proved to generalize well across scenarios, achieving state-of-the-art results on challenging benchmarks and enabling robust, adaptive trajectory selection in diverse driving conditions.
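    The generate-then-filter pipeline described above can be sketched as a two-stage selection: cheap scoring over the full candidate set, then a more expensive scorer applied only to a shortlist. The scoring functions below are stand-ins for GTRS’s diffusion policy and learned decoder, which this sketch does not implement:

    ```python
    # Two-stage progressive trajectory filtering (illustrative stand-in for GTRS).
    import random

    def generate_candidates(n: int, seed: int = 0) -> list[tuple[int, float]]:
        """Produce (trajectory_id, feature) pairs; a stand-in for a trajectory vocabulary."""
        rng = random.Random(seed)
        return [(i, rng.random()) for i in range(n)]

    def coarse_score(c: tuple[int, float]) -> float:
        # Cheap first-pass score applied to every candidate.
        return c[1]

    def fine_score(c: tuple[int, float]) -> float:
        # Stand-in for an expensive learned scorer run only on the shortlist.
        return c[1] ** 2

    def progressive_filter(candidates: list[tuple[int, float]],
                           keep: int = 8) -> tuple[int, float]:
        """Shortlist by coarse score, then pick the fine-score winner."""
        shortlist = sorted(candidates, key=coarse_score, reverse=True)[:keep]
        return max(shortlist, key=fine_score)
    ```

    The design point is the cost split: the expensive scorer only ever sees `keep` candidates, so the full vocabulary can stay large without making selection slow.
    
    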

    NVIDIA Automotive Research at CVPR 
    More than 60 NVIDIA papers were accepted for CVPR 2025, spanning automotive, healthcare, robotics and more.
    In automotive, NVIDIA researchers are advancing physical AI with innovation in perception, planning and data generation. This year, three NVIDIA papers were nominated for the Best Paper Award: FoundationStereo, Zero-Shot Monocular Scene Flow and Difix3D+.
    The NVIDIA papers listed below showcase breakthroughs in stereo depth estimation, monocular motion understanding, 3D reconstruction, closed-loop planning, vision-language modeling and generative simulation — all critical to building safer, more generalizable AVs:

    Diffusion Renderer: Neural Inverse and Forward Rendering With Video Diffusion Models
    FoundationStereo: Zero-Shot Stereo Matching (Best Paper nominee)
    Zero-Shot Monocular Scene Flow Estimation in the Wild (Best Paper nominee)
    Difix3D+: Improving 3D Reconstructions With Single-Step Diffusion Models (Best Paper nominee)
    3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting
    Closed-Loop Supervised Fine-Tuning of Tokenized Traffic Models
    Zero-Shot 4D Lidar Panoptic Segmentation
    NVILA: Efficient Frontier Visual Language Models
    RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models
    OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving With Counterfactual Reasoning

    Explore automotive workshops and tutorials at CVPR, including:

    Workshop on Data-Driven Autonomous Driving Simulation, featuring Marco Pavone, senior director of AV research at NVIDIA, and Sanja Fidler, vice president of AI research at NVIDIA
    Workshop on Autonomous Driving, featuring Laura Leal-Taixe, senior research manager at NVIDIA
    Workshop on Open-World 3D Scene Understanding with Foundation Models, featuring Leal-Taixe
    Safe Artificial Intelligence for All Domains, featuring Jose Alvarez, director of AV applied research at NVIDIA
    Workshop on Foundation Models for V2X-Based Cooperative Autonomous Driving, featuring Pavone and Leal-Taixe
    Workshop on Multi-Agent Embodied Intelligent Systems Meet Generative AI Era, featuring Pavone
    LatinX in CV Workshop, featuring Leal-Taixe
    Workshop on Exploring the Next Generation of Data, featuring Alvarez
    Full-Stack, GPU-Based Acceleration of Deep Learning and Foundation Models, led by NVIDIA
    Continuous Data Cycle via Foundation Models, led by NVIDIA
    Distillation of Foundation Models for Autonomous Driving, led by NVIDIA

    Explore the NVIDIA research papers to be presented at CVPR and watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang.
    Learn more about NVIDIA Research, a global team of hundreds of scientists and engineers focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
    The featured image above shows how an autonomous vehicle adapts its trajectory to navigate an urban environment with dynamic traffic using the GTRS model.
  • Meet Martha Swope, the Legendary Broadway Photographer Who Captured Iconic Moments From Hundreds of Productions and Rehearsals

    She spent nearly 40 years taking theater and dance pictures, providing glimpses behind the scenes and creating images that the public couldn’t otherwise access

    Stephanie Rudig

    - Freelance Writer

    June 11, 2025

    Photographer Martha Swope sitting on a floor covered with prints of her photos in 1987
    Andrea Legge / © NYPL

    Martha Swope wanted to be a dancer. She moved from her home state of Texas to New York to attend the School of American Ballet, hoping to start a career in dance. Swope also happened to be an amateur photographer. So, in 1957, a fellow classmate invited her to bring her camera and document rehearsals for a little theater show he was working on. The classmate was director and choreographer Jerome Robbins, and the show was West Side Story.
    One of those rehearsal shots ended up in Life magazine, and Swope quickly started getting professional bookings. It’s notoriously tough to make it on Broadway, but through photography, Swope carved out a career capturing theater and dance. Over the course of nearly four decades, she photographed hundreds more rehearsals, productions and promotional studio shots.

    Unidentified male chorus members dancing during rehearsals for musical West Side Story in 1957

    Martha Swope / © NYPL

    At a time when live performances were not often or easily captured, Swope’s photographs caught the animated moments and distilled the essence of a show into a single image: André De Shields clad in a jumpsuit as the title character in The Wiz, Patti LuPone with her arms raised overhead in Evita, the cast of Cats leaping in feline formations, a close-up of a forlorn Sheryl Lee Ralph in Dreamgirls and the row of dancers obscuring their faces with their headshots in A Chorus Line were all captured by Swope’s camera. She was also the house photographer for the New York City Ballet and the Martha Graham Dance Company and photographed other major dance companies such as the Ailey School.
    Her vision of the stage became fairly ubiquitous, with Playbill reporting that in the late 1970s, two-thirds of Broadway productions were photographed by Swope, meaning her work dominated theater and dance coverage. Carol Rosegg was early in her photography career when she heard that Swope was looking for an assistant. “I didn't frankly even know who she was,” Rosegg says. “Then the press agent who told me said, ‘Pick up any New York Times and you’ll find out.’”
    Swope’s background as a dancer likely equipped her to press the shutter at the exact right moment to capture movement, and to know when everyone on stage was precisely posed. She taught herself photography and early on used a Brownie camera, a simple box model made by Kodak. “She was what she described as ‘a dancer with a Brownie,’” says Barbara Stratyner, a historian of the performing arts who curated exhibitions of Swope’s work at the New York Public Library.

    An ensemble of dancers in rehearsal for the stage production Cats in 1982

    Martha Swope / © NYPL

    “Dance was her first love,” Rosegg says. “She knew everything about dance. She would never use a photo of a dancer whose foot was wrong; the feet had to be perfect.”
    According to Rosegg, once the photo subjects knew she was shooting, “the anxiety level came down a little bit.” They knew that they’d look good in the resulting photos, and they likely trusted her intuition as a fellow dancer. Swope moved with the bearing of a dancer and often stood with her feet in ballet’s fourth position while she shot. She continued to take dance classes throughout her life, including at the prestigious Martha Graham School. Stratyner says, “As Graham got older, [Swope] was, I think, the only person who was allowed to photograph rehearsals, because Graham didn’t want rehearsals shown.”
    Photographic technology and the theater and dance landscapes evolved greatly over the course of Swope’s career. Rosegg points out that at the start of her own career, cameras didn’t even automatically advance the film after each shot. She explains the delicate nature of working with film, saying, “When you were shooting film, you actually had to compose, because you had 35 shots and then you had to change your film.” Swope also worked during a period of changing over from all black-and-white photos to a mixture of black-and-white and color photography. Rosegg notes that simultaneously, Swope would shoot black-and-white, and she herself would shoot color. Looking at Swope’s portfolio is also an examination of increasingly crisp photo production. Advances in photography made shooting in the dark or capturing subjects under blinding stage lights easier, and they allowed for better zooming in from afar.

    Martha Graham rehearses dancer Takako Asakawa and others in Heretic, a dance work choreographed by Graham, in 1986

    Martha Swope / © NYPL

    It’s much more common nowadays to get a look behind the curtain of theater productions via social media. “The theater photographers of today need to supply so much content,” Rosegg says. “We didn’t have any of that, and getting to go backstage was kind of a big deal.”
    Photographers coming to document a rehearsal once might have been seen as an intrusion, but now, as Rosegg puts it, “everybody is desperate for you to come, and if you’re not there, they’re shooting it on their iPhone.”
    Even with exclusive behind-the-scenes access to the hottest tickets in town and the biggest stars of the day, Swope remained unpretentious. She lived and worked in a brownstone with her apartment above her studio, where the film was developed in a closet and the bathroom served as a darkroom. Rosegg recalls that a phone sat in the darkroom so they could be reached while printing, and she would be amazed at the big-name producers and theater glitterati who rang in while she was making prints in an unventilated space.

    From left to right: Paul Winfield, Ruby Dee, Marsha Jackson and Denzel Washington in the stage production Checkmates in 1988

    Martha Swope / © NYPL

    Swope’s approachability extended to how she chose to preserve her work. She originally sold her body of work to Time Life, and, according to Stratyner, she was unhappy with the way the photos became relatively inaccessible. She took back the rights to her collection and donated it to the New York Public Library, where many photos can be accessed by researchers in person, and the entire array of photos is available online to the public in the Digital Collections. Searching “Martha Swope” yields over 50,000 items from more than 800 productions, featuring a huge variety of figures, from a white-suited John Travolta busting a disco move in Saturday Night Fever to Andrew Lloyd Webber with Nancy Reagan at a performance of Phantom of the Opera.
    Swope’s extensive career was recognized in 2004 with a special Tony Award, a Tony Honors for Excellence in Theater, which are given intermittently to notable figures in theater who operate outside of traditional awards categories. She also received a lifetime achievement award from the League of Professional Theater Women in 2007. Though she retired in 1994 and died in 2017, her work still reverberates through dance and Broadway history today. For decades, she captured the fleeting moments of theater that would otherwise never be seen by the public. And her passion was clear and straightforward. As she once told an interviewer: “I’m not interested in what’s going on on my side of the camera. I’m interested in what’s happening on the other side.”

    Get the latest Travel & Culture stories in your inbox.
    #meet #martha #swope #legendary #broadway
    Meet Martha Swope, the Legendary Broadway Photographer Who Captured Iconic Moments From Hundreds of Productions and Rehearsals
    Meet Martha Swope, the Legendary Broadway Photographer Who Captured Iconic Moments From Hundreds of Productions and Rehearsals She spent nearly 40 years taking theater and dance pictures, providing glimpses behind the scenes and creating images that the public couldn’t otherwise access Stephanie Rudig - Freelance Writer June 11, 2025 Photographer Martha Swope sitting on a floor covered with prints of her photos in 1987 Andrea Legge / © NYPL Martha Swope wanted to be a dancer. She moved from her home state of Texas to New York to attend the School of American Ballet, hoping to start a career in dance. Swope also happened to be an amateur photographer. So, in 1957, a fellow classmate invited her to bring her camera and document rehearsals for a little theater show he was working on. The classmate was director and choreographer Jerome Robbins, and the show was West Side Story. One of those rehearsal shots ended up in Life magazine, and Swope quickly started getting professional bookings. It’s notoriously tough to make it on Broadway, but through photography, Swope carved out a career capturing theater and dance. Over the course of nearly four decades, she photographed hundreds more rehearsals, productions and promotional studio shots. Unidentified male chorus members dancing during rehearsals for musical West Side Story in 1957 Martha Swope / © NYPL At a time when live performances were not often or easily captured, Swope’s photographs caught the animated moments and distilled the essence of a show into a single image: André De Shields clad in a jumpsuit as the title character in The Wiz, Patti LuPone with her arms raised overhead in Evita, the cast of Cats leaping in feline formations, a close-up of a forlorn Sheryl Lee Ralph in Dreamgirls and the row of dancers obscuring their faces with their headshots in A Chorus Line were all captured by Swope’s camera. 
She was also the house photographer for the New York City Ballet and the Martha Graham Dance Company and photographed other major dance companies such as the Ailey School. Her vision of the stage became fairly ubiquitous, with Playbill reporting that in the late 1970s, two-thirds of Broadway productions were photographed by Swope, meaning her work dominated theater and dance coverage. Carol Rosegg was early in her photography career when she heard that Swope was looking for an assistant. “I didn't frankly even know who she was,” Rosegg says. “Then the press agent who told me said, ‘Pick up any New York Times and you’ll find out.’” Swope’s background as a dancer likely equipped her to press the shutter at the exact right moment to capture movement, and to know when everyone on stage was precisely posed. She taught herself photography and early on used a Brownie camera, a simple box model made by Kodak. “She was what she described as ‘a dancer with a Brownie,’” says Barbara Stratyner, a historian of the performing arts who curated exhibitions of Swope’s work at the New York Public Library. An ensemble of dancers in rehearsal for the stage production Cats in 1982 Martha Swope / © NYPL “Dance was her first love,” Rosegg says. “She knew everything about dance. She would never use a photo of a dancer whose foot was wrong; the feet had to be perfect.” According to Rosegg, once the photo subjects knew she was shooting, “the anxiety level came down a little bit.” They knew that they’d look good in the resulting photos, and they likely trusted her intuition as a fellow dancer. Swope moved with the bearing of a dancer and often stood with her feet in ballet’s fourth position while she shot. She continued to take dance classes throughout her life, including at the prestigious Martha Graham School. 
Stratyner says, “As Graham got older,was, I think, the only person who was allowed to photograph rehearsals, because Graham didn’t want rehearsals shown.” Photographic technology and the theater and dance landscapes evolved greatly over the course of Swope’s career. Rosegg points out that at the start of her own career, cameras didn’t even automatically advance the film after each shot. She explains the delicate nature of working with film, saying, “When you were shooting film, you actually had to compose, because you had 35 shots and then you had to change your film.” Swope also worked during a period of changing over from all black-and-white photos to a mixture of black-and-white and color photography. Rosegg notes that simultaneously, Swope would shoot black-and-white, and she herself would shoot color. Looking at Swope’s portfolio is also an examination of increasingly crisp photo production. Advances in photography made shooting in the dark or capturing subjects under blinding stage lights easier, and they allowed for better zooming in from afar. Martha Graham rehearses dancer Takako Asakawa and others in Heretic, a dance work choreographed by Graham, in 1986 Martha Swope / © NYPL It’s much more common nowadays to get a look behind the curtain of theater productions via social media. “The theater photographers of today need to supply so much content,” Rosegg says. “We didn’t have any of that, and getting to go backstage was kind of a big deal.” Photographers coming to document a rehearsal once might have been seen as an intrusion, but now, as Rosegg puts it, “everybody is desperate for you to come, and if you’re not there, they’re shooting it on their iPhone.” Even with exclusive behind-the-scenes access to the hottest tickets in town and the biggest stars of the day, Swope remained unpretentious. She lived and worked in a brownstone with her apartment above her studio, where the film was developed in a closet and the bathroom served as a darkroom. 
Rosegg recalls that a phone sat in the darkroom so they could be reached while printing, and she would be amazed at the big-name producers and theater glitterati who rang in while she was making prints in an unventilated space. From left to right: Paul Winfield, Ruby Dee, Marsha Jackson and Denzel Washington in the stage production Checkmates in 1988 Martha Swope / © NYPL Swope’s approachability extended to how she chose to preserve her work. She originally sold her body of work to Time Life, and, according to Stratyner, she was unhappy with the way the photos became relatively inaccessible. She took back the rights to her collection and donated it to the New York Public Library, where many photos can be accessed by researchers in person, and the entire array of photos is available online to the public in the Digital Collections. Searching “Martha Swope” yields over 50,000 items from more than 800 productions, featuring a huge variety of figures, from a white-suited John Travolta busting a disco move in Saturday Night Fever to Andrew Lloyd Webber with Nancy Reagan at a performance of Phantom of the Opera. Swope’s extensive career was recognized in 2004 with a special Tony Award, a Tony Honors for Excellence in Theater, which are given intermittently to notable figures in theater who operate outside of traditional awards categories. She also received a lifetime achievement award from the League of Professional Theater Women in 2007. Though she retired in 1994 and died in 2017, her work still reverberates through dance and Broadway history today. For decades, she captured the fleeting moments of theater that would otherwise never be seen by the public. And her passion was clear and straightforward. As she once told an interviewer: “I’m not interested in what’s going on on my side of the camera. I’m interested in what’s happening on the other side.” Get the latest Travel & Culture stories in your inbox. #meet #martha #swope #legendary #broadway
    WWW.SMITHSONIANMAG.COM
    Meet Martha Swope, the Legendary Broadway Photographer Who Captured Iconic Moments From Hundreds of Productions and Rehearsals
    She spent nearly 40 years taking theater and dance pictures, providing glimpses behind the scenes and creating images that the public couldn’t otherwise access
    Stephanie Rudig - Freelance Writer
    June 11, 2025

    Photographer Martha Swope sitting on a floor covered with prints of her photos in 1987 Andrea Legge / © NYPL

    Martha Swope wanted to be a dancer. She moved from her home state of Texas to New York to attend the School of American Ballet, hoping to start a career in dance. Swope also happened to be an amateur photographer. So, in 1957, a fellow classmate invited her to bring her camera and document rehearsals for a little theater show he was working on. The classmate was director and choreographer Jerome Robbins, and the show was West Side Story. One of those rehearsal shots ended up in Life magazine, and Swope quickly started getting professional bookings.

    It’s notoriously tough to make it on Broadway, but through photography, Swope carved out a career capturing theater and dance. Over the course of nearly four decades, she photographed hundreds more rehearsals, productions and promotional studio shots.

    Unidentified male chorus members dancing during rehearsals for the musical West Side Story in 1957 Martha Swope / © NYPL

    At a time when live performances were not often or easily captured, Swope’s photographs caught the animated moments and distilled the essence of a show into a single image: André De Shields clad in a jumpsuit as the title character in The Wiz, Patti LuPone with her arms raised overhead in Evita, the cast of Cats leaping in feline formations, a close-up of a forlorn Sheryl Lee Ralph in Dreamgirls and the row of dancers obscuring their faces with their headshots in A Chorus Line were all captured by Swope’s camera.
She was also the house photographer for the New York City Ballet and the Martha Graham Dance Company and photographed other major dance companies such as the Ailey School. Her vision of the stage became fairly ubiquitous, with Playbill reporting that in the late 1970s, two-thirds of Broadway productions were photographed by Swope, meaning her work dominated theater and dance coverage.

Carol Rosegg was early in her photography career when she heard that Swope was looking for an assistant. “I didn't frankly even know who she was,” Rosegg says. “Then the press agent who told me said, ‘Pick up any New York Times and you’ll find out.’”

Swope’s background as a dancer likely equipped her to press the shutter at the exact right moment to capture movement, and to know when everyone on stage was precisely posed. She taught herself photography and early on used a Brownie camera, a simple box model made by Kodak. “She was what she described as ‘a dancer with a Brownie,’” says Barbara Stratyner, a historian of the performing arts who curated exhibitions of Swope’s work at the New York Public Library.

An ensemble of dancers in rehearsal for the stage production Cats in 1982 Martha Swope / © NYPL

“Dance was her first love,” Rosegg says. “She knew everything about dance. She would never use a photo of a dancer whose foot was wrong; the feet had to be perfect.” According to Rosegg, once the photo subjects knew she was shooting, “the anxiety level came down a little bit.” They knew that they’d look good in the resulting photos, and they likely trusted her intuition as a fellow dancer. Swope moved with the bearing of a dancer and often stood with her feet in ballet’s fourth position while she shot. She continued to take dance classes throughout her life, including at the prestigious Martha Graham School.
Stratyner says, “As Graham got older, [Swope] was, I think, the only person who was allowed to photograph rehearsals, because Graham didn’t want rehearsals shown.”

Photographic technology and the theater and dance landscapes evolved greatly over the course of Swope’s career. Rosegg points out that at the start of her own career, cameras didn’t even automatically advance the film after each shot. She explains the delicate nature of working with film, saying, “When you were shooting film, you actually had to compose, because you had 35 shots and then you had to change your film.” Swope also worked during a period of changing over from all black-and-white photos to a mixture of black-and-white and color photography. Rosegg notes that simultaneously, Swope would shoot black-and-white, and she herself would shoot color. Looking at Swope’s portfolio is also an examination of increasingly crisp photo production. Advances in photography made shooting in the dark or capturing subjects under blinding stage lights easier, and they allowed for better zooming in from afar.

Martha Graham rehearses dancer Takako Asakawa and others in Heretic, a dance work choreographed by Graham, in 1986 Martha Swope / © NYPL

It’s much more common nowadays to get a look behind the curtain of theater productions via social media. “The theater photographers of today need to supply so much content,” Rosegg says. “We didn’t have any of that, and getting to go backstage was kind of a big deal.” Photographers coming to document a rehearsal once might have been seen as an intrusion, but now, as Rosegg puts it, “everybody is desperate for you to come, and if you’re not there, they’re shooting it on their iPhone.”

Even with exclusive behind-the-scenes access to the hottest tickets in town and the biggest stars of the day, Swope remained unpretentious. She lived and worked in a brownstone with her apartment above her studio, where the film was developed in a closet and the bathroom served as a darkroom.
Rosegg recalls that a phone sat in the darkroom so they could be reached while printing, and she would be amazed at the big-name producers and theater glitterati who rang in while she was making prints in an unventilated space.

From left to right: Paul Winfield, Ruby Dee, Marsha Jackson and Denzel Washington in the stage production Checkmates in 1988 Martha Swope / © NYPL

Swope’s approachability extended to how she chose to preserve her work. She originally sold her body of work to Time Life, and, according to Stratyner, she was unhappy with the way the photos became relatively inaccessible. She took back the rights to her collection and donated it to the New York Public Library, where many photos can be accessed by researchers in person, and the entire array of photos is available online to the public in the Digital Collections. Searching “Martha Swope” yields over 50,000 items from more than 800 productions, featuring a huge variety of figures, from a white-suited John Travolta busting a disco move in Saturday Night Fever to Andrew Lloyd Webber with Nancy Reagan at a performance of Phantom of the Opera.

Swope’s extensive career was recognized in 2004 with a special Tony Award, the Tony Honors for Excellence in Theater, which is given intermittently to notable figures in theater who operate outside of traditional awards categories. She also received a lifetime achievement award from the League of Professional Theater Women in 2007. Though she retired in 1994 and died in 2017, her work still reverberates through dance and Broadway history today. For decades, she captured the fleeting moments of theater that would otherwise never be seen by the public. And her passion was clear and straightforward. As she once told an interviewer: “I’m not interested in what’s going on on my side of the camera. I’m interested in what’s happening on the other side.”
  • The DeepSeek R1 update proves it's an active threat to OpenAI and Google

    DeepSeek's R1 update, plus the rest of the AI news this week.
    Credit: Thomas Fuller / SOPA Images / LightRocket / Getty Images

    This week, DeepSeek released an updated version of its R1 model on Hugging Face, reigniting the open-source versus closed-source competition. The updated version, called DeepSeek-R1-0528, has 685 billion parameters, an upgrade from January's version, which had 671 billion. Unlike OpenAI's and Google's models, which are famously closed-source, DeepSeek's model weights are publicly available. According to the benchmarks, the R1-0528 update has improved reasoning and inference capabilities and is closing the gap with OpenAI's o3 and Google's Gemini 2.5 Pro. DeepSeek also introduced a distilled version of R1-0528 using Alibaba's Qwen3 8B model. This is an example of a lightweight model that is less capable but also requires less computing power. DeepSeek-R1-0528-Qwen3-8B outperforms both Google's latest lightweight model, Gemini-2.5-Flash-Thinking-0520, and OpenAI's o3-mini in certain benchmarks. But the bigger deal is that DeepSeek's distilled model can reportedly run on a single GPU, according to TechCrunch.


    To… distill all this information: the Chinese rival is catching up to its U.S. competitors with an open-weight approach that's cheaper and more accessible. Plus, DeepSeek continues to prove that AI models may not require as much computing power as OpenAI, Google, and other AI heavyweights currently use. Suffice it to say, watch this space.

    That said, DeepSeek's models also have their drawbacks. According to one AI developer (via TechCrunch), the new DeepSeek update is even more censored than its previous version when it comes to criticism of the Chinese government.

    Of course, a lot more happened in the AI world over the past few days. After last week's parade of AI events from Google, Anthropic, and Microsoft, this week was lighter on product and feature news. That's one reason DeepSeek's R1 update captured the AI world's attention this week. In other AI news, Anthropic finally gets voice mode, AI influencers go viral, Anthropic's CEO warns of mass layoffs, and an AI-generated kangaroo.

    Google's Veo 3 takes the internet by storm

    On virtually every social media platform, users are freaking out about Veo 3, Google's new AI video model. The results are impressive, and we're already seeing short films made entirely with Veo 3. Not bad for a product that came out 11 days ago.

    Not to be outdone by AI video artists, a reporter from The Wall Street Journal made a short film about herself and a robot using Veo 3. Mashable's Tech Editor Timothy Werth recapped Veo's big week and had a simple conclusion: We're so cooked.

    More AI product news: Claude's new voice mode and the beginning of the agentic browser era

    After last week's barrage, this week was lighter on the volume of AI news. But what was announced this week is no less significant.


    Anthropic finally introduced its own voice mode for Claude to compete with ChatGPT, Grok, and Gemini. The feature is currently in beta on mobile for the Claude app and will even be available on free plans, with a limit of 20 to 30 voice conversations per day. Anthropic says you can ask Claude to summarize your calendar or read documents out loud. Paying subscribers can connect to Google Workspace for Calendar, Gmail, and Docs access.

    OpenAI is exploring the ability to sign into third-party apps with ChatGPT. We don't know much yet, but the company posted an interest form on its site for developers using Codex, its engineering agent, to add this capability to their own apps. It may not sound like a big deal, but it basically means users could easily link their personalized ChatGPT memories and settings to third-party apps, much like the way it works when you sign into a new app with your Google account.

    Opera announced a new agentic AI browser called Neon. "Much more than a place to view web pages, Neon can browse with you or for you, take action, and help you get things done," the announcement read. That includes a chatbot interface within the browser and the ability to fill in web forms for tasks like booking trips and shopping. The announcement, which included a promo video of a humanoid robot browsing the web, is scant on details but says Neon will be a "premium subscription product" with a waitlist to sign up.

    The browser has suddenly become a new frontier for agentic AI, now that it's capable of automating web search tasks. Perplexity is working on a similar tool called Comet, and The Browser Company pivoted from its Arc browser to a more AI-centric browser called Dia. All of this is happening while Google might be forced to sell off Chrome, which OpenAI has kindly offered to take off its hands.
Dario Amodei's prediction about AI replacing entry-level jobs is already starting to happen

Anthropic CEO Dario Amodei warned in an interview with Axios that AI could "wipe out half of all entry-level white-collar jobs." Amodei's prediction might be spot on: a new study from VC firm SignalFire found that hiring for entry-level jobs is down to 7 percent, from 25 percent the previous year. Some of that is due to changes in the economic climate, but AI is definitely a factor, since firms are opting to automate the less-technical aspects of work that would've been taken on by new hires.


    The latest in AI culture: That AI-generated kangaroo, Judge Judy, and everything else

    Google wants you to know its AI Overviews reach 1.5 billion people a month. They probably don't want you to know AI Overviews still struggles to count, spell, and know what year it is. As Mashable's Tim Marcin put it, would AI Overviews pass concussion protocol?

    The proposal of a 10-year ban on states regulating AI is pretty unpopular, according to a poll from Common Sense Media. The survey found that 57 percent of respondents opposed the moratorium, including half of the Republican respondents. As Mashable's Rebecca Ruiz reported, "the vast majority of respondents, regardless of their political affiliation, agreed that Congress shouldn't ban states from enacting or enforcing their own youth online safety and privacy laws."

    In the private sector, The New York Times signed a licensing deal with Amazon to allow its editorial content to be used for Amazon's AI models. The details are unclear, but from the outside, this seems like a change of tune from the Times, which is currently suing OpenAI for copyright infringement for allegedly using its content to train its models.

    That viral video of an emotional support kangaroo holding a plane ticket and being denied boarding? It's AI-generated, of course. Slightly more obvious, but no less creepy, is another viral trend of using AI to turn public figures like Emmanuel Macron and Judge Judy into babies. These are strange AI-slop-infested times we're living in.

    AI has some positive uses too. This week, we learned about a new humanoid robot from Hugging Face called HopeJr (with engineering by The Robot Studio), which could be available for sale later this year for just $3,000. And to end this recap on a high note, the nonprofit Colossal Foundation has developed an AI algorithm to detect the bird calls of the near-extinct tooth-billed pigeon.
Also known as the "little dodo," the tooth-billed pigeon is Samoa's national bird, and scientists are using the bioacoustic algorithm to locate and protect them. Want to get the latest AI news, from new product features to viral trends? Check back next week for another AI news recap, and in the meantime, follow @cecily_mauran and @mashable for more news.

Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

    Topics
    OpenAI
    DeepSeek

    Cecily Mauran
    Tech Reporter

    Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.
    MASHABLE.COM
    The DeepSeek R1 update proves its an active threat to OpenAI and Google
    DeepSeek's R1 update, plus the rest of the AI news this week. Credit: Thomas Fuller / SOPA Images / LightRocket / Getty Images This week, DeepSeek released an updated version of its R1 model on HuggingFace, reigniting the open-source versus closed-source competition. The updated version, called DeekSeek-R1-0528, has 685 billion parameters, an upgrade from January's version, which had 671 billion. Unlike OpenAI and Google's models, which are famously closed-source, DeepSeek's model weights are publicly available. According to the benchmarks, the R1-0528 update has improved reasoning and inference capabilities and is closing the gap with OpenAI's o3 and Google's Gemini 2.5 Pro. DeepSeek also introduced a distilled version of R1-0528 using Alibaba's Qwen3 8B model. This is an example of a lightweight model that is less capable but also requires less computing power. DeepSeek-R1-0528-Qwen3-8B outperforms both Google's latest lightweight model Gemini-2.5-Flash-Thinking-0520 and OpenAI's o3-mini in certain benchmarks. But the bigger deal is that DeekSeek's distilled model can reportedly run on a single GPU, according to TechCrunch. You May Also Like To… distill all this information, the Chinese rival is catching up to its U.S. competitors with an open-weight approach that's cheaper and more accessible. Plus, DeepSeek continues to prove that AI models may not require as much computing power as OpenAI, Google, and other AI heavyweights currently use. Suffice to say, watch this space.That said, DeepSeek's models also have their drawbacks. According to one AI developer (via TechCrunch), the new DeepSeek update is even more censored than its previous version when it comes to criticism of the Chinese government. Of course, a lot more happened in the AI world over the past few days. After last week's parade of AI events from Google, Anthropic, and Microsoft, this week was lighter on product and feature news. 
That's one reason DeepSeek's R1 update captured the AI world's attention this week. In other AI news, Anthropic finally gets voice mode, AI influencers go viral, Anthropic's CEO warns of mass layoffs, and an AI-generated kangaroo. Google's Veo 3 takes the internet by stormOn virtually every social media platform, users are freaking out about the new Veo 3, Google's new AI video model. The results are impressive, and we're already seeing short films made entirely with Veo 3. Not bad for a product that came out 11 days ago. Not to be outdone by AI video artists, a reporter from The Wall Street Journal made a short film about herself and a robot using Veo 3.Mashable's Tech Editor Timothy Werth recapped Veo's big week and had a simple conclusion: We're so cooked.More AI product news: Claude's new voice mode and the beginning of the agentic browser eraAfter last week's barrage, this week was lighter on the volume of AI news. But what was announced this week is no less significant.  Mashable Light Speed Want more out-of-this world tech, space and science stories? Sign up for Mashable's weekly Light Speed newsletter. By clicking Sign Me Up, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy. Thanks for signing up! Anthropic finally introduced its own voice mode for Claude to compete with ChatGPT, Grok, and Gemini. The feature is currently in beta on mobile for the Claude app and will even be available to free plans with a limit of 20 to 30 voice conversations per day. Anthropic says you can ask Claude to summarize your calendar or read documents out loud. Paying subscribers can connect to Google Workspace for Calendar, Gmail, and Docs access. OpenAI is exploring the ability to sign into third-party apps with ChatGPT. We don't know much yet, but the company posted an interest form on its site for developers using Codex, its engineering agent, to add this capability to their own apps. 
It may not sound like a big deal, but it basically means users could easily link their personalized ChatGPT memories and settings to third-party apps, much like the way it works when you sign into a new app with your Google account.Opera announced a new agentic AI browser called Neon. "Much more than a place to view web pages, Neon can browse with you or for you, take action, and help you get things done," the announcement read. That includes a chatbot interface within the browser and the ability to fill in web forms for tasks like booking trips and shopping. The announcement, which included a promo video of a humanoid robot browsing the robot, which is scant on details but says Neon will be a "premium subscription product" and has a waitlist to sign up.The browser has suddenly become a new frontier for agentic AI, now that it's capable of automating web search tasks. Perplexity is working on a similar tool called Comet, and The Browser Company pivoted from its Arc browser to a more AI-centric browser called Dia. All of this is happening while Google might be forced to sell off Chrome, which OpenAI has kindly offered to take off its hands. Dario Amodei's prediction about AI replacing entry-level jobs is already starting to happenAnthropic CEO Dario Amodei warned in an interview with Axios that AI could "wipe out half of all entry-level white-collar jobs." Amodei's predictions might be spot on because a new study from VC firm SignalFire found that hiring for entry-level jobs is down to 7 percent from 25 percent in the previous year. Some of that is due to changes in the economic climate, but AI is definitely a factor since firms are opting to automate the less-technical aspects of work that would've been taken on by new hires.  Related Stories The latest in AI culture: That AI-generated kangaroo, Judge Judy, and everything elseGoogle wants you to know its AI overviews reach 1.5 billion people a month. 
They probably don't want you to know AI Overviews still struggles to count, spell, and know what year it is. As Mashable's Tim Marcin put it, would AI Overviews pass concussion protocol?The proposal of a 10-year ban on states regulating AI is pretty unpopular, according to a poll from Common Sense Media. The survey found that 57 percent of respondents opposed the moratorium, including half of the Republican respondents. As Mashable's Rebecca Ruiz reported, "the vast majority of respondents, regardless of their political affiliation, agreed that Congress shouldn't ban states from enacting or enforcing their own youth online safety and privacy laws."In the private sector, The New York Times signed a licensing deal with Amazon to allow their editorial content to be used for Amazon's AI models. The details are unclear, but from the outside, this seems like a change of tune from the Times, which is currently suing OpenAI for copyright infringement for allegedly using its content to train its models. That viral video of an emotional support kangaroo holding a plane ticket and being denied boarding? It's AI-generated, of course. Slightly more obvious, but no less creepy is another viral trend of using AI to turn public figures like Emmanuel Macron and Judge Judy into babies. These are strange AI-slop-infested times we're living in. AI has some positive uses too. This week, we learned about a new humanoid robot from HuggingFace called HopeJr (with engineering by The Robot Studio), which could be available for sale later this year for just $3,000.And to end this recap on a high note, the nonprofit Colossal Foundation has developed an AI algorithm to detect the bird calls of the near-extinct tooth-billed pigeon. Also known as the "little dodo," the tooth-billed pigeon is Samoa's national bird, and scientists are using the bioacoustic algorithm to locate and protect them. Want to get the latest AI news, from new product features to viral trends? 
Check back next week for another AI news recap, and in the meantime, follow @cecily_mauran and @mashable for more news.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
Cecily Mauran is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. You can find her on X at @cecily_mauran.
  • Small-Batch + Cold-Shipped Kloo Refines Coffee Concentrate into a Luxury

    Kloo didn’t set out to eliminate the ritual of making coffee; they set out to refine it. Equal parts culinary secret weapon and everyday indulgence, Kloo is a small-batch coffee concentrate that delivers bold, specialty-grade flavor to everything from your first morning cup to cocktails and desserts. It’s part chef’s tool, part personal luxury – crafted for those who love to cook, love to host, and love a good cup of coffee.
    Kloo’s frosted glass bottle, adorned with an artful screen-printed design, looks more like a fine spirit than a morning essential. The logo – a maze-like looped ‘K’ – reflects the brand’s ethos: complexity distilled into simplicity. It’s a bottle that signals premium, not just in flavor, but in form.

    Founded by mother-daughter duo Claudia Snoh and Mariella Cho, Kloo was created from a shared obsession with the nuance of great coffee. Mariella, a certified Q Grader (the coffee world’s version of a sommelier), developed Kloo’s proprietary “super concentrate” brewing method to bring out the purest, most expressive flavor of each bean – then aged each batch for up to 21 days to deepen body and complexity.

    From sourcing to shipping, every detail is intentional. Kloo uses only specialty-grade beans (each scoring 85+ by Q Graders), roasted in-house and brewed in small batches. The concentrate is then cold-shipped and kept refrigerated to preserve every note.
    A taste of the single-origin varieties:
    Colombia (Venecia, Cundinamarca): Almond, maple syrup, blackberry
    Kenya (Karundu, Nyeri): Grapefruit, lemongrass, dark chocolate
    Ethiopia (Adado, Yirgacheffe): Peach, jasmine, wild berry
    Guatemala (Pasajquim, Atitlán): Toffee, burnt toast, dark chocolate
    Each profile is bold enough to stand on its own, yet balanced enough to complement whatever you’re making.

    Unlike many concentrates, Kloo’s strength and consistency make it a natural fit for chefs and bakers, especially in large batches where precision matters. Whether you’re stirring it into a sauce or folding it into a batter, Kloo delivers depth, not bitterness. It’s a shortcut that doesn’t feel like one. And while it’s a favorite among chefs, it also belongs in every home cook’s fridge. You’ll find yourself reaching for it more than you expect – whether for an impromptu dessert, a 4pm boost, or an elevated cocktail.

    For those who love to gather, Kloo is a quiet revolution. It makes the art of hosting feel seamless – adding flavor, elegance, and just a little flair to your moments of connection. One of the best-kept secrets of the seasoned host? Bookend your gathering with memorable moments. Start high, end high – and do it with something that’s bold, caffeinated, and effortlessly chic.
    Welcome your guests with a low-ABV drink, perfect for warm afternoons.

    Kloo Stout
    1.5 oz Kloo coffee concentrate
    12 oz chocolatey stout or lager
    Preparation: Add chilled Kloo to the bottom of a pint glass, then slowly pour in the beer and let it mix naturally. Smooth, rich, and just unexpected enough to be a conversation starter.

    Close the evening by serving guests an easy and elegant dessert that never disappoints.
    Kloo Affogato
    1 scoop vanilla gelato
    1 shot (about 1.5 oz) Kloo concentrate
    Preparation: Pour Kloo directly over the gelato just before serving. Dessert and coffee, all in one beautiful moment.
    Like most devout daily coffee drinkers, I’ve always been skeptical of concentrates – too often they’re bitter, flat, or forgettable. Kloo is different. It doesn’t replace the ritual of great coffee; it respects it, while making room for all the ways we actually live. Whether you’re brewing slowly, moving quickly, cooking for others, or just trying to get out the door, Kloo brings depth and intention – without asking you to compromise.

    For more information on Kloo, visit drinkkloo.com.
    Photography courtesy of Kloo.
    Source: DESIGN-MILK.COM
  • Villa Air / ARK-architecture

    Houses • Tunis, Tunisia. Photo © Bilel Khemakhem

    Architects: ARK-architecture
    Area: 1500 m²
    Year: 2024
    Photographs: Bilel Khemakhem
    Manufacturers: Trespa, Elements, QUICK-STEP, REVIGLASS, Saint Gobain Glass, Schüco, TOSHIBA
    Text description provided by the architects. Villa Air is a distilled expression of contemporary architecture rooted in the Tunisian landscape. Set within a two-hectare plot in Morneg, this 1,500 m² residence unfolds as a meditative dialogue between built form and topography. The site, defined by its gentle slope and sweeping views, culminates in the striking silhouette of the Jbal Errsas mountain range—a natural horizon that anchors the architectural narrative. From the outset, the project embraces a central duality: the tension between gravitas and lightness, between groundedness and suspension. This dialectic, subtly embedded in the villa's name, structures the entire composition. Distributed across three levels, the house is articulated as a series of horizontal strata punctuated by bold cantilevers. These projections—remarkably slender at just 45 cm thick—embody both structural daring and environmental responsiveness, casting precise shadow lines that temper the Mediterranean sun.

    Rather than asserting dominance over the terrain, the architecture yields to it. The villa engages the land with measured restraint, allowing the natural contours to guide its form. A textured finish in earthy tones fosters chromatic continuity with the ground, while the massing cascades along the slope, suggesting a geological emergence rather than an architectural imposition. The principal façade distills the project's ethos: a calibrated composition of apertures that frames the landscape as a sequence of living tableaux. Each elevation is attuned to its orientation, choreographing a spatial experience that is both immersive and contemplative. Here, architecture acts not as a boundary, but as a lens.

    Materiality is approached with deliberate restraint. Pristine white volumes capture the shifting Mediterranean light, animating surfaces in a daily choreography of shadows. Travertine and timber introduce tactile warmth, while concrete elements — subtly tinted with sand pigments — ground the building in its context and enhance its material belonging. Internally, the spatial organization privileges continuity and flow. Circulations are not mere connectors, but choreographed transitions. Double-height volumes channel daylight deep into the core, while vertical pathways become elevated promenades offering ever-evolving perspectives of the surrounding landscape.

    The architecture explores a central paradox: the reconciliation of intimacy with openness, of enclosure with exposure. This tension is resolved through a refined gradation of thresholds, where interiors dissolve into terraces and open platforms, softening the boundaries between inside and out. Twin infinity pools extend the architectural geometry toward the horizon, amplifying the sensation of lightness and spatial suspension. Water and sky converge in a silent dialogue, completing the project's aspiration to exist not merely in the landscape but in symbiosis with it. Villa Air stands as a testament to a site-specific Mediterranean modernism — one that privileges clarity, precision, and sensory depth. More than a functional residence, it evokes a poetic condition of dwelling: a place where form, matter, and perception converge in quiet resonance.

    Material: Concrete
    Published on May 30, 2025. Cite: "Villa Air / ARK-architecture" 30 May 2025. ArchDaily. <https://www.archdaily.com/1030593/villa-air-ark-architecture> ISSN 0719-8884
    Source: WWW.ARCHDAILY.COM
  • Even Realities G1 Glasses Review: Smart, Subtle, and Perfect for Father’s Day

    PROS:
    Discreet, elegant, and unobtrusive design that doesn't scream "tech"
    Lightweight and comfortable premium frame
    Focuses on essential experiences without the unnecessary cruft
    Impressive transcription and teleprompter features
    Long battery life and effortless charging case design
    CONS:
    No speakers for calls or audio feedback
    Temple tip touch controls can be a bit cumbersome
    A bit expensive

    RATINGS:
    AESTHETICS · ERGONOMICS · PERFORMANCE · SUSTAINABILITY / REPAIRABILITY · VALUE FOR MONEY
    EDITOR'S QUOTE: With a simple design and useful features, the Even Realities G1 smart glasses prove that you don't need all the bells and whistles to provide a great experience.
    Every day, we’re flooded with more information than our already overworked minds can handle. Our smartphones and computers put all this information at our fingertips, connecting us to the rest of the world while ironically disconnecting us from the people around us. Smart glasses and XR headsets promise to bring all this information right in front of us, bridging the gap that divides physical and virtual realities. And yet at the same time, they erect a wall that separates us from the here and now.
    It’s against this backdrop that Even Realities chose to take a bold step in the opposite direction. In both form and function, the Even Realities G1 smart glasses cut down on the cruft and promise a distilled experience that focuses only on what you really need to get through a busy day. More importantly, it delivers it in a minimalist design that doesn’t get in your way. Or at least that’s the spiel. Just in time for the upcoming Father’s Day celebration, we got to test what the Even Realities G1 has to offer, especially to some of the busiest people in our families: the dads juggling work responsibilities while trying to stay present for their loved ones.
    Designer: Even Realities
    Click Here to Buy Now: Exclusive Father’s Day Special – Get 50% Off the G1 Clip + Clip Pouch! Hurry, offer ends June 15, 2025.
    Aesthetics

    You probably wouldn’t be able to tell the Even Realities G1 is wearable tech if you met someone on the street wearing a pair. Sure, they might look like slightly retro Pantos, but they’re a far cry from even the slimmest XR glasses from the likes of Xreal or Viture. You can clearly see the eyes of the person wearing them, and the tech is practically invisible, which is exactly the point.
    The design of the Even Realities G1 is on the plain and minimal side, a stark contrast to the majority of smart glasses and XR/AR headsets currently on the market, even those claiming to be fashionable and stylish. Sure, it’s not going to compete with high-end luxury spectacles, but it’s not entirely off the mark either. Unless you look really closely, you might simply presume them to be a pair of thick-framed glasses.

    The form of the glasses might be simple, but their construction is anything but. The frame is made from magnesium alloy with a coating that’s fused with sandstone, while the temples use a titanium alloy on the outer sides and soft silicone on the inner surfaces. The mixture of quality materials not only gives the Even Realities G1 a premium character but also a lightweight form that’s only ever so slightly heavier than your run-of-the-mill prescription eyeglasses.
    While the G1 mostly looks like normal eyewear, the temple tips are dead giveaways that things are not what they seem. The blocky, paddle-shaped tips that house batteries and electronics are definitely larger than what you’d find on most glasses. They’re not obnoxiously big, but they do tend to stick out a bit, and they’re hard to “unsee” once you’ve noticed their presence.
    Despite looking quite aesthetic, the Even Realities G1 isn’t pretending to be some posh fashion accessory. After all, the circular G1A and rectangular G1B options hardly cover all possible eyewear designs, and the limited color selection won’t suit everyone’s tastes. Rather than something you flaunt or call attention to, these smart glasses are designed to be an “everyday wear” and disappear into the background, making tech invisible without making it unusable, perfect for the dad who wants to stay connected without looking like he’s wearing a gadget at the family barbecue.
    Ergonomics

    If you’ve ever tried any of those hi-tech wearables promising the next wave of computing, then you’d probably know that you’d never wear any of those glasses or visors for more than just an hour or two every day. They may have impressive technologies and apps, but they become practically useless once you take them off, especially when you have to step out into the real world.
    In contrast, the Even Realities G1 is something you’d be able to wear for hours on end, indoors or outdoors. Made from lightweight materials with a construction that even throws away screws to reduce the heft, it’s almost mind-blowing to think that the glasses house any electronics at all. This level of comfort is honestly the G1’s most important asset, because it allows people to experience its smart features far longer than any Quest or Viture.

    When it comes to eyewear, however, prescription lenses have always been a sore point for many consumers, and this is no exception. Because it integrates waveguide optics into the lens, you’ll have to pay extra to have customized prescription lenses when you buy an Even Realities G1. It can be a bit nerve-wracking to ensure you get all the measurements and figures right, especially since you can’t return or exchange glasses with customized lenses.
    While the G1 eyeglasses are definitely comfortable to wear, the same can’t exactly be said when it comes to manually interacting with them. While most smart glasses and headsets have controls near your temples, the G1’s touch-sensitive areas are at the temple tips, which would be sitting behind your ears when you’re wearing the glasses. They might feel awkward to reach, and those with long hairstyles might find it difficult to use. Fortunately, you will rarely touch those tips except to activate some functions, but it can still be an unsatisfactory experience when you do.
    Performance

    The Even Realities G1 takes a brilliantly focused approach to smart eyewear, prioritizing elegant design and practical functionality over unnecessary tech bloat. The 640×200 green monochrome display may seem modest, but it’s a deliberate choice that enables the G1 to maintain a sleek, stylish profile. The absence of cameras and speakers isn’t a limitation but a thoughtful design decision that enhances both wearability and privacy, allowing users to seamlessly integrate this technology into their daily lives without social awkwardness. The magic of the G1 lies in its delivery of information directly to your field of vision in a way that not only delights but also transforms how you interact with digital content.

    The core Even Realities G1 experience revolves around bringing only critical information to your attention and keeping distractions away, all without disconnecting you from reality and the people around you. Its text-centric interface, displayed by two micro-LED displays, one on each lens, ensures that information is distilled down to its most essential. And there’s no denying the retro charm of a green dot-matrix screen in front of your eyes, even if the color won’t work well against light or bright objects.
    The Even Realities G1 experience starts with the dashboard, which you can summon just by tilting your head up a bit, an angle that you can set on the companion mobile app. One side shows the date and time, temperature, number of notifications, and your next appointment. The other side can be configured to show one of your saved quick notes, news, stocks, or even your current location. None of these items are interactive, and you’ll have to dive into the mobile app to actually get any further information.
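The head-tilt dashboard described above boils down to comparing the wearer’s head pitch against a configurable angle, ideally with some hysteresis so the HUD doesn’t flicker when your head hovers near the threshold. Here is a minimal Python sketch of that idea; the class, threshold values, and pitch readings are hypothetical illustrations, not Even Realities’ actual firmware:

```python
class TiltDashboard:
    """Show/hide a HUD based on head pitch, with hysteresis to avoid flicker."""

    def __init__(self, show_at_deg: float = 15.0, hide_at_deg: float = 8.0):
        self.show_at_deg = show_at_deg  # tilt up past this angle: dashboard appears
        self.hide_at_deg = hide_at_deg  # drop back below this angle: it hides again
        self.visible = False

    def update(self, pitch_deg: float) -> bool:
        """Feed the latest head-pitch reading; returns whether the HUD is shown."""
        if not self.visible and pitch_deg >= self.show_at_deg:
            self.visible = True
        elif self.visible and pitch_deg <= self.hide_at_deg:
            self.visible = False
        return self.visible


hud = TiltDashboard()
assert hud.update(2.0) is False    # looking straight ahead: hidden
assert hud.update(16.0) is True    # tilt up past 15 degrees: shown
assert hud.update(12.0) is True    # small dip: stays shown (hysteresis)
assert hud.update(5.0) is False    # back down below 8 degrees: hidden
```

The two-threshold design mirrors how the companion app lets you tune the activation angle: a single cutoff would make the dashboard blink in and out as your head wobbles around it.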

    With Father’s Day approaching, it’s worth noting how the G1’s floating heads-up display, visible only to the wearer, helps dads stay effortlessly connected, organized, and present. The QuickNote and Calendar features are particularly valuable for fathers juggling work and family responsibilities, letting them work through their to-do lists on schedule without missing a beat of family time. Spending quality time with your child, then suddenly remembering you need to buy batteries on your next errand run? No more frantically scrambling for pen and paper or even your phone; just tap and speak.
    Of course, the smart glasses really shine when it comes to the, well, smart functionality, most of which unsurprisingly revolve around words, both spoken and displayed. Transcription, which is used when making Quick Notes, records your voice and saves it alongside the transcribed text. Fathers who find themselves in never-ending meetings no longer need to worry about missing a beat. Not only do they get to keep notes, but they also receive a summary and recap thanks to the G1’s AI capabilities, a game-changer for busy dads who need to process information efficiently.

    Translation can make international trips quite fun, at least for some interactions, as you’ll be able to see actual translated captions floating in the air like subtitles on a video. Dads who give a lot of talks, business presentations, interviews, or broadcast videos will definitely love the Teleprompter feature, which can advance the script just based on the words you’re speaking. No more worrying about missing important points during that big presentation, leaving more mental bandwidth for what really matters. It’s also perfect for a captivating Career Day show that will do your kid proud.

    The accuracy of Even Realities’ speech recognition and AI is fairly good, though there are times when it requires a bit of patience and understanding. There’s a noticeable delay when translating what people say in real time, for example, and it might miss words if the person is speaking too quickly. Navigation can be hit or miss, depending on your location, and the visual direction prompts are not always reliable.

    The latter is also where the absence of built-in speakers feels most pronounced. There’s no audio feedback, which would be useful for guided turn-by-turn navigation. The Even AI assistant can hear you, but it can’t talk back to you. Everything is delivered through text you have to read, which isn’t always practical. Admittedly, adding such hardware, no matter how small, would also add weight to the glasses, so Even Realities chose its battles wisely.

    The Even Realities G1 is advertised to last 1.5 days, and in practice it does indeed last more than a day. The stylish wireless charging case, which has a built-in 2,000mAh battery, extends that uptime to five days. Charging the glasses is as simple as putting them inside the case, with no contact points to align, as long as you remember to fold the left arm before the right. Oddly enough, there’s no battery level indicator on the glasses, not even in the dashboard HUD.
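Those battery figures are easy to sanity-check: if one charge lasts about 1.5 days and the case stretches total uptime to about five days, the 2,000mAh case must hold roughly two and a third full recharges for the glasses. A quick back-of-the-envelope calculation (the day counts come from the review; the rest is plain arithmetic):

```python
DAYS_PER_CHARGE = 1.5        # advertised runtime of the glasses per full charge
TOTAL_DAYS_WITH_CASE = 5.0   # advertised total uptime with the charging case

# Number of extra full recharges the 2,000mAh case must supply to reach 5 days:
extra_recharges = (TOTAL_DAYS_WITH_CASE - DAYS_PER_CHARGE) / DAYS_PER_CHARGE
print(round(extra_recharges, 2))  # 2.33, i.e. roughly two and a third top-ups
```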
    Even Realities focused on making the G1 simple, both in design and in operation, sometimes even to the point of oversimplification. To reduce complexity, for example, each side of the glasses connects to a smartphone separately via Bluetooth, which unfortunately raises the risk of the two sides falling out of sync if one connection drops. Turning the glasses into shades is a simple matter of attaching the optional clip-on shades, which are not only an additional expense but also something you could easily misplace.
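That dual-connection design is worth a closer look: because each arm is its own Bluetooth peripheral, the phone has to reconcile two independent links, and a dropped link on one side can leave the lenses showing different state. Here is a hypothetical Python sketch of that failure mode; the connection model is illustrative only, not the actual Even Realities protocol:

```python
from dataclasses import dataclass


@dataclass
class Lens:
    """One arm of the glasses, modeled as an independent Bluetooth peripheral."""
    name: str
    connected: bool = True
    last_seq: int = 0  # last display update this side acknowledged


def push_update(sides, seq):
    """Broadcast a display update; only currently connected sides receive it."""
    for side in sides:
        if side.connected:
            side.last_seq = seq


def out_of_sync(sides):
    """True when the two lenses have acknowledged different updates."""
    return len({side.last_seq for side in sides}) > 1


left, right = Lens("left"), Lens("right")
push_update([left, right], seq=1)
assert not out_of_sync([left, right])   # both links up: the lenses agree

right.connected = False                 # the right arm's link drops...
push_update([left, right], seq=2)
assert out_of_sync([left, right])       # ...and the two sides now disagree
```

A single shared link (or an explicit reconciliation step on reconnect) would avoid this hazard, which is presumably the complexity Even Realities traded away.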
    Sustainability

    By cutting down on the volume of the product, Even Realities also helps cut down on waste material, especially plastics. The G1 uses more metal than plastic, not only delivering a premium design but also favoring more readily recyclable materials. The company is particularly proud of its packaging as well, which uses 100% recyclable, eco-friendly cardboard.
    While magnesium and titanium alloys contribute to the product’s durability, the Even Realities G1 is not exactly what you’d consider weather-proof wearable tech. It has no formal IP rating, and the glasses are only said to be resistant to splashes and light rain. They can accompany you on your runs, sure, but you’ll have to treat them with care. Not that they’ll have much practical use during your workouts in the first place.
    Value

    Discreet, useful, and simple, the Even Realities G1 smart glasses proudly stand in opposition to the literal heavyweights of the smart eyewear market that are practically strapping a computer on your face. It offers an experience that focuses on the most important functions and information you’d want to have in front of your eyes and pushes unnecessary distractions out of your sight. Most importantly, however, it keeps the whole world clearly in view, allowing you to connect to your digital life without disconnecting you from the people around you.

    The Even Realities G1 would almost be perfect for this hyper-focused use case if not for its price tag. At $599, it’s easily one of the more expensive pairs of smart spectacles you’ll see on the market, and that’s only for the glasses themselves. Custom prescription lenses add another fee on top, not to mention the clip-on shades for those extra-bright days. Given its limited functionality, the G1 definitely feels a bit overpriced. But when you consider how lightweight, distraction-free, and useful it can be, it comes off more as an investment in the future.
    For family and friends looking for a meaningful tech gift this Father’s Day, the G1 offers something truly unique: a way to stay on top of work responsibilities while remaining fully present for family moments. Whether capturing quick thoughts during a child’s soccer game or discreetly checking calendar reminders during family dinner, these glasses help dads maintain that delicate balance between connectivity and presence.
    Verdict

    It’s hard to escape the overabundance of information that we deal with every day, both from the world around us, as well as our own stash of notes and to-do lists. Unfortunately, the tools that we always have with us, our smartphones, computers, and smartwatches, are poor guardians against this flood. And now smart glasses are coming, promising access to all of that and threatening to further drown us with information we don’t really need.

    The Even Realities G1 is both a breath of fresh air and a bold statement against that trend. Not only is it lightweight and comfortable, but it even looks like normal glasses! Rather than throw in everything and the kitchen sink, its design and functionality are completely intentional, focusing only on essential experiences and features to keep you productive. It’s not trying to turn you into Tony Stark, but it will help you feel like a superhero as you breeze through your tasks while still being present for the people who matter most in your life.

    For the dad who wants to stay connected without being distracted, who needs to manage information without being overwhelmed by it, the Even Realities G1 might just be the perfect Father’s Day gift: a tool that helps him be both the professional he needs to be and the father he wants to be, all without missing a moment of what truly matters.
    Click Here to Buy Now: $599. Exclusive Father’s Day Special – Get 50% Off the G1 Clip + Clip Pouch! Hurry, offer ends June 15, 2025.
    The post Even Realities G1 Glasses Review: Smart, Subtle, and Perfect for Father’s Day first appeared on Yanko Design.
    PROS:
    - Discreet, elegant, and unobtrusive design that doesn’t scream “tech”
    - Lightweight and comfortable premium frame
    - Focuses on essential experiences without the unnecessary cruft
    - Impressive transcription and teleprompter features
    - Long battery life and effortless charging case design
    CONS:
    - No speakers for calls or audio feedback, especially during navigation
    - Temple-tip touch controls can be a bit cumbersome
    - A bit expensive
    RATINGS: Aesthetics, Ergonomics, Performance, Sustainability / Repairability, Value for Money
    EDITOR’S QUOTE: With a simple design and useful features, the Even Realities G1 smart glasses prove that you don’t need all the bells and whistles to provide a compelling experience.
    Designer: Even Realities
    WWW.YANKODESIGN.COM
    Even Realities G1 Glasses Review: Smart, Subtle, and Perfect for Father’s Day
    PROS: Discreet, elegant, and unobtrusive design that doesn't scream "tech" Lightweight and comfortable premium frame Focuses on essential experiences without the unnecessary cruft Impressive transcription and teleprompter features Long battery life and effortless charging case design CONS: No speakers for calls or audio feedback (especially during navigation) Temple tips touch controls can be a bit cumbersome A bit expensive RATINGS: AESTHETICSERGONOMICSPERFORMANCESUSTAINABILITY / REPAIRABILITYVALUE FOR MONEYEDITOR'S QUOTE:With a simple design and useful features, the Even Realities G1 smart glasses prove that you don't need all the bells and whistles to provide an experience. Every day, we’re flooded with more information than our already overworked minds can handle. Our smartphones and computers put all this information at our fingertips, connecting us to the rest of the world while ironically disconnecting us from the people around us. Smart glasses and XR headsets promise to bring all this information right in front of us, bridging the gap that divides physical and virtual realities. And yet at the same time, they erect a wall that separates us from the here and now. It’s against this backdrop that Even Realities chose to take a bold step in the opposite direction. In both form and function, the Even Realities G1 smart glasses cut down on the cruft and promise a distilled experience that focuses only on what you really need to get through a busy day. More importantly, it delivers it in a minimalist design that doesn’t get in your way. Or at least that’s the spiel. Just in time for the upcoming Father’s Day celebration, we got to test what the Even Realities G1 has to offer, especially to some of the busiest people in our families: the dads juggling work responsibilities while trying to stay present for their loved ones. Designer: Even Realities Click Here to Buy Now: $599. Exclusive Father’s Day Special – Get 50% Off the G1 Clip + Clip Pouch! 
Hurry, offer ends June 15, 2025. Aesthetics You probably wouldn’t even be able to tell the Even Realities G1 is wearable tech if you meet someone on the street wearing a pair. Sure, they might look like slightly retro Pantos, but they’re a far cry from even the slimmest XR glasses from the likes of Xreal or Viture. You can clearly see the eyes of the person wearing them, and the tech is practically invisible, which is exactly the point. The design of the Even Realities G1 is on the plain and minimal side, a stark contrast to the majority of smart glasses and XR/AR headsets currently in the market, even those claiming to be fashionable and stylish. Sure, it’s not going to compete with high-end luxury spectacles, but they’re not entirely off the mark either. Unless you look really closely, you might simply presume them to be a pair of thick-framed glasses. The form of the glasses might be simple, but their construction is anything but. The frame is made from magnesium alloy with a coating that’s fused with sandstone, while the temples use a titanium alloy on the outer sides and soft silicone on the inner surfaces. The mixture of quality materials not only gives the Even Realities G1 a premium character but also a lightweight form that’s only ever so slightly heavier than your run-of-the-mill prescription eyeglasses. While the G1 most looks like normal eyewear, the temple tips are dead giveaways that things are not what they seem. The blocky, paddle-shaped tips that house batteries and electronics are definitely larger than what you’d find on most glasses. They’re not obnoxiously big, but they do tend to stick out a bit, and they’re hard to “unsee” once you’ve noticed their presence. Despite looking quite aesthetic, the Even Realities G1 isn’t pretending to be some posh fashion accessory. After all, the circular G1A and rectangular G1B options hardly cover all possible eyewear designs, and the limited color selection won’t suit everyone’s tastes. 
Rather than something you flaunt or call attention to, these smart glasses are designed to be an “everyday wear” and disappear into the background, making tech invisible without making it unusable, perfect for the dad who wants to stay connected without looking like he’s wearing a gadget at the family barbecue. Ergonomics If you’ve ever tried any of those hi-tech wearables promising the next wave of computing, then you’d probably know that you’d never wear any of those glasses or visors for more than just an hour or two every day. They may have impressive technologies and apps, but they become practically useless once you take them off, especially when you have to step out into the real world. In contrast, the Even Realities G1 is something you’d be able to wear for hours on end, indoors or outdoors. Made from lightweight materials with a construction that even throws away screws to reduce the heft, it’s almost mind-blowing to think that the glasses houses any electronics at all. This level of comfort is honestly the G1’s most important asset, because it allows people to experience its smart features far longer than any Quest or Viture. When it comes to eyewear, however, prescription lenses have always been a sore point for many consumers, and this is no exception. Because it integrates waveguide optics into the lens, you’ll have to pay extra to have customized prescription lenses when you buy an Even Realities G1. It can be a bit nerve-wracking to ensure you get all the measurements and figures right, especially since you can’t return or exchange glasses with customized lenses. While the G1 eyeglasses are definitely comfortable to wear, the same can’t exactly be said when it comes to manually interacting with them. While most smart glasses and headsets have controls near your temples, the G1’s touch-sensitive areas are at the temple tips, which would be sitting behind your ears when you’re wearing the glasses. 
They might feel awkward to reach, and those with long hairstyles might find it difficult to use. Fortunately, you will rarely touch those tips except to activate some functions, but it can still be an unsatisfactory experience when you do. Performance The Even Realities G1 takes a brilliantly focused approach to smart eyewear, prioritizing elegant design and practical functionality over unnecessary tech bloat. The 640×200 green monochrome display may seem modest, but it’s deliberate choice that enables the G1 to maintain a sleek, stylish profile. The absence of cameras and speakers isn’t a limitation but a thoughtful design decision that enhances both wearability and privacy, allowing users to seamlessly integrate this technology into their daily lives without social awkwardness. The magic of the G1 lies in its delivery of information directly to your field of vision in a way that not only delights but also transforms how you interact with digital content. The core Even Realities G1 experience revolves around bringing only critical information to your attention and keeping distractions away, all without disconnecting you from reality and the people around you. Its text-centric interface, displayed by two micro-LED displays, one on each lens, ensures that information is distilled down to its most essential. And there’s no denying the retro charm of a green dot-matrix screen in front of your eyes, even if the color won’t work well against light or bright objects. The Even Realities G1 experience starts with the dashboard, which you can summon just by tilting your head up a bit, an angle that you can set on the companion mobile app. One side shows the date and time, temperature, number of notifications, and your next appointment. The other side can be configured to show one of your saved quick notes, news, stocks, or even your current location. None of these items are interactive, and you’ll have to dive into the mobile app to actually get any further information. 
With Father's Day approaching, it's worth noting how the G1's floating heads-up display, visible only to the wearer, helps dads stay effortlessly connected, organized, and present. The QuickNote and Calendar features are particularly valuable for fathers juggling work and family responsibilities, allowing them to work through their to-do lists on schedule without missing a beat of family time. Spending quality time with your child, then suddenly remembering you need to buy batteries on your next errand run? No more frantically scampering for pen and paper, or even your phone; just tap and speak. Of course, the smart glasses really shine when it comes to the, well, smart functionality, most of which unsurprisingly revolves around words, both spoken and displayed. Transcription, used when making Quick Notes, records your voice and saves it alongside the transcribed text. Fathers who find themselves in never-ending meetings no longer need to worry about missing a beat. Not only do they get to keep notes, but they also receive a summary and recap thanks to the G1's AI capabilities, a game-changer for busy dads who need to process information efficiently. Translation can make international trips quite fun, at least for some interactions, as you'll see actual translated captions floating in the air like subtitles on a video. Dads who give a lot of talks, business presentations, interviews, or broadcast videos will definitely love the Teleprompter feature, which advances the script based on the words you're speaking. No more worrying about missing important points during that big presentation, leaving more mental bandwidth for what really matters. It's also perfect for a captivating Career Day show that will do your kid proud. The accuracy of Even Realities' speech recognition and AI is fairly good, though at times it requires a bit of patience and understanding.
There's a noticeable delay when translating speech in real time, for example, and it might miss words if the person is speaking too quickly. Navigation can be hit or miss depending on your location, and the visual direction prompts are not always reliable. Navigation is also where the absence of built-in speakers feels most pronounced: there's no audio feedback, which would be useful for guided turn-by-turn directions. Even AI can hear you, but it can't talk back to you. Everything is delivered only through text you have to read, which isn't always practical. Admittedly, adding such hardware, no matter how small, would also add weight to the glasses, so Even Realities chose its battles wisely. The Even Realities G1 is advertised to last 1.5 days, and in practice it does last more than a day. The stylish wireless charging case, which has a built-in 2,000mAh battery, extends that uptime to five days. Charging the glasses is as simple as putting them inside the case, with no contact points to align, as long as you remember to fold the left arm before the right. Oddly enough, there's no battery level indicator on the glasses, not even in the dashboard HUD. Even Realities focused on making the G1 simple, both in design and in operation, sometimes to the point of oversimplification. To reduce complexity, for example, each side of the glasses connects to a smartphone separately via Bluetooth, which unfortunately increases the risk of the two sides falling out of sync if either connection drops. Turning the glasses into shades is a simple matter of slapping on clip-on shades, which are not only an additional expense but also something you could easily lose.

Sustainability

By cutting down on the volume of the product, Even Realities also helps cut down on waste material, especially plastics.
The G1 uses more metal than plastic, not only delivering a premium design but also favoring more recyclable materials. The company is particularly proud of its packaging as well, which uses 100% recyclable, eco-friendly cardboard. While magnesium and titanium alloys contribute to the product's durability, the Even Realities G1 is not exactly a weather-proof piece of wearable tech. It has no formal IP rating, and the glasses are only said to be resistant to splashes and light rain. They can accompany you on your runs, sure, but you'll have to treat them with care. Not that they'll have much practical use during your workouts in the first place.

Value

Discreet, useful, and simple, the Even Realities G1 smart glasses proudly stand in opposition to the literal heavyweights of the smart eyewear market, which practically strap a computer to your face. The G1 offers an experience that focuses on the most important functions and information you'd want in front of your eyes and pushes unnecessary distractions out of sight. Most importantly, it keeps the whole world clearly in view, letting you connect to your digital life without disconnecting from the people around you. The Even Realities G1 would be nearly perfect for this hyper-focused use case if not for its price tag. At $599, it's easily one of the more expensive pairs of smart spectacles on the market, and that's for the glasses alone. Custom prescription lenses add another $150 on top, not to mention the $50 (normally $100) clip-on shades for those extra-bright days. Given its limited functionality, the G1 definitely feels a bit overpriced. But when you consider how lightweight, distraction-free, and useful it can be, it comes off more as an investment in the future.
For family and friends looking for a meaningful tech gift this Father's Day, the G1 offers something truly unique: a way to stay on top of work responsibilities while remaining fully present for family moments. Whether capturing quick thoughts during a child's soccer game or discreetly checking calendar reminders during family dinner, these glasses help dads maintain that delicate balance between connectivity and presence.

Verdict

It's hard to escape the overabundance of information we deal with every day, both from the world around us and from our own stash of notes and to-do lists. Unfortunately, the tools we always have with us, our smartphones, computers, and smartwatches, are poor guardians against this flood. And now smart glasses are coming, promising access to all of that and threatening to drown us further in information we don't really need. The Even Realities G1 is both a breath of fresh air and a bold statement against that trend. Not only is it lightweight and comfortable, it even looks like normal glasses! Rather than throwing in everything and the kitchen sink, its design and functionality are completely intentional, focusing only on essential experiences and features to keep you productive. It's not trying to turn you into Tony Stark, but it will help you feel like a superhero as you breeze through your tasks while still being present for the people who matter most in your life. For the dad who wants to stay connected without being distracted, who needs to manage information without being overwhelmed by it, the Even Realities G1 might just be the perfect Father's Day gift: a tool that helps him be both the professional he needs to be and the father he wants to be, all without missing a moment of what truly matters. Click Here to Buy Now: $599. Exclusive Father's Day Special: Get 50% Off the G1 Clip + Clip Pouch!
Hurry, offer ends June 15, 2025.

The post Even Realities G1 Glasses Review: Smart, Subtle, and Perfect for Father's Day first appeared on Yanko Design.