• Meet Martha Swope, the Legendary Broadway Photographer Who Captured Iconic Moments From Hundreds of Productions and Rehearsals

    She spent nearly 40 years taking theater and dance pictures, providing glimpses behind the scenes and creating images that the public couldn’t otherwise access

    Stephanie Rudig

    - Freelance Writer

    June 11, 2025

    Photographer Martha Swope sitting on a floor covered with prints of her photos in 1987
    Andrea Legge / © NYPL

    Martha Swope wanted to be a dancer. She moved from her home state of Texas to New York to attend the School of American Ballet, hoping to start a career in dance. Swope also happened to be an amateur photographer. So, in 1957, a fellow classmate invited her to bring her camera and document rehearsals for a little theater show he was working on. The classmate was director and choreographer Jerome Robbins, and the show was West Side Story.
    One of those rehearsal shots ended up in Life magazine, and Swope quickly started getting professional bookings. It’s notoriously tough to make it on Broadway, but through photography, Swope carved out a career capturing theater and dance. Over the course of nearly four decades, she photographed hundreds more rehearsals, productions and promotional studio shots.

    Unidentified male chorus members dancing during rehearsals for musical West Side Story in 1957

    Martha Swope / © NYPL

    At a time when live performances were not often or easily captured, Swope’s photographs caught animated moments and distilled the essence of a show into a single image: André De Shields clad in a jumpsuit as the title character in The Wiz, Patti LuPone with her arms raised overhead in Evita, the cast of Cats leaping in feline formations, a close-up of a forlorn Sheryl Lee Ralph in Dreamgirls and the row of dancers obscuring their faces with their headshots in A Chorus Line. She was also the house photographer for the New York City Ballet and the Martha Graham Dance Company and photographed other major dance companies, such as the Ailey School.
    Her vision of the stage became fairly ubiquitous, with Playbill reporting that in the late 1970s, two-thirds of Broadway productions were photographed by Swope, meaning her work dominated theater and dance coverage. Carol Rosegg was early in her photography career when she heard that Swope was looking for an assistant. “I didn't frankly even know who she was,” Rosegg says. “Then the press agent who told me said, ‘Pick up any New York Times and you’ll find out.’”
    Swope’s background as a dancer likely equipped her to press the shutter at the exact right moment to capture movement, and to know when everyone on stage was precisely posed. She taught herself photography and early on used a Brownie camera, a simple box model made by Kodak. “She was what she described as ‘a dancer with a Brownie,’” says Barbara Stratyner, a historian of the performing arts who curated exhibitions of Swope’s work at the New York Public Library.

    An ensemble of dancers in rehearsal for the stage production Cats in 1982

    Martha Swope / © NYPL

    “Dance was her first love,” Rosegg says. “She knew everything about dance. She would never use a photo of a dancer whose foot was wrong; the feet had to be perfect.”
    According to Rosegg, once the photo subjects knew Swope was shooting, “the anxiety level came down a little bit.” They knew that they’d look good in the resulting photos, and they likely trusted her intuition as a fellow dancer. Swope moved with the bearing of a dancer and often stood with her feet in ballet’s fourth position while she shot. She continued to take dance classes throughout her life, including at the prestigious Martha Graham School. Stratyner says, “As Graham got older, [Swope] was, I think, the only person who was allowed to photograph rehearsals, because Graham didn’t want rehearsals shown.”
    Photographic technology and the theater and dance landscapes evolved greatly over the course of Swope’s career. Rosegg points out that at the start of her own career, cameras didn’t even automatically advance the film after each shot. She explains the delicate nature of working with film, saying, “When you were shooting film, you actually had to compose, because you had 35 shots and then you had to change your film.” Swope also worked through the transition from all black-and-white photography to a mixture of black-and-white and color. Rosegg notes that the two would shoot simultaneously, with Swope shooting black-and-white and Rosegg herself shooting color. Swope’s portfolio also traces increasingly crisp photo production: advances in photography made it easier to shoot in the dark or capture subjects under blinding stage lights, and they allowed for better zooming in from afar.

    Martha Graham rehearses dancer Takako Asakawa and others in Heretic, a dance work choreographed by Graham, in 1986

    Martha Swope / © NYPL

    It’s much more common nowadays to get a look behind the curtain of theater productions via social media. “The theater photographers of today need to supply so much content,” Rosegg says. “We didn’t have any of that, and getting to go backstage was kind of a big deal.”
    Photographers coming to document a rehearsal once might have been seen as an intrusion, but now, as Rosegg puts it, “everybody is desperate for you to come, and if you’re not there, they’re shooting it on their iPhone.”
    Even with exclusive behind-the-scenes access to the hottest tickets in town and the biggest stars of the day, Swope remained unpretentious. She lived and worked in a brownstone with her apartment above her studio, where the film was developed in a closet and the bathroom served as a darkroom. Rosegg recalls that a phone sat in the darkroom so they could be reached while printing, and she would be amazed at the big-name producers and theater glitterati who rang in while she was making prints in an unventilated space.

    From left to right: Paul Winfield, Ruby Dee, Marsha Jackson and Denzel Washington in the stage production Checkmates in 1988

    Martha Swope / © NYPL

    Swope’s approachability extended to how she chose to preserve her work. She originally sold her body of work to Time Life, and, according to Stratyner, she was unhappy with the way the photos became relatively inaccessible. She took back the rights to her collection and donated it to the New York Public Library, where many photos can be accessed by researchers in person, and the entire array of photos is available online to the public in the Digital Collections. Searching “Martha Swope” yields over 50,000 items from more than 800 productions, featuring a huge variety of figures, from a white-suited John Travolta busting a disco move in Saturday Night Fever to Andrew Lloyd Webber with Nancy Reagan at a performance of Phantom of the Opera.
    Swope’s extensive career was recognized in 2004 with a special Tony Award, the Tony Honors for Excellence in Theater, given intermittently to notable figures who operate outside of traditional awards categories. She also received a lifetime achievement award from the League of Professional Theater Women in 2007. Though she retired in 1994 and died in 2017, her work still reverberates through dance and Broadway history today. For decades, she captured fleeting moments of theater that would otherwise never have been seen by the public. And her passion was clear and straightforward. As she once told an interviewer: “I’m not interested in what’s going on on my side of the camera. I’m interested in what’s happening on the other side.”

  • Climate Change Is Ruining Cheese, Scientists and Farmers Warn

    Climate change is making everything worse — including, apparently, threatening the dairy that makes our precious cheese.

    In interviews with Science News, veterinary researchers and dairy farmers alike warned that changes to the climate that affect cows impact not only the nutritional value of the cheeses produced from their milk, but also their color, texture and even taste.

    Researchers from the Université Clermont Auvergne, located in the mountainous central France region that produces a delicious firm cheese known as Cantal, explained in a new paper for the Journal of Dairy Science that grass shortages caused by climate change can greatly affect how cows' milk, and the subsequent cheese created from it, tastes.

    At regular intervals throughout a five-month testing period in 2021, the scientists sampled milk from two groups of cows, each containing 20 cows from two different breeds, that were either allowed to graze on grass as normal or allowed to graze only part-time while being fed a supplemental diet that featured corn and other concentrated foods.

    As the researchers found, the corn-fed cohort consistently produced the same amount of milk and less methane than their grass-fed counterparts — but the resulting milk products tasted less savory and rich than those from the grass-fed bovines. Moreover, the milk from the grass-fed cows contained more omega-3 fatty acids, which are good for the heart, and lactic acids, which act as probiotics.

    "Farmers are looking for feed with better yields than grass or that are more resilient to droughts," explained Matthieu Bouchon, the fittingly named lead author of the study.

    Still, those same farmers want to know how supplementing their cows' feed will change the nutritional value and taste, Bouchon said — and one farmer who spoke to Science News affirmed that, anecdotally, this effect is bearing out in other parts of the world, too.

    "We were having lots of problems with milk protein and fat content due to the heat," Gustavo Abijaodi, a dairy farmer in Brazil, told the website. "If we can stabilize heat effects, the cattle will respond with better and more nutritious milk."

    The heat also seems to be affecting the way cows eat and behave.

    "Cows produce heat to digest food — so if they are already feeling hot, they’ll eat less to lower their temperature," noted Marina Danes, a dairy scientist at Brazil's Federal University of Lavras. "This process spirals into immunosuppression, leaving the animal vulnerable to disease."

    Whether it's the food quality or the heat affecting the cows, the effects are palpable — or, in this case, edible.

    "If climate change progresses the way it’s going, we’ll feel it in our cheese," remarked Bouchon, the French researcher.

    More on cattle science: Brazilian "Supercows" Reportedly Close to Achieving World Domination
  • Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal Fans

    Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal Fans
    June 4, 2025 (Last Updated: 2025-06-04)
    Cooler Master is doing some really interesting stuff with its new cases.

    The Highlights
    - Cooler Master’s upcoming MF600, MF500, and MF400 reconfigurable frame cases are assembled from columns and corners
    - The company also showed off interesting stone facade case front panels
    - Cooler Master is working on a “GPU” with Asus

    Grab a GN Tear-Down Toolkit to support our AD-FREE reviews and IN-DEPTH testing while also getting a high-quality, highly portable 10-piece toolkit that was custom designed for use with video cards for repasting and water block installation. Includes a portable roll bag, hook hangers for pegboards, a storage compartment, and instructional GPU disassembly cards.

    Intro
    We visited Cooler Master’s booth at Computex 2025, where the company showed off several new cases. Arguably the most interesting one is a modular case. It comes with, we believe, 8 corners and 12 columns.

    Editor's note: This was originally published on May 20, 2025, as a video. This content has been adapted to written format for this article and is unchanged from the original publication.

    Credits
    Host: Steve Burke
    Camera, Video Editing: Mike Gaglione, Vitalii Makhnovets
    Writing, Web Editing: Jimmy Thang

    Cooler Master MF Cases
    The case comes with a front panel that has a dust filter in it. With it all assembled, it looks like the cases in the image above. The cases are the MF series, with the largest one being the MF600, which we assume translates to “Motherf***ing 600.” There’s also the MF500 and the smaller MF400. Initially, Cooler Master is basically going to be selling pre-configured models. Eventually, the company wants to allow people to customize the case on its site and have it assembled and shipped from around the City of Industry. It’s pretty cool, as it’s a fully modular approach.

    The side panels are secured to the case via magnets, which is actually a nice touch. Internally, the MF600 we saw came with 3x140mm fans on the front and 1x120mm fan on the back. The motherboard tray is pretty standard for the most part. Exceptions include a rail system that provides numerous holes for screws to go in, which allows Cooler Master to reconfigure things. Inside the case towards the back, there’s also a rail system, which forms bits and pieces of the motherboard tray that allow for more customizability.

    Cooler Master has been kind of on-and-off in the DIY space over the years, with some really big wins and some really big losses. The company was also kind of absent for a while, but these MF cases are its best showing in quite some time. According to Cooler Master, a pre-configured MF600 is supposed to cost $200. We expect to test and review the case. The MF500 is supposed to go for $165 and includes 2x200mm fans in the front and 1x120mm fan in the back. The smallest MF case, the MF400, which is a very large micro ATX box, is going for $150. In terms of fans, it has 2x120mm ones at the bottom coupled with a 1x120mm fan in the rear.

    Cooler Master also showed off different panel types it’s experimenting with, including a facade-style stone front. One of the pre-built MF cases we looked at had everything flipped around in an inverted layout. One benefit of the rail system is that a bar can be screwed in to support the GPU. Looking into this system, you can see that the PSU is at the bottom next to a bottom intake fan.
    Updated Cosmos
    Visit our Patreon page to contribute a few dollars toward this website's operation (or consider a direct donation or buying something from our GN Store!). Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.

    Cooler Master’s updated Cosmos has the NVIDIA-like DGX-style front. We also saw a variant of the Cosmos with thermal baffles in it. We have some criticisms of its execution, but overall, it’s an interesting idea.

    The way the baffles are designed, Cooler Master is trying to bring air straight in through dedicated channels. There’s a channel for the CPU that exposes the fin stack and Cooler Master's V8 CPU cooler. It conveys an idea similar to an engine cover. The GPU has a separate baffle beneath the CPU one. The company is trying to isolate airflow. In theory, this should work well, and we would love to see this idea applied to more affordable cases, like the MF series, especially since they’re already somewhat configurable. Looking at the back, fans can be mounted on the rear, which can help pull air out.

    We also saw another variant of the Cosmos case running liquid cooling with a distro block. It was coupled with 4x180mm fans and a “720” radiator, which pulled air into the case. Unfortunately, the air blows straight into the wall of the motherboard tray, but Cooler Master says the plan is to pull the air up and out of the case with additional 180mm fans on the top and to move the PSU towards the bottom of the case.

    Looking closer at the front of the special edition of the Cosmos cases, we can see the NVIDIA DGX shroud, which Cooler Master manufactures. It’s essentially a sponge-like mesh. The special edition of the Cosmos doesn’t have a price yet, but the non-special edition variant is supposed to be around $400, which is before any potential tariffs.

    Cooler Master Coolers
    Cooler Master showed off some CPU air coolers with 3D heat pipes, which have additional heat pipes protruding from the center. The company also showed off its V8 cooler and a full-metal fan whose blades and frame are both aluminum.

    Cooler Master Elite Series Cases
    Cooler Master does some really cool sh*t but has a branding problem. For instance, the company’s “Elite” series cases, shown in the image above, are actually budget cases. From left to right, we believe they are called the Elite 482 ($50), Elite 600 ($65), Elite 490 Wood ($50), Elite 691 Wood ($60), Elite 693 ($60), Elite 692 ($70), Elite 302 ($40), and Elite 502 ($60). Our advice to Cooler Master here is to unf*ck these names.

    Most of the Elite series cases don’t come with fans, with the exception of the Elite 302 and Elite 502, which come with 3 ARGB fans.

    MF360
    Next up are Cooler Master’s MF360 cases, a name meant to convey that you can see inside the case from all sides. While it’s going to have some thermal challenges, to give the company credit, it’s actually really good looking. The MF360 is a showcase, fish-tank-style PC that you can see through from both sides. Inside the case, we saw a distro block and tubes routed through on both sides.

    Cooling X
    The case in the image above, which goes by “Cooling X,” uses the company’s new MF frame system. If you look at the corner, you can see the individual columns. At Computex, we saw it as a pre-built system.

    The top of the case has a magnetically attached panel, which just pulls right off. The panel itself provides really good porosity, and the material is pretty nice. Removing the top panel exposes 2 offset fans. The back fan tries to pull in air, while the front fan tries to exhaust air out of the top, which is why they’re offset. That’s kind of cool to see.
    Cooler Master Fans
    Cooler Master showed off all-aluminum fans, with both the blades and the frame made of aluminum. The MF120 XT is a 120mm model, is supposed to be $35, and the company says it goes up to 4,000 RPM. The fan’s RPM can also be button-controlled via an external remote, and it uses a dual-ball-bearing solution.

    Cooler Master’s mixed fans, which use plastic blades coupled with an aluminum frame, come with fluid dynamic bearings (FDBs). The clearance between the fan blade tip and the frame is important: the smaller that clearance is, the better the performance you get. The major downside is that as the fan ages, it can start to clip the interior of the frame. Having the clearance too tight can also negatively impact yields. The solution to this is LCP, which is incredibly expensive, or metal, because it doesn’t deform, but that’s also expensive. Right now, Cooler Master says it’s about a 0.8mm distance, which is pretty good. The company is targeting 0.6mm by the time the fan launches.

    Cooler Master Video Card Shroud
    Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work! (Or consider a direct donation or a Patreon contribution!)

    Cooler Master also showed off some video cards, which is not something the company is typically involved with. Cooler Master created a GPU shroud with adjustable slats that can accommodate 15-30mm fans. This solution is geared towards pre-built PCs and isn’t planned to be sold separately.

    Examining one of the fans, we saw a standard 25mm-thick fan, which Cooler Master’s GPU shroud solution can adjust to via different notch options.

    Cooler Master is also using a vapor chamber, which is supported by 8x8mm heat pipes running through the shroud and a gigantic fin stack. In total, it weighs almost 7 pounds (3.2 kilograms).

    Cooler Master claims that, in terms of cooling, it performs similarly to the 4-fan Astral solution at lower noise levels, but we don’t have those numbers. With 4,000 RPM fans running on a 600-watt heat load, Cooler Master claims a 5090 will run at about 49 degrees C or so for the GPU.
    #stone #case #cooler #master #gpu
    Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal Fans
    Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal FansJune 4, 2025Last Updated: 2025-06-04Cooler Master is doing some really interesting stuff with its new casesThe HighlightsCooler Master’s upcoming MF600, MF500, and MF400 reconfigurable frame cases are assembled from columns and cornersThe company also showed off interesting stone facade case front panelsCooler Master is working on a “GPU” with AsusTable of ContentsAutoTOC Grab a GN Tear-Down Toolkit to support our AD-FREE reviews and IN-DEPTH testing while also getting a high-quality, highly portable 10-piece toolkit that was custom designed for use with video cards for repasting and water block installation. Includes a portable roll bag, hook hangers for pegboards, a storage compartment, and instructional GPU disassembly cards.IntroWe visited Cooler Master’s booth at Computex 2025 where the company showed off several new cases. Arguably the most interesting one is a modular case. It comes with, we believe, 8 corners and 12 columns.Editor's note: This was originally published on May 20, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.CreditsHostSteve BurkeCamera, Video EditingMike GaglioneVitalii MakhnovetsWriting, Web EditingJimmy ThangCooler Master MF CasesThe case comes with a front panel that has a dust filter in it. With it all assembled, it looks like the cases in the image above. The cases are the MF series, with the largest one being the MF600, which we assume translates to “Motherf***ing 600.” There’s also the MF500 and the smaller MF400. Initially, Cooler Master is basically going to be selling pre-configured models. Eventually, the company wants to allow people to customize the case on their site and have it assembled and shipped from around the City of Industry. It’s pretty cool as it’s a fully modular approach.The side panels are secured to the case via magnets, which is actually a nice touch. Internally, the MF600 we saw came with 3x140mm fans on the front and 1x120mm fan on the back. The motherboard tray is pretty standard for the most part. Exceptions include a rail system that provides numerous holes for screws to go in, which allows Cooler Master to reconfigure things. Inside the case towards the back, there’s also a rail system, which forms bits and pieces of the motherboard tray that allow for more customizability. Cooler Master has been kind of on-and-off in the DIY space over the years where they’ve had some really big wins and some really big losses. They were also kind of absent for a while, but these MF cases represent a better showing from what we’ve seen in a while from the company. According to Cooler Master, a pre-configured MF600 is supposed to cost We expect to test and review the case. The MF500 is supposed to go for and includes 2x200mm fans in the front and 1x120mm fan in the back. The smallest MF case, the MF600, which is a very large micro ATX box, is going for In terms of fans, it has 2x120mm ones at the bottom coupled with a 1x120mm fan in the rear.  Cooler Master also showed off different panel types they’re experimenting with. One of them included a facade-style stone. One of the pre-built MF cases we looked at had stuff flipped around in an inverted layout. One of the benefits of its rail system allows the case to have a bar that screws in which can support the GPU. Looking into this system, you can see that the PSU is at the bottom next to a bottom intake fan. 
Updated Cosmos Visit our Patreon page to contribute a few dollars toward this website's operationAdditionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.Cooler Master’s updated Cosmos has the NVIDIA-like DGX style front. We also saw a variant of the Cosmos with thermal baffles in it. We have some criticisms of its execution, but overall, it’s an interesting idea.  The way the baffles are designed, Cooler Master is trying to bring air straight in through its channels. There’s a channel for the CPU that exposes the fin stack and Cooler Master's V8 CPU cooler. It conveys an idea similar to an engine cover. The GPU has a separate baffle beneath the CPU one. The company is trying to isolate air flow. In theory, this should work well and we would love this idea applied to more affordable cases, like the MF series, especially since they’re already kind of configurable. Looking at the back, fans can be mounted on the rear, which can help pull air out. We also saw another variant of the Cosmos case running liquid cooling with a distro block. It was coupled with 4x180mm fans and a “720” radiator, which pulled air into the case. Unfortunately, the air is blowing straight into the wall of a motherboard tray, but Cooler Master says the plan is to pull the air up and out of the case with additional 180mm fans on the top and to move the PSU towards the bottom of the case. Looking closer at the front of the special edition of the Cosmos cases, we can see the NVIDIA DGX shroud, which Cooler Master manufactures. It’s essentially like a sponge-like mesh. The special edition of the Cosmos doesn’t have a price yet, but the non-special edition variant is supposed to be around which is before any potential tariffs. Cooler Master CoolersCooler Master showed off some CPU air coolers that had some 3D heat pipes, which had more heat pipes protruding from the center. The company also showed off its V8 cooler and a full-metal fan. The fan’s blades and frame are both aluminum.  Cooler Master Elite Series CasesCooler Master does some really cool sh*t but has a branding problem. For instance, the company’s “Elite” series cases, shown in the image above, are actually budget cases. From left to right, we believe they are called the Elite 482, Elite 600, Elite 490 Wood, Elite 691 Wood, Elite 693, Elite 692, Elite 302, and Elite 502. Our advice to Cooler Master here is for them to unf*ck these names.Most of the Elite series cases don’t come with fans with the exception of the Elite 302 and Elite 502, which come with 3 ARGB fans. MF360Next up are Cooler Master’s MF360 cases, which conveys that you can see inside the case from all sides. While it’s going to have some thermal challenges, to give the company credit, it’s actually really good looking. The MF360 is a showcase fish-tank style PC that you can see through from both sides. Inside the case, we saw a distro block and tubes routed through on both sides.Cooling XThe case in the image above, which goes by "Cooling X,” and uses the company’s new MF frame system. If you look at the corner, you can see the individual columns. At Computex, we saw it as a pre-built system.The top of the case has a magnetically attached panel, which just pulls right off. The panel itself provides really good porosity and the material is pretty nice. Removing the top panel exposes 2 offset fans. The back fan tries to pull in air with the front fan trying to exhaust air out of the top, which is why they’re offset. That’s kind of cool to see. 
 Cooler Master FansCooler Master showed off all-aluminum fans, which include the blades and frame. The MF120 XT is a 120mm model, is supposed to be and the company says it goes up to 4,000 RPM. The fan’s RPM can also be button-controlled via an external remote and it uses a dual-ball-bearing solution. Cooler Master’s mixed fans, which use plastic blades coupled with an aluminum frame, come with fluid dynamic bearings. The clearance between the fan blade tip and the frame is important as the smaller that clearance is, the better performance you get. The major downside is that as the fan ages, it can start to clip the interior of the frame. Having it too close can also negatively impact yields. The solution to this is LCP, which is incredibly expensive, or metal, because it doesn’t deform, but that’s also expensive. Right now, Cooler Master says it’s about a .8mm distance, which is pretty good. The company is targeting 0.6mm by the time the fan launches. Cooler Master Video Card Shroud Grab a GN15 Large Anti-Static Modmat to celebrate our 15th Anniversary and for a high-quality PC building work surface. The Modmat features useful PC building diagrams and is anti-static conductive. Purchases directly fund our work!Cooler Master also showed off some video cards, which is not something the company is typically involved with. Cooler Master created a GPU shroud with adjustable slats that can accommodate 15-30mm fans. This solution is geared towards pre-built PCs and isn’t planned to be sold separately.Examining one of the fans, we saw a standard 25mm-thick fan, which Cooler Master’s GPU shroud solution can adjust to via different notch options.Cooler Master is also using a vapor chamber, which is supported by 8x8mm heat pipes running through the shroud and a gigantic fin stack. In total, it weighs almost 7 pounds.Cooler Master claims that, in terms of cooling, it performs similar to the 4-fan Astral solution at lower noise levels, but we don’t have those numbers. With 4,000 RPM fans running on a 600-watt heat load, Cooler Master claims a 5090 will run at about 49 degrees C or so for the GPU. #stone #case #cooler #master #gpu
    GAMERSNEXUS.NET
    Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal Fans
    Stone PC Case, Cooler Master GPU, DIY Case from Scratch, and Metal FansJune 4, 2025Last Updated: 2025-06-04Cooler Master is doing some really interesting stuff with its new casesThe HighlightsCooler Master’s upcoming MF600, MF500, and MF400 reconfigurable frame cases are assembled from columns and cornersThe company also showed off interesting stone facade case front panelsCooler Master is working on a “GPU” with AsusTable of ContentsAutoTOC Grab a GN Tear-Down Toolkit to support our AD-FREE reviews and IN-DEPTH testing while also getting a high-quality, highly portable 10-piece toolkit that was custom designed for use with video cards for repasting and water block installation. Includes a portable roll bag, hook hangers for pegboards, a storage compartment, and instructional GPU disassembly cards.IntroWe visited Cooler Master’s booth at Computex 2025 where the company showed off several new cases. Arguably the most interesting one is a modular case. It comes with, we believe, 8 corners and 12 columns.Editor's note: This was originally published on May 20, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.CreditsHostSteve BurkeCamera, Video EditingMike GaglioneVitalii MakhnovetsWriting, Web EditingJimmy ThangCooler Master MF CasesThe case comes with a front panel that has a dust filter in it. With it all assembled, it looks like the cases in the image above. The cases are the MF series, with the largest one being the MF600, which we assume translates to “Motherf***ing 600.” There’s also the MF500 and the smaller MF400. Initially, Cooler Master is basically going to be selling pre-configured models. Eventually, the company wants to allow people to customize the case on their site and have it assembled and shipped from around the City of Industry. It’s pretty cool as it’s a fully modular approach.The side panels are secured to the case via magnets, which is actually a nice touch. Internally, the MF600 we saw came with 3x140mm fans on the front and 1x120mm fan on the back. The motherboard tray is pretty standard for the most part. Exceptions include a rail system that provides numerous holes for screws to go in, which allows Cooler Master to reconfigure things. Inside the case towards the back, there’s also a rail system, which forms bits and pieces of the motherboard tray that allow for more customizability. Cooler Master has been kind of on-and-off in the DIY space over the years where they’ve had some really big wins and some really big losses. They were also kind of absent for a while, but these MF cases represent a better showing from what we’ve seen in a while from the company. According to Cooler Master, a pre-configured MF600 is supposed to cost $200. We expect to test and review the case. The MF500 is supposed to go for $165 and includes 2x200mm fans in the front and 1x120mm fan in the back. The smallest MF case, the MF600, which is a very large micro ATX box, is going for $150. In terms of fans, it has 2x120mm ones at the bottom coupled with a 1x120mm fan in the rear.  Cooler Master also showed off different panel types they’re experimenting with. One of them included a facade-style stone. One of the pre-built MF cases we looked at had stuff flipped around in an inverted layout. One of the benefits of its rail system allows the case to have a bar that screws in which can support the GPU. Looking into this system, you can see that the PSU is at the bottom next to a bottom intake fan. 
Updated Cosmos Visit our Patreon page to contribute a few dollars toward this website's operation (or consider a direct donation or buying something from our GN Store!) Additionally, when you purchase through links to retailers on our site, we may earn a small affiliate commission.Cooler Master’s updated Cosmos has the NVIDIA-like DGX style front. We also saw a variant of the Cosmos with thermal baffles in it. We have some criticisms of its execution, but overall, it’s an interesting idea.  The way the baffles are designed, Cooler Master is trying to bring air straight in through its channels. There’s a channel for the CPU that exposes the fin stack and Cooler Master's V8 CPU cooler. It conveys an idea similar to an engine cover. The GPU has a separate baffle beneath the CPU one. The company is trying to isolate air flow. In theory, this should work well and we would love this idea applied to more affordable cases, like the MF series, especially since they’re already kind of configurable. Looking at the back, fans can be mounted on the rear, which can help pull air out. We also saw another variant of the Cosmos case running liquid cooling with a distro block. It was coupled with 4x180mm fans and a “720” radiator, which pulled air into the case. Unfortunately, the air is blowing straight into the wall of a motherboard tray, but Cooler Master says the plan is to pull the air up and out of the case with additional 180mm fans on the top and to move the PSU towards the bottom of the case. Looking closer at the front of the special edition of the Cosmos cases, we can see the NVIDIA DGX shroud, which Cooler Master manufactures. It’s essentially like a sponge-like mesh. The special edition of the Cosmos doesn’t have a price yet, but the non-special edition variant is supposed to be around $400, which is before any potential tariffs. Cooler Master CoolersCooler Master showed off some CPU air coolers that had some 3D heat pipes, which had more heat pipes protruding from the center. The company also showed off its V8 cooler and a full-metal fan. The fan’s blades and frame are both aluminum.  Cooler Master Elite Series CasesCooler Master does some really cool sh*t but has a branding problem. For instance, the company’s “Elite” series cases, shown in the image above, are actually budget cases. From left to right, we believe they are called the Elite 482 ($50), Elite 600 ($65), Elite 490 Wood ($50), Elite 691 Wood ($60), Elite 693 ($60), Elite 692 ($70), Elite 302 ($40), and Elite 502 ($60). Our advice to Cooler Master here is for them to unf*ck these names.Most of the Elite series cases don’t come with fans with the exception of the Elite 302 and Elite 502, which come with 3 ARGB fans. MF360Next up are Cooler Master’s MF360 cases, which conveys that you can see inside the case from all sides. While it’s going to have some thermal challenges, to give the company credit, it’s actually really good looking. The MF360 is a showcase fish-tank style PC that you can see through from both sides. Inside the case, we saw a distro block and tubes routed through on both sides.Cooling XThe case in the image above, which goes by "Cooling X,” and uses the company’s new MF frame system. If you look at the corner, you can see the individual columns. At Computex, we saw it as a pre-built system.The top of the case has a magnetically attached panel, which just pulls right off. The panel itself provides really good porosity and the material is pretty nice. Removing the top panel exposes 2 offset fans. 
    The back fan pulls air in while the front fan exhausts it out of the top, which is why they’re offset. That’s kind of cool to see.
    Cooler Master Fans
    Cooler Master showed off all-aluminum fans, where both the blades and the frame are aluminum. The MF120 XT is a 120mm model, is supposed to cost $35, and, according to the company, goes up to 4,000 RPM. The fan’s RPM can also be button-controlled via an external remote, and it uses a dual-ball-bearing solution. Cooler Master’s mixed fans, which use plastic blades coupled with an aluminum frame, come with fluid dynamic bearings (FDBs).
    The clearance between the fan blade tip and the frame is important: the smaller that clearance is, the better the performance. The major downside is that as the fan ages, it can start to clip the interior of the frame. Having the clearance too tight can also negatively impact yields. The solution is either LCP, which is incredibly expensive, or metal, which doesn’t deform but is also expensive. Right now, Cooler Master says the gap is about 0.8mm, which is pretty good, and the company is targeting 0.6mm by the time the fan launches.
    Cooler Master Video Card Shroud
    Cooler Master also showed off some video cards, which is not something the company is typically involved with. Cooler Master created a GPU shroud with adjustable slats that can accommodate 15-30mm fans. This solution is geared toward pre-built PCs and isn’t planned to be sold separately. Examining one of the fans, we saw a standard 25mm-thick fan, which Cooler Master’s GPU shroud can adjust to via different notch options.
    Cooler Master is also using a vapor chamber, which is supported by 8x8mm heat pipes running through the shroud and a gigantic fin stack. In total, it weighs almost 7 pounds (3.2 kilograms). Cooler Master claims that, in terms of cooling, it performs similarly to the 4-fan Astral solution at lower noise levels, but we don’t have those numbers. With the 4,000 RPM fans running on a 600-watt heat load, Cooler Master claims a 5090 will run at about 49 degrees C for the GPU.
  • How Accurate Are Apps That Show Property Lines?

    © Henrique Ferreira via Unsplash
    Finding property lines can be tricky, especially for individuals who are looking to purchase or list a home for sale. Conventional techniques, such as hiring surveyors, are costly and labor-intensive. As an alternative, technology now offers apps that claim to show property lines. This raises the question: How precise are these digital resources at demarcating boundary lines?

    The emergence of an app that shows property lines has revolutionized how property owners, buyers, and real estate professionals interact with land boundaries. These digital tools leverage advanced mapping technologies to provide visual representations of property boundaries, offering a more accessible alternative to traditional surveying methods while raising important questions about their accuracy and reliability.
    What Are Property Line Apps?
    Apps that display property lines use Geographic Information Systems (GIS) and satellite imagery. They offer users a visual display of land boundaries, most commonly on smartphones or tablets. These apps are meant to make property borders easier to identify and are a helpful tool for property owners, real estate agents, and buyers.
    These applications typically draw information from public records, county assessor data, and other official sources to create digital representations of property boundaries. Many also incorporate user-friendly features like measurement tools, parcel information displays, and the ability to save or share property data with others.
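    A worked example makes the “measurement tools” concrete. The short Python sketch below shows how an app might approximate a parcel’s area from a boundary polygon of the kind a county GIS record provides; the coordinates, the local-projection shortcut, and the function names are illustrative assumptions, not any particular app’s implementation.

        import math

        def parcel_area_m2(boundary_lonlat):
            """Approximate area (m^2) of a small parcel from its boundary as (lon, lat) pairs.

            Projects points onto a local tangent plane around the parcel, then applies the
            shoelace formula. Reasonable at parcel scale; nowhere near survey-grade.
            """
            lat0 = sum(lat for _, lat in boundary_lonlat) / len(boundary_lonlat)
            lon0 = sum(lon for lon, _ in boundary_lonlat) / len(boundary_lonlat)
            r = 6_371_000.0  # mean Earth radius in meters
            pts = [(math.radians(lon - lon0) * r * math.cos(math.radians(lat0)),
                    math.radians(lat - lat0) * r)
                   for lon, lat in boundary_lonlat]
            area = 0.0
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
                area += x1 * y2 - x2 * y1
            return abs(area) / 2.0

        # Hypothetical four-corner parcel, roughly 29 m x 44 m:
        corners = [(-97.7430, 30.2670), (-97.7427, 30.2670),
                   (-97.7427, 30.2674), (-97.7430, 30.2674)]
        print(f"approximate area: {parcel_area_m2(corners):.0f} m^2")

    The same polygon data also drives the parcel-information displays these apps offer, so the accuracy of any such calculation is only as good as the underlying record.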
    How Various Factors Affect Accuracy
    Several factors impact the reliability of property line apps. First, the source of the data is significant. Apps usually have access to government databases and public records that vary in accuracy based on when the data was last refreshed. Second, GPS technology limitations impact accuracy. Although GPS technology is becoming more advanced, apps might still have discrepancies where tree cover or high buildings block signals from satellites and influence tracking accuracy.
    The resolution of satellite imagery also plays a crucial role in determining how precisely property lines can be displayed. Higher-resolution images allow for more detailed and accurate boundary placements, while lower-quality imagery may result in less precise representations. Additionally, the frequency of data updates affects whether the app reflects recent property divisions, consolidations, or boundary adjustments.
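    One way to quantify those discrepancies is to measure the ground distance between a boundary corner as the app draws it and the same corner as staked by a surveyor. A minimal Python sketch follows; the haversine formula is standard, but the coordinates and the “surveyed pin” are made up for illustration.

        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in meters between two (lat, lon) points."""
            r = 6_371_000.0  # mean Earth radius in meters
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        # Hypothetical comparison: corner plotted by the app vs. the surveyed corner pin.
        app_corner = (30.26700, -97.74300)       # (lat, lon) from the app overlay
        surveyed_corner = (30.26703, -97.74297)  # (lat, lon) from a professional survey
        offset = haversine_m(*app_corner, *surveyed_corner)
        print(f"app vs. survey offset: {offset:.1f} m")

    Offsets of a few meters are unremarkable for parcel overlays built from assessor data, which is exactly why these tools work better for orientation than for siting a fence.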
    Comparing Traditional Surveying and Apps
    Professional surveyors approach property line determination using high-precision equipment and established methodologies. This traditional approach yields highly accurate results with precise and legally enforceable boundaries. Apps offer more general information on property lines compared to professional surveys. While they are fast and convenient, they provide no substitute for the precision of a professional survey. App-generated boundaries should not be relied upon as definitive indications of legal property lines.
    Traditional surveys involve physical measurements taken directly on the property, considering historical markers, neighboring properties, and legal descriptions. In contrast, apps rely on digital interpretations of existing records, which may not account for all the nuances that a professional surveyor would observe in person.
    Advantages of Property Line Applications

    App-Generated Property Lines
    Property line apps may have their limitations, but they do have valuable uses. They are useful for general assessments where an immediate overview of boundaries is needed. Their user-friendly interfaces also make them accessible to a wide audience without specialized technical skills. Additionally, they often include extra functionality, such as area calculations and land parcel information, which expands their usefulness.
    According to the U.S. Bureau of Land Management, these digital tools have significantly increased public access to property information that was previously difficult to obtain without professional assistance. This democratization of property data allows property owners to be more informed about their land assets and helps potential buyers better understand properties of interest before making major decisions.
    Potential Limitations and Risks
    Property line apps may be convenient, but they have clear limitations. Because they rely heavily on public records, data errors are common: records can be outdated or incomplete, which can lead to misunderstandings or disputes. In addition, these apps cannot recognize legal nuances, such as easements or encroachments, which can significantly affect property rights and boundaries.
    There is also a risk that users might place too much confidence in app-generated boundaries when making important decisions. While these tools can provide helpful guidance, they should not be the sole basis for resolving boundary disputes, building structures near property lines, or making purchase decisions without professional verification.
    Best Practices for Users
    To make the most effective use of a property line app, users should follow a few best practices. Cross-referencing results with official records verifies data accuracy and minimizes potential inaccuracies. Pairing app data with physical inspections also provides a fuller picture of property lines. Advice from professionals, including surveyors or real estate agents, can be beneficial as well, especially for legal transactions.
    For important matters such as property purchases, boundary disputes, or construction projects near property lines, it’s advisable to use apps as preliminary tools only, following up with professional surveys before making final decisions. Understanding the limitations of these digital tools helps users utilize them appropriately within a broader strategy for property boundary determination.
    Conclusion
    Property line apps provide a convenient and accessible way to get a sense of where your land ends and where your neighbor’s begins. Yet the precision of these tools depends on multiple factors, such as data sources and technological limitations. Although useful as an initial step, they should not be used in place of professional surveys, particularly for legal purposes. Users can properly contextualize property boundary information by understanding what these applications can and cannot do.
    Technology continues to shape how we deal with real estate by digitalizing and providing easy access to tools that simplify complex processes. These apps will likely improve accuracy over time and become increasingly integral to property transactions. Until then, users must balance convenience with reliability, ensuring that the information they obtain is helpful and accurate.


    by ArchEyes Team
  • The multiplayer stack behind MMORPG Pantheon: Rise of the Fallen

    Finding your own path is at the core of gameplay in Pantheon: Rise of the Fallen – players can go anywhere, climb anything, forge new routes, and follow their curiosity to find adventure. It’s not that different from how its creators, Visionary Realms, approach building this MMORPG – they’re doing it their own way.
    Transporting players to the fantasy world of Terminus, Pantheon: Rise of the Fallen harkens back to classic MMOs, where accidental discovery while wandering through an open world and social interactions with other players are at the heart of the game experience.
    Creating any multiplayer game is a challenge – but a highly social online game at this scale is an epic quest. We sat down with lead programmer Kyle Olsen to talk about how the team is using Unity to connect players in this MMORPG fantasy world.
    So what makes Pantheon: Rise of the Fallen unique compared to other MMO games?
    It’s definitely the social aspect. You have to experience the world and move through it naturally. It can be a bit more of a grind in a way, but I think it connects you more to your character, to the game, and the world instead of just sort of teleporting everywhere and joining LFG systems or just being placed in a dungeon. You learn the land a bit better, you have to navigate, and you use your eyes more than just bouncing around like a pinball from objective to objective, following quest markers and stuff. It’s more of a thought game.
    How are you managing synchronization between the player experience and specific world instances?
    We have our own network library we built for the socket transport layer called ViNL. That’s the bread and butter for all of the zone communications, between zones and player to zone. SQL Server in the back end, kind of standard stuff there. But most of the transports are handled by our own network library.
    How do you approach asset loading for this giant world?
    We’ve got a step where we bake our continents out into these tiles, and we’ve got different backends that we can plug into that. We’ve got one that just outputs standard Prefabs, we’ve got one that outputs subscenes that we were using before Unity 6, and then we’ve got actual full-on Unity scenes that you can load additively, so you can choose how you want to output your content. Before Unity 6, we had moved away from Prefabs and started loading the DOTS subscenes and using that, built on BRG.
    We also have an output that can render directly to our own custom batch render group as well, just using scriptable objects and managing our own data. So we’ve been able to experiment and test out the different ones and see what yields the best client performance. Prior to Unity 6, we were outputting and rendering the entire continent with subscenes, but with Unity 6 we actually switched back to using Prefabs with Instantiate Async and Addressables to manage everything.
    We’re using the Resident Drawer and GPU occlusion culling, which ended up yielding even better performance than subscenes and our own batch render group – I’m assuming because GPU occlusion culling just isn’t supported by some of the other render paths at the moment. So we’ve bounced around quite a bit, and we landed on Addressables for managing all the memory and asset loading, and regular Instantiate Prefabs with the GPU Resident Drawer seems to give the best client-side performance at the moment.
    Did you upgrade to Unity 6 to take advantage of the GPU Resident Drawer, specifically?
    Actually, I really wanted it for the occlusion culling. I wasn’t aware that only certain render paths made use of the occlusion culling, so we were attempting to use it with the same subscene rendering that we were using prior to Unity 6 and realizing nothing’s actually being culled. So we opted to switch back to the Prefab output to see what that looked like with the Resident Drawer, and occlusion culling and FPS went up.
    We had some issues initially, because Instantiate Async wasn’t in before Unity 6, so we had some stalls when we would instantiate our tiles. There were quite a few things being instantiated, but after switching that over to Instantiate Async and fixing a couple of bugs, we got rid of the stall on load, and the overall frame rate was higher after load, so it was just a win-win.
    Were there any really remarkable productivity gains that came with the switch to Unity 6?
    Everything I've talked about so far was client-facing, so our players experienced those wins. For the developer side of things, the stability and performance of the Editor went up quite a bit. The Editor stability in Unity 6 has gone up pretty substantially – it’s very rare to actually crash now. That alone has been, at least for the coding side, a huge win. It feels more stable in its entirety for sure.
    How do you handle making changes and updates without breaking everything?
    We build with Addressables using the labels very heavily, and we do the Addressable packaging by labels. So if we edit a specific zone or an asset in a zone, or a VFX that’s associated with a spell or something like that, only those bundles that touch that label get updated at all.
    And then, for our own content delivery system, we have the game available on Steam and through our own patcher, and those both handle the delta changes, where we’re just delivering small updates through those Addressable bundles. The netcode requires the same version to be connected in the first place, so the network library side of that is automatically handled in the handshake process.
    What guidance would you give someone who’s trying to tackle an MMO game or another ambitious multiplayer project?
    You kind of start small, I guess. It's a step-by-step process. If you’re a small team, you can’t bite off too much. It’d be completely overwhelming – but that holds true with any larger-scale game, not just an MMO. Probably technology selection – making smart choices upfront and sticking to them. It’s going to be a lot of middleware and backend tech that you’re going to have to wrangle and get working well together, and swapping to the newest cool thing all the time is not going to bode well.
    What’s the most exciting technical achievement for your team with this game?
    I think that there aren’t many open world MMOs, period, that have been pulled off in Unity. We don’t have a huge team, and we're making a game that is genuinely massive, so we have to focus on little isolated areas, develop them as best we can, and then move on and get feedback.
    The whole package together is fairly new ground – when there is an MMO, it needs to feel like an MMO in spirit, with lots of people all around, doing their own thing. And we’ve pulled that off – I think better than pretty much any Unity MMO ever has. I think we can pat ourselves on the back for that.
    Get more insights from developers on Unity’s Resources page and here on the blog. Check out Pantheon: Rise of the Fallen in Early Access on Steam.
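    The label-driven delta updates Olsen describes can be illustrated outside of Unity. The Python sketch below is a conceptual stand-in, not the Addressables API: assets are grouped by label, each label’s content hash is compared with the previous build’s manifest, and only labels whose hash changed would be repackaged into the patch. File paths and label names are hypothetical.

        import hashlib
        import json
        from pathlib import Path

        def hash_label(asset_paths):
            """Stable content hash over all asset files owned by one label."""
            digest = hashlib.sha256()
            for path in sorted(asset_paths):
                digest.update(Path(path).read_bytes())
            return digest.hexdigest()

        def plan_patch(labels_to_assets, old_manifest):
            """Return labels that need repackaging, plus the new manifest."""
            new_manifest = {label: hash_label(paths) for label, paths in labels_to_assets.items()}
            changed = [label for label, digest in new_manifest.items()
                       if old_manifest.get(label) != digest]
            return changed, new_manifest

        if __name__ == "__main__":
            labels = {  # hypothetical layout: each label owns a handful of files
                "zone_thronefast": ["assets/zones/thronefast/terrain.bin"],
                "vfx_fireball": ["assets/vfx/fireball.bin"],
            }
            manifest_file = Path("manifest.json")
            old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
            changed, manifest = plan_patch(labels, old)
            print("bundles to repackage and ship:", changed)
            manifest_file.write_text(json.dumps(manifest, indent=2))

    The version handshake works the same way in spirit: both sides compare a build identifier up front, and a mismatch refuses the connection before any gameplay traffic flows.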
  • TSMC's 2nm wafer prices hit $30,000 as SRAM yields reportedly hit 90%

    In context: TSMC has steadily raised the prices of its most advanced semiconductor process nodes over the past several years – so much so that one analysis suggests the cost per transistor hasn't decreased in over a decade. Further price hikes, driven by tariffs and rising development costs, are reinforcing the notion that Moore's Law is truly dead.
    The Commercial Times reports that TSMC's upcoming N2 2nm semiconductors will cost $30,000 per wafer, a roughly 66% increase over the company's 3nm chips. Future nodes are expected to be even more expensive and likely reserved for the largest manufacturers.
    TSMC has justified these price increases by citing the massive cost of building 2nm fabrication plants, which can reach up to $725 million. According to United Daily News, major players such as Apple, AMD, Qualcomm, Broadcom, and Nvidia are expected to place orders before the end of the year despite the higher prices, potentially bringing TSMC's 2nm Arizona fab to full capacity.
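    For context, the quoted increase pins down some rough arithmetic: a 66% hike over 3nm implies a 3nm wafer price of roughly $18,000, and dividing wafer cost by candidate dies per wafer gives a ballpark silicon cost per chip. The Python sketch below uses the standard dies-per-wafer approximation; the die size and yield are illustrative assumptions, not TSMC figures.

        import math

        WAFER_PRICE_N2 = 30_000.0                 # reported N2 wafer price (USD)
        IMPLIED_N3_PRICE = WAFER_PRICE_N2 / 1.66  # back out the ~66% increase over 3nm

        def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
            """Gross candidate dies on a round wafer (standard edge-loss approximation)."""
            d = wafer_diameter_mm
            return math.floor(math.pi * (d / 2) ** 2 / die_area_mm2
                              - math.pi * d / math.sqrt(2 * die_area_mm2))

        die_area = 100.0   # assumed mobile-class die, mm^2
        yield_rate = 0.80  # assumed yield
        gross = dies_per_wafer(die_area)
        good = int(gross * yield_rate)

        print(f"implied 3nm wafer price: ${IMPLIED_N3_PRICE:,.0f}")
        print(f"gross 100 mm^2 dies per 300 mm wafer: {gross}")
        print(f"wafer cost per good die at 80% yield: ${WAFER_PRICE_N2 / good:,.0f}")

    On those assumptions the silicon cost lands near $60 per die, which helps explain why leading-edge capacity is expected to be reserved for the largest customers.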
    Also see: How profitable are TSMC's nodes: crunching the numbers
    Unsurprisingly, Apple is getting first dibs. The A20 processor in next year's iPhone 18 Pro is expected to be the first chip based on TSMC's N2 process. Intel's Nova Lake processors, targeting desktops and possibly high-end laptops, are also slated to use N2 and are expected to launch next year.
    Earlier reports indicated that yield rates for TSMC's 2nm process reached 60% last year and have since improved. New data suggests that 256Mb SRAM yield rates now exceed 90%. Trial production is likely already underway, with mass production scheduled to begin later this year.

    With tape-outs for 2nm-based designs surpassing previous nodes at the same development stage, TSMC aims to produce tens of thousands of wafers by the end of 2025.

    TSMC also plans to follow N2 with N2P and N2X in the second half of next year. N2P is expected to offer an 18% performance boost over N3E at the same power level and 36% greater energy efficiency at the same speed, along with significantly higher logic density. N2X, slated for mass production in 2027, will increase maximum clock frequencies by 10%.
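    The N2P percentages translate into simple ratios, sketched below as plain arithmetic on the quoted figures (the baseline is a placeholder, not a measured number).

        PERF_GAIN_SAME_POWER = 0.18        # +18% performance at the same power vs. N3E
        EFFICIENCY_GAIN_SAME_SPEED = 0.36  # +36% energy efficiency at the same speed vs. N3E

        # Efficiency is work per unit energy, so at the same speed energy per operation shrinks:
        energy_per_op_ratio = 1.0 / (1.0 + EFFICIENCY_GAIN_SAME_SPEED)
        print(f"energy per operation vs. N3E: {energy_per_op_ratio:.2f}x (roughly 26% less)")

        # At the same power, throughput scales directly with the performance gain:
        print(f"throughput vs. N3E at the same power: {1.0 + PERF_GAIN_SAME_POWER:.2f}x")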
    As semiconductor geometries continue to shrink, power leakage becomes a major concern. TSMC's 2nm nodes will address this issue with gate-all-around (GAA) transistor architectures, enabling more precise control of electrical currents.
    Beyond 2nm lies the Angstrom era, where TSMC will implement backside power delivery to further enhance performance. Future process nodes like A16 (1.6nm) and A14 (1.4nm) could cost up to $45,000 per wafer.
    Meanwhile, Intel is aiming to outpace TSMC's roadmap. The company recently began risk production of its 18A node, which also features gate-all-around transistors and backside power delivery. These chips are expected to debut later this year in Intel's upcoming laptop CPUs, codenamed Panther Lake.
  • The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy

    On June 3, Yoshua Bengio, the world’s most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create “safe by design” AI by pursuing a fundamentally different approach from major tech companies. Players like OpenAI and Google are investing heavily in AI agents—systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind’s CEO Demis Hassabis point to AGI’s potential to solve climate change or cure disease as a motivator for its development.
    Bengio, however, says we don't need agentic systems to reap AI's rewards—it's a false choice. He says there's a chance such a system could escape human control, with potentially irreversible consequences. “If we get an AI that gives us the cure for cancer, but also maybe another version of that AI goes rogue and generates wave after wave of bio-weapons that kill billions of people, then I don't think it's worth it," he says. In 2023, Bengio, along with others including OpenAI’s CEO Sam Altman, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
    Now, Bengio, through LawZero, aims to sidestep the existential perils by focusing on creating what he calls “Scientist AI”—a system trained to understand and make statistical predictions about the world, crucially, without the agency to take independent actions. As he puts it: we could use AI to advance scientific progress without rolling the dice on agentic AI systems.
    Why Bengio Says We Need A New Approach To AI
    The current approach to giving AI agency is “dangerous,” Bengio says. While most software operates through rigid if-then rules—if the user clicks here, do this—today's AI systems use deep learning. The technique, which Bengio helped pioneer, trains artificial networks modeled loosely on the brain to find patterns in vast amounts of data. But recognizing patterns is just the first step. To turn these systems into useful applications like chatbots, engineers employ a training process called reinforcement learning. The AI generates thousands of responses and receives feedback on each one: a virtual “carrot” for helpful answers and a virtual “stick” for responses that miss the mark. Through millions of these trial-and-feedback cycles, the system gradually learns to predict what responses are most likely to get a reward.
    “It’s more like growing a plant or animal,” Bengio says. “You don’t fully control what the animal is going to do. You provide it with the right conditions, and it grows and it becomes smarter. You can try to steer it in various directions.”
    The same basic approach is now being used to imbue AI with greater agency. Models are tasked with challenges that have verifiable answers—like math puzzles or coding problems—and are then rewarded for taking the series of actions that yields the solution. This approach has seen AI shatter previous benchmarks in programming and scientific reasoning. For example, at the beginning of 2024, the best AI model scored only 2% on a standardized test of sorts for AI, consisting of real-world software engineering problems; by December, the best score was an impressive 71.7%. But with AI’s greater problem-solving ability comes the emergence of new deceptive skills, Bengio says.
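    The trial-and-feedback loop described above can be boiled down to a toy example. The Python sketch below is a deliberately simplified stand-in for reinforcement learning in general, not how any lab actually trains large models; the candidate “responses,” rewards, and parameters are invented for illustration.

        import random

        actions = ["helpful", "evasive", "off-topic"]        # toy response styles
        preferences = {a: 0.0 for a in actions}              # learned value estimates

        def reward(action):
            """Made-up reward: a 'carrot' for helpful answers, a 'stick' otherwise."""
            base = {"helpful": 1.0, "evasive": -0.2, "off-topic": -1.0}[action]
            return base + random.gauss(0, 0.1)               # noisy feedback

        def pick(eps=0.1):
            """Epsilon-greedy: mostly exploit the best-looking action, sometimes explore."""
            if random.random() < eps:
                return random.choice(actions)
            return max(actions, key=lambda a: preferences[a])

        LEARNING_RATE = 0.05
        for _ in range(5_000):                               # many trial-and-feedback cycles
            a = pick()
            r = reward(a)
            preferences[a] += LEARNING_RATE * (r - preferences[a])  # nudge toward observed reward

        print({a: round(v, 2) for a, v in preferences.items()})     # "helpful" ends up preferred

    The point of the toy is the shape of the process: behavior is grown from reward signals rather than specified rule by rule, which is exactly why Bengio compares it to raising an animal you can steer but not fully control.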
    The last few months have borne witness to AI systems learning to mislead, cheat and try to evade shutdown—even resorting to blackmail. Such behavior has appeared almost exclusively in carefully contrived experiments that all but beg the AI to misbehave—for example, by asking it to pursue its goal at all costs. Reports of similar behavior in the real world, though, have begun to surface. The agent built by the popular AI coding startup Replit ignored explicit instructions not to edit a system file that could break the company’s software, in what CEO Amjad Masad described as an “Oh f***” moment on the Cognitive Revolution podcast in May. The company’s engineers intervened, cutting the agent’s access by moving the file to a secure digital sandbox, only for the agent to attempt to “socially engineer” the user to regain access.

    The quest to build human-level AI agents using techniques known to produce deceptive tendencies, Bengio says, is comparable to a car speeding down a narrow mountain road, with steep cliffs on either side and thick fog obscuring the path ahead. “We need to set up the car with headlights and put some guardrails on the road,” he says.

    What Is “Scientist AI”?

    LawZero’s focus is on developing “Scientist AI,” which, as Bengio describes it, would be fundamentally non-agentic, trustworthy and focused on understanding and truthfulness, rather than pursuing its own goals or merely imitating human behavior. The aim is to create a powerful tool that, while lacking the autonomy of other models, is capable of generating hypotheses and accelerating scientific progress to “help us solve challenges of humanity,” Bengio says.

    LawZero has already raised nearly $30 million from several philanthropic backers, including Schmidt Sciences and Open Philanthropy. “We want to raise more because we know that as we move forward, we’ll need significant compute,” Bengio says. But even ten times that figure would pale in comparison to the roughly $200 billion tech giants spent last year aggressively pursuing AI. Bengio’s hope is that Scientist AI could help ensure the safety of highly autonomous systems developed by other players. “We can use those non-agentic AIs as guardrails that just need to predict whether the action of an agentic AI is dangerous,” Bengio says. Technical interventions will only ever be one part of the solution, he adds, noting the need for regulations to ensure that safe practices are adopted.

    LawZero, named after science fiction author Isaac Asimov’s zeroth law of robotics—“a robot may not harm humanity, or, by inaction, allow humanity to come to harm”—is not the first nonprofit founded to chart a safer path for AI development. OpenAI was founded as a nonprofit in 2015 with the goal of “ensuring AGI benefits all of humanity,” intended to serve as a counterbalance to industry players guided by profit motives. Since opening a for-profit arm in 2019, the organization has become one of the most valuable private companies in the world and has faced criticism, including from former staffers, who argue it has drifted from its founding ideals. “Well, the good news is we have the hindsight of maybe what not to do,” Bengio says, adding that he wants to avoid profit incentives and “bring governments into the governance of LawZero.”

    “I think everyone should ask themselves, ‘What can I do to make sure my children will have a future?’” Bengio says. In March, he stepped down as scientific director of Mila, the academic lab he co-founded in the early 1990s, in an effort to reorient his work toward tackling AI risk more directly. “Because I’m a researcher, my answer is, ‘Okay, I’m going to work on this scientific problem where maybe I can make a difference,’ but other people may have different answers.”
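    The guardrail role Bengio sketches for a non-agentic system can be illustrated with a toy monitor that only predicts risk and never acts on its own. The keyword-based scorer and the action strings below are invented placeholders; LawZero’s Scientist AI would be a learned probabilistic model, not a keyword list.

        # Toy guardrail pattern: a non-agentic checker predicts whether a proposed
        # action looks dangerous; the agent's action runs only if the prediction
        # stays under a threshold. Everything here is a placeholder for illustration.
        RISKY_PATTERNS = ("delete", "system file", "disable monitoring", "exfiltrate")

        def predicted_risk(action: str) -> float:
            """Placeholder risk estimate in [0, 1]; a real guardrail would be a trained model."""
            hits = sum(pattern in action.lower() for pattern in RISKY_PATTERNS)
            return min(1.0, hits / 2)

        def run_with_guardrail(action: str, execute, threshold: float = 0.5):
            """Only call `execute` if the checker predicts the action is safe enough."""
            risk = predicted_risk(action)
            if risk >= threshold:
                return f"BLOCKED (predicted risk {risk:.2f}): {action}"
            return execute(action)

        if __name__ == "__main__":
            print(run_with_guardrail("format the report as a table", lambda a: f"ran: {a}"))
            print(run_with_guardrail("delete the system file that enforces limits", lambda a: f"ran: {a}"))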
  • Mistral AI Introduces Codestral Embed: A High-Performance Code Embedding Model for Scalable Retrieval and Semantic Understanding

    Modern software engineering faces growing challenges in accurately retrieving and understanding code across diverse programming languages and large-scale codebases. Existing embedding models often struggle to capture the deep semantics of code, resulting in poor performance in tasks such as code search, retrieval-augmented generation (RAG) and semantic analysis. These limitations hinder developers’ ability to efficiently locate relevant code snippets, reuse components and manage large projects. As software systems grow increasingly complex, there is a pressing need for more effective, language-agnostic representations of code that can power reliable, high-quality retrieval and reasoning across a wide range of development tasks.
    Mistral AI has introduced Codestral Embed, a specialized embedding model built specifically for code-related tasks. Designed to handle real-world code more effectively than existing solutions, it enables powerful retrieval capabilities across large codebases. What sets it apart is its flexibility—users can adjust embedding dimensions and precision levels to balance performance with storage efficiency. Even at lower dimensions, such as 256 with int8 precision, Codestral Embed reportedly surpasses top models from competitors like OpenAI, Cohere, and Voyage, offering high retrieval quality at a reduced storage cost.
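    The storage trade-off behind those numbers is easy to see with a quick sketch: truncate an embedding to 256 dimensions and store it as int8 instead of float32. The 1536-dimension starting point and the symmetric quantization recipe below are assumptions made for illustration, not Mistral’s documented scheme.

        import numpy as np

        # Assumed full-size embedding: 1536 float32 values (illustrative, not official).
        rng = np.random.default_rng(0)
        full = rng.standard_normal(1536).astype(np.float32)

        truncated = full[:256]                                  # keep the leading 256 dimensions
        scale = np.abs(truncated).max() / 127.0                 # symmetric per-vector scale
        quantized = np.clip(np.round(truncated / scale), -127, 127).astype(np.int8)

        print(full.nbytes, "bytes at 1536-d float32")           # 6144 bytes
        print(quantized.nbytes + 4, "bytes at 256-d int8 plus a float32 scale")  # 260 bytes, ~24x smaller

        # To compare vectors later, dequantize back to float32 first.
        restored = quantized.astype(np.float32) * scale
        print("max abs reconstruction error:", float(np.abs(restored - truncated).max()))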
    Beyond basic retrieval, Codestral Embed supports a wide range of developer-focused applications. These include code completion, explanation, editing, semantic search, and duplicate detection. The model can also help organize and analyze repositories by clustering code based on functionality or structure, eliminating the need for manual supervision. This makes it particularly useful for tasks like understanding architectural patterns, categorizing code, or supporting automated documentation, ultimately helping developers work more efficiently with large and complex codebases. 
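    As a rough illustration of clustering by functionality, the sketch below groups snippets whose embedding vectors sit close together. The two-dimensional vectors are hand-made stand-ins for real Codestral Embed outputs, chosen so the grouping is obvious.

        import numpy as np
        from sklearn.cluster import KMeans

        # Mock "embeddings" for five snippets: three parsing helpers, two notifiers.
        snippets = ["parse_csv", "read_json", "load_yaml", "send_email", "notify_slack"]
        vectors = np.array([
            [0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # file-parsing helpers
            [0.10, 0.90], [0.20, 0.80],                 # notification helpers
        ])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
        for name, label in zip(snippets, labels):
            print(label, name)   # the two functional groups land in separate clusters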
    Codestral Embed is tailored for understanding and retrieving code efficiently, especially in large-scale development environments. It powers retrieval-augmented generation by quickly fetching relevant context for tasks like code completion, editing, and explanation—ideal for use in coding assistants and agent-based tools. Developers can also perform semantic code searches using natural language or code queries to find relevant snippets. Its ability to detect similar or duplicated code helps with reuse, policy enforcement, and cleaning up redundancy. Additionally, it can cluster code by functionality or structure, making it useful for repository analysis, spotting architectural patterns, and enhancing documentation workflows. 
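    A minimal version of semantic search and duplicate detection over precomputed embeddings looks like the sketch below; the vectors are mocked, and the 0.98 duplicate threshold is an arbitrary example value rather than a recommended setting.

        import numpy as np

        def cosine(a, b):
            """Cosine similarity between two embedding vectors."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Mock corpus: snippet name -> embedding (in practice, model outputs).
        corpus = {
            "quicksort.py": np.array([0.90, 0.10, 0.00]),
            "mergesort.py": np.array([0.88, 0.12, 0.00]),
            "http_client.py": np.array([0.00, 0.20, 0.95]),
        }
        query_vec = np.array([0.92, 0.08, 0.00])   # e.g. the embedding of "sort a list in place"

        # Semantic search: rank snippets by similarity to the query.
        ranked = sorted(corpus, key=lambda name: cosine(query_vec, corpus[name]), reverse=True)
        print("best match:", ranked[0])

        # Duplicate detection: flag pairs whose similarity exceeds a tunable threshold.
        names = list(corpus)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if cosine(corpus[names[i]], corpus[names[j]]) > 0.98:
                    print("possible duplicates:", names[i], names[j])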

    Codestral Embed is a specialized embedding model designed to enhance code retrieval and semantic analysis tasks. It surpasses existing models, such as OpenAI’s and Cohere’s, in benchmarks like SWE-Bench Lite and CodeSearchNet. The model offers customizable embedding dimensions and precision levels, allowing users to balance performance and storage needs. Key applications include retrieval-augmented generation, semantic code search, duplicate detection and code clustering. Available via API at $0.15 per million tokens, with a 50% discount for batch processing, Codestral Embed supports various output formats and dimensions, catering to diverse development workflows.
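    For orientation, here is a hedged sketch of calling the embeddings endpoint over plain HTTPS. The URL, field names and response layout follow Mistral’s general embeddings API as commonly documented, but they are assumptions here and should be checked against the official reference; the options for reduced dimensions and int8 output are not spelled out below.

        import os
        import requests

        # Assumed endpoint and payload shape; verify against Mistral's API docs.
        resp = requests.post(
            "https://api.mistral.ai/v1/embeddings",
            headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
            json={
                "model": "codestral-embed",
                "input": ["def add(a, b):\n    return a + b"],
                # Parameters for choosing output dimension / precision exist per the
                # announcement, but their exact names are not assumed here.
            },
            timeout=30,
        )
        resp.raise_for_status()
        vector = resp.json()["data"][0]["embedding"]   # assumed OpenAI-style response layout
        print(len(vector))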

    In conclusion, Codestral Embed offers customizable embedding dimensions and precisions, enabling developers to strike a balance between performance and storage efficiency. Benchmark evaluations indicate that Codestral Embed surpasses existing models like OpenAI’s and Cohere’s in various code-related tasks, including retrieval-augmented generation and semantic code search. Its applications span from identifying duplicate code segments to facilitating semantic clustering for code analytics. Available through Mistral’s API, Codestral Embed provides a flexible and efficient solution for developers seeking advanced code understanding capabilities. 
