• Ants Do Poop and They Even Use Toilets to Fertilize Their Own Gardens

Key Takeaways on Ant Poop

Do ants poop? Yes. Any creature that eats will poop, and ants are no exception. Because ants live in close quarters, they need to protect the colony from their feces so that bacteria and fungi don't threaten its health. This is why they use toilet chambers. Whether they isolate waste in a toilet chamber or kick it to the curb, ants don't keep it around. But some ants find a use for that stuff. One such species is the leafcutter ant, which takes little clippings of leaves and uses them to grow a very particular fungus that it then eats.

Like urban humans, ants live in close quarters. Ant colonies can be home to thousands, even tens of thousands, of individuals, depending on the species. And like any creature that eats, ants poop. When you combine close quarters and loads of feces, you have a recipe for disease, says Jessica Ware, curator and division chair of Invertebrate Zoology at the American Museum of Natural History. "Ant poop can harbor bacteria, and because it contains partly undigested food, it can grow bacteria and fungus that could threaten the health of the colony," Ware says. But ant colonies aren't seething beds of disease. That's because ants are scrupulous about hygiene.

Ants Do Poop and Ant Toilets Are Real

[Image: Ant colony underground with ant chambers. (Image Credit: Lidok_L/Shutterstock)]

To keep themselves and their nests clean, ants have evolved some interesting housekeeping strategies. Some types of ants actually have toilets — or at least something we might call toilets. Their nests are very complicated, with lots of different tunnels and chambers, explains Ware, and one of those chambers is a toilet chamber. Ants don't visit the toilet when they feel the call of nature. Instead, worker ants on latrine duty collect the poop and carry it to the toilet chamber, which is located far away from other parts of the nest.

What Does Ant Poop Look Like?

This isn't as messy a chore as it sounds. Like most insects, ants are water-limited, says Ware, so they try to get as much liquid out of their food as possible. This results in small, hard, usually black or brownish pellets of poop. The poop is dry and hard enough that, in ant species without indoor toilet chambers, the workers can simply kick it out of the nest.

Ants Use Poop as Fertilizer

Whether they isolate it in a toilet chamber or kick it to the curb, ants don't keep their waste around. Well, at least most types of ants don't. Some ants find a use for that stuff. One such species is the leafcutter ant. "They basically take little clippings of leaves and use these leaves to grow a very particular fungus that they then eat," says Ware. "They don't eat the leaves, they eat the fungus." And yep, they use their poop to fertilize their crops. "They're basically gardeners," Ware says. If you'd like to see leafcutter ants at work in their gardens and you happen to be in the New York City area, drop by the American Museum of Natural History, which has a large colony of fungus-gardening ants on display.

Other Insects That Use Toilets

Ants may have toilets, but termites have even wilder ways of dealing with their wastes. Termites and ants might seem similar at first sight, but they aren't closely related. Ants are more closely related to bees, while termites are more closely related to cockroaches, explains Aram Mikaelyan, an entomologist at North Carolina State University who studies the co-evolution of insects and their gut microbiomes. So ants' and termites' styles of social living evolved independently, and their solutions to the waste problem are quite different.

"Termites have found a way to not distance themselves from the feces," says Mikaelyan. "Instead, they use the feces itself as building material." They're able to do this because they feed on wood, Mikaelyan explains. When wood passes through the termites' digestive systems into the poop, it fosters the growth of a type of bacteria called Actinobacteria, the source of many antibiotics that humans use. (Leafcutter ants also use Actinobacteria to keep their fungus gardens free of parasites.) So that unusual building material acts as a disinfectant. Mikaelyan describes it as "a living disinfectant wall, like a Clorox wall, almost."

Insect Hygiene

It may seem surprising that ants and termites are so tidy and concerned with hygiene, but it's really not uncommon. "Insects in general are cleaner than we think," says Ware. "We often think of insects as being really gross, but most insects don't want to lie in their own filth."

Article Sources

Our writers at Discovermagazine.com use peer-reviewed studies and high-quality sources for our articles, and our editors review for scientific accuracy and editorial standards. Review the source used for this article:

American Society for Microbiology: The Leaf-cutter Ant's 50 Million Years of Farming

Avery Hurt is a freelance science journalist. In addition to writing for Discover, she writes regularly for a variety of outlets, both print and online, including National Geographic, Science News Explores, Medscape, and WebMD. She's the author of Bullet With Your Name on It: What You Will Probably Die From and What You Can Do About It (Clerisy Press, 2007), as well as several books for young readers. Avery got her start in journalism while attending university, writing for the school newspaper and editing the student non-fiction magazine. Though she writes about all areas of science, she is particularly interested in neuroscience, the science of consciousness, and AI, interests she developed while earning a degree in philosophy.
  • Four Strategies for Getting Better Sleep Away From Home

Sleep can be a mysterious process even under ideal conditions, but when you're in a completely alien environment like a hotel room or other temporary lodging, it can seem downright impossible. If you take a little control over your environment, though, you can get more — and better — sleep no matter where you find yourself at night.

Make the space feel more like home

Studies have shown that aspects of our home environment like sound and smell can help us feel more relaxed and happy when we're away, so replicating those aspects of your life in an unfamiliar spot can help you sleep:

Sound. If you normally sleep with a white noise machine, bring it with you when you travel, or find a travel-size model or phone app that simulates it.

Smell. Everyone's home has a unique scent map. Bringing those scents with you can trick your brain into feeling "at home" in a strange place. Using the same lotions, shampoos, and soaps on the road can recreate that scent matrix. Bringing an item of clothing that smells like the dryer sheets or detergent you use at home into bed with you can also help make an unfamiliar bed seem inviting.

Routine. Another way to make an unfamiliar place seem more like home is to keep to your usual routine. However you approach bedtime at home — whether it's reading a book, meditating for a few moments, or watching a little mindless television — do it as much as possible in your temporary digs. Try to hit the sack around the same time as usual, if you can, and keep to the same bathroom routine as well.

Control the environment

As much as possible, you want to control the physical environment you're sleeping in. If you're used to sleeping in a pitch-black room, block light sources as much as possible by clipping curtains shut (binder clips work well for this), putting tape or Post-it notes over incidental light sources like alarms and thermostats, and blocking gaps under doors that let light leak in.

If you prefer some light while you're sleeping, bring a nightlight you can plug in so even the darkest room has some illumination. And adjust the temperature, if you can — most people sleep better when the room is a little on the cool side, about 60 to 65 degrees Fahrenheit. But if you're used to sleeping in a warmer or even colder environment, try to get as close to that as you can.

Select a strategic location

If you have control over the location of your room (when staying at a hotel, for example), use it to pick a spot that's conducive to a good night's sleep. That starts with the location of the building itself: if you have a choice of guest rooms or hotels, choose one far away from busy streets or other sources of noise. Then look for a spot that's far from common areas like elevators or lobbies — or your friend's living room, where everyone stays up all night chatting.

Get out of bed (for a little while)

Finally, if you're struggling to fall asleep in a strange place despite all of these efforts, give up and get out of bed. Forcing yourself to lie there and count the minutes as they slip past just reinforces the connection between stress, anxiety, and that bed, making it even less likely that you'll fall asleep. Instead, after about 20 minutes, get up and do something relaxing for a short period. This resets your body and mind and breaks the association between frustration and the bed, making it easier to relax when you try again.
  • From Private Parts to Peckham's Medusa: Inside Anna Ginsburg's animated world

    When Anna Ginsburg opened her talk at OFFF Barcelona with her showreel, it landed like a punch to the heart and gut all at once. Immense, emotional, awesome. That three-word review wasn't just for the reel – it set the tone for a talk that was unflinchingly honest, joyously weird, and brimming with creative intensity.
    Anna began her career making music videos, which she admitted were a kind of creative scaffolding: "I didn't yet know what I wanted to say about the world, so I used music as a skeleton to hang visuals on."
    It gave her the freedom to experiment visually and technically with rotoscoping, stop motion and shooting live-action. It was an opportunity to be playful and have fun until she had something pressing to say. Then, Anna began to move into more meaningful territory, blending narrative and aesthetic experimentation.
    Alongside music videos, she became increasingly drawn to animated documentaries. "It's a powerful and overlooked genre," she explained. "When it's just voice recordings and not video, people are more candid. You're protecting your subject, so they're more honest."

    Talking genitals and creative liberation: The making of Private Parts
A formative moment in Anna's personal and creative life occurred when she saw the artwork 'The Great Wall of Vagina' by Jamie McCartney at the age of 19. It followed an awkward teenage discovery years earlier when, after finally achieving her first orgasm (post-Cruel Intentions viewing), she proudly shared the news with friends and was met with horror. "Boys got high-fived. Girls got shamed."
    That gap between female pleasure and cultural discomfort became the starting point for Private Parts, her now-famous animated short about masturbation and sexual equality. It began as a personal experiment, sketching vulvas in her studio, imagining what their facial expressions might be. Then, she started interviewing friends about their experiences and animating vulvas to match their voices.
    When It's Nice That and Channel 4 emailed her looking for submissions for a late-night slot, Anna shared a clip of two vulvas in casual conversation, and they were immediately sold. With a shoestring budget of £2,000 and a five-week deadline, she rallied 11 illustrators to help bring the film to life. "I set up a Dropbox, and talking genitals started flooding in from the four corners of the world while I was sitting in my bedroom at my mum's," she laughed.
    One standout moment came from an Amsterdam-based designer who created a CGI Rubik's Cube vagina, then took two weeks off work to spray paint 100 versions of it. The result of what started as a passion project is an iconic, hilarious, and touching film that still resonates ten years on.

    From humour to heartbreak: What Is Beauty
    The talk shifted gear when Anna began to speak about her younger sister's anorexia. In 2017, during her sister's third hospitalisation, Anna found herself questioning the roots of beauty ideals, particularly in Western culture. Witnessing her sister's pain reframed how she saw her own body.
    This sparked a deep dive into beauty through the ages, from the Venus of Willendorf, a 28,000-year-old fertility goddess, to the Versace supermodels of the 1990s and the surgically sculpted Kardashians of today.
"You realise the pace of change in beauty ideals," she says. "If you revisit the skeletal female bodies which defined the super-skinny era of the 2000s and compare them to the enhanced curves of today, you realise that trying to keep up is not only futile; it's extremely dangerous."
    She also explored the disturbing trend of dismemberment in advertising – shots taken where the heads are intentionally out of frame – and the impact this has on self-perception. Her response was What Is Beauty, released in 2018 on International Women's Day and her sister's birthday. The short film went viral, amassing over 20 million views.
    "It was a love letter to her," Anna said. "Because it didn't have English dialogue, it travelled globally. The simplicity made it resonate." And despite its runaway success, it brought her zero income. "Then I made the worst advert for a bank the world has ever seen," she joked. "I made money, but it broke my creative spirit."

    Enter the Hag: Animation, myth and millennial angst
OFFF attendees were also treated to the world-exclusive first look at Hag, Anna's new animated short, three years in the making. It's her most ambitious and most personal project yet. Made with the support of the BFI, awarding National Lottery funding, Hag is a 16-minute fantasy set in a surreal version of Peckham. The main character is a childless, single, disillusioned woman with snakes for hair.
"I had just broken up with a lockdown boyfriend after struggling with doubts for nearly two years," she reveals. "The next day, I was at a baby shower surrounded by friends with rings and babies who recoiled at my touch. I was surrounded by flies, and a dog was doing a poo right next to me. I just felt like a hag."
    Drawing on Greek mythology, Anna reimagines Medusa not as a jealous monster but as a feminist figure of rage, autonomy and misinterpretation. "I didn't know she was a rape victim until I started researching," she told me after the talk. "The story of Athena cursing her out of jealousy is such a tired trope. What if it was solidarity? What if the snakes were power?"
    In Hag, the character initially fights with her snakes – violently clipping them back in shame and battling with them – but by the end, they align. She embraces her monstrous self. "It's a metaphor for learning to love the parts of yourself you've been told are wrong," Anna said. "That journey is universal."

Making the personal political (and funny)

Telling a story so autobiographical wasn't easy. "It's exposing," Anna admitted. "My past work dealt with issues in the world. This one is about how I feel in the world." Even her ex-boyfriend plays himself. "Luckily, he's funny and cool about it. Otherwise, it would've been a disaster."
    She did worry about dramatising the baby shower scene too much. "None of those women were horrible in real life, but for the film, we needed to crank up the emotional tension," she says. "I just wanted to show that societal pressures make women feel monstrous whether they decide to conform or not. This is not a battle between hags and non-hags. These feelings affect us all."
    Co-writing the script with her dear friend and writer Miranda Latimer really helped. "It felt less exposing as we'd both lived versions of the same thing. Collaboration is liberating and makes me feel safer when being so honest," Anna explains.

    Sisterhood, generations and the pressure to conform
    It was very clear from our chat that Anna's younger sisters are a recurring thread throughout her work. "They've helped me understand the world through a Gen Z lens," she said. "Stalking my youngest sister on Instagram was how I noticed the way girls crop their faces or hide behind scribbles. It's dehumanising."
    That intergenerational awareness fuels many of her ideas. "I definitely wouldn't have made What Is Beauty without Maya. Seeing what she was going through just unlocked something."
    She's also keenly aware of the gender gap in healthcare. "So many women I know are living with pain, going years without a diagnosis. It's infuriating. If I get asked to work on anything to do with women's health, I'll say yes."

    Medusa, millennials, and the meaning of self-love
    One of Hag's most biting commentaries is about millennial self-care culture. "There's a scene in the character's bedroom – it's got a faded Dumbledore poster, self-help books, a flashing 'Namaste' sign. It's a shrine to the broken millennial."
She laughs: "Self-love became a commodity. An expensive candle, a jade roller, and an oil burner from Muji. Like, really? That's it?" Her film pokes at the performative side of wellness while still holding space for genuine vulnerability.
    This same self-awareness informs her reflections on generational shifts. "Gen Z is going through the same thing, just with a different flavour. It's all about skincare routines now – 11 steps for a 14-year-old. It's wild."

Feminism with fangs (and a sense of humour)

Anna's feminism is open, intersectional, and laced with humour. "My mum's a lesbian and a Child Protection lawyer who helped to make rape within marriage illegal in the UK," she shared. "She sometimes jokes that my work is a bit basic. But I'm OK with that – I think there's space for approachable feminism, too."
    Importantly, she wants to bring everyone into the conversation. "It means so much when men come up to me after talks. I don't want to alienate anyone. These stories are about people, not just women."
    What's Next?
    Hag will officially premiere later this year, and it's likely to resonate far and wide. It's raw, mythic, funny and furious – and thoroughly modern.
    As Anna put it: "I've been experiencing external pressure and internal longing while making this film. So I'm basically becoming a hag while making Hag."
    As far as metamorphoses go, that's one we'll happily watch unfold.
• Qwen Researchers Propose QwenLong-L1: A Reinforcement Learning Framework for Long-Context Reasoning in Large Language Models

While large reasoning models (LRMs) have shown impressive capabilities in short-context reasoning through reinforcement learning (RL), these gains do not generalize well to long-context scenarios. Applications such as multi-document QA, research synthesis, and legal or financial analysis require models to process and reason over sequences exceeding 100K tokens. However, RL optimization in such regimes is plagued by slower reward convergence, unstable policy updates due to KL divergence fluctuations, and reduced exploration resulting from entropy collapse. These bottlenecks reveal a fundamental gap in transitioning LRMs from short-context proficiency to long-context generalization.
    QwenLong-L1: A Structured RL Framework for Long-Context Adaptation
    To address these limitations, the Qwen Research team introduces QwenLong-L1, a novel RL framework designed to adapt LRMs to long-context reasoning tasks. The framework is structured into three key stages:

    Warm-up Supervised Fine-Tuning: Provides a stable initialization for the policy model by training on curated question-context-answer triplets, ensuring basic competence in contextual comprehension and answer extraction.
    Curriculum-Guided Phased Reinforcement Learning: Introduces a staged training process with gradually increasing context lengths. This progression enables the model to incrementally acquire long-context reasoning behaviors without destabilizing policy updates.
    Difficulty-Aware Retrospective Sampling: Enhances exploration by maintaining and reusing hard examples from previous phases, weighted by their difficulty, to encourage deeper reasoning and robustness across diverse inputs.
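
To make the third stage concrete, here is a minimal Python sketch of difficulty-weighted retrospective sampling; the pool structure, the difficulty scores, and all names are illustrative assumptions rather than the paper's implementation.

```python
import random

def retrospective_sample(pool, k):
    """Draw k training examples with probability proportional to difficulty.

    pool: list of (example, difficulty) pairs retained from earlier
    curriculum phases; difficulty is a positive score, e.g. 1 - pass rate.
    """
    examples, difficulties = zip(*pool)
    # random.choices samples with replacement, weighted by difficulty,
    # so the hardest carried-over examples reappear most often.
    return random.choices(examples, weights=difficulties, k=k)

# Hypothetical usage: items the policy keeps failing get weights near 1.0.
pool = [("easy_doc_qa", 0.2), ("hard_multihop", 0.9), ("long_legal_case", 0.7)]
batch = retrospective_sample(pool, k=2)
```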

    These stages are complemented by hybrid reward mechanisms—combining rule-based exact match verification with semantic evaluation by a lightweight LLM—ensuring both precision and recall during policy training.

    Technical Design and Methodological Advantages
QwenLong-L1 integrates recent advances in group-relative RL optimization, specifically GRPO (Group Relative Policy Optimization) and DAPO, to mitigate the computational overhead associated with long-context value estimation:

GRPO estimates advantage by normalizing rewards within sampled groups, eliminating the need for a separate value network and encouraging diverse generation patterns (see the sketch after this list).
    DAPO incorporates mechanisms such as dynamic sampling, overlength penalty shaping, and asymmetric clipping thresholds to prevent entropy collapse and mitigate length biases during training.
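To make the group-relative idea concrete, here is a minimal sketch of the GRPO advantage computation, assuming one scalar reward per sampled response. Only the in-group normalization is shown; the full objective also involves clipped policy ratios and a KL term omitted here:

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantage: normalize each response's reward
    against the mean and std of its own sampled group, removing the
    need for a separately trained value network."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# Four responses sampled for one prompt: correct ones receive positive
# advantage, incorrect ones negative.
print(grpo_advantages([1.0, 0.0, 1.0, 0.0]))
```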

The reward function is defined as the maximum of two signals: a deterministic rule-based match and a semantic judgment from a compact evaluator model (e.g., Qwen2.5-1.5B). This hybrid approach avoids overfitting to rigid formats while maintaining answer correctness across varied notations and phrasings.
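A minimal sketch of such a max-of-two-signals reward, assuming a string-normalized exact match and a hypothetical llm_judge callable that returns a semantic agreement score in [0, 1]:

```python
def hybrid_reward(prediction, gold, llm_judge):
    """Hybrid reward: the max of a deterministic rule-based match and
    a semantic judgment from a small evaluator model. llm_judge is a
    stand-in for the evaluator call, not a real API."""
    rule = 1.0 if prediction.strip().lower() == gold.strip().lower() else 0.0
    semantic = llm_judge(prediction, gold)  # assumed to return a score in [0, 1]
    return max(rule, semantic)
```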
    Moreover, the framework is optimized via progressive context scaling, where the RL process transitions from 20K-token to 60K-token input lengths in controlled phases, stabilizing training dynamics and facilitating policy generalization.
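As a rough sketch of what progressive context scaling could look like in training code (the step counts, the length-based filtering rule, and the train_phase callback are all assumptions for illustration):

```python
# Hypothetical phase schedule mirroring the described 20K -> 60K progression.
PHASES = [
    {"max_context": 20_000, "steps": 500},
    {"max_context": 60_000, "steps": 500},
]

def run_curriculum(train_phase, dataset, phases=PHASES):
    for phase in phases:
        # Restrict each phase to inputs that fit the current context
        # budget, then run RL before lengthening the context.
        subset = [ex for ex in dataset if ex["num_tokens"] <= phase["max_context"]]
        train_phase(subset, max_context=phase["max_context"], steps=phase["steps"])
```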
    Experimental Results and Benchmark Performance
    QwenLong-L1 was evaluated on seven long-context document QA benchmarks, including DocMath, Frames, 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, and Qasper. The 32B variant, QwenLong-L1-32B, demonstrated strong empirical performance:

It outperformed baseline models such as R1-Distill-Qwen-32B by 5.1 points and exceeded leading models like OpenAI-o3-mini and Qwen3-235B-A22B.
    Its performance was comparable to Claude-3.7-Sonnet-Thinking, indicating competitive reasoning capabilities under extreme context lengths.
    Pass@K analysis revealed consistent improvements with increased sampling, achieving a Pass@2 average of 73.7, surpassing DeepSeek-R1 and OpenAI-o1-preview, even at low sampling rates.
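For readers unfamiliar with the metric, Pass@K is commonly estimated with the unbiased formula of Chen et al. (2021): sample n generations, count the c correct ones, and estimate the probability that at least one of k draws is correct. A minimal sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k), i.e. one
    minus the probability that all k drawn samples are incorrect."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples exist, so success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., 16 generations with 10 correct gives a pass@2 estimate of 0.875
print(round(pass_at_k(16, 10, 2), 3))
```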

    Ablation studies further validated the individual contributions of SFT, phased RL, and retrospective sampling. Notably, RL played a decisive role in enabling emergent reasoning behaviors such as grounding, subgoal setting, verification, and backtracking—traits not effectively induced by supervised fine-tuning alone.
    Conclusion
    QwenLong-L1 represents a systematic approach to equipping LRMs with robust long-context reasoning capabilities through reinforcement learning. Its design effectively bridges the gap between short-context expertise and the demands of information-dense environments by combining supervised initialization, curriculum-driven context scaling, and hybrid evaluation strategies. The framework not only achieves state-of-the-art results across long-context benchmarks but also demonstrates the emergence of interpretable reasoning patterns during training.

Check out the Paper, Model on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project.
  • Microsoft Adds Gen AI Features to Paint, Snipping Tool, and Notepad

Microsoft has added a slew of new generative AI features to Paint, Snipping Tool, and Notepad, courtesy of Microsoft Copilot. But unfortunately for many, some of these new features are only available to those with Copilot-compatible Windows machines.

If you’re still using Microsoft Paint, you can use the new AI-powered feature to create custom stickers by simply typing in a prompt, though you'll need a Copilot-compatible device to get it to work. To give the new feature a test drive, click the Sticker Generator button in the Copilot menu. From there, type in a description of the sticker you want to create—for example, “monkey wearing a suit”—and hit the Generate button. Paint will then generate a set of unique stickers based on your prompt. If you fall in love with your new AI stickers, you can access all your recently generated stickers by tapping the new Stickers option in the Paint toolbar.

Copilot can also now cut down the time it takes to edit your clippings. The new feature, Perfect Screenshot, uses AI to resize your clipping based on the content in your selection. You can enable it by holding the Ctrl key after activating the Snipping Tool while selecting the region of your screen you want to capture. Unfortunately, Perfect Screenshot in Snipping Tool will be available only on Copilot+ PCs.

In addition, you can now write new content in Notepad using generative AI by entering a prompt. Place your cursor where you want to insert new text, or select the content you’d like to use as a reference. Then right-click and choose Write, select Write from the Copilot menu, or use the Ctrl + Q keyboard shortcut. Enter your instruction into the dialog and click Send, and the AI-generated output will appear directly on the canvas. You can select Keep Text to add it to your document or Discard if it doesn’t fit your needs. You can also continue refining the output with follow-up prompts to evolve your draft further. But to use Write, you'll need to make sure you have enough of Microsoft's new AI credits.

But these new features may ultimately seem small fry compared with what Microsoft says it has lined up for the future. In a keynote at Microsoft’s Build conference earlier this week, CEO Satya Nadella made some hugely ambitious promises that AI is ready to start transforming the experiences of Microsoft users, while announcing new AI-focused tools for developers. Nadella said the tech world is in the middle of “another platform shift,” equivalent to 1991, when Win32 developer tools were rolling out, or 1996, when a variety of companies built new development tools designed for the internet.
  • TCL QM7K review: stunning image quality for an affordable price

    TCL QM7K

MSRP $1,300.00

    “The TCL QM7K offers a stunning image for its price point, bringing premium-level picture quality to your living room without costing a small fortune.”

    Pros

    Fantastic color accuracy

    Impressive contrast

    Excellent brightness

    Decently wide viewing angle

    Cons

    Reflective screen

    Unimpressive sound


    We finally got our hands on the TCL QM7K Mini-LED QLED, winner of our Top Tech of CES 2025 award. Earlier this year we reviewed the QM6K and were impressed with its value and performance, so we’re excited to put the QM7K through its paces.
TCL continues to impress in the midrange, and I’m happy to say the QM7K did not disappoint. Mini-LED screen technology is making gorgeous displays with incredible contrast more affordable for the average consumer, and TCL is really showing what the technology can do with this new entry.
    There’s a good chance that this isn’t the last model we’ll hear about from TCL this year as the company has switched to a staggered release approach for its 2025 models, but for now let’s soak in the QM7K and all it has to offer.
    TCL QM7K specs

    Sizes
    55, 65, 75, 85, 98, and 115 inches

Pricing
$1,299.99, $1,499.99, $1,999.99, $2,499.99, $4,999.99, and $19,999.99

Display type
QD-Mini LED

    Operating system
    Google TV

Screen resolution
4K Ultra HD (3,840 x 2,160)

HDR support
Dolby Vision, Dolby Vision Gaming, Dolby Vision IQ, HDR 10+, HDR10, HLG

    Native refresh rate
    144Hz

Gaming features
Auto Game Mode (ALLM), AMD FreeSync Premium Pro, Game Accelerator 288, VRR (up to 144Hz)

Audio support
Dolby Atmos, Dolby Digital +, DTS: Virtual X (Passthrough Dolby Atmos, Dolby Digital +, Dolby Digital, PCM)

Connectivity
4 HDMI (1x eARC), USB 3.0, USB 2.0, Ethernet (LAN), S/PDIF, ATSC 1.0 Tuner

    Affordable price means a less premium build
    Andre Revilla / Digital Trends
    The QM7K targets that affordable middle ground between a true budget TV and the premium flagship models of today. It aims to be within reach of most consumers, particularly in the smaller 55- or 65-inch models.
So I can’t say I was too shocked to find, when unboxing and assembling the QM7K, that its construction is a little flimsy.
    Andre Revilla / Digital Trends
    The stand that holds the TV is designed as one central piece, as opposed to the individual legs of the QM6K, which makes attaching it to the QM7K a straightforward process. It’s brushed to look like metal, but metal it is not.
    The plastic T-shaped stand weighs about 5 pounds and does its job in holding the 85-inch model we tested for this review.
    Andre Revilla / Digital Trends
The TV itself weighs only 75 pounds, aided by a frame made almost entirely of plastic. Savings have to come from somewhere to hit these price points, right? The good news is that this makes assembly a lot easier than on far heavier (albeit sturdier) high-end models.
    The QM7K sways a bit anytime you move it or the furniture it sits on, but it’s held securely enough that it’s not going anywhere.
    Decent audio, nothing mind-blowing
TCL bills the QM7K as having better audio than the QM6K, thanks to a Bang & Olufsen upgrade that the company says will offer “more accurate sound quality for an enhanced home theater audio experience.” All in all, the 2.2-speaker system performs about as expected for a mostly affordable model. Which is to say, it didn’t sound terrible, but it didn’t sound great.
    Andre Revilla / Digital Trends
    The bass response left a lot to be desired, but it’s not like I was expecting a 6-inch subwoofer built into the TV. The dialogue could at times sound muddled, blending in a bit too much with a soundtrack or background noise.
    This really only happened in intense scenes where loud music, dialogue, and sound effects all combined in a cacophony of sound. The QM7K natively supports Dolby Digital and Dolby Digital+ audio, but the built-in speakers aren’t doing it justice.
    Seeing as this model supports Dolby Atmos passthrough, you’d be better off with a Dolby Atmos soundbar, or another dedicated audio system to get the most out of the Dolby audio available on most streaming platforms.
    Color accurate right out of the box
    The QM7K features a number of display profiles that users can select from, but for our purposes we’re going to focus on Filmmaker Mode, which was first added on the QM6K. This mode is designed for color accuracy, and it was spot-on right out of the box.
    Andre Revilla / Digital Trends
    We tested the QM7K first in SDR while in Filmmaker Mode, and it delivered an impressive color delta E of 0.8. While this fell to near zero post-calibration, that’s honestly not even necessary, as the human eye struggles to distinguish a delta E of less than 1.0, making Filmmaker Mode more than sufficient.
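For context, the simplest version of the metric, CIE76, is just the Euclidean distance between two colors in L*a*b* space; differences below roughly 1.0 are generally considered imperceptible. The sample values below are hypothetical, chosen to land near the 0.8 figure reported above:

```python
from math import sqrt

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# A target color vs. a hypothetical measured one, about delta E 0.78
print(round(delta_e_76((50.0, 20.0, 30.0), (50.3, 20.4, 30.6)), 2))
```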
    More than bright enough
    If you’re looking to sear your eyeballs out of your sockets during nighttime viewing, then the QM7K is the right TV for you. TCL advertises a peak brightness of 3,000 nits in HDR for the QM7K, though this varies by size and will vary slightly by panel.
    Andre Revilla / Digital Trends
    In my own testing, I was able to get one 2,400-nit burst in HDR testing in a 10% window with brightness, peak luminance, and dynamic backlighting all turned up to the max. More stable readings in HDR came in around 2,000 nits in peak brightness. Peak brightness measurements in SDR came in at a still very respectable 1,600 nits.
    If you’re wanting to get the best color accuracy and contrast out of your QM7K with minimal clipping and as much uniformity as possible, then you’ll likely be watching Filmmaker Mode in its default configuration, which still offers 800 nits with the brightness turned to 100 while keeping those other backlight and luminance settings turned off.
    My gripe with reflections
While the brightness of the QM7K more than delivers, there’s no getting away from the fact that the screen itself is pretty reflective. Don’t get me wrong, I’ve seen worse, but if your living room is like mine and has windows opposite the TV, you’ll find yourself getting up to close them every time you turn on the TV during the daytime.
    Windows reflected in the TCL QM7K Andre Revilla / Digital Trends
    I’m not even picking on reflections when sunlight is pouring in the windows midday, as this issue persists into the evening when the sun is already starting to set. With brightness settings maxed, the QM7K can handle as bright a room as you can throw at it, but any sort of light source directly in front of the screen from your viewing position will be thrown back at you and remains quite visible even in bright scenes.
    A superb image overall
    All in all, the TCL QM7K offers a stunning image for its price point. Without getting too far into the weeds, I’ll say that a lot of cool tech—like the condensed micro lens in the backlight system, helping focus and direct the light from each mini LED, and the decreased optical distance, which is the space between the backlight and the LCD—helps create an image with excellent contrast.
    These technologies also help reduce haloing in HDR, as they lead to less light scatter. The QM7K really goes to show that Mini LED QLED panels are taking the fight to OLED, bringing premium-level picture quality to your living room without costing you a small fortune.

    Value remains the focus
TCL has continued to impress with panel technology and image quality while maintaining approachable pricing. The 85-inch model we tested launched just over two months ago and is already being sold by all major retailers and TCL at about a 30% markdown from its original MSRP of $2,500. Right now, that means you can pick up an 85-inch QM7K for $1,800, and the 55-inch is currently marked down to under $900. Look for these prices to continue dropping as the year goes on, especially as we get into the holiday season.
    The TCL QM7K is an impressive entry that blurs the line between flagship and mid-range in performance while staying solidly in the realm of mid-range pricing. I’ll be eagerly awaiting any TCL launches hopefully still to come this year.
  • GTA 6 Trailer Gets PS2-Style Visual Treatment

Earlier this month, the second trailer for Grand Theft Auto 6 was finally released by Rockstar Games, and as Take-Two boss Strauss Zelnick put it, it "broke the internet." To date, it has over one billion views on YouTube, so it was inevitable that fans would remix the original trailer into a new form. Now, someone has released a remake of the GTA 6 trailer with a PlayStation 2-inspired makeover.

The PS2 Grand Theft Auto 6 trailer was posted on YouTube by a user called Foosmoke, and there appears to be some subtle humor in this video. There's clipping when Jason robs the convenience store, which is definitely true to a PS2 experience. Jason also appears to have more trouble lifting weights in this version than he did in the actual trailer. The Vice City glimpsed in this trailer remake is also noticeably less populated than the one seen in the original trailer. The PS2-era GTA games couldn't handle subtle expressions and body language in their character models like the new game can. And without those touches, this take on the GTA 6 trailer doesn't have the same resonance as the original.
  • AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

    The age of video analytics AI agents is here.
    Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for enterprises across industries, it is one of the largest and most ubiquitous data sources in the world. Yet less than 1% of it is analyzed for insights.
    Nearly half of global GDP comes from physical industries — spanning energy to automotive and electronics. With labor shortage concerns, manufacturing onshoring efforts and rising demand for automation, video analytics AI agents will play a more critical role than ever, helping bridge the physical and digital worlds.
To accelerate the development of these agents, NVIDIA today is making the AI Blueprint for video search and summarization (VSS), powered by the NVIDIA Metropolis platform, generally available — giving developers the tools to create and deploy highly capable AI agents for analyzing vast amounts of real-time and archived video.
A wave of vision AI agents and productivity assistants powered by vision language models (VLMs) is coming online. Combining powerful computer vision models with the skills of highly capable large language models (LLMs), these video analytics AI agents allow enterprises to easily see, search and summarize huge volumes of video. By analyzing videos in real time or reviewing terabytes of recorded video, video analytics AI agents are unlocking unprecedented value and opportunities across a range of important industries.
    Manufacturers and warehouses are using AI agents to help increase worker safety and productivity. For example, agents can help distribute forklifts and position workers for optimal efficiency. Smart cities are deploying video analytics AI agents to reduce traffic congestion and increase safety, and the uses go on and on.

    A Blueprint to Create Diverse Fleets of Video Analytics AI Agents
The VSS blueprint is built on top of the NVIDIA Metropolis platform and boosted by VLMs and LLMs such as NVIDIA VILA and NVIDIA Llama Nemotron, NVIDIA NeMo Retriever microservices, and retrieval-augmented generation (RAG) — a technique that connects LLMs to a company’s enterprise data.
    The VSS blueprint incorporates the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices for VLMs, LLMs and advanced AI frameworks for RAG. With the VSS blueprint, users can summarize a video 100x faster than watching in real time. For example, an hourlong video can be summarized in text in less than one minute.
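The blueprint's internals aren't shown here; as a conceptual sketch only (not the actual VSS blueprint API), the general pattern such a VLM-plus-LLM pipeline follows is caption-then-condense: a VLM describes fixed-length chunks of video, and an LLM merges those captions into one summary. The caption_chunk and summarize helpers below are hypothetical stand-ins for the VLM and LLM calls:

```python
def summarize_video(video_chunks, caption_chunk, summarize):
    """Caption each video chunk with a VLM, then condense the captions
    into a single text summary with an LLM. Both callables are
    hypothetical stand-ins, not part of any real SDK."""
    captions = [caption_chunk(chunk) for chunk in video_chunks]
    return summarize("\n".join(captions))
```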
    The VSS blueprint offers a host of powerful features designed to provide robust video understanding, performance and scalability.
    This release introduces expanded hardware support, including the ability to deploy on a single NVIDIA A100 or H100 GPU for smaller workloads, offering greater flexibility in resource allocation. The blueprint can also be deployed at the edge on the NVIDIA RTX 6000 PRO and NVIDIA DGX Spark computing platforms.
    The VSS blueprint can process hundreds of live video streams or burst clips simultaneously. In addition to visual understanding, it offers audio transcription. Converting speech to text adds contextual depth in scenarios where audio is critical — such as training videos, keynotes or team meetings.
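    A simple way to see why transcription adds context: interleave the audio transcript with the visual captions on one timeline before summarizing, so statements can be tied to what was on screen when they were made. The sketch below assumes nothing about the blueprint's real data schema; timestamps and text are invented for illustration.

```python
# Timestamps (in seconds) and text are made up for illustration only.
visual = [(5.0, "presenter points at a slide titled 'Q3 results'"),
          (62.0, "presenter switches to a live demo screen")]
audio = [(4.0, "Revenue grew eighteen percent quarter over quarter."),
         (60.0, "Let me show you the live dashboard.")]

# Tag each event with its modality, then sort the merged stream by time.
timeline = sorted(
    [(t, "VIDEO", text) for t, text in visual] +
    [(t, "AUDIO", text) for t, text in audio]
)

for t, kind, text in timeline:
    print(f"[{t:6.1f}s] {kind}: {text}")
# The interleaved timeline is what gets summarized, letting the agent tie
# what was said to what was on screen at that moment.
```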
    Industry Leaders Deploy Video Analytics AI Agents to Drive Business Value
    Organizations ranging from the world’s leading manufacturers to smart cities and sports leagues are using the VSS blueprint to develop AI agents for optimizing operations.
    Pegatron, a leading electronics manufacturing company, uses the VSS blueprint to study operating procedures and train employees on best practices. The company is also integrating the blueprint into its PEGAAi platform so organizations can build AI agents to transform manufacturing processes.

    These agents can ingest and analyze massive volumes of video, enabling advanced capabilities like automated monitoring, anomaly detection, video search and incident reporting. Pegatron’s Visual Analytics Agent can be used to understand operating procedures for printed circuit board assembly and identify when actions are correct or incorrect. To date, the agents have reduced Pegatron’s labor costs by 7% and defect rates by 67%.
    Additional leading Taiwanese semiconductor and electronics manufacturers are building AI agents and digital twins to optimize their planning and operational applications.
    Kaohsiung City, Taiwan, is using a unified smart city vision AI application developed by its partner, Linker Vision, to improve incident response times. Previously, city departments such as waste management, transportation and emergency response were isolated by siloed infrastructure — leading to slow response times due to lack of access to critical information.
    Powered by the VSS blueprint, Linker Vision’s AI-powered application has agents that combine real-time video analytics with generative AI to not just detect visual elements but also understand and narrate complex urban events like floods or traffic accidents.
    Linker Vision currently delivers timely insights to 12 city departments and is on track to scale from 30,000 city cameras to over 50,000 by 2026. These insights are providing improved situational awareness and data-driven decision-making across city services, and reducing incident response times by up to 80%.

    The National Hockey League used the VAST InsightEngine with the VSS blueprint to streamline and accelerate vision AI workflows across the massive volumes of game footage it manages.
    With the VAST InsightEngine, the NHL is positioned to search through petabytes of video in under a second, enabling near-instant retrieval of highlights and in-game moments. AI-driven agentic workflows further enhance content creation by automatically clipping, tagging and assembling video content for ease of access and use.
    In the future, the League could potentially use real-time AI reasoning to enable tailored insights — such as player stats, strategy analyses or fantasy recommendations — generated dynamically during live games. This end-to-end automation could transform how media is created, curated and delivered, setting a new standard for AI-driven sports content production.

    Siemens is using its Industrial Copilot for Operations to assist factory floor workers with equipment maintenance tasks, error handling and performance optimization. This generative AI-powered assistant offers real-time answers to equipment errors by drawing on operational and documentation data.
    The copilot was built with a fusion of VSS components, including VLMs, LLMs and NVIDIA NeMo microservices. It has enabled faster decision-making and reduced machine downtime, and Siemens has reported a 30% increase in productivity, with the potential to reach 50%.
    Supported by an Expanding Partner Ecosystem Creating Sophisticated AI Agents
    NVIDIA partners are using the VSS blueprint to expedite the creation of agentic AI video analytics capabilities for their workflows, reducing development time from months to weeks.
    Superb AI, a leader in intelligent video analytics, set up a sophisticated airport operations project at Incheon Airport in a matter of weeks to reduce passenger wait times. In Malaysia, solution provider ITMAX is building advanced visual AI agents with the VSS blueprint for the City of Kuala Lumpur to improve overall city management and reduce incident response times.
    In the advertising sector, PYLER integrated the VSS blueprint into its brand safety (AiD) and ad targeting (AiM) solutions in just a few weeks. Using AiD and AiM, Samsung Electronics increased advertising effectiveness with brand- and product-aligned, high-value ad placements. BYD saw its ad click-through rates increase 4x by targeting contextually relevant and positive content, while Hana Financial Group surpassed multiple brand campaign goals.
    Fingermark is the provider of Eyecue, a real-time computer vision platform used by quick-service restaurants. The company is adding the VSS blueprint to Eyecue to turn video footage into clear, actionable insights on drive-thru wait times, service bottlenecks and staff-related incidents at scale.
    Try the VSS blueprint on build.nvidia.com and read this technical blog for more details.
    Watch the COMPUTEX keynote from NVIDIA founder and CEO Jensen Huang, as well as NVIDIA GTC Taipei 2025 sessions.
    BLOGS.NVIDIA.COM
  • An Unusual Single-Blade Fingernail Clipper Design

    We live in an interesting, some might say gluttonous, era of product development. In addition to the seemingly daily invention of new EDC objects, any given product design has multiple competitors' offerings to choose from. On top of that, both startups and established companies regularly seek to re-invent and re-design existing objects in the name of optimization.
    On that latter note, take the nail clipper. Most of us take them for granted, if we think about them at all. But Canadian startup Khlip reckoned it could improve the ergonomics, and reversed the leverage arrangement. The Griff rotating nail clipper, by Japanese industrial designer Yoshita Moritaka, is also designed with ergonomics in mind. Now a startup called EDJY jumps into this market with both reversed leverage and a re-thought blade arrangement.
    While the Khlip and Griff designs do demonstrate some ergonomic advantage, particularly for those with compromised grip strength, EDJY's claim is a bit harder to swallow: Their cutting technique, they say, results in "Smoother, healthier nails." Most nail clippers have two blades in a jaw arrangement. EDJY's eponymous product features just a top blade, with an anvil arrangement at the bottom. The company claims this set-up "cuts, not crushes" fingernails, "leaving them with a flawlessly smooth edge." (Sincere question: Are jagged nails a problem for many? I don't pay much attention to mine.)
    [Images: standard clipper results vs. EDJY results]
    The company also claims the leverage arrangement "requires 250% less force to cut through your nails," which would be an improvement for the elderly or those with grip issues. The nail clippings are captured within the body of the clippers. While that's not a unique feature, with multiple manufacturers offering a collection-bin-style design, the Khlip and Griff designs lack it. The EDJY is made in the U.S.A. and runs $16.50.
    WWW.CORE77.COM