• Apple is reportedly redesigning the MacBook Pro next year, here’s what we’re expecting

    Rumors strongly suggest that Apple will be overhauling the MacBook Pro in 2026, marking five years since the previous redesign that we know and love today. There are three key rumors to follow with this redesigned MacBook Pro, and we’ll be delving into them here.

    OLED display
    After debuting the technology in the iPad Pro in 2024, Apple is expected to bring OLED displays to the MacBook Pro for the very first time with the 2026 redesign. The switch should bring higher brightness, better contrast ratios, and richer colors to the MacBook Pro lineup.
    Plus, according to TheElec, Apple will be using the same Tandem OLED display tech as the aforementioned iPad Pro:

    The OLED MacBook Air is also expected to get a standard single-stack display, rather than the more sophisticated Two-Stack Tandem displays we reported on for the MacBook Pro.
    Single-stack displays have one red, green and blue layer, while two-stack tandem OLED has a second RGB layer. Two layers stacked in tandem increases the brightness of the screen, while also increasing longevity.

    Alongside the transition to OLED, Apple may also ditch the notch in favor of a smaller camera hole cutout. This information comes from Omdia, which describes it as a “rounded corner + hole cut.”
    The report doesn’t specify whether it’ll be a single hole punch or something closer to the Dynamic Island on the iPhone. Either way, the cutout in your MacBook Pro’s display won’t be as chunky once the redesign arrives.
    Thinner design
    According to Bloomberg, Apple will be adopting a new, thinner design with the 2026 MacBook Pro. There aren’t many other details specified, so it’s unclear if the overall chassis design will change:

    Though Apple has continued to enhance the product with new chips and other internal improvements, the MacBook Pro probably won’t get another true overhaul until 2026. The company had once hoped to release this new version in 2025 — with a thinner design and a move to crisper OLED screens — but there were delays related to the display technology.

    Cutting-edge M6 chip
    Apple will also debut the M6 family of chips in this new MacBook Pro redesign. Currently, M6 is anticipated to be the first generation of Apple Silicon to adopt TSMC’s 2nm technology, alongside the A20 chip for iPhone.
    As usual, we should see M6, M6 Pro, and M6 Max versions of the MacBook Pro in both 14-inch and 16-inch sizes. With a new process node, expect noticeable performance and efficiency gains.
    Wrap up
    Overall, the headline feature of this upgrade is certainly the move to OLED. That said, I’ll also appreciate the thinner design, particularly on the 16-inch MacBook Pro, which currently comes in at 4.7 pounds.
    If you aren’t too fond of waiting around a year and a half to buy a new MacBook Pro, there are some good discounts on the current M4 MacBook Pro models now that they’re around halfway through their lifespan. You can pick up an M4 14-inch for $1429, an M4 Pro 14-inch for $1779, or an M4 Pro 16-inch for $2249. These are all around $200 off their typical prices.

  • Aussie Streaming Guide: The Best TV & Movies for June 2025

As the chill of June sets in, it’s the perfect time to trade spreadsheets for screenplays. Instead of navigating the labyrinth of endless options across Australia’s six major streaming platforms, we’ve curated a selection of standout films and series to warm your winter nights.

New in June on Foxtel and Binge
TV litter pick: Mix Tape – 12 Jun: Former teenage sweethearts reconnect decades later through shared musical memories, reigniting past emotions and questions about their future.
Movie litter pick: Wicked – 26 Jun: Explores the origin story of Elphaba and Glinda, two witches whose friendship and destinies shape the Land of Oz.
What notable movies are coming to Binge?
Rewriting Trump – 1 Jun
Smile 2 – 4 Jun
Anora – 11 Jun
Wicked – 26 Jun
What notable series are coming to Binge?
Great Canadian Bake Off S8 – 3 Jun
Mr Loverman – 4 Jun
Below Deck S12 – 10 Jun
Mix Tape – 12 Jun
Revival – 13 Jun
Love It Or List It NZ – 25 Jun
Eva Longoria: Searching For Mexico – 27 Jun

New in June on Netflix
TV litter pick: Squid Game S03 – 27 Jun: Contestants engage in deadly games for a cash prize, revealing human nature’s darkest facets.
Movie litter pick: Piece By Piece – 7 Jun: An animated journey through Pharrell Williams’ life, showcasing his musical evolution and creative milestones.
What notable movies are coming to Netflix?
Power Moves with Shaquille O’Neal – 4 Jun
Piece By Piece – 7 Jun
Titan: The Oceangate Disaster – 11 Jun
The Waterfront – 19 Jun
KPop Demon Hunters – 20 Jun
What notable series are coming to Netflix?
Ginny and Georgia S03 – 5 Jun
The Survivors – 6 Jun
Fubar S02 – 12 Jun
The Fairly Oddparents: A New Wish S02 – 12 Jun
Dallas Cowboy Cheerleaders S02 – 18 Jun
The Ultimatum: Queer Love S02 – 25 Jun
Squid Game S03 – 27 Jun

New in June on Disney+
TV litter pick: The Bear S04 – 26 Jun: Chef Carmy and his team navigate the pressures of the culinary world, striving for excellence amidst chaos.
Movie litter pick: Ironheart – 25 Jun: Teen genius Riri Williams builds her own advanced suit of armor, stepping into the superhero world.
What notable movies are coming to Disney+?
Predator: Killer of Killers – 6 Jun
Ocean with David Attenborough – 8 Jun
Call Her Alex – 10 Jun
Ironheart – 25 Jun
What notable series are coming to Disney+?
Call Her Alex – 10 Jun
The Bear S04 – 26 Jun

New in June on Apple TV+
TV litter pick: Stick – 4 Jun: A sports dramedy following a former hockey star’s return to coaching, blending humor with heartfelt moments.
Movie litter pick: Echo Valley – 13 Jun: A suspenseful thriller where a mother confronts dark secrets to protect her daughter from a dangerous past.
What notable movies are on Apple TV+?
Echo Valley – 13 Jun
What notable series are coming to Apple TV+?
Stick – 4 Jun
The Buccaneers S02 – 18 Jun
Smoke – 27 Jun
Sign up for a free 7-day trial of Apple TV+

New in June on Amazon Prime Video
TV litter pick: Babygirl – 3 Jun: A provocative drama exploring a young woman’s journey through love, betrayal, and self-discovery.
Movie litter pick: Deep Cover – 12 Jun: An undercover agent infiltrates a drug cartel, facing moral dilemmas and identity crises.
What notable movies are coming to Prime Video?
Babygirl – 3 Jun
Nascar to Le Mans – 12 Jun
Deep Cover – 12 Jun
Beyond After – 24 Jun
What notable series are coming to Prime Video?
We Were Liars S01 – 18 Jun
Countdown S01 – 25 Jun
Marry My Husband – 27 Jun

New in June on Stan
TV litter pick: This City Is Ours – 4 Jun: A gritty crime drama delving into power struggles and corruption within a city’s law enforcement.
Movie litter pick: The Surfer – 15 Jun: A psychological thriller featuring a man confronting his past while facing surreal challenges on a remote beach.
What notable movies are coming to Stan?
The Surfer – 15 Jun
Joh: The Last King of Queensland – 22 Jun
What notable series are coming to Stan?
This City Is Ours – 4 Jun
Black Mafia Family S04 – 6 Jun
The Gold S02 – 9 Jun
Alone S12 – 13 Jun
Hal and Harper – 26 Jun

Adam Mathew is our Aussie streaming savant. He also games on YouTube.
  • Elden Ring Nightreign may be co-op, but I’m having a blast solo

Imagine playing Fortnite, but instead of fighting other players, all you want to do is break into houses to look for caches of slurp juice. Yes, the storm is closing in on you, and there’s a bunch of enemies waiting to kill you, but all you want to do is take a walking tour of Tilted Towers. Then, when the match is over, instead of queueing again, you start reading the in-game lore for Peely and Sabrina Carpenter. You can count your player kills on one hand, while your deaths number in the hundreds. You’ve never achieved a victory royale, but you’ve never had more fun.

That’s how I play Elden Ring Nightreign.

Nightreign is FromSoftware’s first Elden Ring spinoff, and it’s unlike any Souls game the developer has made before. Nightreign takes the conceit of so many battle royale games — multiplayer combat focused on acquiring resources across a large map that slowly shrinks over time — and wraps it in the narrative, visual aesthetics, and combat of Elden Ring. Instead of the Tarnished, you are a Nightfarer. Instead of the expansive Lands Between, you are sent to Limveld, an island with an ever-shifting landscape. And instead of becoming the Elden Lord, your goal is to defeat the Night Lord and end the destructive storm that scours the land.

Elden Ring Nightreign aura-farming exhibit A.

In Nightreign, gameplay sessions are broken up into expeditions, each of which is divided into three day-night cycles. During the day, you — either solo or with two other players — explore the world looking for weapon upgrades and fighting bosses for the enhancements they reward. You’ll be forced to move as the deadly Night’s Tide slowly consumes the map, whittling your health to nothing if you’re caught in it. When the map is at its smallest, you face a tough midboss. Defeat it to commence day two of the expedition, or die and start it all over. Then, on the third day, you face the expedition’s final boss. There are several expeditions to conquer, each with different bosses, midbosses, weapons to collect, and all kinds of events that make each run unique.

I had the opportunity to play Nightreign once before earlier this year (and during a more recent network test), and it wasn’t the best preview, as the game was plagued with all kinds of issues that didn’t allow me to experience it the way the developers intended. Those technical issues have been ironed out, but I still haven’t completed the game’s most basic objective: beat the first expedition. This isn’t because of any technical or gameplay issues I had. For the times I wanted to play as intended, my colleague Jay Peters stepped in to help me, and I had no problem finding party members to tackle expeditions with on my own… I just never really wanted to. And part of the reason I’m enjoying Nightreign so much is that the game lets me play it in a way that’s completely counterintuitive – slowly and alone.

Collaborative gaming doesn’t always feel good to me. I want to take things at my own pace, and that’s hard to do when there’s a group of people frustrated with me because they need my help to kill a boss while I’m still delving into a dungeon a mile away. But the ability to solo queue does come with a significant catch – you’re not gonna get very far. I died often, and to everything from random enemies to bosses. It’s not often that I even make it to that first boss fight without dying to the warm-up battles that precede it. This should frustrate me, but I don’t care in the slightest. I’m just so pleased that I can go at my own pace to explore more of Elden Ring’s visually gorgeous and narratively sumptuous world.

You get by with a little help from your friends. I, however, am built different. Image: FromSoftware

Which brings me to my favorite part: its characters. Nightreign has eight new classes, each with their own unique abilities. The classes can still use every weapon you find (with some locked behind level requirements), so there’s an option to tailor a character to fit your playstyle. There are certain kinds of classes I gravitate toward, specifically ranged combat, but for the first time in a class-based game, I love every one of them. It is so much fun shredding enemies to ribbons with the Duchess, using her Restage ability to replay the attacks done to an enemy, essentially doubling the damage they receive. I love the Raider’s powers of just being a big fuckin’ dude, slamming things with big-ass great weapons. And true to my ranged-combat-loving heart, Ironeye’s specialty with bows makes it so nice when I wanna kill things without putting myself in danger.

Then there’s the Guardian. Look at him. He’s a giant armored bird-person with the busted wing and the huge-ass halberd and shield. His story involves being a protector who failed his flock and has found a new one in the other Nightfarers. I fell to my knees reading one of his codex entries and seeing how the Recluse, the mage character, helped him with his damaged wing. Every character has a codex that updates with their personal story the more expeditions you attempt. This is the shit I get out of bed for.

The Guardian is the coolest FromSoftware character since Patches and I have a crush on him. Image: FromSoftware

I thought I was going to hate the concept of Nightreign. I want more Elden Ring: I love that world, so any chance I have to go back, I’ll take it, but… I just don’t like multiplayer games. Describing Nightreign makes it sound like the reason it exists is that an out-of-touch CEO looked at the popularity of Elden Ring and at all the money Fortnite prints and went, “Yeah, let’s do that.” Even if that’s the case, Nightreign has been constructed so that it still appeals to lore freaks like me, and I can ignore the less savory bits around multiplayer with relative ease.

If I can take a moment and borrow a pair of words from my Gen Z niblings to describe Nightreign, it’d be “aura” and “aura farming.” Aura describes a person’s general coolness or badassery, while aura farming covers the activities one engages in to increase that aura. John Wick has aura. In the first movie, when he performs his monologue about getting back in the assassin business, spitting and screaming – that’s aura farming.

And between the cooperative nature of the game, its fast-paced combat, and the new characters, abilities, and story, Elden Ring Nightreign has a ton of aura that I’m having a lot of fun farming – just not in the way I expected.

Elden Ring Nightreign is out now on Xbox, PlayStation, and PC.
  • Improvements to shader build times and memory usage in 2021 LTS

As the available feature set of Unity’s Scriptable Render Pipeline (SRP) continues to grow, so does the number of shader variants being processed and compiled at build time, alongside ongoing support for additional graphics APIs and an ever-growing selection of target platforms.

Shaders are compiled and cached after an initial build, thus accelerating further incremental builds. While clean builds usually take the longest, lengthy warm build times can be a common pain point during project development and iteration.

To address this problem, Unity’s Shader Management team has been hard at work to provide meaningful and scalable solutions. This has resulted in significantly reduced shader build times and runtime memory usage for projects created using Unity 2021 LTS and later versions.

To read more about these new optimizations, including affected versions, backports, and figures from our internal testing, skip directly to the sections covering shader variant prefiltering and dynamic shader loading. At the end of this blog post, we also address our future plans to further refine shader variant management as a whole – across project authoring, build, and runtime.

Before delving into the improvements made to Unity’s shader system, let’s take the opportunity to quickly review the concepts of conditional shader compilation, shader variants, and shader variant stripping.

Conditional shader features enable developers and artists to conveniently control and alter a shader’s functionality using scripts, material settings, and project and graphics settings. Such conditional features serve to simplify project authoring, allowing projects to scale efficiently by minimizing the number of shaders you have to author and maintain.

Conditional shader features can be implemented in different ways:
Static branching
Shader variant compilation
Dynamic branching

While static branching avoids branching-related shader execution overhead at runtime, it’s evaluated and locked at compilation time and does not provide runtime control. Shader variant compilation, meanwhile, is a form of static branching that provides additional runtime control. It works by compiling a unique shader program for every possible combination of static branches, in order to maintain optimal GPU performance at runtime.

Such variants are created by conditionally declaring and evaluating shader functionality through shader_feature and multi_compile shader keywords. The correct shader variants are loaded at runtime based on active keywords and runtime settings. Declaring and evaluating additional shader keywords can lead to an increase in build time, file size, and runtime memory usage.

Dynamic branching, on the other hand, entirely avoids the overhead of shader variant compilation, resulting in faster builds as well as reduced file size and memory usage. This can bring smoother and faster iteration during development. However, dynamic branching can have a strong impact on shader execution performance, depending on the shader’s complexity and the target device. Asymmetric branches, where one side of the branch is much more complex than the other, can negatively impact performance, because the execution of the simpler path can still incur the performance penalties of the more complex path.

When introducing conditional shader features in your own shaders, these approaches and trade-offs should be kept in mind. For more detailed information, see the shader conditionals, shader branching, and shader variants documentation.
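To make the keyword-driven approach concrete, here is a minimal C# sketch of how variants might be selected at runtime. The keyword names (_SOFT_SHADOWS_ON, _DETAIL_HIGH) are hypothetical placeholders rather than keywords defined by any shipped Unity shader; the calls themselves are the standard Shader and Material keyword APIs.

using UnityEngine;

public class QualityKeywordToggle : MonoBehaviour
{
    // Hypothetical keywords, assumed to be declared in a shader via:
    //   #pragma multi_compile _ _SOFT_SHADOWS_ON   (globally controllable at runtime)
    //   #pragma shader_feature_local _DETAIL_HIGH  (per-material, stripped if no material enables it)
    const string SoftShadowsKeyword = "_SOFT_SHADOWS_ON";
    const string DetailKeyword = "_DETAIL_HIGH";

    [SerializeField] Material targetMaterial;

    public void ApplyLowQuality()
    {
        // multi_compile keywords can be switched globally at runtime,
        // so both variants must survive stripping and ship in the build.
        Shader.DisableKeyword(SoftShadowsKeyword);

        // shader_feature keywords are typically toggled per material;
        // variants never enabled by any built material can be stripped.
        if (targetMaterial != null)
            targetMaterial.DisableKeyword(DetailKeyword);
    }

    public void ApplyHighQuality()
    {
        Shader.EnableKeyword(SoftShadowsKeyword);
        if (targetMaterial != null)
            targetMaterial.EnableKeyword(DetailKeyword);
    }
}

Because the multi_compile variant here can be requested from script at any time, the Editor has to keep both branches in the build, which is exactly why multi_compile keywords tend to multiply build time and memory more than shader_feature ones.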
To mitigate the increase in shader processing and compilation time, shader variant stripping is utilized. It aims to exclude unnecessary shader variants from compilation based on factors such as:
Materials included and keywords enabled
Project and Render Pipeline settings
Scriptable stripping

When enumerating shader variants, the Editor will automatically filter out any keywords declared with shader_feature that are not enabled by materials referenced and included in the build. As a result, these keywords will not generate any additional variants. For example, if the Clear Coat material property is not enabled by any material using the Complex Lit URP Shader, all shader variants that implement the Clear Coat functionality will safely be stripped at build time.

multi_compile keywords, meanwhile, let developers and players freely control the shader’s functionality at runtime based on available Player settings and scripts. The flip side is that such keywords cannot automatically be stripped by the Editor to the same degree as shader_feature keywords, which is why they generally produce a larger number of variants.

Scriptable stripping is a C# API that lets you exclude shader variants from compilation at build time, targeting keywords and combinations not required at runtime. The render pipelines utilize scriptable stripping in order to strip unnecessary variants according to the project’s Render Pipeline settings and the Quality Assets included in the build.

Setting                     Low quality    High quality    Variant multiplier
Main Light/Cast Shadows     Off            On              2x
Main Light/Cast Shadows     On             On              1x
Main Light/Cast Shadows     Off            Off             1x

In order to maximize the effects of the Editor’s shader variant stripping, we recommend disabling all graphics-related features and Render Pipeline settings not utilized at runtime. Please refer to the official documentation for more on shader variant stripping.

Shader variant stripping greatly reduces the number of compiled shader variants, based on factors like the Render Pipeline Quality Assets in the build. However, stripping is currently performed at the end of the shader processing stage, and simply enumerating all the possible variants can still take a long time, regardless of compilation.

In order to reduce shader variant processing times, we are now introducing a significant optimization to the engine’s built-in shader variant stripping: shader variant prefiltering. With it, both clean and warm build times are significantly reduced. The optimization works by introducing the early exclusion of multi_compile keywords, according to Prefiltering Attributes driven by Render Pipeline settings. This decreases the number of variants being enumerated for potential stripping and compilation, which in turn reduces shader processing time – with warm build times reduced by up to 90% in the most drastic examples.

Shader variant prefiltering first landed in 2023.1.0a14, and has been backported to 2022.2.0b15 and 2021.3.15f1. Variant prefiltering also helps cut down initial/clean build times by applying the same principle.
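To illustrate the scriptable stripping hook mentioned above, here is a minimal Editor-side sketch using Unity’s IPreprocessShaders callback, which is the interface scriptable stripping is typically built on. The keyword name (_HYPOTHETICAL_FEATURE) and the decision rule are placeholders for whatever your project genuinely never uses at runtime.

// This file must live in an Editor folder (or an Editor-only assembly).
using System.Collections.Generic;
using UnityEditor.Build;
using UnityEditor.Rendering;
using UnityEngine;
using UnityEngine.Rendering;

// Runs per shader snippet at build time and removes variants we know
// the player will never request, so they are never sent to the compiler.
class StripHypotheticalFeature : IPreprocessShaders
{
    // Placeholder keyword; substitute a keyword your project really never enables.
    readonly ShaderKeyword unusedKeyword = new ShaderKeyword("_HYPOTHETICAL_FEATURE");

    // Lower values run earlier relative to other IPreprocessShaders callbacks.
    public int callbackOrder => 0;

    public void OnProcessShader(Shader shader, ShaderSnippetData snippet, IList<ShaderCompilerData> data)
    {
        // Iterate backwards so RemoveAt does not skip entries.
        for (int i = data.Count - 1; i >= 0; --i)
        {
            if (data[i].shaderKeywordSet.IsEnabled(unusedKeyword))
                data.RemoveAt(i);
        }
    }
}

The render pipelines ship far more elaborate versions of this callback; it is how the Quality Asset-driven stripping summarized in the table above is applied in practice.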
Historically, the Unity runtime would front-load all shader objects from disk to CPU memory during scene and resource load. In most cases, a built project and scene include many more shader variants than are needed at any given moment during the application’s runtime. For projects using a large number of shaders, this often results in high shader memory usage at runtime.

Dynamic shader loading addresses the issue by providing refined user control over shader loading behavior and memory usage. This optimization facilitates the streaming of shader data chunks into memory, as well as the eviction of shader data that is no longer needed at runtime, based on a user-controlled memory budget. This allows you to significantly reduce shader memory usage on platforms with limited memory budgets.

New Shader Variant Loading Settings are now accessible from the Editor’s Player Settings. Use them to override the maximum number of shader chunks loaded and the per-shader chunk size. With the following C# API now available, you can also override the Shader Variant Loading Settings from Editor scripts:
PlayerSettings.SetDefaultShaderChunkCount and PlayerSettings.SetDefaultShaderChunkSizeInMB to override the project’s default shader loading settings
PlayerSettings.SetShaderChunkCountForPlatform and PlayerSettings.SetShaderChunkSizeInMBForPlatform to override these settings on a per-platform basis

You can also override the maximum number of loaded shader chunks at runtime via Shader.maximumChunksOverride. This enables you to adjust the shader memory budget based on factors such as the total available system and graphics memory queried at runtime.

Dynamic shader loading landed in 2023.1.0a11 and has been backported to 2022.2.0b10, 2022.1.21f1, and 2021.3.12f. In the case of the Universal Render Pipeline’s Boat Attack demo, we observed a 78.8% reduction in runtime memory usage for shaders, from 315 MiB to 66.8 MiB. You can read more about this optimization in the official announcement.
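As a rough sketch of how these settings could be wired up, the snippet below uses the APIs listed above; the menu path, chunk sizes, and chunk counts are illustrative placeholders to be tuned per project, not recommendations.

#if UNITY_EDITOR
using UnityEditor;
#endif
using UnityEngine;

public static class ShaderChunkConfig
{
#if UNITY_EDITOR
    // Example build-time configuration, applied from a custom menu entry.
    [MenuItem("Tools/Apply Shader Chunk Settings")]
    static void ApplyChunkSettings()
    {
        // Project-wide defaults: 4 MB chunks, keep at most 8 loaded.
        PlayerSettings.SetDefaultShaderChunkSizeInMB(4);
        PlayerSettings.SetDefaultShaderChunkCount(8);

        // Tighter budget for a memory-constrained platform (Android as an example).
        PlayerSettings.SetShaderChunkSizeInMBForPlatform(BuildTarget.Android, 2);
        PlayerSettings.SetShaderChunkCountForPlatform(BuildTarget.Android, 4);
    }
#endif

    // At runtime, the chunk budget can be adjusted for devices with more headroom.
    public static void OverrideRuntimeBudget(int chunkCount)
    {
        // Overrides the Player Settings chunk count limit at runtime.
        Shader.maximumChunksOverride = chunkCount;
    }
}

In a real project you would likely call OverrideRuntimeBudget from a bootstrap script after checking something like SystemInfo.systemMemorySize, rather than hard-coding a value.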
Beyond the critical changes mentioned above, we are working to enhance the Universal Render Pipeline’s shader variant generation and stripping. We’re also investigating additional improvements to Unity’s shader variant management at large. The ultimate goal is to facilitate the engine’s increasing feature set while ensuring minimal shader build and runtime overhead.

Some of our ongoing investigations involve the deduplication of shader resources across similar variants, as well as overall improvements to the shader keywords and Shader Variant Collection APIs. The aim is to provide more flexibility and control over shader variant processing and runtime performance.

Looking ahead, we are also exploring the possibility of in-Editor tooling for shader variant tracing and analysis to provide the following details on shader variant usage:
Which shaders and keywords produce the most variants?
Which variants are compiled but unused at runtime?
Which variants are stripped but requested at runtime?

Your feedback has been instrumental so far, as it helps us prioritize the most meaningful solutions. Please check out our public roadmap to vote on the features that best suit your needs. If there are additional changes you’d like to see, feel free to submit a feature request, or contact the team directly in the shader forum.
    UNITY.COM
    Improvements to shader build times and memory usage in 2021 LTS
    As the available feature set of Unity’s Scriptable Render Pipeline (SRP) continues to grow, so does the number of shader variants being processed and compiled at build time, alongside ongoing support for additional graphics APIs and an ever-growing selection of target platforms.

    Shaders are compiled and cached after an initial (“clean”) build, which accelerates further incremental (“warm”) builds. While clean builds usually take the longest, lengthy warm build times can be a common pain point during project development and iteration. To address this problem, Unity’s Shader Management team has been hard at work on meaningful and scalable solutions, resulting in significantly reduced shader build times and runtime memory usage for projects created with Unity 2021 LTS and later versions.

    To read more about these new optimizations, including affected versions, backports, and figures from our internal testing, skip directly to the sections covering shader variant prefiltering and dynamic shader loading. At the end of this blog post, we also address our future plans to further refine shader variant management as a whole – across project authoring, build, and runtime.

    Before delving into the improvements made to Unity’s shader system, let’s quickly review the concepts of conditional shader compilation, shader variants, and shader variant stripping.

    Conditional shader features enable developers and artists to conveniently control and alter a shader’s functionality using scripts, material settings, and project and graphics settings. Such conditional features simplify project authoring, allowing projects to scale efficiently by minimizing the number of shaders you have to author and maintain. Conditional shader features can be implemented in different ways: static (compile-time) branching, shader variant compilation, and dynamic (runtime) branching.

    While static branching avoids branching-related shader execution overhead at runtime, it is evaluated and locked at compilation time and does not provide runtime control. Shader variant compilation, meanwhile, is a form of static branching that adds runtime control: a unique shader program (variant) is compiled for every possible combination of static branches, in order to maintain optimal GPU performance at runtime. Such variants are created by conditionally declaring and evaluating shader functionality through shader_feature and multi_compile shader keywords, and the correct variants are loaded at runtime based on active keywords and runtime settings. Declaring and evaluating additional shader keywords can, however, lead to an increase in build time, file size, and runtime memory usage.

    Dynamic (uniform-based) branching, on the other hand, entirely avoids the overhead of shader variant compilation, resulting in faster builds, smaller files, and lower memory usage, which makes for smoother and faster iteration during development. The trade-off is that dynamic branching can have a strong impact on shader execution performance, depending on the shader’s complexity and the target device. Asymmetric branches, where one side of the branch is much more complex than the other, can be particularly costly, because executing the simpler path can still incur the performance penalties of the more complex path.

    When introducing conditional shader features in your own shaders, keep these approaches and trade-offs in mind.
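    To ground the keyword discussion above, here is a minimal C# sketch of how such keywords are typically toggled from script in a Unity project. The keyword names (_FANCY_SPECULAR, _GLOBAL_FOG) and the component itself are hypothetical stand-ins for keywords a shader would declare with shader_feature and multi_compile respectively; only the standard Material and Shader keyword APIs are used.

        using UnityEngine;

        // Hedged illustration: _FANCY_SPECULAR and _GLOBAL_FOG are made-up keyword names,
        // standing in for "#pragma shader_feature _FANCY_SPECULAR" and
        // "#pragma multi_compile _ _GLOBAL_FOG" declarations in a shader.
        public class KeywordToggleExample : MonoBehaviour
        {
            [SerializeField] Material material;

            public void SetFancySpecular(bool enabled)
            {
                // Per-material keyword, typically declared with shader_feature. Variants for
                // keywords not enabled on any material included in the build are stripped,
                // so toggling this at runtime only works if some built material already uses it.
                if (enabled) material.EnableKeyword("_FANCY_SPECULAR");
                else material.DisableKeyword("_FANCY_SPECULAR");
            }

            public void SetGlobalFog(bool enabled)
            {
                // Global keyword, typically declared with multi_compile: every combination is
                // compiled, so it can be switched freely at runtime at the cost of more variants.
                if (enabled) Shader.EnableKeyword("_GLOBAL_FOG");
                else Shader.DisableKeyword("_GLOBAL_FOG");
            }
        }

    The trade-off from the previous paragraphs is visible here: the shader_feature-style keyword keeps builds lean but only works if the variant actually shipped, while the multi_compile-style keyword stays freely switchable at the cost of compiling every combination.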
    For more detailed information, see the shader conditionals, shader branching, and shader variants documentation.

    To mitigate the increase in shader processing and compilation time, shader variant stripping is used. It aims to exclude unnecessary shader variants from compilation based on factors such as the materials included and the keywords they enable, project and Render Pipeline settings, and scriptable stripping.

    When enumerating shader variants, the Editor automatically filters out any keywords declared with shader_feature that are not enabled by materials referenced and included in the build, so these keywords do not generate any additional variants. For example, if the Clear Coat material property is not enabled by any material using the Complex Lit URP shader, all shader variants that implement the Clear Coat functionality are safely stripped at build time.

    multi_compile keywords, in contrast, allow developers and players to freely control the shader’s functionality at runtime based on available Player settings and scripts. The flip side is that such keywords cannot automatically be stripped by the Editor to the same degree as shader_feature keywords, which is why they generally produce a larger number of variants.

    Scriptable stripping is a C# API that lets you exclude shader variants from compilation at build time, targeting keywords and keyword combinations that are not required at runtime (a minimal sketch of this mechanism appears at the end of this section). The render pipelines use scriptable stripping to remove unnecessary variants according to the project’s Render Pipeline settings and the Quality Assets included in the build. The table below illustrates how the settings of the Quality Assets in a build multiply the number of variants:

        Low quality                      High quality    Variant multiplier
        Main Light/Cast Shadows: Off     On              2x
        Main Light/Cast Shadows: On      On              1x
        Main Light/Cast Shadows: Off     Off             1x

    To maximize the effects of the Editor’s shader variant stripping, we recommend disabling all graphics-related features and Render Pipeline settings that are not used at runtime. Please refer to the official documentation for more on shader variant stripping.

    Shader variant stripping greatly reduces the number of compiled shader variants, based on factors like the Render Pipeline Quality Assets in the build. However, stripping is currently performed at the end of the shader processing stage, and simply enumerating all the possible variants can still take a long time, regardless of compilation.

    To reduce shader variant processing (and project build) times, we are now introducing a significant optimization to the engine’s built-in shader variant stripping: shader variant prefiltering, which significantly reduces both clean and warm build times. It works by excluding multi_compile keywords early, according to Prefiltering Attributes driven by Render Pipeline settings. This decreases the number of variants enumerated for potential stripping and compilation, which in turn reduces shader processing time, with warm build times reduced by up to 90% in the most drastic examples. Shader variant prefiltering first landed in 2023.1.0a14 and has been backported to 2022.2.0b15 and 2021.3.15f1. Variant prefiltering also helps cut down initial (clean) build times by applying the same principle.
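    Here is the minimal scriptable stripping sketch promised above, assuming a Unity Editor context. The StripDebugOverdrawVariants class and the DEBUG_OVERDRAW keyword are hypothetical; the IPreprocessShaders callback is the standard build-time entry point for this kind of filtering, but what you strip in practice depends entirely on your project and its Render Pipeline settings.

        using System.Collections.Generic;
        using UnityEditor.Build;
        using UnityEditor.Rendering;
        using UnityEngine;
        using UnityEngine.Rendering;

        // Hedged sketch: strips every variant that includes a hypothetical DEBUG_OVERDRAW keyword.
        // Place this in an Editor folder; it runs once per shader snippet during the build.
        class StripDebugOverdrawVariants : IPreprocessShaders
        {
            static readonly ShaderKeyword k_DebugOverdraw = new ShaderKeyword("DEBUG_OVERDRAW");

            // Lower values run earlier relative to other IPreprocessShaders implementations.
            public int callbackOrder => 0;

            public void OnProcessShader(Shader shader, ShaderSnippetData snippet,
                                        IList<ShaderCompilerData> data)
            {
                for (int i = data.Count - 1; i >= 0; --i)
                {
                    // Removing an entry here excludes that variant from compilation and the build.
                    if (data[i].shaderKeywordSet.IsEnabled(k_DebugOverdraw))
                        data.RemoveAt(i);
                }
            }
        }

    This is the same kind of hook the render pipelines use to drop variants based on Render Pipeline settings and the Quality Assets included in the build.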
Historically, the Unity runtime would front-load all shader objects from disk to CPU memory during scene and resource load. In most cases, a built project and scene include many more shader variants than are needed at any given moment during the application’s runtime. For projects using a large number of shaders, this often results in high shader memory usage at runtime.

    Dynamic shader loading addresses the issue by providing refined user control over shader loading behavior and memory usage. This optimization enables the streaming of shader data chunks into memory, as well as the eviction of shader data that is no longer needed at runtime, based on a user-controlled memory budget. This allows you to significantly reduce shader memory usage on platforms with limited memory budgets.

    New Shader Variant Loading Settings are now accessible from the Editor’s Player Settings. Use them to override the maximum number of loaded shader chunks and the per-shader chunk size (MB). With the C# API, you can also override the Shader Variant Loading Settings from Editor scripts: PlayerSettings.SetDefaultShaderChunkCount and PlayerSettings.SetDefaultShaderChunkSizeInMB override the project’s default shader loading settings, while PlayerSettings.SetShaderChunkCountForPlatform and PlayerSettings.SetShaderChunkSizeInMBForPlatform override these settings on a per-platform basis.

    You can also override the maximum number of loaded shader chunks at runtime via Shader.maximumChunksOverride. This enables you to adjust the shader memory budget based on factors such as the total available system and graphics memory queried at runtime. (A short sketch of these overrides appears at the end of this post.)

    Dynamic shader loading landed in 2023.1.0a11 and has been backported to 2022.2.0b10, 2022.1.21f1, and 2021.3.12f1. In the case of the Universal Render Pipeline (URP)’s Boat Attack demo, we observed a 78.8% reduction in runtime memory usage for shaders, from 315 MiB (default) to 66.8 MiB (with dynamic loading). You can read more about this optimization in the official announcement.

    Beyond the changes described above, we are working to enhance the Universal Render Pipeline’s shader variant generation and stripping, and we are investigating additional improvements to Unity’s shader variant management at large. The ultimate goal is to support the engine’s growing feature set while keeping shader build and runtime overhead to a minimum.

    Some of our ongoing investigations involve the deduplication of shader resources across similar variants, as well as overall improvements to the shader keywords and Shader Variant Collection APIs. The aim is to provide more flexibility and control over shader variant processing and runtime performance.

    Looking ahead, we are also exploring the possibility of in-Editor tooling for shader variant tracing and analysis, to answer questions such as: Which shaders and keywords produce the most variants? Which variants are compiled but unused at runtime? Which variants are stripped but requested at runtime?

    Your feedback has been instrumental so far, as it helps us prioritize the most meaningful solutions. Please check out our public roadmap to vote on the features that best suit your needs. If there are additional changes you’d like to see, feel free to submit a feature request, or contact the team directly in this shader forum.
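    As promised above, here is a hedged sketch of the Shader Variant Loading overrides. Only the PlayerSettings methods and Shader.maximumChunksOverride named in this post are assumed to exist; the class names, the menu path, the chunk values, and the memory threshold are purely illustrative, and the exact parameter semantics should be checked against the scripting reference.

        using UnityEngine;
        #if UNITY_EDITOR
        using UnityEditor;
        #endif

        #if UNITY_EDITOR
        // Editor-side: set the project defaults via the C# API instead of the Player Settings UI.
        // The chosen values (4 chunks, 16 MB per chunk) are arbitrary illustrations.
        public static class ShaderChunkBudgetMenu
        {
            [MenuItem("Tools/Apply Shader Chunk Budget")] // hypothetical menu path
            static void Apply()
            {
                PlayerSettings.SetDefaultShaderChunkCount(4);
                PlayerSettings.SetDefaultShaderChunkSizeInMB(16);
            }
        }
        #endif

        // Runtime-side: tighten the chunk budget on low-memory devices at startup.
        public class ShaderChunkBudgetBootstrap : MonoBehaviour
        {
            void Awake()
            {
                // SystemInfo.systemMemorySize is reported in MB; the 4096 MB threshold and the
                // override value of 2 chunks are purely illustrative.
                if (SystemInfo.systemMemorySize <= 4096)
                    Shader.maximumChunksOverride = 2;
            }
        }

    In a real project you would tune these numbers against profiling data rather than hard-coding them.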
  • What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us

    The rapid proliferation and superb capabilities of widely available LLMs have ignited intense debate within the educational sector. On one side, they offer students a 24/7 tutor who is always available to help; on the other, of course, students can use LLMs to cheat! I’ve seen both sides of the coin with my students; yes, even the bad side, and even at the university level.

    While the potential benefits and problems of LLMs in education are widely discussed, there has been a critical need for robust, empirical evidence to guide the integration of these technologies into classrooms, curricula, and study practices in general. Moving beyond anecdotal accounts and rather limited studies, a recent work titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis” offers one of the most comprehensive quantitative assessments to date. The article, by Jin Wang and Wenxiang Fan from the Chinese Education Modernization Research Institute of Hangzhou Normal University, was published this month in the journal Humanities and Social Sciences Communications from the Nature Publishing Group. It is as complex as it is detailed, so here I will delve into its findings, touch on the methodology, and explore the implications for those developing and deploying AI in educational contexts.

    Into it: Quantifying ChatGPT’s Impact on Student Learning

    The study by Wang and Fan is a meta-analysis that synthesizes data from 51 research papers published between November 2022 and February 2025, examining the impact of ChatGPT on three crucial student outcomes: learning performance, learning perception, and higher-order thinking. For AI practitioners and data scientists, this meta-analysis provides a valuable, evidence-based lens through which to evaluate current LLM capabilities and inform the future development of educational technologies.
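    For readers less familiar with the method, a meta-analysis essentially pools the standardized effect sizes reported by the individual studies into a precision-weighted average. The paper’s exact statistical model is not reproduced here, so the random-effects formulation below is only an illustrative sketch of the general approach, not a restatement of Wang and Fan’s computation:

        \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \, g_i}{\sum_{i=1}^{k} w_i},
        \qquad
        w_i = \frac{1}{v_i + \hat{\tau}^{2}}

    where g_i is the effect size reported by study i (for example, a standardized mean difference), v_i is its within-study variance, tau^2 is the estimated between-study variance, and k is the number of studies contributing to a given outcome (44, 19, and 9 here, respectively).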

    The primary research question sought to determine the overall effectiveness of ChatGPT across the three key educational outcomes. The meta-analysis yielded statistically significant and noteworthy results:

    Regarding learning performance, data from 44 studies indicated a large positive impact attributable to ChatGPT usage. In fact it turned out that, on average, students integrating ChatGPT into their learning processes demonstrated significantly improved academic outcomes compared to control groups.

    For learning perception, encompassing students’ attitudes, motivation, and engagement, analysis of 19 studies revealed a moderate but significant positive impact. This implies that ChatGPT can contribute to a more favorable learning experience from the student’s perspective, despite the a priori limitations and problems associated with a tool that students can use to cheat.

    Similarly, the impact on higher-order thinking skills—such as critical analysis, problem-solving, and creativity—was also found to be moderately positive, based on 9 studies. It is good news then that ChatGPT can support the development of these crucial cognitive abilities, although its influence is clearly not as pronounced as on direct learning performance.

    How Different Factors Affect Learning With ChatGPT

    Beyond overall efficacy, Wang and Fan investigated how various study characteristics affected ChatGPT’s impact on learning. Let me summarize for you the core results.

    First, there was a strong effect of the type of course. The largest effect was observed in courses that involved the development of skills and competencies, followed closely by STEM and related subjects, and then by language learning/academic writing.

    The course’s learning model also played a critical role in modulating how much ChatGPT assisted students. Problem-based learning saw a particularly strong potentiation by ChatGPT, yielding a very large effect size. Personalized learning contexts also showed a large effect, while project-based learning demonstrated a smaller, though still positive, effect.

    The duration of ChatGPT use was also an important modulator of its effect on learning performance. Short durations on the order of a single week produced small effects, while extended use over 4–8 weeks had the strongest impact, which did not grow much further if usage was extended beyond that. This suggests that sustained interaction and familiarity may be crucial for cultivating positive affective responses to LLM-assisted learning.

    Interestingly, the students’ grade levels, the specific role played by ChatGPT in the activity, and the area of application did not significantly affect learning performance in any of the analyzed studies.

    Other factors, including grade level, type of course, learning model, the specific role adopted by ChatGPT, and the area of application, did not significantly moderate the impact on learning perception.

    The study further showed that when ChatGPT functioned as an intelligent tutor, providing personalized guidance and feedback, its impact on fostering higher-order thinking was most pronounced.

    Implications for the Development of AI-Based Educational Technologies

    The findings from Wang & Fan’s meta-analysis carry substantial implications for the design, development, and strategic deployment of AI in educational settings:

    First of all, consider strategic scaffolding for deeper cognition. The impact on the development of thinking skills was somewhat lower than on performance, which suggests that LLMs are not inherently cultivators of deep critical thought, even if they do have a positive overall effect on learning. Therefore, AI-based educational tools should integrate explicit scaffolding mechanisms that foster thinking processes, guiding students from knowledge acquisition towards higher-level analysis, synthesis, and evaluation in parallel with the AI system’s direct help.

    Thus, the implementation of AI tools in education must be framed properly, and as we saw above, this framing will depend on the exact type and content of the course, the learning model one wishes to apply, and the available time. One particularly interesting setup would be one in which the AI tool supports inquiry, hypothesis testing, and collaborative problem-solving. Note, though, that the findings on optimal duration imply the need for onboarding strategies and adaptive engagement techniques to maximize impact and mitigate potential over-reliance.

    The superior impact documented when ChatGPT functions as an intelligent tutor highlights a key direction for AI in education. Developing LLM-based systems that can provide adaptive feedback, pose diagnostic and reflective questions, and guide learners through complex cognitive tasks is paramount. This requires moving beyond simple Q&A capabilities towards more sophisticated conversational AI and pedagogical reasoning.

    On top of that, there are a few non-trivial issues to work on. While LLMs excel at information delivery and task assistance (leading to high performance gains), enhancing their impact on affective domains (perception) and advanced cognitive skills requires better interaction designs. Incorporating elements that foster student agency, provide meaningful feedback, and manage cognitive load effectively is a crucial consideration.

    Limitations and Where Future Research Should Go

    The authors of the study prudently acknowledge some limitations, which also illuminate avenues for future research. Although the total sample size was the largest assembled to date, it is still small, and very small for some specific questions. More research needs to be done, and a new meta-analysis will probably be required once more data becomes available. A difficult point, and this is my personal addition, is that because the technology progresses so fast, results might become obsolete very rapidly, unfortunately.

    Another limitation in the studies analyzed in this paper is that they are largely biased toward college-level students, with very limited data on primary education.

    Wang and Fan also discuss what AI and data science practitioners, as well as pedagogues, should consider in future research. First, they should try to disaggregate effects by specific LLM version, a point that is critical because models evolve so fast. Second, they should study how students and teachers typically “prompt” the LLMs, and then investigate the impact of different prompting styles on the final learning outcomes. Third, they somehow need to develop and evaluate adaptive scaffolding mechanisms embedded within LLM-based educational tools. Finally, and over the long term, we need to explore the effects of LLM integration on knowledge retention and the development of self-regulated learning skills.

    Personally, I would add that studies need to dig more into how students use LLMs to cheat, not necessarily willingly, but possibly also by seeking shortcuts that lead them astray or let them get assignments out of the way without really learning anything. And in this context, I think AI scientists are falling short in developing camouflaged systems for the detection of AI-generated text that educators can use to rapidly and confidently tell whether, for example, a homework assignment was done with an LLM. Yes, there are some watermarking and similar systems out there (which I will cover some day!), but I haven’t seen them deployed at scale in ways that educators can easily utilize.

    Conclusion: Towards an Evidence-Informed Integration of AI in Education

    The meta-analysis I’ve covered here for you provides a critical, data-driven contribution to the discourse on AI in education. It confirms the substantial potential of LLMs, particularly ChatGPT in these studies, to enhance student learning performance and positively influence learning perception and higher-order thinking. However, the study also powerfully illustrates that the effectiveness of these tools is not uniform but is significantly moderated by contextual factors and the nature of their integration into the learning process.

    For the AI and data science community, these findings serve as both an affirmation and a challenge. The affirmation lies in the demonstrated efficacy of LLM technology. The challenge resides in harnessing this potential through thoughtful, evidence-informed design that moves beyond generic applications towards sophisticated, adaptive, and pedagogically sound educational tools. The path forward requires a continued commitment to rigorous research and a nuanced understanding of the complex interplay between AI, pedagogy, and human learning.

    References

    Here is the paper by Wang and Fan:

    The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Jin Wang & Wenxiang Fan. Humanities and Social Sciences Communications, volume 12, Article 621 (2025).

    If you liked this, check out my TDS profile.

    The post What the Most Detailed Peer-Reviewed Study on AI in the Classroom Taught Us appeared first on Towards Data Science.