• AI: OpenAI acquires the startup of former longtime Apple designer Jony Ive for $6.5 billion

    Sam Altman's company announces that the former executive associated with the iPhone's launch at Apple will be in charge of design, notably of a "new generation of AI-powered computers."

    [Photo: Sam Altman, co-founder and head of OpenAI, in Washington, May 8, 2025. JOSE LUIS MAGANA/AP]

    "Excited to partner with Jony, in my view the greatest designer in the world. And excited, too, to try to create a new generation of AI-powered computers." With these words, the chief executive of OpenAI announced a partnership with Jony Ive, Apple's former longtime designer, famous notably for having taken part in the launch of the iPhone.

    The artificial intelligence (AI) startup is acquiring io, the young company founded by Jony Ive to design connected devices, for $6.5 billion (€5.74 billion). Ive will serve as head of design at OpenAI, covering future hardware but also software, including the interface of the famous conversational bot ChatGPT.

    The Briton, 58, is a former collaborator of Apple co-founder Steve Jobs and is associated with the launch of several of the company's products: besides the iPhone, the iPod, the iPad tablet, and the Apple Watch. He left Apple in 2019.

    The deal, announced Wednesday, symbolizes Sam Altman's drive to diversify OpenAI. The founder of the AI-software company began discussing hardware projects with Jony Ive several months ago. In 2024, OpenAI had already taken a minority stake in his "io" project. The company, valued at $300 billion, is now adding $5 billion in stock to finalize the acquisition. The transaction must still be approved by competition authorities by summer 2025, according to the American press. (The rest of the article is reserved for Le Monde subscribers.)
    Source: www.lemonde.fr
  • Wild Hearts S Showcases 4-Player Multiplayer Gameplay

    Koei Tecmo and Omega Force’s Monster Hunter-inspired action RPG Wild Hearts is set to receive a re-release for the Nintendo Switch 2 in the form of Wild Hearts S, and Koei Tecmo has offered an extended look at its headlining new feature ahead of its release a couple of months from now. 
    Wild Hearts S’ big new addition will be seamless four-player co-op – which, as you can imagine, is a pretty major feature for a Monster Hunter-style experience – and Koei Tecmo has offered an extended look at it in a new gameplay video. The video features roughly 14 minutes of footage with Japanese commentary from members of the development team, showcasing traversal across several areas, combat against multiple monsters, and more. Check out the video below. 
    Wild Hearts S launches for the Nintendo Switch 2 on July 25. There’s no word yet on when (or if) it’ll release for other platforms.
    Source: gamingbolt.com
  • Sirens review: Julianne Moore, Meghann Fahy, and Milly Alcock serve up beachy thrills

    Like the alluring mythological creatures from which it draws its name, Netflix's Sirens wears a tempting facade but conceals something darker at its core. That facade draws on the pleasures of shows like Big Little Lies and The White Lotus: great actors — Julianne Moore! Meghann Fahy! Milly Alcock! — facing off against a backdrop of picturesque mansions and beaches. As in those series, showrunner Molly Smith Metzler (Maid) also looks to tackle thornier topics of class and trauma. Yet these subjects rarely get the depth they deserve, brushed over by a haphazard plot that delivers soapy fun, if not much else.

    What's Sirens about?

    Meghann Fahy and Milly Alcock in "Sirens."
    Credit: Macall Polay / Netflix

    Sirens kicks off with the world's most misguided edible arrangement. Devon (Fahy), fresh off her second DUI and learning her father (Bill Camp) has early-onset dementia, has appealed to her absent younger sister Simone (Alcock) for help. Simone's response? A basket of melon and berries, and a card telling Devon to "keep your chin up." The subpar gift and empty platitude are enough to make Devon travel several hours, rotting fruit in hand, to the luxurious island where Simone works as assistant to wealthy philanthropist Michaela "Kiki" Kell (Moore).


    Simone doesn't just manage the staff who run Michaela's lavish Cliff House estate. The working relationship between the two is deeply personal — and frankly, creepy. Boundaries don't exist for them: Simone drafts Michaela's sexts to her husband Peter (Kevin Bacon). The pair share gum in order to have fresh breath. If this is raising red flags for you, you're not alone: Devon is horrified by Simone's bond with her boss, and she's ready to drag her sister kicking and screaming from Michaela's grasp. But as a scrappy working-class interloper in Michaela's wealthy world — over the all-important, party-filled Labor Day weekend, no less — Devon is at a major disadvantage. As she attempts to protect her sister, dark secrets about their past (and dark rumors about Michaela's) come to light, prompting a whirlwind of dramatic revelations that ultimately don't hold the weight they should.


    Julianne Moore, Meghann Fahy, and Milly Alcock are great in Sirens, but is it enough?

    Julianne Moore in "Sirens."
    Credit: Macall Polay / Netflix

    Sirens is at its best when it's a dark comedy with a touch of soap opera, and much of that comes down to Moore, Fahy, and Alcock's performances. Moore and Alcock make a perfect pair, channeling Stepford Wives creepiness in their pastel getups and matching athleisure sets. Alcock's Simone simpers and preens for her boss, while Moore commits fully to Michaela's frigid cult leader vibes. (Whether Michaela's bird preservation society is actually a cult is one of the mysteries Sirens presents, even if the resolution isn't particularly satisfying.)

    Fahy's Devon, meanwhile, is a wonderfully prickly contrast to Simone and Michaela's rich girl acts. She's raw and unapologetic, unafraid to call out Michaela's bizarro rituals. When she and Simone are together, that rawness rubs off on Simone, too, highlighting their sisterly connection and the pain the two shared during their traumatic upbringing.


    Discussion of that trauma results in some of Sirens' biggest tonal swings, as the show ranges from send-ups of the superficial rich — Glenn Howerton excels as Michaela and Peter's sleazeball neighbor, for example — to clichéd explorations of mental health, like Simone's panic attacks. Also predictable? Sirens' examination of unbalanced, predatory power dynamics within relationships. As soon as Bacon's Peter shows up on the scene, it's clear what will play out between him, Michaela, and Simone. The show treats this arc as culminating in a revelatory plot twist, but it feels more tired than anything.

    Sirens isn't without interesting ideas. In keeping with the "sirens" motif, all three women are treated as monstrous at some point in the show's five-episode run, even though they're often at a disadvantage. (Especially Devon and Simone.) The mythological theme extends to a solid running joke in which two of Devon's loser suitors follow her around, as if lured by her siren song, despite her annoyed rejections of them. These contrasts between people perceiving Sirens' leads as near-mythic beings versus their actual, unfulfilling realities result in the show's most fascinating moments. But with only five episodes, Sirens fails to probe these contrasts as much as it could, and its song ultimately falls flat.

    Sirens is now streaming on Netflix.
    Source: mashable.com

  • Belgian AI startup says it can automate 80% of work at ‘expert firms’

    Belgium-based Ravical has secured €7.3mn in pre-seed funding to bring AI agents to professional services firms in tax, legal, accounting, and insurance. 
    Joris Van Der Gucht, Ravical’s CEO and co-founder, said the “virtual employees” could do 80% of the work in these firms.  
    “Ravical’s agents take on the repetitive, time-consuming tasks that slow experts down,” he told TNW, citing examples such as retrieving data from internal systems, checking the latest regulations, or reading long policies. 
    Even with the agents doing up to 80% of the work in these firms, Van Der Gucht downplayed concerns about them supplanting humans.
    “We don’t expect job losses,” he said. “It’s not about replacing experts, it’s about creating space for them to be more impactful and reimagining how they engage with their clients.”
    Ravical’s AI agents are designed to suggest, not act independently. The company said each AI-generated output is traceable and auditable — critical requirements in the world of professional services. 
    Ravical isn’t Van Der Gucht’s first venture into automating workflows. He previously co-founded Silverfin, which provides cloud-based accounting automation software. In 2023, he sold the company for €320mn.
    In February, he founded Ravical alongside AI experts Ken Bastiaensen and Benjamin Vandermarliere. Armed with fresh funding, the startup plans to refine its algorithms and expand its team.  
    Enrico Mellis, partner at Lakestar, the lead investor in the round, said he was excited to support the company in bringing its “proven” experience in automation to the booming agentic AI market.
    “Agentic AI is moving from buzzword to board-level priority,” Mellis said. 
    Ravical said it has already undertaken 10 pilot projects with professional services firms of various sizes and across multiple industries. The company is eyeing international expansion, but didn’t elaborate on where.

    Story by Siôn Geschwindt

    Siôn is a freelance science and technology reporter, specialising in climate and energy. From nuclear fusion breakthroughs to electric vehicles, he's happiest sourcing a scoop, investigating the impact of emerging technologies, and even putting them to the test. He has five years of journalism experience and holds a dual degree in media and environmental science from the University of Cape Town, South Africa. When he's not writing, you can probably find Siôn out hiking, surfing, playing the drums or catering to his moderate caffeine addiction. You can contact him at: sion.geschwindt [at] protonmail [dot] com
    Source: thenextweb.com
  • Maxon discontinues ZBrushCore and ZBrushCoreMini


    Maxon is discontinuing ZBrushCore and ZBrushCoreMini, the commercial and free cut-down editions of ZBrush, its digital sculpting software. ZBrushCoreMini downloads will be removed on 30 May 2025, and sales of new ZBrushCore subscriptions will end, although existing subs can be renewed until 30 September 2025.
    In its online FAQs, Maxon also teases a new “freemium” version of ZBrush to “align the desktop and iPad versions”.
    ZBrushCore and ZBrushCoreMini: the old cut-down desktop editions of ZBrush

    First released in 2016, ZBrushCore is a cut-down edition of the software aimed at “users who are new to 3D, illustrators, students and 3D printing enthusiasts”. ZBrushCoreMini, a free non-commercial edition, followed it in 2020.
    You can see a feature comparison table for ZBrushCoreMini and ZBrushCore on Maxon’s website, and find more details on the ZBrushCore and ZBrushCoreMini product pages.
    Neither has seen many updates since Maxon acquired original ZBrush developer Pixologic in 2022: the most recent version of ZBrushCore is the 2021.6 release.
    Updates and downloads to stop on 30 May 2025; subscription renewals on 30 September

    Both editions will now enter ‘limited maintenance mode’ on 30 May 2025, so neither will receive any updates or bugfixes, and ZBrushCoreMini will no longer be available for download. It will also no longer be possible to take out a new ZBrushCore subscription, although existing subscribers will be able to renew monthly subscriptions until 30 September 2025, and will receive active support until that date.
    New ‘freemium’ edition of ZBrush coming soon

    Maxon doesn’t give a reason for discontinuing ZBrushCore and ZBrushCoreMini in its FAQs, but it does mention the new iPad edition of ZBrush, which fulfils the role of a less expensive, less fully featured alternative to the desktop version of the software, with a free base edition. In the FAQs, Maxon also notes that as part of its “continued efforts to align the Desktop and iPad versions, a new Freemium version of ZBrush Desktop is on its way”.
    Price, system requirements and dates

    Maxon doesn’t list system requirements or prices for ZBrushCore on its website, but on release, ZBrushCore 2021 was compatible with Windows 7+ and Mac OS X 10.10+, and cost $9.95/month. ZBrushCoreMini is available free until 30 May 2025. On the release of ZBrushCoreMini 2021, it was compatible with 64-bit Windows 7+ and Mac OS X 10.11+.
    ZBrush for iPad is compatible with iPadOS 17.0+. It requires an iPad with an A12 Bionic chip or later. The base app is free, but is export-disabled. Access to the full feature set requires a paid subscription, which costs $9.99/month or $89.99/year.
    The desktop edition of ZBrush is compatible with Windows 10+ and macOS 11.5+. It is rental-only, with subscriptions costing $49/month or $399/year, which also include the iPad edition. Maxon hasn’t announced a release date for the new freemium edition.
    Read Maxon’s online FAQs about discontinuing ZBrushCore and ZBrushCoreMini
    Download ZBrushCoreMini for free until 30 May 2025 (requires a free Maxon account)

    Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Source: www.cgchannel.com
  • Forget Microsoft 365 subscriptions: This is how to get Office apps for life

    TL;DR: You can get a Microsoft Office lifetime license on sale for $25 through June 1 (reg. $229).
    Why are you paying a subscription for Microsoft 365 if you only use Word and Excel? The subscription-based apps may be fancy, but if you aren’t taking full advantage of them, you’re basically paying $10 a month for, well, nothing. You should grab the lifetime version of Microsoft Office instead.
    This way, you can pay $25 only once instead of $10 every month. While there are a few differences, the most important part is that you’ll get your favorite apps, like Word, Excel, PowerPoint, Outlook, OneNote, Publisher, and Access, for life.
    While Microsoft 365 gives you cloud storage and mobile app access, are you really using those features? We’ve found that only advanced users do, and everyone else is overpaying. And, if you want to work offline, you’re probably running into issues. This version gives you easy offline access, even if you can only use the apps on your PC.

    Download Microsoft Office for Windows while it’s on sale for $25 (reg. $229).
    No coupon needed, but this sale ends on June 1 at 11:59 p.m. PT.

    Microsoft Office Professional Plus 2019 for Windows: See Deal
    StackSocial prices subject to change.
    Source: www.pcworld.com
  • Microsoft engineer shows how bad code can lead to your Windows PC slowing down


    Sayan Sen · Neowin · @ssc_combater007 · May 22, 2025 04:28 EDT

    Windows 10 is running out of support soon, and as such, Microsoft recently issued a reminder. The tech giant also published a list of Surface PCs that can't upgrade to Windows 11 as they don't meet the system requirements. The company's official stance is simple: get a new device, ideally a Copilot+ PC.
    Meanwhile, home users are also starting to debate how they would deal with this upcoming change. For those still on pre-2015 systems that can't officially run Windows 11, many feel that Windows 8/8.1, and not Windows 11 or even Windows 10, is the way to go, as the older OS feels snappier. Neowin readers also opined on the topic by sharing their thoughts. You can join the discussion in this article.
    Bear in mind though that while it is a fun topic to talk about, Windows 8.1 support ended back in January 2023, so it's not a secure option to head back to.

    A system's slowness and sluggishness often come down to bad underlying code. Recently, colleague David Uzondu authored an editorial titled "Sloppy software is why you think you need new hardware," and it looks like a Microsoft employee would quite agree with the idea if he were to read it.
    Matt Hamrick, a Senior Escalation Engineer at Microsoft, has put up a blog post on Microsoft's site regarding this. The topic covers memory leaks and out-of-memory (OOM) issues that Windows can encounter as a consequence of poor optimization.
    In the post, Hamrick uses the example of an updated .NET 7 app to establish the premise, explaining how a misconfigured ConfigurationBuilder entry for the reloadOnChange parameter can do this. By setting its value to "true" instead of "false", an app can run into memory-leaking scenarios, leading to a sluggish system, a program crash, or even a crash of the entire system.
    For those wondering, the reloadOnChange parameter tells the configuration system to watch the specified file for changes and automatically reload updated settings into memory. Parts of an app that reference that configuration can then see the new values immediately, without having to restart. The catch is that every ConfigurationBuilder set up this way registers a file watcher, so when such code runs repeatedly at runtime, the watchers and their change tokens accumulate, and that is what gradually fills up the available memory pool. A sketch of the intended, startup-time usage follows below.
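    To make the mechanism concrete, here is a minimal C# sketch of reloadOnChange used as intended, once at startup; the file name "appsettings.custom.json" and the key "SomeKey" are illustrative assumptions, not taken from Hamrick's post.

    // Startup-time use of reloadOnChange (Microsoft.Extensions.Configuration):
    // the builder registers ONE file watcher, and readers see updated values
    // without an app restart.
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.Primitives;

    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.custom.json", optional: true, reloadOnChange: true)
        .Build();

    // When the file changes on disk, the configuration reloads and this
    // change-token callback fires with the fresh value.
    ChangeToken.OnChange(
        () => config.GetReloadToken(),
        () => Console.WriteLine($"Config reloaded: SomeKey = {config["SomeKey"]}"));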
    He explains:

    The impact of this code will be greater the more often it is run. The problem is not apparent, but this is the trigger: reloadOnChange: true.
    ....
    reloadOnChange: true ... is really only meant to be used during app startup if a custom config file is being consumed that ASP.NET itself does not already consume automatically (assuming those defaults haven't been changed).
    Instead, as mentioned above, some folks have mistakenly used this code in something like a Controller action or middleware component to gain access to some needed config value, not knowing what it's doing under-the-hood (also not knowing that the config they typically sought was already loaded (and monitored) into the app's configuration system).
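    To show the failure mode Hamrick describes, here is a minimal, hypothetical sketch of the anti-pattern alongside the conventional fix; the controller name, file name, and key are illustrative assumptions, not code from the blog post.

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Configuration;

    public class ReportsController : Controller
    {
        private readonly IConfiguration _config;

        // ASP.NET Core injects the app's already-loaded (and already-monitored)
        // configuration; no new builder is needed just to read a value.
        public ReportsController(IConfiguration config) => _config = config;

        public IActionResult Leaky()
        {
            // Anti-pattern: this runs on EVERY request. Each call registers
            // another file watcher for custom.json that is never disposed, so
            // watcher state and change tokens pile up until memory runs out.
            var config = new ConfigurationBuilder()
                .AddJsonFile("custom.json", optional: true, reloadOnChange: true)
                .Build();
            return Ok(config["SomeKey"]);
        }

        public IActionResult Fixed()
        {
            // Fix: read the value from the injected configuration instead.
            return Ok(_config["SomeKey"]);
        }
    }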

    Matt Hamrick was able to pin down the problematic code by observing a memory dump of the GC (garbage collector), the .NET memory manager, using various tools like WinDbg, a Windows debugging utility, among others. You can find the full blog post here on Microsoft's Tech Community website.
    Although the example highlighted here involves an app originally coded in .NET 7, Hamrick notes that the problem is not specific to that version and can also affect apps on newer, still-supported .NET releases.
    Source: www.neowin.net

    Tags

    Report a problem with article

    Follow @NeowinFeed
    #microsoft #engineer #shows #how #bad
    Microsoft engineer shows how bad code can lead to your Windows PC slowing down
    When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. Microsoft engineer shows how bad code can lead to your Windows PC slowing down Sayan Sen Neowin @ssc_combater007 · May 22, 2025 04:28 EDT Windows 10 is running out of support soon and as such Microsoft issued a reminder recently. The tech giant also published a list of Surface PCs that can't upgrade to Windows 11 as they don't meet the system requirements. The company's official stance is simple: get a new device, ideally, a Copilot+ PC. Meanwhile, home users are also starting to debate how they would deal with this upcoming change. For those still on pre-2015 systems that can't officially run Windows 11, many feel that Windows 8/8.1 and not Windows 11 or even Windows 10, is the way to go, as the older OS feels snappier. Neowin readers also opined on the topic by sharing their thoughts. You can join the discussion in this article. Bear in mind though that while it is a fun topic to talk about, Windows 8.1 support ended back in January 2023, so it's not a secure option to head back to. A reason for the slowness and sluggishness of a system is often bad underlying code that is being run. Recently, an editorial titled "Sloppy software is why you think you need new hardware" authored by colleague David Uzondu, and it looks like a Microsoft employee would quite agree with the idea if he were to read it. Matt Hamrick, a Senior Escalation Engineer at Microsoft, has put up a blog post on Microsoft's site regarding this. The topic covers memory leaks and out-of-memoryissues that Windows can encounter as a consequence of poor optimization. In the post, Hamrick has used the example of an updated .NET 7 app to establish the premise explaining how a miscalculated ConfigurationBuilder entry for the reloadOnChange parameter can do this. By setting its value to "true" instead of "false", an app can run into memory-leaking scenarios, leading to a sluggish system or a program crash, if not a crash for the entire system. For those wondering, the reloadOnChange parameter tells the system to watch a specified file for modified settings. It essentially helps in dynamic reloading as it automatically reloads the updated settings from memory. Thus parts of an app that reference the set configuration can see the new values immediately, without having to restart it. Hence, this is essentially what leads to the available memory pool filling up over time. He explains: The impact of this code will be greater the more often it is run. The problem is not apparent, but this is the trigger: reloadOnChange: true. .... reloadOnChange: true ... is really only meant to be used during app startup if a custom config file is being consumed that ASP.NET itself does not already consume automatically. Instead, as mentioned above, some folks have mistakenly used this code in something like a Controller action or middleware component to gain access to some needed config value, not knowing what it's doing under-the-hoodinto the app's configuration system). Matt Hamrick was able to pin down the problematic code by observing a memory dump from the GC.NET memory manager using various tools like WinDbg, a Windows debugging utility, among others. You can find the full blog post here on Microsoft's Tech Community website. Although the example highlighted here is about an app originally coded in .NET 7, Hamrick notes that the problem is not specific to it and can affect apps using newer .NET versions too, which are still supported. 
Tags Report a problem with article Follow @NeowinFeed #microsoft #engineer #shows #how #bad
    Microsoft engineer shows how bad code can lead to your Windows PC slowing down
    www.neowin.net
    Sayan Sen · Neowin · @ssc_combater007 · May 22, 2025 04:28 EDT

    Windows 10 is running out of support soon, and Microsoft recently issued a reminder to that effect. The tech giant also published a list of Surface PCs that can't upgrade to Windows 11 because they don't meet the system requirements. The company's official stance is simple: get a new device, ideally a Copilot+ PC. Meanwhile, home users are starting to debate how they will deal with this upcoming change. Among those still on pre-2015 systems that can't officially run Windows 11, many feel that Windows 8/8.1, not Windows 11 or even Windows 10, is the way to go, as the older OS feels snappier. Neowin readers have also shared their thoughts on the topic, and you can join the discussion in this article. Bear in mind, though, that while it is a fun topic to talk about, Windows 8.1 support ended back in January 2023, so going back to it is not a secure option.

    A system's slowness and sluggishness is often down to bad underlying code. Recently, our colleague David Uzondu authored an editorial titled "Sloppy software is why you think you need new hardware," and it looks like a Microsoft employee would quite agree with the idea if he were to read it. Matt Hamrick, a Senior Escalation Engineer at Microsoft, has put up a blog post on Microsoft's site on the subject, covering memory leaks and out-of-memory (OOM) issues that Windows can encounter as a consequence of poorly optimized code.

    In the post, Hamrick uses an updated .NET 7 app as his example, explaining how a misconfigured ConfigurationBuilder entry for the reloadOnChange parameter can cause such problems. Setting its value to "true" instead of "false" in the wrong place can push an app into memory-leaking scenarios, leading to a sluggish system, a program crash, or even a crash of the entire system. For those wondering, the reloadOnChange parameter tells the system to watch a specified file for modified settings. It enables dynamic reloading: when the file changes, the updated settings are automatically reloaded into memory, so the parts of an app that reference the configuration see the new values immediately, without a restart. The catch is that each time such a configuration is built, a new file watcher is registered; when that happens on every request rather than once at startup, the watchers accumulate and the available memory pool fills up over time. He explains:

    The impact of this code will be greater the more often it is run. The problem is not apparent, but this is the trigger: reloadOnChange: true. [...] reloadOnChange: true [...] is really only meant to be used during app startup if a custom config file is being consumed that ASP.NET itself does not already consume automatically (assuming those defaults haven't been changed). Instead, as mentioned above, some folks have mistakenly used this code in something like a Controller action or middleware component to gain access to some needed config value, not knowing what it's doing under-the-hood (also not knowing that the config they typically sought was already loaded (and monitored) into the app's configuration system).

    Hamrick pinned down the problematic code by examining a memory dump of the .NET garbage collector (GC) heap using various tools, including WinDbg, a Windows debugging utility. You can find the full blog post here on Microsoft's Tech Community website. Although the example involves an app originally coded in .NET 7, Hamrick notes that the problem is not specific to that version and can affect apps on newer, still-supported .NET versions too.
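    To make the failure mode concrete, here is a minimal sketch in Python rather than .NET, since the pattern is language-agnostic: ReloadingConfig, its polling thread, and both handler functions are hypothetical stand-ins for the framework machinery that reloadOnChange: true wires up, not the actual .NET API.

```python
# A minimal, language-agnostic sketch of the anti-pattern (NOT the .NET API).
import json
import threading

class ReloadingConfig:
    """Loads a JSON config file and polls it for changes on a background thread."""

    def __init__(self, path, poll_interval=1.0):
        self.path = path
        self.values = self._load()
        # Each instance registers its own watcher (the analog of the file
        # watcher and change token that reloadOnChange=true sets up).
        self._stop = threading.Event()
        self._watcher = threading.Thread(
            target=self._watch, args=(poll_interval,), daemon=True
        )
        self._watcher.start()

    def _load(self):
        with open(self.path) as f:
            return json.load(f)

    def _watch(self, interval):
        # Naive polling loop: reload the file into memory on every tick.
        while not self._stop.wait(interval):
            self.values = self._load()

    def dispose(self):
        self._stop.set()

# Anti-pattern: building a fresh watching config inside request handling.
# Every call leaves another live watcher behind, so threads and memory grow
# without bound, which is the leak Hamrick describes.
def handle_request_leaky(path):
    cfg = ReloadingConfig(path)  # new watcher per request, never disposed
    return cfg.values.get("feature_flag")

# Correct pattern: build the watching config once at startup and share it.
_SHARED = None

def handle_request_ok(path):
    global _SHARED
    if _SHARED is None:
        _SHARED = ReloadingConfig(path)
    return _SHARED.values.get("feature_flag")
```

    The fix mirrors Hamrick's guidance: build the reload-watching configuration once at startup and share it, instead of rebuilding it in per-request code.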
  • New Bacteria Have Been Discovered on a Chinese Space Station

    science.slashdot.org
    Scientists have discovered a previously unknown bacterium aboard China's Tiangong space station. "It has been named Niallia tiangongensis, and it inhabited the cockpit controls on the station, living in microgravity conditions," reports Wired. From the report: According to China Central Television, the country's national broadcaster, taikonauts (Chinese astronauts) collected swab samples from the space station in May 2023, which were then frozen and sent back to Earth for study. The aim of this work, part of the China Space Station Habitation Area Microbiome Program (CHAMP), was to investigate how microorganisms gathered from a completely sealed, crewed environment behave during space travel. A paper published in the International Journal of Systematic and Evolutionary Microbiology describes how analysis of samples from the space station revealed this previously unseen bacterial species, which belongs to the genus Niallia. Genomic sequencing showed that its closest terrestrial relative is the bacterium Niallia circulans, although the Tiangong species has substantial genetic differences. [...] It is unclear whether the newly discovered microbe evolved on the space station or whether it is part of the vast sea of as yet unidentified microorganisms on Earth. To date, tens of thousands of bacterial species have been cataloged, although there are estimated to be billions more unclassified species on Earth. The discovery of Niallia tiangongensis will provide a better understanding of the microscopic hazards that the next generation of space travelers will face and help design sanitation protocols for extended missions. It is still too early to determine whether the space bacterium poses any danger to taikonauts aboard Tiangong, although it is known that its terrestrial relative, Niallia circulans, can cause sepsis, especially in immunocompromised people. Read more of this story at Slashdot.
  • NetEase games segment revenue grows 12.1% to $3.3bn in Q1

    Firm attributes rise to the success of its online games, including the launch of Where Winds Meet last December

    Image credit: Everstone Studio/NetEase

    News

    by Sophie McEvoy
    Staff Writer

    Published on May 22, 2025

    NetEase has published its financial results for the first quarter of 2025, reporting a 12.1% rise in its games segment to RMB 24 billion ($3.3 billion).
    Overall revenue for the firm increased 7.4% year-on-year to RMB 28.8 billion ($4 billion), while profit grew by 8.6% year-on-year to RMB 18.5 billion ($2.5 billion).
    The Chinese games firm attributed the increase in its games segment to online games, which represented approximately 97.5% of revenue, compared to 95.2% in the same period last year.
    It highlighted the success of Everstone Studio's Where Winds Meet following its launch last December. The game has since surpassed 30 million players as of March 2025.
    It also noted the continued popularity of Identity 5 and Marvel Rivals, as well as the successful launch of Once Human on mobile.
    "We entered 2025 with solid momentum, fueled by our ongoing innovation and new titles that strengthen our reach across genres and resonate with players around the world," said NetEase CEO William Ding.
    "In addition to the strong performance of our latest games, our long-standing franchises continue to thrive, powered by outstanding content updates and continuous gameplay enhancements that bring fresh takes to player experiences.
    Ding added: "As we reimagine new gaming possibilities, we remain rooted in innovation and long-term operations, partnering with top talent and strategic collaborators to deliver engaging experiences to players everywhere."
    www.gamesindustry.biz
  • Technology Innovation Institute TII Releases Falcon-H1: Hybrid Transformer-SSM Language Models for Scalable, Multilingual, and Long-Context Understanding

    Addressing Architectural Trade-offs in Language Models
    As language models scale, balancing expressivity, efficiency, and adaptability becomes increasingly challenging. Transformer architectures dominate due to their strong performance across a wide range of tasks, but they are computationally expensive, particularly for long-context scenarios, due to the quadratic complexity of self-attention. On the other hand, Structured State Space Models (SSMs) offer improved efficiency and linear scaling, yet often lack the nuanced sequence modeling required for complex language understanding. A combined architecture that leverages the strengths of both approaches is needed to support diverse applications across environments.
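    As a rough illustration of the trade-off, the following back-of-the-envelope sketch compares assumed per-layer cost models: attention's score matrix scales as L² · d, while an SSM scan scales as L · d², with the state size taken equal to the hidden size d; constants and other terms are omitted, so treat the numbers as orders of magnitude only.

```python
# Back-of-the-envelope comparison under assumed per-layer cost models.
d = 4096  # illustrative hidden size
for L in (1_024, 32_768, 262_144):  # up to 256K-token contexts
    attn = L * L * d  # quadratic in sequence length
    ssm = L * d * d   # linear in sequence length
    print(f"L={L:>7,}: attention ~{attn:.2e} ops, SSM ~{ssm:.2e} ops, "
          f"attention/SSM = {attn / ssm:.2f}x")
```

    Under these assumptions, attention is actually cheaper until L exceeds d, and the gap then grows linearly with sequence length, which is why hybrid designs target long-context workloads.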
    Introducing Falcon-H1: A Hybrid Architecture
    The Falcon-H1 series, released by the Technology Innovation Institute (TII), introduces a hybrid family of language models that combine Transformer attention mechanisms with Mamba2-based SSM components. This architecture is designed to improve computational efficiency while maintaining competitive performance across tasks requiring deep contextual understanding.
    Falcon-H1 covers a wide parameter range—from 0.5B to 34B—catering to use cases from resource-constrained deployments to large-scale distributed inference. The design aims to address common bottlenecks in LLM deployment: memory efficiency, scalability, multilingual support, and the ability to handle extended input sequences.

    Source: https://falcon-lm.github.io/blog/falcon-h1/
    Architectural Details and Design Objectives
    Falcon-H1 adopts a parallel structure where attention heads and Mamba2 SSMs operate side by side. This design allows each mechanism to independently contribute to sequence modeling: attention heads specialize in capturing token-level dependencies, while SSM components support efficient long-range information retention.
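    To make the idea concrete, here is a conceptual PyTorch sketch of such a parallel block; the toy diagonal recurrence standing in for Mamba2, the layer sizes, and the simple additive fusion are illustrative assumptions, not TII's actual implementation.

```python
# Conceptual sketch of a parallel attention + SSM block (illustrative only).
import torch
import torch.nn as nn

class SimpleSSMBranch(nn.Module):
    """Toy diagonal linear recurrence standing in for a Mamba2 block."""

    def __init__(self, d_model):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.decay = nn.Parameter(torch.rand(d_model))  # per-channel state decay
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        u = self.in_proj(x)
        a = torch.sigmoid(self.decay)          # keep decay in (0, 1)
        h = torch.zeros_like(u[:, 0])
        outs = []
        for t in range(u.size(1)):             # linear-time scan over sequence
            h = a * h + (1 - a) * u[:, t]
            outs.append(h)
        return self.out_proj(torch.stack(outs, dim=1))

class ParallelHybridBlock(nn.Module):
    """Attention and SSM branches applied side by side, then summed."""

    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ssm = SimpleSSMBranch(d_model)

    def forward(self, x):
        y = self.norm(x)
        attn_out, _ = self.attn(y, y, y)       # token-level dependencies
        ssm_out = self.ssm(y)                  # efficient long-range memory
        return x + attn_out + ssm_out          # residual fusion of both paths

x = torch.randn(2, 16, 256)
print(ParallelHybridBlock()(x).shape)          # torch.Size([2, 16, 256])
```

    The key property mirrored here is that both branches see the same normalized input and write into the same residual stream, rather than being stacked sequentially.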
    The series supports a context length of up to 256K tokens, which is particularly useful for applications in document summarization, retrieval-augmented generation, and multi-turn dialogue systems. Model training incorporates a customized microparameterization (μP) recipe and optimized data pipelines, allowing for stable and efficient training across model sizes.
    The models are trained with a focus on multilingual capabilities. The architecture is natively equipped to handle 18 languages, with coverage including English, Chinese, Arabic, Hindi, French, and others. The framework is extensible to over 100 languages, supporting localization and region-specific model adaptation.
    Empirical Results and Comparative Evaluation
    Despite relatively modest parameter counts, Falcon-H1 models demonstrate strong empirical performance:

    Falcon-H1-0.5B achieves results comparable to 7B-parameter models released in 2024.
    Falcon-H1-1.5B-Deep performs on par with leading 7B to 10B Transformer models.
    Falcon-H1-34B matches or exceeds the performance of models such as Qwen3-32B, Llama4-Scout-17B/109B, and Gemma3-27B across several benchmarks.

    Evaluations emphasize both general-purpose language understanding and multilingual benchmarks. Notably, the models achieve strong performance across both high-resource and low-resource languages without requiring excessive fine-tuning or additional adaptation layers.

    Source: https://falcon-lm.github.io/blog/falcon-h1/
    Deployment and inference are supported through integration with open-source tools such as Hugging Face Transformers. FlashAttention-2 compatibility further reduces memory usage during inference, offering an attractive efficiency-performance balance for enterprise use.
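    For example, a minimal loading sketch with Hugging Face Transformers might look like the following; the repository id is an assumption based on TII's usual naming and should be checked against the official Falcon-H1 model cards on the Hub.

```python
# Minimal loading sketch with Hugging Face Transformers (repo id assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-0.5B-Instruct"  # assumed id; verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # lower memory on supported hardware
    device_map="auto",           # requires the accelerate package
)

prompt = "Summarize the Falcon-H1 architecture in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```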
    Conclusion
    Falcon-H1 represents a methodical effort to refine language model architecture by integrating complementary mechanisms—attention and SSMs—within a unified framework. By doing so, it addresses key limitations in both long-context processing and scaling efficiency. The model family provides a range of options for practitioners, from lightweight variants suitable for edge deployment to high-capacity configurations for server-side applications.
    Through its multilingual coverage, long-context capabilities, and architectural flexibility, Falcon-H1 offers a technically sound foundation for research and production use cases that demand performance without compromising on efficiency or accessibility.

    Check out the Official Release, Models on Hugging Face and GitHub Page. All credit for this research goes to the researchers of this project.
    www.marktechpost.com