• It is unacceptable that, for over thirty years, the Mario Kart series has kept betraying us with its glaring flaws and its baffling unfairness. Far from being pure fun, the Mario Kart games have turned into a veritable battlefield where you constantly wonder what "fairness" even means in gameplay terms. With the launch of Mario Kart World for the Switch 2, it's time to set the record straight and call out these absurdities!

    Let's start with the rankings. Why on earth should we rank the Mario Kart games at all? Is it to mask the obvious flaws of some of them? Mario Kart 64, for example, is often cited as a classic, but it is simply unacceptable to see that entry sitting at the top of the list. Its unpredictable collisions, dated graphics, and unbalanced gameplay make for a frustrating experience. It's high time fans opened their eyes and admitted that this game is not the masterpiece some claim it to be!

    And what about Mario Kart Wii? Races there are routinely derailed by a poorly designed drift system and items that seem programmed to sabotage your progress rather than balance the game. How many times have we lost to a blue shell thrown at a critical moment? It's unbearable! If we want to talk about fairness in racing, it's clear that Nintendo has failed here. Unbalanced power-ups and badly designed tracks create an experience that undermines the very fun this series is supposed to be built on.

    And don't even get me started on Mario Kart 8 Deluxe, which is supposed to be the definitive version. Yes, the graphics are gorgeous, but that doesn't make up for the balance problems, or for the fact that the game favors experienced players at the expense of newcomers. What happened to playing for fun? If the developers can't guarantee a fair experience, they should rethink their approach and listen to player feedback instead of tossing us superficial updates!

    It's high time the gaming community stood up and demanded change. Ranking the Mario Kart games shouldn't be a mere exercise, but a prompt to reflect on what we really want from a racing game. Let's stop celebrating games that, instead of bringing people together, cause arguments and frustration. We deserve better than that!

    #MarioKart #VideoGames #GameCriticism #Nintendo #Switch2
    The Mario Kart Games, Ranked From Worst To Best
    For over thirty years, we’ve been driving like maniacs, questioning the meaning of fairness and ending friendships in Nintendo’s Mario Kart series. So with Mario Kart World kicking off the Switch 2’s launch this month, why not see if we can end a few
  • A long-predicted cosmic collision might not happen after all

    Nature, Published online: 13 June 2025; doi:10.1038/d41586-025-01804-7. The pull of a third galaxy could yank the Milky Way out of the path of Andromeda.
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, "Is that really Zuck?" As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new "superintelligence" AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit's work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they'll have to make risky bets, the scale of Meta's products, and the money he's prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta's headquarters, where I'm told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I've covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It's easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). "Opportunities of this magnitude often come at a cost," Wang wrote in his note to employees this week. "In this instance, that cost is my departure."

    Zuckerberg's recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that "before anything else, we are a superintelligence research company." And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of "chief AI officer" at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

    Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta's product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta's internal coding tool for engineers, however, is already using Claude. While Meta's existing AI researchers have good reason to be looking over their shoulders, Zuckerberg's $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn't mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

    Tim Cook. Getty Images / The Verge

    Apple's AI problem

    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I'm not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don't understand how AI is fundamentally changing how people use and build software.

    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026. The AI industry moves much faster than Apple's release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.

    Apple's decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren't impressive, and has confirmed a measly context window of 4,096 tokens. It's also saying that the models will be updated alongside its operating systems, a snail's pace compared to how quickly AI companies move. I'd be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don't want to spend on the leading cloud models. I don't think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

    Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

    AI probably isn't a near-term risk to Apple's business. No one has shipped anything close to the contextually aware Siri that was demoed at last year's WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren't going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don't see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company's new F1 movie.

    Elsewhere

    AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company's annual developer conference this week in San Francisco. Given Databricks' position, he has a unique, bird's-eye view of where things are headed for AI. He doesn't envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. "We have most of the airplanes flying automated, and we still want pilots in there."

    Buyouts are the new normal at Google. That much is clear after this week's rollout of the "voluntary exit program" in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an "opportunity to create internal mobility and fresh growth opportunities." Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We'll see if it can pull it off.

    Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he'd be open to a sale, Spiegel didn't shut it down like he always has, but instead said he'd "consider anything" that helps the company "create the next computing platform."

    Link list

    More to click on:

    If you haven't already, don't forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you're an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal. Thanks for subscribing.
  • MindsEye review – a dystopian future that plays like it’s from 2012

    There’s a Sphere-alike in Redrock, MindsEye’s open-world version of Las Vegas. It’s pretty much a straight copy of the original: a huge soap bubble, half sunk into the desert floor, with its surface turned into a gigantic TV. Occasionally you’ll pull up near the Sphere while driving an electric vehicle made by Silva, the megacorp that controls this world. You’ll sometimes come to a stop just as an advert for an identical Silva EV plays out on the huge curved screen overhead. The doubling effect can be slightly vertigo-inducing.

    At these moments, I truly get what MindsEye is trying to do. You’re stuck in the ultimate company town, where oligarchs and other crooks run everything, and there’s no hope of escaping the ecosystem they’ve built. MindsEye gets this all across through a chance encounter, and in a way that’s both light of touch and clever. The rest of the game tends towards the heavy-handed and silly, but it’s nice to glimpse a few instances where everything clicks.

    With its Spheres and omnipresent EVs, MindsEye looks and sounds like the future. It’s concerned with AI and tech bros and the insidious creep of a corporate dystopia. You play as an amnesiac former soldier who must work out the precise damage that technology has done to his humanity, while shooting people and robots and drones. And alongside the campaign itself, MindsEye also has a suite of tools for making your own game or levels and publishing them for fellow players. All of this has come from a studio founded by Leslie Benzies, whose production credits include the likes of GTA 5.

    AI overlords … MindsEye. Photograph: IOI Partners

    What’s weird, then, is that MindsEye generally plays like the past. Put a finger to the air and the wind is blowing from somewhere around 2012. At heart, this is a roughly hewn cover shooter with an open world that you only really experience when you’re driving between missions. Its topical concerns mainly exist to justify double-crosses and car chases and shootouts, and to explain why you head into battle with a personal drone that can open doors for you and stun nearby enemies.

    It can be an uncanny experience, drifting back through the years to a time when many third-person games still featured unskippable cut-scenes and cover that could be awkward to unstick yourself from. I should add that there are plenty of reports at the moment of crashes and technical glitches and characters turning up without their faces in place. Playing on a relatively old PC, aside from one crash and a few amusing bugs, I’ve been mostly fine. I’ve just been playing a game that feels equally elderly.

    This is sometimes less of a criticism than it sounds. There is a definite pleasure to be had in simple run-and-gun missions where you shoot very similar-looking people over and over again and pick a path between waypoints. The shooting often feels good, and while it’s a bit of a swizz to have to drive to and from each mission, the cars have a nice fishtaily looseness to them that can, at times, invoke the Valium-tinged glory of the Driver games. (The airborne craft are less fun because they have less character.)

    Driving between missions … MindsEye. Photograph: Build A Rocket Boy/IOI Partners

    And for a game that has thought a lot about the point at which AI takes over, the in-game AI around me wasn’t in danger of taking over anything. When I handed over control of my car to the game while tailing an enemy, having been told I should try not to be spotted, the game made sure our bumpers kissed at every intersection. The streets of this particular open world are filled with amusingly unskilled AI drivers. I’d frequently arrive at traffic lights to be greeted by a recent pile-up, so delighted by the off-screen collisions that had scattered road cones and Dumpsters across my path that I almost always stopped to investigate.

    I even enjoyed the plot’s hokeyness, which features lines such as: “Your DNA has been altered since we last met!” Has it, though? Even so, I became increasingly aware that clever people had spent a good chunk of their working lives making this game. I don’t think they intended to cast me as what is in essence a Deliveroo bullet courier for an off-brand Elon Musk. Or to drop me into an open world that feels thin not because it lacks mission icons and fishing mini-games, but because it’s devoid of convincing human detail.

    I suspect the problem may actually be a thematically resonant one: a reckless kind of ambition. When I dropped into the level editor I found a tool that’s astonishingly rich and complex, but which also requires a lot of time and effort if you want to make anything really special in it. This is for the mega-fans, surely, the point-one percent. It must have taken serious time to build, and to do all that alongside a campaign (one that tries, at least, to vary things now and then with stealth, trailing and sniper sections) is the kind of endeavour that requires a real megacorp behind it.

    MindsEye is an oddity. For all its failings, I rarely disliked playing it, and yet it’s also difficult to sincerely recommend. Its ideas, its moment-to-moment action and narrative are so thinly conceived that it barely exists. And yet: I’m kind of happy that it does.

    MindsEye is out now; £54.99
    WWW.THEGUARDIAN.COM
  • Chaos in Color – FLIP Fluids Meets Joker Art

    In our newest showcase, the FLIP Fluids Addon takes on a face that feels familiar – a chaotic canvas inspired by the Joker. Vibrant fluid simulations bring emotional depth and striking transitions, as color flows across skin with cinematic precision.

    Using the Color Attribute in tandem with our Mixing Plugin, we craft mesmerizing blends that dance over the surface. Surface Tension and Sheeting enhance realism, allowing liquid trails to cling and slide in perfect harmony.

    A boosted friction value on the skin lets the fluid settle in haunting detail, while ShaderPLUS gives it a glossy, almost surreal look. Hair collisions? Solved via Geometry Nodes, converted to volumes and optimized meshes. And thanks to flip_color and Dynamic Paint, a vibrant wetmap forms – right where emotion meets simulation.

    FLASH SALE ALERT! From May 30 – June 2, FLIP Fluids is part of the FlippedNormals FLASH SALE – grab it now alongside other top Blender tools at a beautifully chaotic discount.

    #b3d #blender3d #motiondesign #vfx #jokerface #fluidsimulation #blenderaddons #3dart #digitalart #cgivfx #visualeffects #flippednormals #blendercommunity
    #shorts
    WWW.YOUTUBE.COM
  • Autodesk adds AI animation tool MotionMaker to Maya 2026.1


    A still from a demo shot created using MotionMaker, the new generative AI toolset introduced in Maya 2026.1 for roughing out movement animations.

    Autodesk has released Maya 2026.1, the latest version of its 3D modeling and animation software for visual effects, games and motion graphics work. The release adds MotionMaker, a new AI-based system for generating movement animations for biped and quadruped characters, especially for previs and layout work.
    Other changes include a new modular character rigging framework inside Bifrost for Maya, plus updates to liquid simulation, OpenPBR support and USD workflows.
    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya for smaller studios.

    MotionMaker: new generative AI tool roughs out movement animations

    The headline feature in Maya 2026.1 is MotionMaker: a new generative animation system. It lets users “create natural character movements in minutes instead of hours”, using a workflow more “like giving stage directions to a digital actor” than traditional animation.
    Users set keys for a character’s start and end positions, or create a guide path in the viewport, and MotionMaker automatically generates the motion in between.
    At the minute, that mainly means locomotion cycles, for both bipeds and quadrupeds, plus a few other movements, like jumping or sitting.
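    MotionMaker’s internals aren’t public, so as a deliberately naive illustration of the workflow described above — the user pins start and end keys and the system fills the gap — here is a plain-Python sketch of linear in-betweening (no Maya API; all names are illustrative, and a generative system would synthesize a full locomotion cycle rather than a straight-line blend):

```python
# Naive stand-in for "generate motion between two keys": linear
# in-betweening of a root position. A generative system like
# MotionMaker would instead produce a plausible walk or trot along
# this path; this sketch only shows the gap being filled.

def inbetween(start, end, frames):
    """Return one root position per frame, linearly interpolated."""
    steps = frames - 1
    return [
        tuple(s + (e - s) * f / steps for s, e in zip(start, end))
        for f in range(frames)
    ]

path = inbetween(start=(0.0, 0.0, 0.0), end=(10.0, 0.0, 5.0), frames=5)
# The first and last samples match the user's start/end keys exactly.
```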
    Although MotionMaker is designed for “anyone in the animation pipeline”, the main initial use cases seem to be layout and previs rather than hero animation.
    Its output is also intended to be refined manually – Autodesk’s promotional material describes it as getting users “80% of the way there” for “certain types of shots”.
    Accordingly, MotionMaker comes with its own Editor window, which provides access to standard Maya animation editing tools.
    Users can layer in animation from other sources, including motion capture or keyframe animation retargeted from other characters: to add upper body movements, for example.
    There are a few more MotionMaker-specific controls: the video above shows speed ramping, to control the time it takes the character to travel between two points.
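    Speed ramping of the kind shown in the video can be illustrated with a simple time-warp: remap normalized clip time through an easing curve so the character spends longer accelerating and decelerating without changing where the clip starts or ends. This is a generic animation technique, not MotionMaker’s actual implementation:

```python
# Toy "speed ramp": warp normalized time t in [0, 1] so motion eases
# in and out between two points. Smoothstep is one common remap choice.

def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

def ramped_position(start, end, t):
    """Position along a straight line at warped time t in [0, 1]."""
    w = smoothstep(t)
    return tuple(s + (e - s) * w for s, e in zip(start, end))

# Endpoints are preserved; only the timing in between changes.
```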
    There is also a Character Scale setting, which determines how a character’s size and weight is expressed through the animation generated.
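    How a scale setting like this might translate into timing can be sketched with a standard biomechanics heuristic (not Autodesk’s method): treating a swinging limb as a pendulum, whose period grows with the square root of its length, so larger characters move with slower strides:

```python
import math

def stride_period(base_period, character_scale):
    """Pendulum-style heuristic: stride period scales with sqrt(size).

    A character 4x the size takes strides twice as slow; a smaller,
    lighter character moves with quicker, busier steps.
    """
    return base_period * math.sqrt(character_scale)
```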
    You can read more about the design and aims of MotionMaker in a Q&A with Autodesk Senior Principal Research Scientist Evan Atherton on Autodesk’s blog.
    According to Atherton, the AI models were trained using motion capture data “specifically collected for this tool”.
    That includes source data from male and female human performers, plus wolf-style dogs, although the system is “designed to support additional styles” in future.

    Bifrost: new modular character rigging framework

    Character artists and animators also get a new modular rigging framework in Bifrost. Autodesk has been teasing new character rigging capabilities in the node-based framework for building effects since Maya 2025.1, but this seems to be its official launch.
    The release is compatibility-breaking, and does not work with earlier versions of the toolset.
    The new Rigging Module Framework is described as a “modular, compound-based system for building … production-ready rigs”, and is “fully integrated with Maya”.
    Animators can “interact with module inputs and outputs directly from the Maya scene”, and rigs created with Bifrost can be converted into native Maya controls, joints and attributes.

    Bifrost: improvements to liquid simulation and workflow
    Bifrost 2.14 for Maya also features improvements to Bifrost’s existing functionality, particularly liquid simulation.
    The properties of collider objects, like bounciness, stickiness and roughness, can now influence liquid behavior in the same way they do particle behavior and other collisions.
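    The idea of per-collider properties shaping a collision response can be shown with a toy 2D particle bounce: restitution (“bounciness”) scales the reflected normal velocity, while friction damps the tangential component. This is generic particle physics for illustration, not Bifrost’s actual solver:

```python
# Toy collision response: split velocity into normal and tangential
# parts, then let the collider's properties act on each separately.

def collide(velocity, normal, bounciness, friction):
    """Reflect a 2D velocity off a surface with the given properties."""
    vx, vy = velocity
    nx, ny = normal                      # unit surface normal
    vn = vx * nx + vy * ny               # normal component (scalar)
    tx, ty = vx - vn * nx, vy - vn * ny  # tangential component
    return (
        tx * (1.0 - friction) - vn * bounciness * nx,
        ty * (1.0 - friction) - vn * bounciness * ny,
    )

# A bouncier, slicker floor keeps more energy in the rebound:
v = collide((3.0, -4.0), (0.0, 1.0), bounciness=0.8, friction=0.1)
```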
    In addition, a new parameter controls air drag on foam and spray thrown out by a liquid.
    Workflow improvements include the option to convert Bifrost curves to Maya scene curves, and batch execution, to write out cache files “without the risk of accidentally overwriting them”.

    LookdevX: support for OpenPBR in FBX files
    LookdevX, Maya’s plugin for creating USD shading graphs, has also been updated.
    Autodesk introduced support for OpenPBR, the open material standard intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, in 2024.
    To that, the latest update adds support for OpenPBR materials in FBX files, making it possible to import or export them from other applications that support OpenPBR: at the minute, 3ds Max plus some third-party renderers.
    LookdevX 1.8 also features a number of workflow improvements, particularly on macOS.
    USD for Maya: workflow improvements

    USD for Maya, the software’s USD plugin, also gets workflow improvements, with USD for Maya 0.32 adding support for animation curves for camera attributes in exports. Other changes include support for MaterialX documents and better representation of USD lights in the viewport.
    Arnold for Maya: performance improvements

    Maya’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MtoA 5.5.2 supporting the changes in Arnold 7.4.2. They’re primarily performance improvements, especially to scene initialization times when rendering on machines with high numbers of CPU cores.
    Maya Creative 2026.1 also released

    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2026.1, including MotionMaker, but does not include Bifrost for Maya.
    Price and system requirements

    Maya 2026.1 is available for Windows 10+, RHEL and Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only. Subscriptions cost /month or /year, up a further /month or /year since the release of Maya 2026.
    In many countries, artists earning under /year and working on projects valued at under /year qualify for Maya Indie subscriptions, now priced at /year.
    Maya Creative is available pay-as-you-go, with prices starting at /day, and a minimum spend of /year.
    Read a full list of new features in Maya 2026.1 in the online documentation.

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    WWW.CGCHANNEL.COM
    Autodesk adds AI animation tool MotionMaker to Maya 2026.1
A still from a demo shot created using MotionMaker, the new generative AI toolset introduced in Maya 2026.1 for roughing out movement animations.

Autodesk has released Maya 2026.1, the latest version of its 3D modeling and animation software for visual effects, games and motion graphics work. The release adds MotionMaker, a new AI-based system for generating movement animations for biped and quadruped characters, especially for previs and layout work. Other changes include a new modular character rigging framework inside Bifrost for Maya, plus updates to liquid simulation, OpenPBR support and USD workflows. Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya for smaller studios.

MotionMaker: new generative AI tool roughs out movement animations

The headline feature in Maya 2026.1 is MotionMaker: a new generative animation system. It lets users “create natural character movements in minutes instead of hours”, using a workflow more “like giving stage directions to a digital actor” than traditional animation. Users set keys for a character’s start and end positions, or create a guide path in the viewport, and MotionMaker automatically generates the motion in between. At the minute, that mainly means locomotion cycles, for both bipeds and quadrupeds, plus a few other movements, like jumping or sitting. Although MotionMaker is designed for “anyone in the animation pipeline”, the main initial use cases seem to be layout and previs rather than hero animation. Its output is also intended to be refined manually: Autodesk’s promotional material describes it as getting users “80% of the way there” for “certain types of shots”. Accordingly, MotionMaker comes with its own Editor window, which provides access to standard Maya animation editing tools.
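The start-and-end-key workflow can be pictured with a toy sketch. This is a hypothetical illustration only: MotionMaker's trained models generate full locomotion cycles, not the simple linear root-path interpolation shown here, but the inputs (a start key, an end key, a frame count) are the same ones the artist supplies.

```python
# Conceptual sketch only: interpolate a character's root position between
# a start key and an end key. The real tool replaces this straight-line
# fill with generated, motion-captured-trained locomotion.
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def interpolate_root_path(start: Vec3, end: Vec3, frames: int) -> List[Vec3]:
    """Linearly interpolate a root position across `frames` frames."""
    if frames < 2:
        raise ValueError("need at least a start and an end frame")
    steps = frames - 1
    return [
        tuple(s + (e - s) * f / steps for s, e in zip(start, end))
        for f in range(frames)
    ]

# Hypothetical guide path: walk from the origin to (10, 0, 5) over 5 frames.
path = interpolate_root_path((0.0, 0.0, 0.0), (10.0, 0.0, 5.0), 5)
print(path[2])  # midpoint of the guide path
```

In the real workflow this in-between motion is what the generative model produces, and what the artist then refines in the MotionMaker Editor window.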
Users can layer in animation from other sources, including motion capture or keyframe animation retargeted from other characters: to add upper body movements, for example. There are a few more MotionMaker-specific controls: the video above shows speed ramping, to control the time it takes the character to travel between two points. There is also a Character Scale setting, which determines how a character’s size and weight is expressed through the animation generated. You can read more about the design and aims of MotionMaker in a Q&A with Autodesk Senior Principal Research Scientist Evan Atherton on Autodesk’s blog. According to Atherton, the AI models were trained using motion capture data “specifically collected for this tool”. That includes source data from male and female human performers, plus wolf-style dogs, although the system is “designed to support additional [motion] styles” in future.

Bifrost: new modular character rigging framework

Character artists and animators also get a new modular rigging framework in Bifrost. Autodesk has been teasing new character rigging capabilities in the node-based framework for building effects since Maya 2025.1, but this seems to be its official launch. The release is compatibility-breaking, and does not work with earlier versions of the toolset. The new Rigging Module Framework is described as a “modular, compound-based system for building … production-ready rigs”, and is “fully integrated with Maya”. Animators can “interact with module inputs and outputs directly from the Maya scene”, and rigs created with Bifrost can be converted into native Maya controls, joints and attributes.

Bifrost: improvements to liquid simulation and workflow

Bifrost 2.14 for Maya also features improvements to Bifrost’s existing functionality, particularly liquid simulation.
The properties of collider objects, like bounciness, stickiness and roughness, can now influence liquid behavior in the same way they do particle behavior and other collisions. In addition, a new parameter controls air drag on foam and spray thrown out by a liquid. Workflow improvements include the option to convert Bifrost curves to Maya scene curves, and batch execution, to write out cache files “without the risk of accidentally overwriting them”.

LookdevX: support for OpenPBR in FBX files

LookdevX, Maya’s plugin for creating USD shading graphs, has also been updated. Autodesk introduced support for OpenPBR, the open material standard intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, in 2024. To that, the latest update adds support for OpenPBR materials in FBX files, making it possible to import or export them from other applications that support OpenPBR: at the minute, 3ds Max plus some third-party renderers. LookdevX 1.8 also features a number of workflow improvements, particularly on macOS.

USD for Maya: workflow improvements

USD for Maya, the software’s USD plugin, also gets workflow improvements, with USD for Maya 0.32 adding support for animation curves for camera attributes in exports. Other changes include support for MaterialX documents and better representation of USD lights in the viewport.

Arnold for Maya: performance improvements

Maya’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MtoA 5.5.2 supporting the changes in Arnold 7.4.2. They’re primarily performance improvements, especially to scene initialization times when rendering on machines with high numbers of CPU cores.
Maya Creative 2026.1 also released

Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2026.1, including MotionMaker, but does not include Bifrost for Maya.

Price and system requirements

Maya 2026.1 is available for Windows 10+, RHEL and Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only. Subscriptions cost $255/month or $2,010/year, up a further $10/month or $65/year since the release of Maya 2026. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Maya Indie subscriptions, now priced at $330/year. Maya Creative is available pay-as-you-go, with prices starting at $3/day, and a minimum spend of $300/year.

Read a full list of new features in Maya 2026.1 in the online documentation.
  • New Multi-Axis Tool from Virginia Tech Boosts Fiber-Reinforced 3D Printing

    Researchers from the Department of Mechanical Engineering at Virginia Tech have introduced a continuous fiber reinforcement (CFR) deposition tool designed for multi-axis 3D printing, significantly enhancing mechanical performance in composite structures. Led by Kieran D. Beaumont, Joseph R. Kubalak, and Christopher B. Williams, and published in Springer Nature Link, the study demonstrates an 820% improvement in maximum load capacity compared to conventional planar short carbon fiber (SCF) 3D printing methods. This tool integrates three key functions: reliable fiber cutting and re-feeding, in situ fiber volume fraction control, and a slender collision volume to support complex multi-axis toolpaths.
    The newly developed deposition tool addresses critical challenges in CFR additive manufacturing. It is capable of cutting and re-feeding continuous fibers during travel movements, a function required to create complex geometries without material tearing or print failure. In situ control of fiber volume fraction is also achieved by adjusting the polymer extrusion rate. A slender geometry minimizes collisions between the tool and the printed part during multi-axis movements.
    The researchers designed the tool to co-extrude a thermoplastic polymer matrix with a continuous carbon fiber (CCF) towpreg. This approach allowed reliable fiber re-feeding after each cut and enabled printing with variable fiber content within a single part. The tool’s slender collision volume supports increased range of motion for the robotic arm used in the experiments, allowing alignment of fibers with three-dimensional load paths in complex structures.
    The six Degree-of-Freedom Robotic Arm printing a multi-axis geometry from a CFR polymer composite. Photo via Springer Nature Link.
    Mechanical Testing Confirms Load-Bearing Improvements
    Mechanical tests evaluated the impact of continuous fiber reinforcement on polylactic acid (PLA) parts. In tensile tests, samples reinforced with continuous carbon fibers achieved a tensile strength of 190.76 MPa and a tensile modulus of 9.98 GPa in the fiber direction. These values compare to 60.31 MPa and 3.01 GPa for neat PLA, and 56.92 MPa and 4.30 GPa for parts containing short carbon fibers. Additional tests assessed intra-layer and inter-layer performance, revealing that the continuous fiber–reinforced material had reduced mechanical properties in these orientations. Compared to neat PLA, intra-layer tensile strength and modulus dropped by 66% and 63%, respectively, and inter-layer strength and modulus decreased by 86% and 60%.
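The fiber-direction gains above can be sanity-checked with plain arithmetic on the figures the study reports (the variable names here are ours, not the paper's):

```python
# Tensile values reported in the study (fiber direction).
neat_pla = {"strength_mpa": 60.31, "modulus_gpa": 3.01}
scf_pla = {"strength_mpa": 56.92, "modulus_gpa": 4.30}
ccf_pla = {"strength_mpa": 190.76, "modulus_gpa": 9.98}

# Continuous fibers roughly triple strength and modulus over neat PLA.
strength_gain = ccf_pla["strength_mpa"] / neat_pla["strength_mpa"]
modulus_gain = ccf_pla["modulus_gpa"] / neat_pla["modulus_gpa"]
print(f"strength: {strength_gain:.2f}x, modulus: {modulus_gain:.2f}x")
```

Note that short carbon fibers alone actually reduce tensile strength slightly versus neat PLA (56.92 vs 60.31 MPa) while raising modulus, which is why the continuous-fiber result is the headline number.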
    Researchers printed curved tensile bar geometries using three methods to evaluate performance in parts with three-dimensional load paths: planar short carbon fiber–reinforced PLA, multi-axis short fiber–reinforced samples, and multi-axis continuous fiber–reinforced composites. The multi-axis short fiber–reinforced parts showed a 41.6% increase in maximum load compared to their planar counterparts. Meanwhile, multi-axis continuous fiber–reinforced parts absorbed loads 8.2 times higher than the planar short fiber–reinforced specimens. Scanning electron microscopy (SEM) images of fracture surfaces revealed fiber pull-out and limited fiber-matrix bonding, particularly in samples with continuous fibers.
    Schematic illustration of common continuous fiber reinforcement–material extrusion (CFR-MEX) modalities: in situ impregnation, towpreg extrusion, and co-extrusion with towpreg. Photo via Springer Nature Link.
    To verify the tool’s fiber cutting and re-feeding capability, the researchers printed a 100 × 150 × 3 mm rectangular plaque that required 426 cutting and re-feeding operations across six layers. The deposition tool achieved a 100% success rate, demonstrating reliable cutting and re-feeding without fiber clogging. This reliability is critical for manufacturing complex structures that require frequent travel movements between deposition paths.
    In situ fiber volume fraction control was validated through printing a rectangular prism sample with varying polymer feed rates, road widths, and layer heights. The fiber volume fractions achieved in different sections of the part were 6.51%, 8.00%, and 9.86%, as measured by cross-sectional microscopy and image analysis. Although lower than some literature reports, the researchers attributed this to the specific combination of tool geometry, polymer-fiber interaction time, and print speed.
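The geometric relationship behind in situ fiber volume fraction control can be sketched from the article's towpreg figures (0.35 mm diameter, 57% internal fiber fraction). This is an illustrative simplification, not the paper's method: it approximates the deposited bead cross-section as road width times layer height, and the example road width and layer height values are hypothetical.

```python
import math

# Towpreg parameters taken from the article.
TOW_DIAMETER_MM = 0.35      # Anisoprint CCF towpreg diameter
TOW_FIBER_FRACTION = 0.57   # fiber fraction inside the towpreg

def fiber_volume_fraction(road_width_mm: float, layer_height_mm: float) -> float:
    """Estimate part-level fiber volume fraction for one towpreg per bead.

    Simplification: bead cross-section modeled as width * height; one
    continuous towpreg runs through every bead.
    """
    tow_area = math.pi * (TOW_DIAMETER_MM / 2) ** 2
    fiber_area = tow_area * TOW_FIBER_FRACTION
    bead_area = road_width_mm * layer_height_mm
    return fiber_area / bead_area

# Extruding more polymer per unit length (wider bead) lowers the fraction,
# which is how adjusting the polymer feed rate tunes fiber content in situ.
print(f"{fiber_volume_fraction(1.2, 0.5):.2%}")
```

Values in the 6–10% range, like those measured in the validation print, fall out naturally from bead cross-sections of roughly 0.55–0.85 mm² under this model.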
    The tool uses Anisoprint’s CCF towpreg, a pre-impregnated continuous carbon fiber product with a fiber volume fraction of 57% and a diameter of 0.35 mm. 3DXTECH’s black PLA and SCF-PLA filaments were selected to ensure consistent matrix properties and avoid the influence of pigment variations on mechanical testing. The experiments were conducted using an ABB IRB 4600–40/2.55 robotic arm equipped with a tool changer for switching between the CFR-MEX deposition tool and a standard MEX tool with an elongated nozzle for planar prints.
    Deposition Tool CAD and Assembly. Photo via Springer Nature Link.
    Context Within Existing Research and Future Directions
    Continuous fiber reinforcement in additive manufacturing has previously demonstrated significant improvements in part performance, with some studies reporting tensile strengths of up to 650 MPa for PLA composites reinforced with continuous carbon fibers. However, traditional three-axis printing methods restrict fiber orientation to planar directions, limiting these gains to within the XY-plane. Multi-axis 3D printing approaches have demonstrated improved load-bearing capacity in short-fiber reinforced parts. For example, multi-axis printed samples have shown failure loads several times higher than planar-printed counterparts in pressure cap and curved geometry applications.
    Virginia Tech’s tool integrates multiple functionalities that previous tools in literature could not achieve simultaneously. It combines a polymer feeder based on a dual drive extruder, a fiber cutter and re-feeder assembly, and a co-extrusion hotend with adjustable interaction time for fiber-polymer bonding. A needle-like geometry and external pneumatic cooling pipes reduce the risk of collision with the printed part during multi-axis reorientation. Measured collision volume angles were 56.2° for the full tool and 41.6° for the hotend assembly.
    Load-extension performance graphs for curved tensile bars. Photo via Springer Nature Link.
    Despite these advances, the researchers identified challenges related to weak bonding between the fiber and the polymer matrix. SEM images showed limited impregnation of the polymer into the fiber towpreg, with the fiber-matrix interface remaining a key area for future work. The study highlights that optimizing fiber tow sizing and improving the fiber-polymer interaction time during printing could enhance inter-layer and intra-layer performance. The results also suggest that advanced toolpath planning algorithms could further leverage the tool’s ability to align fiber deposition along three-dimensional load paths, improving mechanical performance in functional parts.
    The publication in Springer Nature Link documents the full design, validation experiments, and mechanical characterization of the CFR-MEX tool. The work adds to a growing body of research on multi-axis additive manufacturing, particularly in combining continuous fiber reinforcement with complex geometries.
    Featured photo shows the six Degree-of-Freedom Robotic Arm printing a multi-axis geometry. Photo via Springer Nature Link.

    Anyer Tenorio Lara
    Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation. With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community. Anyer's articles aim to make complex subjects accessible and engaging for a broad audience. In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.