• The Dynabook Portégé Z40L-N is, at heart, just another corporate laptop. Its replaceable batteries are a genuinely useful touch, but the high price tag makes it hard to recommend, and buyers looking for something exciting should probably keep looking.

    #Dynabook #LaptopReview #ReplaceableBatteries #TechGadgets #CorporateLaptops
    Dynabook Portégé Z40L-N Review: Replaceable Batteries, High Price
    This corporate laptop brings back the replaceable battery.
  • Ozi, the animated film from Mikros, finally hits theaters today. It has been a long wait since its preview at the 2023 Annecy Festival. Directed by Tim Harper, Ozi, the Voice of the Forest may turn out to be thrilling or just another movie to pass the time; check it out if you're curious.

    #Ozi #AnimationFilm #Mikros #TimHarper #Movies
    Ozi: Mikros's animated film finally opens in theaters
    Finally! The animated film Ozi, the Voice of the Forest opens today in French theaters. The wait has been long: the film was screened during the 2023 Annecy Festival. It is thanks to the distributor KMBO
  • Hitman: IO Interactive Has Big Plans For World of Assassination

    While IO Interactive may be heavily focused on its inaugural James Bond game, 2026’s 007 First Light, it’s still providing ambitious new levels and updates for Hitman: World of Assassination and its new science fiction action game MindsEye. To continue to build hype for First Light and IOI’s growing partnership with the James Bond brand, the latest World of Assassination level is a Bond crossover, as Hitman protagonist Agent 47 targets Le Chiffre, the main villain of the 2006 movie Casino Royale. Available through July 6, 2025, the Le Chiffre event in World of Assassination features actor Mads Mikkelsen reprising his fan-favorite Bond villain role, not only providing his likeness but voicing the character as he confronts the contract killer in France.
    Den of Geek attended the first-ever in-person IO Interactive Showcase, a partner event with Summer Game Fest held at The Roosevelt Hotel in Hollywood. Mikkelsen and the developers shared insight on the surprise new World of Assassination level, which was playable in its entirety by attendees on the Nintendo Switch 2 and PlayStation Portal. The developers also showed an extended gameplay preview of MindsEye ahead of its June 10 launch, while sharing some details about the techno-thriller.

    Matching his background from Casino Royale, Le Chiffre is a terrorist financier who manipulates the stock market by any means necessary to benefit himself and his clients. After an investment deal goes wrong, Le Chiffre tries to recoup a brutal client’s losses through a high-stakes poker game in France, with Agent 47 hired to assassinate the criminal mastermind on behalf of an unidentified backer. The level opens with 47 infiltrating a high society gala linked to the poker game, with the contract killer entering under his oft-used assumed name of Tobias Rieper, a facade that Le Chiffre immediately sees through.
    At the IO Interactive Showcase panel, Mikkelsen observed that Le Chiffre has always been a character he enjoyed and one that holds a special place in his career. Reprising the villainous role also reunited Mikkelsen with longtime Agent 47 voice actor David Bateson for the first time since their '90s short film Tom Merritt, though the two actors recorded their respective lines separately. Mikkelsen also enjoyed that Le Chiffre's appearance in World of Assassination gave him a more physical role than Casino Royale, which largely kept him at a poker table.

    Of course, like most Hitman levels, there are multiple ways players can accomplish the main objective of killing Le Chiffre and escaping the premises. The game certainly offers the option of confronting the evil financier over a game of poker before closing in for the kill, but that is by no means the only way to assassinate him. We won't give away how we ultimately pulled it off, but rest assured it took multiple tries, careful plotting, and all the usual trial-and-error that comes with one of Hitman's more difficult and involved levels.
    Moving away from its more grounded action titles, IO Interactive also provided a deeper look at its new sci-fi game MindsEye, developed by Build a Rocket Boy. Set in the fictional Redrock City, the extended gameplay sneak peek at the showcase featured protagonist Adam Diaz fighting shadowy enemies in the futuristic city’s largely abandoned streets. While there were no hands-on demos at the showcase itself, the preview demonstrated Diaz using his abilities and equipment, including an accompanying drone, to navigate the city from a third-person perspective and use an array of weapons to dispatch those trying to hunt him down.
    MindsEye marks the first game published through IOI Partners, an initiative that has IOI publish titles from smaller, external developers. The game did not have a hands-on demo at the showcase, a distinction that is not particularly surprising given its bug-heavy and poorly received launch. Build a Rocket Boy has since pledged to support the game through June to fix its technical issues, but the lack of hands-on access at the IOI Showcase was already a red flag for the game's performance. With that in mind, most of the buzz at the showcase unsurprisingly centered on 007 First Light and updates to Hitman: World of Assassination, and IO Interactive did not disappoint in that regard.
    Even with Hitman: World of Assassination over four years old now, the game continues to receive impressive post-release support from IO Interactive, both in bringing the title to the Nintendo Switch 2 and with additional DLC. At the showcase, IOI hinted at additional special levels for World of Assassination with high-profile guest targets like Le Chiffre, without identifying who they might be or whether they're also explicitly tied to the James Bond franchise. But with 007 First Light slated for its eagerly anticipated launch next year, it's a safe bet that IOI has further plans to hype its own role in building out the James Bond legacy for the foreseeable future.
    The Hitman: World of Assassination special Le Chiffre level is available now through July 6, 2025 on all the game’s major platforms, including the Nintendo Switch 2.
    MindsEye is now on sale for PlayStation 5, Xbox Series X|S, and PC.
    WWW.DENOFGEEK.COM
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
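    As a rough illustration of what quantization does (a hand-rolled sketch, not NVIDIA's FP8 implementation, which uses hardware float formats rather than the symmetric integer scheme below), a per-tensor scheme stores weights at low precision alongside a single shared scale:

```python
# Minimal per-tensor quantization sketch: map float weights to small
# signed integers plus one floating-point scale, then reconstruct them.
# Illustrative only -- real FP8/FP4 quantization uses hardware float
# formats and per-layer calibration.

def quantize(weights, bits=8):
    """Map floats to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
q, scale = quantize(weights, bits=8)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # small integers, cheap to store
print(max_err)  # reconstruction error bounded by about scale / 2
```

    Halving the bits roughly halves the storage per weight, which is why moving a model from 16-bit to 8-bit (or 4-bit) formats cuts its VRAM footprint so sharply.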
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time, with similar quality to FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
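    "Optimizing the graph" amounts to rewriting the model's instruction list before any inference runs. A toy constant-folding pass (illustrative only; TensorRT's real passes also fuse layers and select GPU-specific kernels) shows the kind of rewrite involved:

```python
# Toy graph-optimization pass: precompute nodes whose inputs are all
# constants, leaving only runtime-dependent work in the graph. This is
# the same class of rewrite an inference compiler applies ahead of time.

def fold_constants(graph):
    """graph: list of (name, (op, a, b)); a/b are numbers or prior node names."""
    folded = []
    env = {}  # node name -> precomputed constant value
    for name, (op, a, b) in graph:
        av = env.get(a, a)
        bv = env.get(b, b)
        if isinstance(av, (int, float)) and isinstance(bv, (int, float)):
            env[name] = av * bv if op == "mul" else av + bv  # fold now
        else:
            folded.append((name, (op, av, bv)))  # keep for runtime
    return folded, env

graph = [
    ("s",   ("mul", 2.0, 0.5)),   # pure constant: foldable
    ("out", ("mul", "x", "s")),   # depends on runtime input x
]
folded, env = fold_constants(graph)
print(env["s"])   # 1.0, computed ahead of time
print(folded)     # only the runtime-dependent node remains
```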
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
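    The quoted figures are internally consistent, as a quick arithmetic check shows:

```python
# Sanity-check the numbers above: an ~18 GB BF16 footprint reduced by
# 40% should land near the quoted 11 GB, and a 2.3x speedup means each
# image takes well under half the original time.

base_vram_gb = 18.0                     # approximate BF16 footprint quoted above
reduced_vram_gb = base_vram_gb * (1 - 0.40)
print(reduced_vram_gb)                  # 10.8 -> consistent with ~11 GB

speedup = 2.3
time_fraction = 1 / speedup
print(round(time_fraction, 2))          # 0.43 -> less than half the time
```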
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
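    The build-once-then-cache flow described above can be sketched generically. The function names and cache layout here are illustrative assumptions, not the TensorRT for RTX API:

```python
import hashlib
import os
import tempfile

# Generic sketch of the JIT pattern described above: ship one generic
# model artifact, then build and cache a device-specific "engine" the
# first time it is needed. Names are illustrative, not a real API.

def build_engine(model_bytes: bytes, gpu_id: str) -> bytes:
    # Stand-in for an expensive device-specific optimization pass.
    return hashlib.sha256(model_bytes + gpu_id.encode()).digest()

def get_engine(model_bytes: bytes, gpu_id: str, cache_dir: str) -> bytes:
    key = hashlib.sha256(model_bytes + gpu_id.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".engine")
    if os.path.exists(path):                    # cache hit: skip the build
        with open(path, "rb") as f:
            return f.read()
    engine = build_engine(model_bytes, gpu_id)  # first use: build on device
    with open(path, "wb") as f:                 # persist for later runs
        f.write(engine)
    return engine

cache = tempfile.mkdtemp()
e1 = get_engine(b"generic-model", "gpu-ada", cache)  # builds the engine
e2 = get_engine(b"generic-model", "gpu-ada", cache)  # served from cache
assert e1 == e2
```

    Keying the cache on both the model and the GPU identity is what lets one generic shipped artifact serve many different GPU generations.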
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    BLOGS.NVIDIA.COM
    NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs
    Generative AI has reshaped how people create, imagine and interact with digital content. As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well. By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4. NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance. In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers. RTX-Accelerated AI NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs. Stable Diffusion 3.5 quantized FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution. 
To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one. SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs. FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup. Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch. The optimized models are now available on Stability AI’s Hugging Face page. NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July. TensorRT for RTX SDK Released Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers. Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time. With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature. The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. 
Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview. For more details, read this NVIDIA technical blog and this Microsoft Build recap.

Join NVIDIA at GTC Paris

At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay. GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X. See notice regarding software product information.
  • Microsoft accidentally replaced Windows 11 startup sound with one from Vista


    Taras Buria, Neowin (@TarasBuria) · Jun 14, 2025 14:46 EDT

    The recently released Windows 11 Dev and Beta builds introduced some welcome changes and improvements. However, those preview builds are not flawless and have a pretty long list of known issues. One of those issues, though, is a rather delightful one: Windows 11's default startup jingle has been accidentally replaced with one from 2006.
    After Windows Insiders discovered that Windows 11 now plays the Windows Vista startup sound and reported it to Microsoft, the company acknowledged it and added it to the list of known bugs in the latest Windows 11 Dev and Beta builds:

    This week’s flight comes with a delightful blast from the past and will play the Windows Vista boot sound instead of the Windows 11 boot sound. We’re working on a fix.

    Although Windows Vista is nearly two decades old, it was brought to everyone's attention this week after Apple introduced macOS 26 Tahoe with its controversial "Liquid Glass" redesign, which many consider a rather miserable remix of Windows Aero from Windows Vista and Windows 7.
    While it is definitely an interesting coincidence, Microsoft did not intentionally replace the startup sound in Windows 11 preview builds. Brandon LeBlanc from the Windows Insider team confirmed in a post on X that it is a bug, after joking about everyone talking about Windows Vista once again in light of Apple's latest announcements.

    It is worth noting that if you miss the startup sound of Windows Vista, you can still use it in modern Windows versions. All it takes is the original WAV file and a few clicks in the Windows Registry and Sound settings.

    What startup jingle do you like more: Windows Vista or Windows 11? Share your thoughts in the comments.

  • Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
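A minimal sketch of that guarantee (illustrative only, not SymCrypt code): the borrow checker rejects any call that would pass overlapping mutable and shared views of the same buffer, so the non-aliasing precondition a C verifier must prove holds by construction.

```rust
// "Aliasing xor mutability": dst and src can never overlap, because a &mut
// and a & reference to the same region cannot coexist at the call site.
fn add_into(dst: &mut [u32], src: &[u32]) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d += *s;
    }
}

fn main() {
    let mut buf = [1u32, 2, 3, 4];

    // Two disjoint mutable views of one buffer, statically guaranteed safe.
    let (lo, hi) = buf.split_at_mut(2);
    lo[0] += hi[0]; // fine: provably non-overlapping
    assert_eq!(lo[0], 4);

    // add_into(&mut buf, &buf); // rejected by the borrow checker: aliasing

    let src = [10u32, 10, 10, 10];
    add_into(&mut buf, &src);
    assert_eq!(buf, [14, 12, 13, 14]);
}
```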
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
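As a toy illustration of the style of machine-checked proof this pipeline enables (not an actual Aeneas or SymCrypt artifact), here is a round-trip property for Boolean XOR stated and proved in Lean:

```lean
-- Toy property: XOR-ing with the same key twice restores the input.
-- Case analysis on both booleans closes every goal by computation.
theorem xor_roundtrip (p k : Bool) : Bool.xor (Bool.xor p k) k = p := by
  cases p <;> cases k <;> rfl
```

Real cryptographic proofs relate extracted Rust models to algorithm specifications, but they bottom out in the same kind of mechanical checking.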
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming to avoid operations on secret data. However, recent research has shown that this alone is insufficient with today’s CPUs, where every new optimization may open a new side channel. 
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
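The constant-time discipline mentioned above can be sketched in a few lines. This is the classic accumulate-and-compare pattern, not SymCrypt's actual implementation, and as noted, source-level constant time alone no longer guarantees leak-free binaries on modern CPUs:

```rust
// Constant-time byte comparison: no branch depends on secret data, and the
// loop always runs to completion regardless of where the inputs differ.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are usually public, so branching here is fine
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y; // accumulate differences without early exit
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-tag", b"secret-tag"));
    assert!(!ct_eq(b"secret-tag", b"secret-taG"));
}
```

Tools like Revizor matter precisely because a compiler or speculative execution can undo this care at the binary level, which source inspection cannot detect.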
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings.
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar using our benchmarking tools to match or exceed existing systems.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
  • Mock up a website in five prompts

    “Wait, can users actually add products to the cart?”

    Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients — or your team — a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe?

    That’s the challenge we’ll tackle today. But first, we need to look at:

    The problem with today’s prototyping tools

    Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.

    Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.

    AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.

    Builder’s approach to website mockups

    We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:

    • Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
    • Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
    • Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply. And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching.

    This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state — no more “so, just imagine this works” demos. Let’s see it in action.

    From blank canvas to working mockup in five prompts

    Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:

    1. Business name & vibe
    2. Core pages
    3. Primary goal
    4. Brand palette & tone

    That’s it. Don’t sweat the details — we can always iterate. For mine, I wrote:

    1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
    2. Home, About, Pricing / Subscription Box, Menu.
    3. Drive online orders and foot traffic — every CTA should funnel toward “Order Now” or “Reserve a Table.”
    4. Warm yellow, chocolate brown, rounded typography, playful copy.

    We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function.

    An entire website from the first prompt

    Now, we’re ready to get going. Head over to Builder.io and paste in this prompt or your own:

    Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
    • Home
    • About
    • Pricing
    • Menu
    Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon.
    The goal of this site is to drive online orders and foot traffic — every CTA should funnel toward "Order Now" or "Reserve a Table."

    Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.

    Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand new repo. If you need any help on this step, or any of the below, check out these docs.

    Making the mockup’s order system work

    From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.

    The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.” Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.

    On a new branch, I’ll write another short prompt:

    Can you make the "Order Now" button work, even if it's just with dummy JSON for now?

    Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed. This is what’s possible in two prompts.
For our third, let’s gussy up the style.If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style.Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style.Don’t worry—OpenAI doesn’t mind a little web scraping.You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing.Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 to reference for what the design should be.Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created.As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.Design tweaks to the mockup with visual editsOne issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.”It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.Edit Mode lets you select any component and change any of its content or underlying CSS directly. 
You get a host of Webflow-like options to choose from, so that you can finesse the details as needed.Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.Async fixes to the mockup with Builder BotNow, our pull request is nearly ready to merge, but I found one issue with it:When we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.No problem. We can fix this inside GItHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. For example: Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected:Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.So, why use Builder to mock up your website?Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much.But that’s the point of the best AI tools: they don’t take you, the human, out of the loop.You still get to make all the executive decisions, and it respects your hard work. 
Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.So, we get speed, flexibility, and interactivity all the way from prompt to PR to production.Try Builder today.
    #mock #website #five #prompts
    Mock up a website in five prompts
“Wait, can users actually add products to the cart?”

Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients, or your team, a working, data-connected mockup of their website, or new pages and components, in less time than it takes to wireframe?

That’s the challenge we’ll tackle today. But first, we need to look at:

The problem with today’s prototyping tools

Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three. Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.

AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page; the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.

Builder’s approach to website mockups

We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:
• Spins up a repo in seconds, or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
• Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
• Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.

And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching. The result is a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state: no more “so, just imagine this works” demos.

Let’s see it in action.

From blank canvas to working mockup in five prompts

Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:
1. Business name and vibe
2. Core pages
3. Primary goal
4. Brand palette and tone

That’s it. Don’t sweat the details; we can always iterate. For mine, I wrote:
1. Sunny Trails Bakery: family-owned, feel-good, smells like warm cinnamon.
2. Home, About, Pricing / Subscription Box, Menu (with daily specials).
3. Drive online orders and foot traffic; every CTA should funnel toward “Order Now” or “Reserve a Table.”
4. Warm yellow, chocolate brown, rounded typography, playful copy.

We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder defaults to React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack; in the near future, you won’t need this extra step to get non-React frameworks to function.

(Builder’s free tier gives you 5 AI credits per day and 25 per month, plenty to follow along with today’s demo. Upgrade only when you need it.)

An entire website from the first prompt

Now, we’re ready to get going. Head over to Builder.io and paste in this prompt, or your own:

Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
• Home
• About
• Pricing
• Menu
Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon. The goal of this site is to drive online orders and foot traffic; every CTA should funnel toward "Order Now" or "Reserve a Table."

Once you hit enter, Builder will spin up a new dev container, and inside that container the AI will build out the first version of your site. You can leave the page and come back when it’s done.

Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand-new repo. If you need any help on this step, or any of the steps below, check out these docs.

Making the mockup’s order system work

From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.

The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, change the “Commit Mode” to “Pull Requests.” Now we can create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.

On a new branch, I’ll write another short prompt:

Can you make the "Order Now" button work, even if it's just with dummy JSON for now?

As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed. This is what’s possible in two prompts.
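To make the “dummy JSON for now” idea concrete, here is a minimal sketch of what a dummy-data cart layer could look like. All names here (MENU, addToCart, cartTotal) are hypothetical illustrations, not Builder’s actual generated code:

```typescript
// Hypothetical sketch of a dummy-JSON-backed cart; Builder's output will differ.
type MenuItem = { id: string; name: string; price: number };
type CartLine = { item: MenuItem; qty: number };

// Dummy data standing in for a real menu API.
const MENU: MenuItem[] = [
  { id: "cinnamon-roll", name: "Cinnamon Roll", price: 4.5 },
  { id: "sourdough", name: "Sourdough Loaf", price: 8.0 },
];

// Add an item by id, incrementing the quantity if it's already in the cart.
function addToCart(cart: CartLine[], id: string): CartLine[] {
  const item = MENU.find((m) => m.id === id);
  if (!item) return cart; // unknown id: leave the cart unchanged
  const existing = cart.find((line) => line.item.id === id);
  if (existing) {
    return cart.map((line) =>
      line.item.id === id ? { ...line, qty: line.qty + 1 } : line
    );
  }
  return [...cart, { item, qty: 1 }];
}

// Cart total in dollars, rounded to cents.
function cartTotal(cart: CartLine[]): number {
  const sum = cart.reduce((acc, line) => acc + line.item.price * line.qty, 0);
  return Math.round(sum * 100) / 100;
}
```

In a React component, addToCart would be wired to the “Order Now” button’s onClick handler with the cart held in state; the point is that the data layer is plain dummy JSON that can later be swapped for a real ordering API.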
For our third, let’s gussy up the style. If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style. Luckily, Builder has this capability too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style. Don’t worry: OpenAI doesn’t mind a little web scraping.

You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing. Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 reference for what the design should be. (You can grab our design-to-code guide for many more ideas of what this can help you accomplish.)

Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created. As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.

Design tweaks to the mockup with visual edits

One issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with “Fresh Out of the Bakery.” It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into Edit Mode.

Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed. Once you’ve made all the visual changes you want, maybe tweaking a button color or a border radius, you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.

Async fixes to the mockup with Builder Bot

Now our pull request is nearly ready to merge, but I found one issue with it: when we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it followed the layout and created extra code for video capability.

No problem. We can fix this inside GitHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has removed the video functionality, leaving a minimal diff that affects only the code it needed to. Returning to my project in Builder, I can see that the bot’s changes are reflected in the chat window as well, and I can use the live preview link to make sure my site works as expected.

Now, if this were a real project, you could easily deploy it to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak, with Builder or Cursor or by hand, until you’re satisfied to run the site in production.

So, why use Builder to mock up your website?

Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on the pieces of the design I don’t like as much. But that’s the point of the best AI tools: they don’t take you, the human, out of the loop. You still get to make all the executive decisions, and the tool respects your hard work.

Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it as the tool it’s designed to be. You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and, most importantly, work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.

So we get speed, flexibility, and interactivity all the way from prompt to PR to production. Try Builder today.
    WWW.BUILDER.IO
  • Nike Introduces the Air Max 1000, Its First Fully 3D Printed Sneaker

    Global sportswear leader Nike is reportedly preparing to release the Air Max 1000 Oatmeal, its first fully 3D printed sneaker, with a launch tentatively scheduled for Summer 2025. While Nike has yet to confirm an official release date, industry sources suggest the debut may occur sometime between June and August. The retail price is expected to be approximately $210. This model marks a step in Nike’s exploration of additive manufacturing (AM), enabled through a collaboration with Zellerfeld, a German startup known for its work in fully 3D printed footwear.
    Building Buzz Online
    The “Oatmeal” colorway—a neutral blend of soft beige tones—has already attracted attention on social platforms like TikTok, Instagram, and X. In April, content creator Janelle C. Shuttlesworth described the shoes as “light as air” in a video preview. Sneaker-focused accounts such as JustFreshKicks and TikTok user @shoehefner5 have also offered early walkthroughs. Among fans, the nickname “Foamy Oat” has started to catch on.
    Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.
    Before generating buzz online, the sneaker made a public appearance at ComplexCon Las Vegas in November 2024. There, its laceless, sculptural silhouette and smooth, seamless texture stood out—merging futuristic design with signature Air Max elements, such as the visible heel air unit.
    Reimagining the Air Max Legacy
    Drawing inspiration from the original Air Max 1 (1987), the Air Max 1000 retains the iconic air cushion in the heel while reinventing the rest of the structure using 3D printing. The shoe’s upper and outsole are formed as a single, continuous piece, produced from ZellerFoam, a proprietary flexible material developed by Zellerfeld.
    Zellerfeld’s fused filament fabrication (FFF) process enables varied material densities throughout the shoe—resulting in a firm, supportive sole paired with a lightweight, breathable upper. The laceless, slip-on design prioritizes ease of wear while reinforcing a sleek, minimalist aesthetic.
    Nike’s Chief Innovation Officer, John Hoke, emphasized the broader impact of the design, noting that the Air Max 1000 “opens up new creative possibilities” and achieves levels of precision and contouring not possible with traditional footwear manufacturing. He also pointed to the sustainability benefits of AM, which produces minimal waste by fabricating only the necessary components.
    Expansion of 3D Printed Footwear Technology
    The Air Max 1000 joins a growing lineup of 3D printed footwear innovations from major brands. Gucci, the Italian luxury brand known for blending traditional craftsmanship with modern techniques, unveiled several Cub3d sneakers as part of its Spring Summer 2025 collection. The brand developed Demetra, a material made from at least 70% plant-based ingredients, including viscose, wood pulp, and bio-based polyurethane. The bi-material sole combines an EVA-filled interior for cushioning and a TPU exterior, featuring an Interlocking G pattern that creates a 3D effect.
    Elsewhere, Syntilay, a footwear company combining artificial intelligence with 3D printing, launched a range of custom-fit slides. These slides are designed using AI-generated 3D models, starting with sketch-based concepts that are refined through AI platforms and then transformed into digital 3D designs. The company offers sizing adjustments based on smartphone foot scans, which are integrated into the manufacturing process.
    Join our Additive Manufacturing Advantage event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
    Featured image shows Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.

    Paloma Duran
    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.
    3DPRINTINGINDUSTRY.COM
    Nike Introduces the Air Max 1000, its First Fully 3D Printed Sneaker
    Global sportswear leader Nike is reportedly preparing to release the Air Max 1000 Oatmeal, its first fully 3D printed sneaker, with a launch tentatively scheduled for Summer 2025. While Nike has yet to confirm an official release date, industry sources suggest the debut may occur sometime between June and August. The retail price is expected to be approximately $210. This model marks a step in Nike’s exploration of additive manufacturing (AM), enabled through a collaboration with Zellerfeld, a German startup known for its work in fully 3D printed footwear.

    Building Buzz Online

    The “Oatmeal” colorway—a neutral blend of soft beige tones—has already attracted attention on social platforms like TikTok, Instagram, and X. In April, content creator Janelle C. Shuttlesworth described the shoes as “light as air” in a video preview. Sneaker-focused accounts such as JustFreshKicks and TikTok user @shoehefner5 have also offered early walkthroughs. Among fans, the nickname “Foamy Oat” has started to catch on.

    Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.

    Before generating buzz online, the sneaker made a public appearance at ComplexCon Las Vegas in November 2024. There, its laceless, sculptural silhouette and smooth, seamless texture stood out—merging futuristic design with signature Air Max elements, such as the visible heel air unit.

    Reimagining the Air Max Legacy

    Drawing inspiration from the original Air Max 1 (1987), the Air Max 1000 retains the iconic air cushion in the heel while reinventing the rest of the structure using 3D printing. The shoe’s upper and outsole are formed as a single, continuous piece, produced from ZellerFoam, a proprietary flexible material developed by Zellerfeld. Zellerfeld’s fused filament fabrication (FFF) process enables varied material densities throughout the shoe—resulting in a firm, supportive sole paired with a lightweight, breathable upper.
The laceless, slip-on design prioritizes ease of wear while reinforcing a sleek, minimalist aesthetic. Nike’s Chief Innovation Officer, John Hoke, emphasized the broader impact of the design, noting that the Air Max 1000 “opens up new creative possibilities” and achieves levels of precision and contouring not possible with traditional footwear manufacturing. He also pointed to the sustainability benefits of AM, which produces minimal waste by fabricating only the necessary components.

    Expansion of 3D Printed Footwear Technology

    The Air Max 1000 joins a growing lineup of 3D printed footwear innovations from major brands. Gucci, the Italian luxury brand known for blending traditional craftsmanship with modern techniques, unveiled several Cub3d sneakers as part of its Spring Summer 2025 (SS25) collection. The brand developed Demetra, a material made from at least 70% plant-based ingredients, including viscose, wood pulp, and bio-based polyurethane. The bi-material sole combines an EVA-filled interior for cushioning and a TPU exterior, featuring an Interlocking G pattern that creates a 3D effect.

    Elsewhere, Syntilay, a footwear company combining artificial intelligence with 3D printing, launched a range of custom-fit slides. These slides are designed using AI-generated 3D models, starting with sketch-based concepts that are refined through AI platforms and then transformed into digital 3D designs. The company offers sizing adjustments based on smartphone foot scans, which are integrated into the manufacturing process.

    Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.

    Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.

    Featured image shows Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.

    Paloma Duran

    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.
  • Pixar Slate Reveal: What We Learned About Toy Story 5, Hoppers, And More

    Pixar has been delighting audiences with its house animation style and world-building for three decades, and the Disney-owned animation studio is showing no signs of slowing down. And unlike Andy, they haven’t aged out of playing with their toys. 
    At the Annecy International Animation Film Festival, Pixar dropped a series of announcements, teasers, and special previews of its upcoming slate, including the much-anticipated first look at Toy Story 5. 

    Den of Geek attended a private screening, with remarks from Pixar’s Chief Creative Officer, Pete Docter, in early June ahead of the festival. During the presentation to the press, Docter hinted at the company putting its focus and energy into its theatrical slate, a notable change after recent Disney+ releases like Dream Productions, set in the Inside Out universe, and the original series Win or Lose, which debuted in early 2025. It’s a telling sign of Disney’s shifting approach to Disney+. The studio’s latest film, Elio, hit theaters on June 20th.
    “Our hope is that we can somehow tap into the things that people remember about the communal experience of seeing things together,” Docter said. “It’s different than sitting at home on your computer watching something [compared to] when you sit with other human beings in the dark and watch the flickering light on the screen. There’s something kind of magic about that.” 

    Pixar is aiming to be back on a timeline of three films every two years, with Toy Story 5 and an original story titled Hoppers releasing in 2026, and another original, Gatto, hitting theaters in 2027. 
    Docter boldly stated that Pixar is “standing on one of the strongest slates we’ve ever had.” While bullish for a studio that has had an unprecedented run of success in the world of animated features, the early footage we saw leaves plenty of room for optimism.
    Is Pixar so back? Here’s what we learned from the presentation and footage… 
    Toy Story 5 – June 19, 2026 
    Woody, Buzz, Jessie and the gang will all be returning for the fifth feature film in one of Pixar’s most beloved franchises. Docter confirmed Tom Hanks, Tim Allen and Joan Cusack will reprise their respective roles.
    Written and directed by Andrew Stanton, who has worked on all of the films, and co-directed by McKenna Harris, Toy Story 5 catches up with our modern, tech-oriented world and how it affects children’s interests. Bonnie, now eight, is given a brand new, shiny tablet called a Lily Pad. The new tech allows Bonnie to stay connected and chat with all of her friends, slowly detaching her from her old toys. But just like all the other toys, Lily can talk, and she’s quite sneaky. Lily believes Bonnie needs to get rid of her old, childish toys completely. Feeling Bonnie slipping away, the toys call Woody for backup, but after not seeing Buzz for some time, the two go back to their old ways of constantly butting heads. 
    “With some films, you’ll struggle to find new things to talk about. And you know, this is [Toy Story 5]. We still are finding new aspects of what it is to be a toy… There’s more of a spotlight on Jessie, so that’s a whole nother facet to it as well. And she’s just such a rich, wonderful character to see on screen,” Docter says.

    Pixar screened the opening scene for press, which saw a fresh pallet of new Buzz Lightyear figures washed up in a shipping container on a remote island. Think Toy Story meets Cast Away as the Lightyears band together to concoct a way to get home, wherever that might be, in an unexpectedly gripping start to the fifth installment.

    Join our mailing list
    Get the best of Den of Geek delivered right to your inbox!

    HOPPERS – © 2025 Disney/Pixar. All Rights Reserved.
    Hoppers – March 6, 2026 
    Preceding Toy Story 5 and kicking off 2026 for Pixar will be an all-new story, Hoppers. 
    The film follows Mabel (Piper Curda), a college student and nature enthusiast, as she fights to save a beloved glade near her childhood home from a highway project that will bulldoze through it, brought forth by the greedy mayor voiced by Jon Hamm. With little support from those around her, Mabel enlists the help of “hoppers,” a clever group of scientists who’ve found a way to “hop” their minds into robots. When Mabel hops into the body of a beaver, she sets off to get other animals to return to the glade, hopefully halting construction. The animals take her to meet their rather conflict-avoidant leader, King George (Bobby Moynihan), and she soon learns that the animal world is a lot more complex than she had thought. 
    The footage screened saw Jon Hamm’s mayor abducted by beavers in a slapstick scene that corroborated Docter’s excitement for the project. Like Pixar’s highest highs, Hoppers appears to be charming and big-hearted, and with this film’s adorably designed animals it certainly won’t hurt merchandise sales at the Disney parks. Docter compared Hoppers to Mission: Impossible meets Planet Earth. We’re locked in. 
    GATTO – © 2025 Disney/Pixar. All Rights Reserved.
    Gatto – Summer 2027 
    In maybe the most creatively intriguing announcement, a new film titled Gatto is in production from the team behind Luca. Gatto will employ the same classic Pixar animation style, but with a painterly twist to match the artistic vibe of Venice. The art direction shown in short clips was a stunning and unique spin on Pixar’s house style.
    The film is set in Venice, Italy, a destination popular for its stunning architecture and romantic ambience that some only dream of visiting one day. It’s not so ideal, however, for Nero, the protagonist of the upcoming Pixar original film, Gatto. Nero is a black cat whom people turn away from because they fear he’s bad luck. With no other options, Nero turns to the seedier side of the stray cat scene in Venice, where he soon finds himself in hot water with Rocco, a cat mob boss. The heart of the film is Nero’s love for music and his budding friendship with a street musician named Maya, who is also an outsider.
    WWW.DENOFGEEK.COM
CGShares https://cgshares.com