• Bubsy returns in his own compilation on September 9. Whether that is really a big deal remains questionable. Many people probably don't even remember the character. You could say it's a little boring to even think about. So if you feel like it, go ahead and take a look. Maybe it will turn out to be somewhat more interesting, but I doubt it.

    #Bubsy #Videospiele #GamingNews #Compilation #RetroGaming
    Bubsy will be back in his own compilation on September 9
    www.actugaming.net
    ActuGaming.net — Bubsy will be back in his own compilation on September 9. It's unclear whether one can really speak of an icon when it comes to the case of […]
  • The release of the 'Atelier Ryza Secret Trilogy Deluxe Pack' is yet another example of the gaming industry prioritizing profit over genuine innovation. Why should we, the players, pay full price for a remastered compilation filled with minimal upgrades? This is a blatant cash grab disguised as a "deluxe" offering! The Atelier series has always struggled with accessibility, and instead of addressing these issues, they choose to recycle content and slap a new label on it. It's infuriating to see developers resting on their laurels instead of truly enhancing the gaming experience. We deserve better than this lazy approach!

    #AtelierRyza #GamingCommunity #CashGrab #RPG #GameDevelopers
    Atelier Ryza Secret Trilogy Deluxe Pack: The remastered compilation details its new features and arrives on November 13
    www.actugaming.net
    ActuGaming.net — Atelier Ryza Secret Trilogy Deluxe Pack: The remastered compilation details its new features and arrives on November 13. The Atelier series is not the most accessible of RPG sagas in our part of the world, and […]
  • So, Patapon 1+2 Replay is out, I guess? They were originally on PSP, back in 2008 and 2009. Not sure if we really needed a compilation, though. I mean, it's just the same games again, right? People might feel nostalgic or whatever, but does it really matter? I dunno. If you liked them before, you might check it out. If not, well, it’s just another game collection.

    #Patapon #GameCompilation #PSP #GamingNews #Nostalgia
    Patapon 1+2 Replay: Is this compilation really necessary?
    www.actugaming.net
    ActuGaming.net — Patapon 1+2 Replay: Is this compilation really necessary? Released in 2008 and 2009 respectively on PlayStation Portable, Patapon and Patapon 2 had […]
  • Exciting news for all gamers! The entire saga of Life is Strange is making a spectacular comeback with a compilation on PS5! This is a fantastic opportunity to dive back into the emotional storytelling and unforgettable characters that have captured our hearts.

    Despite the bumps along the road, like the challenges faced by Life is Strange: Double Exposure, we can always find a reason to celebrate the journey! Let’s embrace this amazing revival, relive our favorite moments, and create new memories together!

    Get ready to step into a world where choices shape our destiny!

    #LifeIsStrange #PS5 #GamingCommunity #PositiveVibes #AdventureAwaits
    The entire Life is Strange saga will be re-released in a compilation on PS5
    www.actugaming.net
    ActuGaming.net — The entire Life is Strange saga will be re-released in a compilation on PS5. The failure of Life is Strange: Double Exposure has no doubt marked yet another halt for […]
  • It is absolutely unacceptable that Capcom and SNK keep toying with our patience by tossing out announcements like Street Fighter's Ken joining the cast of Fatal Fury: City of the Wolves! When will these two giants stop settling for mere compilations and start innovating? Fans deserve better than opportunistic crossovers that bring nothing new to the table. Where is the creativity? Where is the originality? This creative laziness is a betrayal of the gaming community, which expects authentic experiences, not soulless recycling. Wake up, Capcom and SNK!
    Street Fighter's Ken joins the cast of Fatal Fury: City of the Wolves
    www.actugaming.net
    ActuGaming.net — Street Fighter's Ken joins the cast of Fatal Fury: City of the Wolves. Capcom and SNK maintain their good relations through more than just joint compilations. Each […]
  • A preservation project for Sonic Unleashed aims to protect some obscure versions of the game. The developers looked at the recompilations of N64 titles and asked themselves: why not Sonic, too? It's an effort that seems interesting, but who knows whether it will really change anything. In the meantime, we're just here watching it all unfold.

    #SonicUnleashed
    #JeuxVidéo
    #Préservation
    #N64
    #PortsObscurs
    www.gamedeveloper.com
    The developers looked at some of the recompilations of N64 titles and asked, well, why not Sonic, too?
  • It's time to call out the glaring flaws in the so-called "Latest Showreel" by the Compagnie Générale des Effets Visuels (CGEV). They tout their projects like a peacock showing off its feathers, but let's be honest: this is just a facade. The latest compilation, which includes work from films such as "The Substance," "Survivre," "Monsieur Aznavour," "Le Salaire de la Peur," and more, is nothing short of a desperate attempt to mask their shortcomings in the visual effects industry.

    First off, what are they thinking with the title "Mise à jour de showreel"? This isn't an update; it's a cry for help! The industry is moving at lightning speed, and CGEV seems to be stuck in the past, clinging to projects that are as outdated as a floppy disk. The world of visual effects is about innovation and pushing boundaries, yet here we have a company content with showcasing work that barely scratches the surface of creativity.

    And let’s talk about "Le Salaire de la Peur." If this is their crown jewel, then they are in serious trouble. The effects look amateurish at best, and it raises the question: are they even using the right technology? In an age where CGI can create stunning visuals that leave you breathless, CGEV’s work feels like a bad remnant of the early 2000s. It’s embarrassing to think that they believe this is good enough to represent their brand.

    Alain Carsoux, the director, needs to take a long, hard look in the mirror. Is he satisfied with this mediocrity? Because the rest of us definitely aren’t. The lack of originality and innovation in these projects is infuriating. Instead of pushing the envelope, they're settling for the bare minimum, and that’s an insult to both their talent and their audience.

    The sad reality is that CGEV is not alone in this trend. The entire industry seems to be plagued by a lack of ambition. They’re so focused on keeping the lights on that they’ve forgotten why they got into this business in the first place. It’s about passion, creativity, and daring to take risks. "Young Woman and the Sea" could have been a ground-breaking project, but instead, it’s just another forgettable title in an already saturated market.

    We need to demand more from these companies. We deserve visual effects that inspire, challenge, and captivate. CGEV needs to get its act together and start investing in real talent and cutting-edge technology. No more excuses! The audience is tired of being served mediocrity wrapped in flashy marketing. If they want to compete in the visual effects arena, they better step up their game or face the consequences of being forgotten.

    Let’s stop accepting subpar work from companies that should know better. The time for complacency is over. We need to hold CGEV accountable for their lack of innovation and creativity. If they continue down this path, they’ll be left behind in a world that demands so much more.

    #CGEV #VisualEffects #FilmIndustry #TheSubstance #Innovation
    3dvf.com
    La Compagnie Générale des Effets Visuels presents a compilation of its latest projects, featuring its visual effects work on the film The Substance, as well as Survivre, Monsieur Aznavour, Le Salaire de la Peur, and Young Woman and the Sea.
  • Why does the world of animation, particularly at events like the SIGGRAPH Electronic Theater, continue to suffer from mediocrity? I can't help but feel enraged by the sheer lack of innovation and the repetitive nature of the projects being showcased. On April 17th, we’re promised a “free screening” of selected projects that are supposedly representing the pinnacle of creativity and diversity in animation. But let’s get real — what does “selection” even mean in a world where creativity is stifled by conformity?

    Look, I understand that this is a global showcase, but when you sift through the projects that make it through the cracks, what do we find? Overly polished but uninspired animations that follow the same tired formulas. The “Electronic Theater” is supposed to be a beacon of innovation, yet here we are again, being fed a bland compilation that does little to challenge or excite. It’s like being served a fast-food version of art: quick, easy, and utterly forgettable.

    The call for diversity is also a double-edged sword. Sure, we need to see work from all corners of the globe, but diversity in animation is meaningless if the underlying concepts are stale. It’s not enough to tick boxes and say, “Look how diverse we are!” when the actual content fails to push boundaries. Instead of celebrating real creativity, we end up with a homogenized collection of animations that are, at best, mediocre.

    And let’s talk about the timing of this event. April 17th? Are we really thinking this through? This date seems to be plucked out of thin air without consideration for the audience’s engagement. Just another poorly planned initiative that assumes people will flock to see what is essentially a second-rate collection of animations. Is this really the best you can do, Montpellier ACM SIGGRAPH? Where is the excitement? Where is the passion?

    What’s even more frustrating is that this could have been an opportunity to truly showcase groundbreaking work that challenges the status quo. Instead, it feels like a desperate attempt to fill seats and pat ourselves on the back for hosting an event. Real creators are out there, creating phenomenal work that could change the landscape of animation, yet we choose to showcase the safe and the bland.

    It’s time to demand more from events like SIGGRAPH. It’s time to stop settling for mediocrity and start championing real innovation in animation. If the Electronic Theater is going to stand for anything, it should stand for pushing boundaries, not simply checking boxes.

    Let’s not allow ourselves to be content with what we’re served. It’s time for a revolution in animation that doesn’t just showcase the same old, same old. We deserve better, and the art community deserves better.

    #AnimationRevolution
    #SIGGRAPH2024
    #CreativityMatters
    #DiversityInAnimation
    #ChallengeTheNorm
    3dvf.com
    Weren't at SIGGRAPH last summer? Montpellier ACM SIGGRAPH has you covered and is organizing a free screening this Thursday, April 17, of the projects selected for the Electronic Theater 2024, SIGGRAPH's animation festival.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
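    To make the memory arithmetic behind quantization concrete, here is a minimal, self-contained Rust sketch of per-tensor symmetric quantization from 32-bit floats to signed 8-bit integers. It is a generic illustration of the idea, not NVIDIA's or Stability AI's implementation (real FP8 uses a floating-point format such as E4M3 rather than integers), and the example values are made up.
```rust
// Minimal sketch of per-tensor symmetric quantization (f32 -> i8).
// Illustrative only: the memory arithmetic is what matters here --
// 8-bit storage is half the size of 16-bit and a quarter of 32-bit.

fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    // One scale per tensor, chosen so the largest magnitude maps to 127.
    let max_abs = weights.iter().fold(0.0f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let weights = vec![0.02f32, -1.37, 0.88, 0.0, 2.5];
    let (q, scale) = quantize(&weights);
    println!("scale = {scale}, quantized = {q:?}");
    println!("round-trip = {:?}", dequantize(&q, scale));

    // Memory arithmetic from the article: roughly 18 GB of 16-bit weights
    // shrink by about 40% (to around 11 GB) once most layers move to 8 bits.
    let fp16_gb = 18.0f32;
    println!("approx 8-bit footprint: {:.1} GB", fp16_gb * 0.6);
}
```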
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Image caption: Stable Diffusion 3.5 quantized to FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can run in the background during installation or the first time the feature is used.
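    The just-in-time approach can be pictured as an on-device cache keyed by the installed GPU: the first run builds a device-specific engine and stores it, and later runs load the cached result. The Rust sketch below illustrates only that caching pattern; the file layout, GPU name, and build step are placeholder assumptions, not the TensorRT for RTX API.
```rust
// Generic sketch of JIT, on-device engine caching: ship one portable model,
// specialize it for the local GPU on first use, then reuse the cached result.
// File names and the build step are placeholders, not the real SDK.

use std::fs;
use std::path::PathBuf;

fn cache_path(model: &str, gpu: &str) -> PathBuf {
    PathBuf::from(format!("engine_cache/{model}.{gpu}.engine"))
}

// Placeholder for the expensive device-specific optimization step.
fn build_engine_for_device(generic_model: &[u8], gpu: &str) -> Vec<u8> {
    println!("building engine for {gpu} (runs once, e.g. during install)...");
    generic_model.to_vec() // stand-in for the optimized engine bytes
}

fn load_or_build(model: &str, generic_model: &[u8], gpu: &str) -> std::io::Result<Vec<u8>> {
    let path = cache_path(model, gpu);
    if let Ok(bytes) = fs::read(&path) {
        return Ok(bytes); // cache hit: no rebuild on later runs
    }
    let engine = build_engine_for_device(generic_model, gpu);
    fs::create_dir_all(path.parent().unwrap())?;
    fs::write(&path, &engine)?;
    Ok(engine)
}

fn main() -> std::io::Result<()> {
    let generic_model = b"portable model weights and graph";
    let engine = load_or_build("sd35_large", generic_model, "rtx_5090")?;
    println!("engine ready ({} bytes)", engine.len());
    Ok(())
}
```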
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    #nvidia #tensorrt #boosts #stable #diffusion
  • Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
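    As a rough illustration of that guarantee (my own example, not SymCrypt code): a function taking two mutable slices can assume they do not overlap, because the borrow checker rejects any call site that would pass two mutable views of the same buffer.
```rust
// Two &mut slices are guaranteed disjoint by the borrow checker, so the
// function body can be verified without an extra non-overlap precondition.
// In C, the equivalent property on two pointers would have to be proved
// (or assumed via `restrict`) for every caller.
fn xor_into(dst: &mut [u8], src: &mut [u8]) {
    for (d, s) in dst.iter_mut().zip(src.iter_mut()) {
        *d ^= *s;
    }
}

fn main() {
    let mut a = [0x0fu8; 4];
    let mut b = [0xf0u8; 4];
    xor_into(&mut a, &mut b); // fine: distinct buffers

    // let mut c = [0u8; 4];
    // xor_into(&mut c, &mut c); // rejected: cannot borrow `c` mutably twice
    println!("{a:?}");
}
```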
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming to avoid operations on secret data. However, recent research has shown that this alone is insufficient with today’s CPUs, where every new optimization may open a new side channel. 
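    For context on what constant-time programming means here, the sketch below contrasts an early-exit byte comparison, whose running time depends on where the first mismatch occurs, with a branch-free version that always touches every byte. It is a standard textbook illustration rather than SymCrypt code, and, as noted above, source-level constant time alone no longer guarantees the absence of leaks on modern CPUs.
```rust
// Early-exit comparison: time depends on the position of the first mismatch,
// which can leak information about a secret value (e.g. a MAC tag).
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for (x, y) in a.iter().zip(b.iter()) {
        if x != y {
            return false; // exits earlier for earlier mismatches
        }
    }
    true
}

// Branch-free comparison: accumulates all differences and inspects them once,
// so the data-dependent work is the same for every input of a given length.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    let tag = [0xaau8, 0xbb, 0xcc, 0xdd];
    let guess = [0xaau8, 0xbb, 0x00, 0x00];
    assert!(!leaky_eq(&tag, &guess));
    assert!(!constant_time_eq(&tag, &guess));
    assert!(constant_time_eq(&tag, &tag));
}
```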
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings. 
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar using our benchmarking tools to match or exceed existing systems.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
    #rewriting #symcrypt #rust #modernize #microsofts
CGShares https://cgshares.com