• Building an Architectural Visualization Community: The Case for Physical Gatherings

    Barbara Betlejewska is a PR consultant and manager with extensive experience in architecture and real estate, currently involved with World Visualization Festival, a global event bringing together CGI and digital storytelling professionals for 3 days of presentations, workshops, and networking in Warsaw, Poland, this October.
    Over the last twenty years, visualization and 3D rendering have evolved from supporting tools to become central pillars of architectural storytelling, design development, and marketing across various industries. As digital technologies have advanced, the landscape of creative work has changed dramatically. Artists can now collaborate with clients worldwide without leaving their homes, and their careers can flourish without ever setting foot in a traditional studio.
    In this hyper-connected world, where access to knowledge, clients, and inspiration is just a click away, do we still need to gather in person? Do conferences, festivals and meetups in the CGI and architectural visualization world still carry weight?

    The People Behind the Pixels
    Professionals from the visualization industry exchanging ideas at WVF 2024.
    For a growing number of professionals — especially those in creative and tech-driven fields — remote work has become the norm. The shift to digital workflows, accelerated by the pandemic, has brought freedom and flexibility that many are reluctant to give up. It’s easier than ever to work for clients in distant cities or countries, to build a freelance career from a laptop, or to pursue the lifestyle of a digital nomad.
    On the surface, it is a broadening of horizons. But for many, the freedom of remote work comes with a cost: isolation. For visualization artists, the reality often means spending long hours alone, rarely interacting face-to-face with peers or collaborators. And while there are undeniable advantages to independent work, the lack of human connection can lead to creative stagnation, professional burnout, and a sense of detachment from the industry as a whole.
    Despite being a highly technical and often solitary craft, visualization and CGI thrive on the exchange of ideas, feedback and inspiration. The tools and techniques evolve rapidly, and staying relevant usually means learning not just from tutorials but from honest conversations with others who understand the nuances of the field.

    A Community in the Making
    Professionals from the visualization industry exchanging ideas at WVF 2024.
    That need for connection is what pushed Michał Nowak, a Polish visualizer and founder of Nowak Studio, to organize Poland’s first-ever architectural visualization meetup in 2017. With no background in event planning, he wasn’t sure where to begin, but he knew something was missing. The Polish Arch Viz scene lacked a shared space for meetings, discussions, and idea exchange. Michał wanted more than screen time; he wanted honest conversations, spontaneous collaboration and a chance to grow alongside others in the field.
    What began as a modest gathering quickly grew into something much bigger. That original meetup evolved into what is now the World Visualization Festival (WVF), an international event that welcomes artists from across Europe and beyond.
    “I didn’t expect our small gathering to grow into a global festival,” Michał says. “But I knew I wanted a connection. I believed that through sharing ideas and experiences, we could all grow professionally, creatively, and personally. And that we’d enjoy the journey more.”
    The response was overwhelming. Each year, more artists from across Poland and Europe join the event in Wrocław, located in south-western Poland. Michał also traveled to other festivals in countries like Portugal and Austria, where he observed the same thing: a spirit of openness, generosity, and shared curiosity. No matter the country or the maturity of the market, the needs were the same — people wanted to connect, learn and grow.
    And beyond the professional side, there was something else: joy. These events were simply fun. They were energizing. They gave people a reason to step away from their desks and remember why they love what they do.

    The Professional Benefits
    Hands-on learning at the AI-driven visualization workshop in Warsaw, October 2024.
    The professional benefits of attending industry events are well documented. These gatherings provide access to mentorship, collaboration and knowledge that can be challenging to find online. Festivals and industry meetups serve as platforms for emerging trends, new tools and fresh workflows — often before they hit the mainstream. They’re places where ideas collide, assumptions are challenged and growth happens.
    The range of topics covered at such events is broad, encompassing everything from portfolio reviews and in-depth discussions of particular rendering engines to conversations about pricing your work and building a sustainable business. At the 2024 edition of the World Visualization Festival, panels focused on scaling creative businesses and navigating industry rates drew some of the biggest crowds, proving that artists are hungry for both artistic and entrepreneurial insights.
    Being part of a creative community also shapes professional identity. It’s not just about finding clients — it’s about finding your place. In a field as fast-moving and competitive as Arch Viz, connection and conversation aren’t luxuries. They’re tools for survival.
    There’s also the matter of building your social capital. Online interactions can only go so far. Meeting someone in person builds relationships that stick. The coffee-break conversations, the spontaneous feedback — these are the moments that cement a community and have the power to spark future projects or long-lasting partnerships. This usually doesn’t happen in Zoom calls.
    And let’s not forget the symbolic power of industry awards such as Architizer’s Vision Awards or CGArchitect’s 3D Awards. These aren’t just celebrations of talent; they’re affirmations of the craft itself. They contribute to the growth and cohesion of the industry while helping to establish and promote best practices. These events clearly define the role and significance of CGI and visualization as a distinct profession, positioned at the intersection of architecture, marketing, and sales. They advocate for the field to be recognized on its own terms, not merely as a support service, but as an independent discipline. For its creators, they bring visibility, credit, and recognition — elements that inspire growth and fuel motivation to keep pushing the craft forward. Occasions like these remind us that what we do has real value, impact and meaning.

    The Energy We Take Home
    The WVF 2024 afterparty provided a vibrant space for networking and celebration in Warsaw.
    Many artists describe the post-event glow: a renewed sense of purpose, a fresh jolt of energy, an eagerness to get back to work. Sometimes, new projects emerge, new clients appear, or long-dormant ideas finally gain momentum. These events aren’t just about learning — they’re about recharging.
    One of the most potent moments of last year’s WVF was a series of talks focused on mental health and creative well-being. Co-organized by Michał Nowak and the Polish Arch Viz studio ELEMENT, the festival addressed the emotional realities of the profession, including burnout, self-doubt, and the pressure to constantly produce. These conversations resonated deeply because they were real.
    Seeing that others face the same struggles — and come through them — is profoundly reassuring. Listening to someone share a business strategy that worked, or a failure they learned from, turns competition into camaraderie. Vulnerability becomes strength. Shared experiences become the foundation of resilience.

    Make a Statement. Show up!
    Top industry leaders shared insights during presentations at WVF 2024.
    In an era when nearly everything can be done online, showing up in person is a powerful statement. It says: I want more than just efficiency. I want connection, creativity and conversation.
    As the CGI and visualization industries continue to evolve, the need for human connection hasn’t disappeared — it’s grown stronger. Conferences, festivals and meetups, such as World Viz Fest, remain vital spaces for knowledge sharing, innovation and community building. They give us a chance to reset, reconnect and remember that we are part of something bigger than our screens.
    So, yes, despite the tools, the bandwidth, and the ever-faster workflows, we still need to meet in person. Not out of nostalgia, but out of necessity. Because, no matter how far technology takes us, creativity remains a human endeavor.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
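    To make “running layers at lower precision” concrete, here is a minimal sketch of per-tensor FP8 (E4M3) weight quantization in PyTorch. It assumes PyTorch 2.1 or newer for the float8_e4m3fn dtype and only illustrates the principle; it is not the TensorRT recipe NVIDIA and Stability AI used.

    # Minimal per-tensor FP8 (E4M3) quantization sketch. Illustration only;
    # not the NVIDIA/Stability AI TensorRT pipeline described in this article.
    import torch

    def quantize_fp8(weight: torch.Tensor):
        # E4M3 tops out at 448, so scale weights into that representable range.
        amax = weight.abs().max().clamp(min=1e-12)
        scale = amax / 448.0
        q = (weight / scale).clamp(-448.0, 448.0).to(torch.float8_e4m3fn)
        return q, scale

    def dequantize_fp8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        return q.to(torch.float16) * scale

    w = torch.randn(4096, 4096, dtype=torch.float16)  # stand-in weight matrix
    q, s = quantize_fp8(w)
    print(w.element_size(), "->", q.element_size(), "bytes per weight")  # 2 -> 1
    print("max abs error:", (dequantize_fp8(q, s) - w).abs().max().item())

    Each weight shrinks from two bytes in FP16 to one byte in FP8, which is where most of the VRAM savings come from; production pipelines typically add per-layer calibration so the quality loss stays negligible.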
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time with similar quality as FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
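    For readers unfamiliar with that workflow, the usual route into TensorRT looks roughly like the sketch below: export the model to ONNX, then build a GPU-specific engine on the target machine. The toy model and file names are placeholders; this is not the SD3.5 build recipe.

    # Generic TensorRT workflow sketch (placeholder model and paths, not SD3.5).
    import torch

    model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU()).eval()
    dummy = torch.randn(1, 512)
    torch.onnx.export(model, dummy, "model.onnx", input_names=["x"], output_names=["y"])

    # On the target RTX GPU, build an optimized engine with the TensorRT CLI:
    #   trtexec --onnx=model.onnx --saveEngine=model.plan --fp16
    # This is the step where TensorRT rewrites the weights and graph (layer
    # fusion, kernel selection) for that specific GPU.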
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
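    As an illustration of that pattern only (the actual TensorRT for RTX API is not shown here, and build_engine_for_this_gpu below is a hypothetical placeholder), the first-use flow can be pictured like this:

    # Illustrative first-use engine cache; not the TensorRT for RTX SDK API.
    import os
    import threading

    ENGINE_CACHE = "engine_cache.plan"

    def build_engine_for_this_gpu(path: str) -> None:
        ...  # hypothetical placeholder: compile a generic engine for the installed GPU

    def ensure_engine_async() -> None:
        # Kick off the on-device build in the background, e.g. during
        # installation or on first use, so the user never waits on a long
        # ahead-of-time build for every GPU class.
        if not os.path.exists(ENGINE_CACHE):
            threading.Thread(target=build_engine_for_this_gpu,
                             args=(ENGINE_CACHE,), daemon=True).start()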
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
  • Chaos Corona 13 — New features

    Get started with Corona → https://bit.ly/chaos_corona

    Learn everything about the new Corona 13 features from our release blog post: https://www.chaos.com/blog/corona-13

    It’s here! The latest version of Corona provides a new set of artist-friendly features that make perfect renders and speedy animations more accessible and enjoyable than ever. From toon shading to GPU-accelerated animations and AI-powered image enhancements, Corona 13 goes beyond photorealism with more creative control and faster workflows for 3D artists and visualizers.
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment.
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage due to its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
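    As a rough illustration of what distillation means in practice, the toy sketch below trains a small “student” to match a “teacher” model’s output distribution. It is a generic textbook example with random stand-in tensors, not DeepSeek’s training setup.

    # Toy distillation sketch: pull a small student's output distribution
    # toward a teacher's. Random stand-ins only; not DeepSeek's pipeline.
    import torch
    import torch.nn.functional as F

    batch, vocab = 8, 100
    teacher_logits = torch.randn(batch, vocab)   # stand-in for a powerful model's outputs
    x = torch.randn(batch, 32)                   # stand-in input features
    student = torch.nn.Linear(32, vocab)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    for _ in range(200):
        # KL divergence pulls the student's distribution toward the teacher's.
        loss = F.kl_div(F.log_softmax(student(x), dim=-1),
                        F.softmax(teacher_logits, dim=-1),
                        reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()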
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture-of-experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
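    For readers new to the term, the mixture-of-experts idea referenced above can be sketched in a few lines: a small gating network routes each token to its top experts and mixes their outputs. The toy layer below is illustrative only and leaves out the load balancing, sharding and other machinery that production MoE models, DeepSeek’s included, depend on.

    # Toy mixture-of-experts (MoE) layer: a gate picks top-k experts per token.
    # Illustrative only; omits load balancing and other production concerns.
    import torch
    import torch.nn.functional as F

    class TinyMoE(torch.nn.Module):
        def __init__(self, d_model=64, n_experts=4, k=2):
            super().__init__()
            self.experts = torch.nn.ModuleList([
                torch.nn.Sequential(torch.nn.Linear(d_model, 4 * d_model),
                                    torch.nn.GELU(),
                                    torch.nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)])
            self.gate = torch.nn.Linear(d_model, n_experts)
            self.k = k

        def forward(self, x):  # x: (tokens, d_model)
            weights = F.softmax(self.gate(x), dim=-1)
            topv, topi = weights.topk(self.k, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.k):  # only the routed experts run per token
                for e, expert in enumerate(self.experts):
                    mask = topi[:, slot] == e
                    if mask.any():
                        out[mask] += topv[mask, slot, None] * expert(x[mask])
            return out

    y = TinyMoE()(torch.randn(10, 64))  # (10, 64) output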
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
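    Conceptually, that loop is “write your own principles, draft several candidate answers, let a judge score each against the principles, keep the best.” The snippet below is a hypothetical sketch of that idea, where llm stands in for any text-generation callable; it is not DeepSeek’s SPCT or DeepSeek-GRM implementation.

    # Hypothetical self-judging, best-of-n sketch; llm() is a placeholder
    # text-generation callable, not DeepSeek's actual system.
    from typing import Callable, List

    def best_of_n(prompt: str, llm: Callable[[str], str], n: int = 4) -> str:
        # 1. The model writes its own judging principles for this prompt.
        principles = llm(f"List the criteria a good answer must meet:\n{prompt}")
        # 2. Spend extra inference-time compute on several candidate answers.
        candidates: List[str] = [llm(prompt) for _ in range(n)]

        # 3. A built-in "judge" critiques each candidate against the principles.
        def score(answer: str) -> float:
            critique = llm(f"Principles:\n{principles}\n\nAnswer:\n{answer}\n\n"
                           "Critique the answer and end with 'SCORE: <0-10>'.")
            try:
                return float(critique.rsplit("SCORE:", 1)[1].strip().split()[0])
            except (IndexError, ValueError):
                return 0.0

        # 4. Keep the candidate the judge rates highest.
        return max(candidates, key=score)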
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. With its access to these materials constrained, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    Daily insights on business use cases with VB Daily
    If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI.
    Read our Privacy Policy

    Thanks for subscribing. Check out more VB newsletters here.

    An error occured.
    #rethinking #deepseeks #playbook #shakes #highspend
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. 
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective for training large models, but you have to be careful: some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture-of-experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions than real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge, leading to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. Still, DeepSeek’s engineering teams reportedly designed their model architecture with synthetic data integration in mind from the earliest planning stages, which allowed the company to capture the cost benefits of synthetic data without sacrificing performance.
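    The distillation idea itself is simple, even if production pipelines are not. Below is a minimal sketch of the general mechanism, not DeepSeek's pipeline: a small student network is trained to match the softened output distribution of a frozen teacher. With proprietary models accessed through an API, the teacher signal in practice is usually generated text rather than raw logits, but the principle is the same.

```python
# Minimal knowledge-distillation sketch (illustrative only, not DeepSeek's pipeline):
# a small "student" network is trained to match the softened output distribution
# of a frozen "teacher", which is the basic mechanism behind learning from a
# more powerful model's outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

T = 2.0  # temperature: softens the teacher's distribution so its ranking of wrong answers is visible

for step in range(200):
    x = torch.randn(128, 32)                  # stand-in for real or synthetic inputs
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 as in standard distillation practice.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final distillation loss: {loss.item():.4f}")
```

    The temperature is what makes this richer than training on hard labels alone: it exposes how the teacher ranks the wrong answers, and that is much of what the student inherits.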
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019, a notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard: just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI.
    With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change. This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards. The development is part of a broader movement toward autonomous self-evaluation and improvement, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling).
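    Details of DeepSeek-GRM have not been fully disclosed, so the following is only a generic sketch of the inference-time loop the article describes: generate an answer, have a judge score it against a set of principles, and revise until the judge is satisfied or the compute budget runs out. All function names, principles and thresholds are hypothetical placeholders.

```python
# Illustrative-only sketch of an inference-time "generate -> judge -> revise" loop
# in the spirit of the self-principled critique approach described above. Every
# function, principle and threshold here is a made-up stand-in: in a real system
# the generator and judge would be LLM calls, and the principles would be
# produced by the model itself rather than hard-coded.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Critique:
    score: float      # judge's overall quality score in [0, 1]
    feedback: str     # natural-language critique used to steer the revision

PRINCIPLES = [
    "Answer the question that was actually asked.",
    "State uncertainty instead of inventing facts.",
    "Prefer concise, well-structured explanations.",
]

def generate(prompt: str, feedback: Optional[str] = None) -> str:
    # Stub generator; a revision pass conditions on the judge's feedback.
    suffix = f" [revised per: {feedback}]" if feedback else ""
    return f"[draft answer to: {prompt!r}]{suffix}"

def judge(prompt: str, answer: str, principles: list[str]) -> Critique:
    # Stub judge; a real one would score the answer against the principles.
    score = 0.5 + 0.2 * answer.count("[revised")
    return Critique(score=min(score, 1.0),
                    feedback="tighten the reasoning and state your assumptions")

def answer_with_self_critique(prompt: str, max_rounds: int = 3, threshold: float = 0.8) -> str:
    answer = generate(prompt)
    for _ in range(max_rounds):
        critique = judge(prompt, answer, PRINCIPLES)
        if critique.score >= threshold:
            break                                     # judge satisfied: stop spending compute
        answer = generate(prompt, critique.feedback)  # spend more test-time compute revising
    return answer

print(answer_with_self_critique("Why does efficiency-focused training matter?"))
```

    Even in this toy form, the failure modes discussed below are visible: the loop stops when the judge is satisfied, so a flawed or miscalibrated judge quietly caps the quality of every answer.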
    But, as with its model distillation approach, this one is a mix of promise and risk. If the AI develops its own judging criteria, there is a risk that those principles diverge from human values, ethics or context. The rules could end up overly rigid or biased, optimizing for style over substance or reinforcing incorrect assumptions and hallucinations. And without a human in the loop, issues could arise if the “judge” itself is flawed or misaligned; it is a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion, which feeds into a bigger concern: should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, the approach is gaining traction. Once again, DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but it still demands rigorous auditing, transparency and safeguards. It’s not just about models getting smarter; they must remain aligned, interpretable and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    Taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power-generation capacity.
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market. Meta has also responded.
    With so much movement in such a short time, it is somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to advanced chips, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability will be key for all players. Policies, people and market reactions will continue to shift the ground rules, whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. What we learn from one another, and how we respond, will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.
  • NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI

    Industrial AI isn’t slowing down. Germany is ready.
    Following London Tech Week and GTC Paris at VivaTech, NVIDIA founder and CEO Jensen Huang’s European tour continued with a stop in Germany, where he discussed with Chancellor Friedrich Merz new partnerships poised to bring breakthrough innovations to the world’s first industrial AI cloud.
    This AI factory, to be located in Germany and operated by Deutsche Telekom, will enable Europe’s industrial leaders to accelerate manufacturing applications including design, engineering, simulation, digital twins and robotics.
    “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Jensen Huang, founder and CEO of NVIDIA. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”
    “Europe’s technological future needs a sprint, not a stroll,” said Timotheus Höttges, CEO of Deutsche Telekom AG. “We must seize the opportunities of artificial intelligence now, revolutionize our industry and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”
    This AI infrastructure — Germany’s single largest AI deployment — is an important leap for the nation in establishing its own sovereign AI infrastructure and providing a launchpad to accelerate AI development and adoption across industries. In its first phase, it’ll feature 10,000 NVIDIA Blackwell GPUs — spanning NVIDIA DGX B200 systems and NVIDIA RTX PRO Servers — as well as NVIDIA networking and AI software.
    NEURA Robotics’ training center for cognitive robots.
    NEURA Robotics, a Germany-based global pioneer in physical AI and cognitive robotics, will use the computing resources to power its state-of-the-art training centers for cognitive robots — a tangible example of how physical AI can evolve through powerful, connected infrastructure.
    At this work’s core is the Neuraverse, a seamlessly networked robot ecosystem that allows robots to learn from each other across a wide range of industrial and domestic applications. This platform creates an app-store-like hub for robotic intelligence — for tasks like welding and ironing — enabling continuous development and deployment of robotic skills in real-world environments.
    “Physical AI is the electricity of the future — it will power every machine on the planet,” said David Reger, founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”
    Critical to Germany’s competitiveness is AI technology development, including the expansion of data center capacity, according to a Deloitte study. This is strategically important because demand for data center capacity is expected to triple over the next five years to 5 gigawatts.
    Driving Germany’s Industrial Ecosystem
    Deutsche Telekom will operate the AI factory and provide AI cloud computing resources to Europe’s industrial ecosystem.
    Customers will be able to run NVIDIA CUDA-X libraries, as well as NVIDIA RTX- and Omniverse-accelerated workloads from leading software providers such as Siemens, Ansys, Cadence and Rescale.
    Many more stand to benefit. From the country’s robust small- and medium-sized businesses, known as the Mittelstand, to academia, research and major enterprises — the AI factory offers strategic technology leaps.
    A Speedboat Toward AI Gigafactories
    The industrial AI cloud will accelerate AI development and adoption from European manufacturers, driving simulation-first, AI-driven manufacturing practices and helping prepare for the country’s transition to AI gigafactories, the next step in Germany’s sovereign AI infrastructure journey.
    The AI gigafactory initiative is a 100,000 GPU-powered program backed by the European Union, Germany and partners.
    Poised to go online in 2027, it’ll provide state-of-the-art AI infrastructure that gives enterprises, startups, researchers and universities access to accelerated computing through the establishment and expansion of high-performance computing centers.
    As of March, there are about 900 Germany-based members of the NVIDIA Inception program for cutting-edge startups, all of which will be eligible to access the AI resources.
    NVIDIA offers learning courses through its Deep Learning Institute to promote education and certification in AI across the globe, and those resources are broadly available across Germany’s computing ecosystem to offer upskilling opportunities.
    Additional European telcos are building AI infrastructure for regional enterprises to build and deploy agentic AI applications.
    Learn more about the latest AI advancements by watching Huang’s GTC Paris keynote in replay.
  • Trump scraps Biden software security, AI, post-quantum encryption efforts in new executive order

    President Donald Trump signed an executive order (EO) Friday that scratched or revised several of his Democratic predecessors’ major cybersecurity initiatives.
    “Just days before President Trump took office, the Biden Administration attempted to sneak problematic and distracting issues into cybersecurity policy,” the White House said in a fact sheet about Trump’s new directive, referring to projects that Biden launched with his Jan. 15 executive order.
    Trump’s new EO eliminates those projects, which would have required software vendors to prove their compliance with new federal security standards, prioritized research and testing of artificial intelligence for cyber defense and accelerated the rollout of encryption that withstands the future code-cracking powers of quantum computers.
    “President Trump has made it clear that this Administration will do what it takes to make America cyber secure,” the White House said in its fact sheet, “including focusing relentlessly on technical and organizational professionalism to improve the security and resilience of the nation’s information systems and networks.”
    Major cyber regulation shift
    Trump’s elimination of Biden’s software security requirements for federal contractors represents a significant government reversal on cyber regulation. Following years of major cyberattacks linked to insecure software, the Biden administration sought to use federal procurement power to improve the software industry’s practices. That effort began with Biden’s 2021 cyber order and gained strength in 2024, and then Biden officials tried to add teeth to the initiative before leaving office in January. But as it eliminated that project on Friday, the Trump administration castigated Biden’s efforts as “imposing unproven and burdensome software accounting processes that prioritized compliance checklists over genuine security investments.”
    Trump’s order eliminates provisions from Biden’s directive that would have required federal contractors to submit “secure software development attestations,” along with technical data to back up those attestations. Also now eradicated are provisions that would have required the Cybersecurity and Infrastructure Security Agency to verify vendors’ attestations, required the Office of the National Cyber Director to publish the results of those reviews and encouraged ONCD to refer companies whose attestations fail a review to the Justice Department “for action as appropriate.”

    Trump’s order leaves in place a National Institute of Standards and Technology collaboration with industry to update NIST’s Secure Software Development Framework (SSDF), but it eliminates parts of Biden’s order that would have incorporated those SSDF updates into security requirements for federal vendors.
    In a related move, Trump eliminated provisions of his predecessor’s order that would have required NIST to “issue guidance identifying minimum cybersecurity practices” (based on a review of globally accepted standards) and required federal contractors to follow those practices.
    AI security cut
    Trump also took an axe to Biden requirements related to AI and its ability to help repel cyberattacks. He scrapped a Biden initiative to test AI’s power to “enhance cyber defense of critical infrastructure in the energy sector,” as well as one that would have directed federal research programs to prioritize topics like the security of AI-powered coding and “methods for designing secure AI systems.” The EO also killed a provision that would have required the Pentagon to “use advanced AI models for cyber defense.”
    On quantum computing, Trump’s directive significantly pares back Biden’s attempts to accelerate the government’s adoption of post-quantum cryptography. Biden told agencies to start using quantum-resistant encryption “as soon as practicable” and to start requiring vendors to use it when technologically possible. Trump eliminated those requirements, leaving only a Biden requirement that CISA maintain “a list of product categories in which products that support post-quantum cryptography … are widely available.”
    Trump also eliminated instructions for the departments of State and Commerce to encourage key foreign allies and overseas industries to adopt NIST’s PQC algorithms.
    The EO dropped many other provisions of Biden’s January directive, including one requiring agencies to start testing phishing-resistant authentication technologies, one requiring NIST to advise other agencies on internet routing security and one requiring agencies to use strong email encryption. Trump also cut language directing the Office of Management and Budget to advise agencies on addressing risks related to IT vendor concentration.
    In his January order, Biden ordered agencies to explore and encourage the use of digital identity documents to prevent fraud, including in public benefits programs. Trump eliminated those initiatives, calling them “inappropriate.” 
    Trump also tweaked the language of Obama-era sanctions authorities targeting people involved in cyberattacks on the U.S., specifying that the Treasury Department can only sanction foreigners for these activities. The White House said Trump’s change would prevent the power’s “misuse against domestic political opponents.”
    Amid the whirlwind of changes, Trump left one major Biden-era cyber program intact: a Federal Communications Commission project, modeled on the Energy Star program, that will apply government seals of approval to technology products that undergo security testing by federally accredited labs. Trump preserved the language in Biden’s order that requires companies selling internet-of-things devices to the federal government to go through the FCC program by January 2027.
  • Klarna CEO: Engineers risk losing out to business people who can code

    Klarna’s CEO has warned that software engineers risk being left behind in the AI era — unless they’re also business-savvy.
    Speaking at SXSW London, Sebastian Siemiatkowski said the talent “who have really accelerated their careers at Klarna” are “business people who have learned to code.” The reason? “They can take their business understanding and turn it into deterministic or probabilistic statements with AI.”
    This shift, he warned, poses a threat to engineers. “A lot of them have allowed themselves to be isolated with technical challenges only, and not been that interested in what the business actually does,” he said.
    His message to them was blunt: “Engineers really need to step up and make sure they understand the business.”
    Siemiatkowski’s comments add another layer to Klarna’s controversial AI transformation. In December 2023, he said advances in the field had led the buy-now-pay-later firm to freeze hiring for all roles — except engineers. A year later, he had an update: the company had stopped bringing on new staff entirely.
    Open job listings, however, told a different story. Klarna also recently launched a new recruitment drive to ensure customers can always speak to a human.
    The apparent contradiction has drawn criticism, but the company is doubling down on automation.
    Last year, Klarna announced that its OpenAI-powered assistant was doing the work of 700 full-time customer service agents. It also used an AI-generated version of Siemiatkowski to present its financial update — suggesting even CEOs could be automated.
    The 43-year-old recently claimed that AI can already do “all of the jobs” that humans can do. At SXSW London, he stressed the need to be upfront about the risks.
    “I don’t want to be one of the tech CEOs that are like no worries everything will be fine, because I do think there will be major implications for white collar jobs and so I want to be honest about it,” he said.
    Despite the gloom, Siemiatkowski still sees big opportunities for people who blend business acumen with technical skills.
    “That category of people will become even more valuable going forward,” he said.

    Story by

    Thomas Macaulay

    Managing editor

    Thomas is the managing editor of TNW. He leads our coverage of European tech and oversees our talented team of writers. Away from work, he enjoys playing chess (badly) and the guitar (even worse).
