VentureBeat
Recent Updates
  • Studio555 raises $4.6M to build playable app for interior design

    Studio555 announced today that it has raised €4 million, or about $4.6 million, in a seed funding round. It plans to put this funding towards creating a playable app, a game-like experience focused on interior design. HOF Capital and Failup Ventures led the round, with participation from the likes of Timo Soininen, co-founder of Small Giant Games; Mikko Kodisoja, co-founder of Supercell; and Riccardo Zacconi, co-founder of King.
    Studio555’s founders are entrepreneur Joel Roos, now CEO, along with CTO Stina Larsson and CPO Axel Ullberger, both of whom formerly worked at King on the development of Candy Crush Saga. According to the founders, the app in development combines interior design with the polish and consumer appeal of games and social apps, letting users create and design personal spaces without needing any technical expertise.
    The team plans to launch the app next year, and it plans to put its seed funding towards product development and growing its team. Roos said in a statement, “At Studio555, we’re reimagining interior design as something anyone can explore: open-ended, playful, and personal. We’re building an experience we always wished existed: a space where creativity is hands-on, social, and free from rigid rules. This funding is a major step forward in setting an entirely new category for creative expression.”
    Investor Timo Soininen said in a statement, “Studio555 brings together top-tier gaming talent and design vision. This team has built global hits before, and now they’re applying that experience to something completely fresh – think Pinterest in 3D meets TikTok, but for interiors. I’m honored to support Joel and this team with their rare mix of creativity, technical competence, and focus on execution.”
  • Just add humans: Oxford medical study underscores the missing link in chatbot testing

    Patients using chatbots to assess their own medical conditions may end up with worse outcomes than conventional methods, according to a new Oxford study.
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm

    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which former Tesla AI scientist Andrej Karpathy described as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
    If you get starry-eyed believing these incredible results were achieved even though DeepSeek was at a severe disadvantage, unable to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
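    Model distillation of the kind described above is conceptually simple: a smaller “student” model is trained to match the soft output distribution of a stronger “teacher.” A minimal sketch of the standard distillation loss, using toy logits in plain Python (an illustration of the general technique, not DeepSeek’s actual training code):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities; temperature > 1 softens them."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student distributions.
    A higher temperature softens both distributions so the student
    learns the teacher's relative preferences, not just its top pick."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits track the teacher's incurs a lower loss
# than one that ranks the classes differently.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.2, 0.4]
far_student = [0.5, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

    In a real pipeline this loss is minimized by gradient descent over the student’s parameters; the point here is only that the training signal comes from another model’s outputs rather than from labeled data.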
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
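    In its simplest form, test-time compute trades extra inference for answer quality — for example, sampling several candidate answers and keeping the one a scorer prefers. A bare-bones illustration (the sampler and scorer here are stand-ins, not any lab’s actual API):

```python
def best_of_n(prompt, sample, score, n=8):
    """Simplest test-time-compute strategy: spend n inference calls
    on the same prompt and return the candidate the scorer rates highest."""
    candidates = [sample(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy demo: the "sampler" yields canned drafts, the scorer prefers longer ones.
drafts = iter(["ok", "a fuller answer", "meh"])
assert best_of_n("demo", lambda p: next(drafts), len, n=3) == "a fuller answer"
```

    More sophisticated variants replace the scorer with a learned reward model and the flat sampling with search, but the trade is the same: more inference-time work per query instead of a larger model.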
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
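    Based on that description, a generate–judge–revise loop in the spirit of SPCT might look like the following sketch. Every function name here is hypothetical; DeepSeek has not published this as an API:

```python
def self_principled_critique(prompt, generate, max_rounds=3):
    """Sketch of an SPCT-style loop: the model drafts its own judging
    principles, critiques its answer against them, and revises until
    the built-in "judge" passes it (or the rounds run out)."""
    principles = generate(f"List rules for judging answers to: {prompt}")
    answer = generate(prompt)
    for _ in range(max_rounds):
        critique = generate(
            "Judge this answer against the rules.\n"
            f"Rules: {principles}\nAnswer: {answer}\n"
            "Reply PASS or explain the failure."
        )
        if critique.strip().startswith("PASS"):
            break  # the self-judge accepted the answer
        answer = generate(f"Revise to fix: {critique}\nAnswer: {answer}")
    return answer
```

    Note that both the answer and the standard it is judged by come from the same model — which is exactly where the alignment concerns discussed below come in.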
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning a multibillion-dollar AI infrastructure investment this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    #rethinking #deepseeks #playbook #shakes #highspend
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. 
According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. 
Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process. The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of expertsarchitectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content. This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations.  Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance. Market reverberations Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. 
Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI.  With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change. This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s. Beyond model training Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training. To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning”. This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards. The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk. For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. 
The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted. At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of othersto create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails. Moving into the future So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity.  Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market. 
Meta has also responded, With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail. Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching. Jae Lee is CEO and co-founder of TwelveLabs. Daily insights on business use cases with VB Daily If you want to impress your boss, VB Daily has you covered. We give you the inside scoop on what companies are doing with generative AI, from regulatory shifts to practical deployments, so you can share insights for maximum ROI. Read our Privacy Policy Thanks for subscribing. Check out more VB newsletters here. An error occured. #rethinking #deepseeks #playbook #shakes #highspend
    VENTUREBEAT.COM
    Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm
    Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development. What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute.  As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention. Engineering around constraints DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement. While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well. This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. 
According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment. If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development. That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently. This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing. Pragmatism over process Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. 
Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.

The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful: some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture-of-experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.

This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations.

Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.

Market reverberations

Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders. Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019 — a notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard.
Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.

This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.

Beyond model training

Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.

To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content, then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.

The development is part of a broader movement toward autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
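As a rough illustration of the idea (hypothetical code, not DeepSeek-GRM’s implementation; the names and the keyword-overlap scoring are entirely invented), a self-principled critique loop pairs self-generated judging rules with best-of-N selection at inference time:

```python
# Toy sketch of a self-principled critique loop. A "policy" drafts
# several answers, the model derives its own judging principles for
# the query, and a built-in "judge" scores each draft against those
# principles, keeping the best one. Real systems use learned models
# for every step; here each step is a hard-coded stand-in.

def derive_principles(query: str) -> list[str]:
    # In SPCT the model generates per-query judging rules itself;
    # these fixed strings are placeholders for that step.
    return ["answers the question directly", "cites its assumptions"]

def judge(answer: str, principles: list[str]) -> int:
    # Stand-in reward model: score one point per principle whose
    # leading keyword appears in the answer. (A real judge would be
    # a trained critique/reward model, not keyword matching.)
    return sum(1 for p in principles if p.split()[0] in answer.lower())

def best_of_n(query: str, drafts: list[str]) -> str:
    # Inference-time compute: spend extra work ranking N drafts
    # instead of training a bigger model.
    principles = derive_principles(query)
    return max(drafts, key=lambda d: judge(d, principles))

drafts = [
    "This answers the question directly: 42. It cites one assumption.",
    "dunno",
]
```

The key design choice, and the source of the risks discussed below, is that the judging principles come from the model itself rather than from a human-written rubric.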
For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance and/or reinforcing incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.

At the same time, this approach is gaining traction, as once again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort. This could mark a powerful shift in AI autonomy, but it also demands rigorous auditing, transparency and safeguards. It’s not just about models getting smarter; they must remain aligned, interpretable and trustworthy as they begin critiquing themselves without human guardrails.

Moving into the future

Taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model-architecture improvements to offset the challenges of AI energy consumption, which far outpaces power-generation capacity. Companies are taking note.
Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market. Meta has also responded.

With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.

Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.

Jae Lee is CEO and co-founder of TwelveLabs.
  • Monument Valley 3 launches on console and PC on July 22

    Monument Valley 3 joins its predecessors on PC and consoles, as ustwo games announced the launch date at the Wholesome Direct.
  • Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance

    Google has released an updated preview of Gemini 2.5 Pro, its “most intelligent” model, first announced in March and upgraded in May, and intends to promote the same model to general availability in a couple of weeks. 
    Enterprises can test building new applications or replace earlier versions with an updated version of the “I/O edition” of Gemini 2.5 Pro that, according to a blog post by Google, is more creative in its responses and outperforms other models in coding and reasoning. 
    During its annual I/O developer conference in May, Google announced that it updated Gemini 2.5 Pro to be better than its earlier iteration, which it quietly released. Google DeepMind CEO Demis Hassabis said the I/O edition is the company’s best coding model yet. 
    But this new preview, called Gemini 2.5 Pro Preview 06-05 Thinking, is even better than the I/O edition. The stable version Google plans to release publicly is “ready for enterprise-scale capabilities.”
    The I/O edition, or gemini-2.5-pro-preview-05-06, was first made available to developers and enterprises in May through Google AI Studio and Vertex AI. Gemini 2.5 Pro Preview 06-05 Thinking can be accessed via the same platforms. 
    Performance metrics
    This new version of Gemini 2.5 Pro performs even better than the first release. 
    Google said the new version of Gemini 2.5 Pro improved by 24 points in LMArena and by 35 points in WebDevArena, where it currently tops the leaderboard. The company’s benchmark tests showed that the model outscored competitors like OpenAI’s o3, o3-mini, and o4-mini, Anthropic’s Claude 4 Opus, Grok 3 Beta from xAI and DeepSeek R1. 
    “We’ve also addressed feedback from our previous 2.5 Pro releases, improving its style and structure — it can be more creative with better-formatted responses,” Google said in the blog post. 

    What enterprises can expect
    Google’s continuous improvement of Gemini 2.5 Pro might be confusing for many, but Google has previously framed these updates as a response to community feedback. Pricing for the new version is $1.25 per million input tokens (without caching) and $10 per million output tokens. 
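To make those rates concrete, the sketch below estimates the cost of a single call at the listed pay-as-you-go prices (no caching). The rates come from the article; the function and constant names are invented, and actual pricing should be confirmed against Google’s pricing page:

```python
# Listed rates for the new Gemini 2.5 Pro preview, per the article:
# $1.25 per million input tokens (no caching), $10 per million output tokens.
INPUT_PER_M = 1.25
OUTPUT_PER_M = 10.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # Cost in dollars for one request at the above rates.
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# e.g. a 100,000-token prompt with a 5,000-token answer:
# 0.1 * $1.25 + 0.005 * $10 = $0.175
```

The asymmetry (output tokens cost 8x more than input tokens) is typical of reasoning-heavy models, where generation dominates serving cost.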
    When the very first version of Gemini 2.5 Pro launched in March, VentureBeat’s Matt Marshall called it “the smartest model you’re not using.” Since then, Google has integrated the model into many of its new applications and services, including “Deep Think,” where Gemini considers multiple hypotheses before responding. 
    The release of Gemini 2.5 Pro, and its two upgraded versions, revived Google’s place in the large language model space after competitors like DeepSeek and OpenAI diverted the industry’s attention to their reasoning models. 
    Within just a few hours of the announcement, developers had already begun playing around with the updated Gemini 2.5 Pro. While many found the update lives up to Google’s promise of being faster, the jury is still out on whether this latest Gemini 2.5 Pro actually performs better. 

  • Nintendo Switch 2 gets official gaming accessories from Belkin

    Belkin is officially breaking into the gaming accessories space with the launch of a new product portfolio designed specifically for the Nintendo Switch 2.
    Belkin is launching the accessories on the launch day for Nintendo’s newest hybrid game console, and all of the products are officially licensed to work with the Switch 2. If I were Nintendo, I would be happy about Belkin’s entry into this market as it validates the opportunity.
    The products include a Nintendo Switch 2 Charging Case, which combines protection and power with a built-in 10K battery, keeping your device charged and ready for seamless gaming on the go. 
    Belkin also has a new Nintendo Travel Case. It blends portability, premium protection, and sleek style for gamers on the go. 
    The company has also created its TemperedGlass Anti-Reflective Screen Protector for Switch 2. It offers durable, smudge-resistant protection with reduced glare for clearer visibility indoors and out. 
    A gaming power bank from Belkin.
    And Belkin has a TemperedGlass Blue Light Screen Protector for Switch 2. This delivers durable, smudge-resistant protection while reducing blue light for greater clarity and eye comfort during long gaming sessions.
    Logan Olson, director of Future Ventures at Belkin, said in an interview with GamesBeat that his group within Belkin is just a couple of years old. (Belkin itself has been around since 1983.) The company created the group to take it into new markets for accessories, such as gaming.
    Belkin has an anti-reflective protector for the Switch 2.
    “This segment of Belkin is extremely strategic,” Olson said. “Its purpose is to really expand Belkin into new categories. So new product categories, new completely different business models, as well as new partnerships. Now we’re delivering on the organization’s promise.”
    Olson said gaming is a natural fit for Belkin, as the company is known for its charging devices, protection, quality, sustainability and more — strengths it wants to bring into gaming. (Belkin was in gaming back in the early 2000s with the release of a gaming mouse.)
    “Of course, there’s PC gaming, console gaming, mobile gaming. But what really makes sense with mobile console gaming is Nintendo, which with the Switch One created this category. Nintendo has sold more than 150 million Switch devices since 2017, and now it’s finally launching the Switch 2.
    “Mobile console gaming is just that natural progression from where we are in the mobile phone space and the demographics of the category,” he said. “Unlike PC gaming and regular console gaming, it’s 50-50 male/female, which is fantastic. It’s unique to this category in gaming. As the Switch grew as a market, it really started to make sense for us.”
    Belkin Switch 2 accessories.
    Nintendo also spans the generations, hitting everyone from GenAlpha to Boomers. Belkin targeted pain points — like the risk of having a bigger screen scratched, or running out of batteries on the road. The aim was to provide quality, elevated accessories that people would be proud to show off.
    “Our intent is to elevate the entire Switch accessory market. We want to set a new standard, a new bar of quality,” Olson said.
    The Switch 2 power challenge is a big one because it draws more power than a smartphone and is more like an iPad in that respect. The battery life is about the same, but people want to play their games longer these days, Olson said. That’s where the charger adds value.
    Belkin has also built charging into its case for the Switch 2, so people can charge it inside the bag. It comes in three colors, and it has a net for storage and a place to put game cards. The 10,000 milliamp power bank is integrated into the bag, with 20 watts of fast charging. The Switch 2 has a USB-C charging port, so a number of Belkin charging products will work with it.
    Belkin’s new charging case lets you charge the Switch 2 while you play.
    The charger supports tabletop mode, where you can charge while gaming. Belkin noted that 55% of Switch players play in handheld mode. The price is expected to be $70, which is still subject to change if tariffs take effect. Special firmware in the charger regulates the charging temperature, which can’t get too hot inside a storage case. You can put an AirTag in the case to make sure you can find it. In terms of size, it’s like a clutch to carry around. The travel bag, meanwhile, will be about $30.
    As for screen protection, Belkin has long had products in Apple Stores to protect phone and tablet screens. Now you can get a Switch 2 anti-reflective screen protector ($30), which comes in packs of two or four. It makes the screen more scratch resistant, and it is made from recycled glass, making it better for the environment. The anti-reflective coating also lets you play games on the Switch 2 outdoors in direct sunlight. Belkin is also releasing a screen protector that filters out blue light ($25), so people can play at night and not have to worry the light will keep them up longer. Olson said that Belkin reached carbon neutrality this year and is incorporating responsibly sourced materials in its products.
    “Screen protection [is] so interesting because the amount of engineering and just science that has to go into it is insane. So all the way from the glass material itself and what you choose there, and then down to the glue and what you choose there, and down to what is on screen here, the UX — all of that is meticulously engineered, tested, re-engineered, retested,” Olson said.
    #nintendo #switch #gets #official #gaming
    Nintendo Switch 2 gets official gaming accessories from Belkin
    Belkin is officially breaking into the gaming accessories space with the launch of a new product portfolio designed specifically for the Nintendo Switch 2. Belkin is launching the accessories on the launch day for Nintendo’s newest hybrid game console, and all of the products are officially licensed to work with the Switch 2. If I were Nintendo, I would be happy about Belkin’s entry into this market as it validates the opportunity. The products include a Nintendo Switch 2 Charging Case, which combines protection and power with a built-in 10K battery, keeping your device charged and ready for seamless gaming on the go.  Belkin also has a new Nintendo Travel Case. It blends portability, premium protection, and sleek style for gamers on the go.  The company has also created its TemperedGlass Anti-Reflective Screen Protector for Switch 2. It offers durable, smudge-resistant protection with reduced glare for clearer visibility indoors and out.  A gaming power bank from Belkin. And Belkin has a TemperedGlass Blue Light Screen Protector for Switch 2. This delivers durable, smudge-resistant protection while reducing blue light for greater clarity and eye comfort during long gaming sessions. Logan Olson, director of Future Ventures at Belkin, said in an interview with GamesBeat that his group within Belkin is just a couple of years old.. And the company created the group to take it into new markets for accessories such as gaming. Belkin has an anti-reflective protector for the Switch 2. “This segment of Belkin is extremely strategic,” Olson said. “Its purpose is to really expand Belkin into new categories. So new product categories, new completely different business models, as well as new partnerships. Now we’re delivering on the organization’s promise.” Olson said gaming is a natural fit for Belkin as it is known for its charging devices, protection, quality, sustainability and more — and it wants to bring into gaming, he said.. 
“Of course, there’s PC gaming, console gaming, mobile gaming. But what really makes sense with mobile console gaming is Nintendo, which with the Switch One created this category. Nintendo has sold more than 150 million Switch devices since 2017, and now it’s finally launching the Switch 2. “Mobile console gaming is just that natural progression from where we are in the mobile phone space and the demographics of the category,” he said. “Unlike PC gaming and regular console gaming, it’s 50-50 male/female, which is fantastic. It’s unique to this category in gaming. As the Switch grew as a market, it really started to make sense for us.” Belkin Switch 2 accessories. Nintendo also spans the generations, hitting everyone from GenAlpha to Boomers. Belkin targeted pain points — like the risk of having a bigger screen scratched, or running out of batteries on the road. The aim was to provide quality, elevated accessories that people would be proud to show off. “Our intent is to elevate the entire Switch accessory market. We want to set a new standard, a new bar of quality,” Olson said. The Switch 2 power challenge is a big one because it draws more power than a smartphone and is more like an iPad in that respect. The battery life is about the same, but people want to play their games longer these days, Olson said. That’s where the charger adds value. Belkin has also built charging into its case for the Switch 2, so people can charge it inside the bag. It comes in three colors, and it has a net for storage and a place to put game cards. The 10,000 milliamp power bank is integrated into the bag, with 20 watts of fast charging. The Switch 2 has a USB-C charging port, so a number of Belkin charging products will work with it. Belkin’s new charging case lets you charge the Switch 2 while you play. The charger supports tabletop mode, where you can charge while gaming. Belkin noted that 55% of Switch players play in handheld mode. 
The price is expected to be which is still subject to changes if tariffs take effect. Special firmware in the charger regulates the charging temperature, which can’t get too hot inside a storage case. You can put an Air Tag in the case to make sure you can find it. In terms of the size, it’s like a clutch to carry around. Meanwhile, the travel bag will be about As for screen protection, Belkin has long had products in Apple Stores to protect phone and tablet screens. Now you can get a Switch 2 screen protector, so people can play at night and not have to worry the light will cause them to stay up longer. Olson said that Belkin reached carbon neutrality this year with its products and is incorporating responsibly sourced materials in its products. “Screen protection so interesting because the amount of engineering and just science that has to go into it is insane. So all the way from the glass material itself and what you choose there, and then down to the glue and what you choose there, and down to what is on screen here the UX — all of that is meticulously engineered, tested, re engineered, retested,” Olson said. #nintendo #switch #gets #official #gaming
    VENTUREBEAT.COM
    Nintendo Switch 2 gets official gaming accessories from Belkin
    Belkin is officially breaking into the gaming accessories space with the launch of a new product portfolio designed specifically for the Nintendo Switch 2. Belkin is launching the accessories on the launch day for Nintendo’s newest hybrid game console, and all of the products are officially licensed to work with the Switch 2. If I were Nintendo, I would be happy about Belkin’s entry into this market as it validates the opportunity. The products include a Nintendo Switch 2 Charging Case, which combines protection and power with a built-in 10K battery, keeping your device charged and ready for seamless gaming on the go.  Belkin also has a new Nintendo Travel Case. It blends portability, premium protection, and sleek style for gamers on the go.  The company has also created its TemperedGlass Anti-Reflective Screen Protector for Switch 2. It offers durable, smudge-resistant protection with reduced glare for clearer visibility indoors and out.  A gaming power bank from Belkin. And Belkin has a TemperedGlass Blue Light Screen Protector for Switch 2. This delivers durable, smudge-resistant protection while reducing blue light for greater clarity and eye comfort during long gaming sessions. Logan Olson, director of Future Ventures at Belkin, said in an interview with GamesBeat that his group within Belkin is just a couple of years old. (Belkin itself has been around since 1983). And the company created the group to take it into new markets for accessories such as gaming. Belkin has an anti-reflective protector for the Switch 2. “This segment of Belkin is extremely strategic,” Olson said. “Its purpose is to really expand Belkin into new categories. So new product categories, new completely different business models, as well as new partnerships. 
Now we’re delivering on the organization’s promise.” Olson said gaming is a natural fit for Belkin as it is known for its charging devices, protection, quality, sustainability and more — and it wants to bring into gaming, he said. (Belkin was in gaming back in the early 2000s with the release of a gaming mouse). “Of course, there’s PC gaming, console gaming, mobile gaming. But what really makes sense with mobile console gaming is Nintendo, which with the Switch One created this category. Nintendo has sold more than 150 million Switch devices since 2017, and now it’s finally launching the Switch 2. “Mobile console gaming is just that natural progression from where we are in the mobile phone space and the demographics of the category,” he said. “Unlike PC gaming and regular console gaming, it’s 50-50 male/female, which is fantastic. It’s unique to this category in gaming. As the Switch grew as a market, it really started to make sense for us.” Belkin Switch 2 accessories. Nintendo also spans the generations, hitting everyone from GenAlpha to Boomers. Belkin targeted pain points — like the risk of having a bigger screen scratched, or running out of batteries on the road. The aim was to provide quality, elevated accessories that people would be proud to show off. “Our intent is to elevate the entire Switch accessory market. We want to set a new standard, a new bar of quality,” Olson said. The Switch 2 power challenge is a big one because it draws more power than a smartphone and is more like an iPad in that respect. The battery life is about the same, but people want to play their games longer these days, Olson said. That’s where the charger adds value. Belkin has also built charging into its case for the Switch 2, so people can charge it inside the bag. It comes in three colors, and it has a net for storage and a place to put game cards. The 10,000 milliamp power bank is integrated into the bag, with 20 watts of fast charging. 
The Switch 2 has a USB-C charging port, so a number of Belkin charging products will work with it.

Belkin’s new charging case lets you charge the Switch 2 while you play.

The charger supports tabletop mode, where you can charge while gaming. Belkin noted that 55% of Switch players play in handheld mode. The price is expected to be $70, which is still subject to change if tariffs take effect. Special firmware in the charger regulates the charging temperature, which can’t get too hot inside a storage case. You can put an AirTag in the case to make sure you can find it. In terms of size, it’s like a clutch to carry around. Meanwhile, the travel bag will be about $30.

As for screen protection, Belkin has long had products in Apple Stores to protect phone and tablet screens. Now you can get a Switch 2 screen protector ($30), which comes in packs of two or four. It makes the screen more scratch resistant, and it’s better for the environment because it is made from recycled glass. It’s also anti-reflective, which lets you play games on the Switch 2 outdoors. That makes the device more usable in direct sunlight.

Belkin is also releasing a screen protector that can filter out blue light ($25), so people can play at night without worrying that the light will keep them up longer. Olson said that Belkin reached carbon neutrality this year and is incorporating responsibly sourced materials into its products.

“Screen protection is so interesting because the amount of engineering and just science that has to go into it is insane. So all the way from the glass material itself and what you choose there, and then down to the glue and what you choose there, and down to what is on screen, the UX — all of that is meticulously engineered, tested, re-engineered, retested,” Olson said.
  • Nintendo brings back late-night console launches with debut of Switch 2

Nintendo brought back late-night game console launches tonight as people across the world waited in line to get the Nintendo Switch 2.
  • Enterprise alert: PostgreSQL just became the database you can’t ignore for AI applications

Analysts provide insight on what the latest acquisition of a PostgreSQL database vendor means for enterprise data and AI.
  • Micro Center nerd store fills the Fry’s vacuum with its return to Silicon Valley

    Silicon Valley nerds have been lonelier since Fry’s Electronics shut down in February 2021 in the midst of the pandemic. The electronics store chain was an embodiment of the valley’s tech roots.
    But Micro Center, an electronics retailer from Ohio, has opened its 29th store in Santa Clara, California. And so the nerd kingdom has returned. I see this as a big deal, following up on the opening of the Nintendo store — the second in the country after New York — in San Francisco earlier this month. After years of bad economic news, it’s nice to see signs that the Bay Area is coming back.
    No. To answer your question, nerds cannot live at the Micro Center store.
    But this isn’t just any store. It’s a symbol — a sign that shows tech still has a physical presence in Silicon Valley, in addition to places like the Buck’s Restaurant, the Denny’s where Nvidia started, the Intel Museum, the Computer History Museum, the California Academy of Sciences and the Tech Museum of Innovation. Other historic hangouts for techies like Walker’s Wagon Wheel, Atari’s headquarters, Lion & Compass — even Circuit City — have long since closed. But hey, we’ve got the Micro Center store, and the Apple spaceship is not that far away.
    The grand opening week has been going well and I got a tour of the superstore from Dan Ackerman, a veteran tech journalist who is editor-in-chief at Micro Center News. As I walked into the place, Ackerman was finishing a chat with iFixit, a tech repair publication which has its own space for podcasts inside the store. That was unexpected, as I’ve never seen a store embrace social media in such a way.
    Can you stump the geniuses at the Knowledge Bar at Micro Center?
    Nearby was the Knowledge Bar, where you can get all your tech questions answered — much like the Genius Bars in Apple Stores. And there were repair tables out in the open.
There are a lot of things tech enthusiasts can like about Micro Center. First, it’s not as sprawling as Fry’s, which had zany themes like ancient Egypt and a weird mix of electronics goods as well as household appliances, cosmetics, magazines and tons of snack foods. (The Egyptian-themed Campbell, California Fry’s store that I drove by often was 156,000 square feet, and now it’s home to a pickleball court complex.) Fry’s was a store that played to the stereotypes of nerds and Silicon Valley — a region that also had its own HBO television show carrying on those stereotypes.
    Nvidia’s latest RTX 50 Series GPUs were in stock at Micro Center.
The Micro Center store, by contrast, is smaller at 40,000 square feet and stocked with many more practical nerd items. For the grand opening, this store had the very practical product of more than 4,000 graphics processing units (GPUs) in stock from Nvidia (which just launched its 50 Series GPUs) and AMD, Ackerman told me. Some of those graphics cards cost as much as $4,000. Not to be outdone, AMD has a row of GPUs at Micro Center too.
    “There were people waiting to get to the GPUs,” Ackerman said.
    On display was a gold-plated graphics card that was being auctioned off for charity. It was signed by Jensen Huang, Nvidia CEO.
    Nvidia CEO Jensen Huang signed this GPU being auctioned for charity at Micro Center.
    “I joke that whoever wins the bid should get a Jensen leather jacket as well,” said Ackerman.
And this Micro Center store has a good location (5201 Stevens Creek Boulevard in Santa Clara) that is just a six-minute drive from Apple’s worldwide headquarters and (perhaps better yet) a one-minute walk from the Korean Hair Salon.
Micro Center had a previous store in Silicon Valley, near Intel’s headquarters in Santa Clara. But that store closed in 2012 because the company couldn’t negotiate better terms with the landlord. For its return to the Bay Area, Micro Center bided its time and came back at a time when many other retail chains were failing. It proves that the once-proud region — the birthplace of electronics — still merits its own electronics store.
    You can buy dyes for liquid-cooled tubes at Micro Center.
    Sure, we have Target, Best Buy and Walmart selling lots of electronics gear. But there’s nothing like the Akihabara electronics district in Japan, which is full of multi-story electronics stores and gaming arcades.
But this store is loaded with today’s top gear, like AI PCs, Ubiquiti home networking gear, and dyes for multi-colored water-cooling systems. Vendors like Razer and Logitech had their own sections. Ackerman was pleased to show me the USB-C to USB-A adapters in stock, among many obscure items. And he showed me the inventory machine that rotates its stock of 3D-printing filaments and retrieves the exact SKU you scan with a bar code.
    Tech hobbyists can find their love at Micro Center.
    “That’s super fun. I call it Mr. Filaments,” Ackerman said of the inventory robot.
    There’s a section for hobbyists who like single-board computing and DIY projects. There’s a set of video, audio and digital content creation tools for content creators. All told, there are more than 20,000 products and over 100 tech experts who can help. It even has the numbered cashier locations where you can check out — the same kind of checkout stands that Fry’s had.
    The Mr. Filaments robot inventory system at Micro Center.
    Customers can receive authorized computer service for brands like Apple, Dell, and HP, benefiting from same-day diagnostics and repairs, thanks to over 3,000 parts on hand through partnerships with leading OEMs. I only wish it had a help desk for Comcast.
    Micro Center has gear to entertain geeks.
    Micro Center started in 1979 in Columbus, Ohio. It’s a surprise there aren’t more nerd stores, given how ubiquitous tech is around the world these days.
    But Ackerman said, “These guys are really doing it right, picking and choosing, finding the right cities, finding the right locations. That’s why Charlotte is great. Miami is a big tech hub, especially for health tech. And we’re literally five minutes away from Apple headquarters and plenty of other places. People from HP and Nvidia and other companies are coming in today to hang out.”
“Even though this store is big, the CEO (Richard Mershad) is really into curation, making sure it’s the right mix of stuff. He’s making sure it doesn’t go too far afield. So you’re not going to come in here and find, you know, hair dryers or lawncare equipment,” Ackerman said. “You’re going to find computer and home entertainment stuff, and DIY gear. There are components, just like in a Radio Shack, that hobbyists care about.”
    Dan Ackerman knows how to install a TV on your wall.
    As for the Micro Center News, Ackerman told me he has around 10 regular contributors and 20 more freelancers writing gadget reviews and other stories about tech gear. It is a kind of refuge for that vanishing breed of professional tech journalists. No wonder I was so nostalgic visiting Micro Center.
  • When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack

Claude 4’s “whistle-blow” surprise shows why agentic AI risk lives in prompts and tool access, not benchmarks. Learn the 6 controls every enterprise must adopt.
  • Augmented World Expo 2025 will draw 400 speakers, 6K attendees and 300 global exhibitors

    Augmented World Expo 2025 will draw more than 6,000 attendees, 400 speakers and 300 global exhibitors to its event June 10 to June 12 in Long Beach, California.
    The speaker lineup includes Snap CEO Evan Spiegel, Atari cofounder Nolan Bushnell and Oculus/Anduril founder Palmer Luckey. If the show is any indication, the XR industry isn’t doing so bad. A variety of market researchers are forecasting fast growth for the industry through 2030. Ori Inbar, CEO of AWE, believes that the XR revolution is “ready to conquer the mainstream.” But to get there, he believes the industry still needs to create “head-turning content that must be experienced.”
Of course, the red-hot days of the “metaverse,” inspired by Neal Stephenson’s 1992 sci-fi novel Snow Crash, are no longer driving the industry forward. With less focus on sci-fi, the industry is concentrating on practical uses for mixed reality technology in enterprise and consumer markets like gaming.
But will XR and the metaverse be overrun by AI, or will AI carry them to the mass market?
Much is riding on how committed Mark Zuckerberg’s Meta will be, even as it reprioritizes some resources away from XR to AI. Meta, which acquired Luckey’s Oculus back in 2014, has invested billions every quarter in the technology, with no profits so far. But, in a very unexpected turnaround, Zuckerberg and Luckey buried the hatchet over their past differences and set up an alliance between Meta and Anduril — the latter being Luckey’s AI/drone defense company.
    Zuckerberg has new competition from his own nemesis, Apple, which launched the Apple Vision Pro in February 2024. However, Apple has slowed down its development of the next-generation XR headset, while Zuckerberg has put more emphasis on AR/AI glasses.
    Spiegel, the CEO of Snap, has focused on augmented reality glasses. His Spectacles are now in their fifth generation, powered by the Snap OS and authoring tool Lens Studio.
Nolan Bushnell, founder of Atari and Chuck E. Cheese, will deliver a one-of-a-kind talk on the main stage with five of his children, who are continuing his pioneering vision in gaming through XR. Brent Bushnell, Nolan’s eldest son, recently debuted DreamPark, a new XR startup that turns any park or playground into a mixed reality theme park.
Other speakers include Vicki Dobbs Beck, VP of immersive content innovation at Lucasfilm & ILM Immersive; Ziad Asghar, SVP and GM of XR at Qualcomm; Brian McClendon, chief technology officer at Niantic Spatial, Inc.; Jason Rubin, VP of metaverse experiences at Meta; Hugo Swart, senior director of XR ecosystem strategy and technology at Google; Jacqui Bransky, VP of Web3 and innovation at Warner Records; Chi Xu, CEO and founder of XREAL; Helen Papagiannis, AR pioneer and XR Hall of Famer; and Tom Furness, grandfather of VR and founder of the Virtual World Society.
    AWE Builders Nexus will be a new program focused on startups this year. Startup founders, developers, designers, product managers, and business leaders alike will get the resources they need to build something extraordinary, get advice and funding, scale through partnerships, and win customers, Inbar said. The event will also feature the AWE Gaming Hub.
I also interviewed some companies that are showcasing technology at the show. Here are some snippets of what they plan to show.
    Pico VR
    Pico started out in Beijing, China, in 2015 and is now hitting its 10th anniversary. It is making the standalone Pico XR headsets, and it was acquired by ByteDance, the owner of TikTok, in 2021. In September 2024, the company launched the Pico 4 Ultra Enterprise headset, filling out the high end of its product line in addition to its G3 and Neo 3 legacy headsets.
Pico has also added a set of full-body motion trackers to its product offerings to allow for full-body and object tracking. That’s helping with its focus on location-based entertainment in markets such as China. It’s focused on Wi-Fi 7, hand tracking and motion tracking.
Leland Hedges, head of enterprise business at Pico, said that the LBE market in China has grown by 1,000% in the last six to nine months. Pico has an app for PC streaming and another app for managing devices over a LAN. Pico can track play spaces with columns or cordoned-off areas. Hedges said the company will share 15 different user stories at AWE from public places such as zoos, museums, aquariums and planetariums.
    Convai
Purnendu Mukherjee, CEO of Convai, showed me a bunch of demos at the Game Developers Conference, where the company has created avatar-based demos of generative AI solutions with 3D animated people. These can be used to show off brands and greet people on websites, or as avatars in games.
At AWE, Convai will also offer learning and training scenarios for education and enterprises through a variety of simulations. Convai can render high-fidelity avatars that are effectively streamed from the cloud. At GDC, Convai scanned me and captured my voice so that it could create a lifelike avatar of me. These avatars can be created quickly and can answer a variety of questions from website visitors. The idea is to enable non-technical people to create simulations without needing to code anything.
In a demo, Convai’s avatar of me said, “I’ve been covering the games industry for many years now at GamesBeat. I’ve seen it evolve from the arcades to the massive global phenomenon it is today. I love digging into the business side of gaming, the technology, the culture, the whole shebang.” Convai will announce pricing for its self-serve platform as well as an enterprise subscription fee.
    Doublepoint
Ohto Pentikäinen, CEO of Doublepoint, has technology that detects the gestures you make with your hand. It captures that movement via a smartwatch and lets you control things on a TV interface or an XR device. With Android XR, Doublepoint is showing off demos where gesture control can unlock a more intuitive and comfortable augmented reality experience for those wearing AR glasses. Xreal is one of the glasses makers using the technology to control an AR user interface with gestures.
“Our technology is able to fully control an XR system. A stat that we can update you on is that there are 150,000 people who have downloaded the technology so far, and we have a developer community of over 2,000 people since January 2024,” Pentikäinen said.
Now the company is starting its own Doublepoint developer program, which adds a layer on top of its enterprise client business. The company can now provide technology for indie developers or startups that are building augmented reality or AI hardware experiences.
    “We’re empowering developers in AR robotics and AI hardware, and we’re providing everything that we’re providing the enterprise clients, but for a much reduced price,” Pentikäinen said.
  • The future of engineering belongs to those who build with AI, not without it


    When Salesforce CEO Marc Benioff recently announced that the company would not hire any more engineers in 2025, citing a “30% productivity increase on engineering” due to AI, it sent ripples through the tech industry. Headlines quickly framed this as the beginning of the end for human engineers — AI was coming for their jobs.
    But those headlines miss the mark entirely. What’s really happening is a transformation of engineering itself. Gartner named agentic AI as its top tech trend for this year. The firm also predicts that 33% of enterprise software applications will include agentic AI by 2028 — a significant portion, but far from universal adoption. The extended timeline suggests a gradual evolution rather than a wholesale replacement. The real risk isn’t AI taking jobs; it’s engineers who fail to adapt and are left behind as the nature of engineering work evolves.
    The reality across the tech industry reveals an explosion of demand for engineers with AI expertise. Professional services firms are aggressively recruiting engineers with generative AI experience, and technology companies are creating entirely new engineering positions focused on AI implementation. The market for professionals who can effectively leverage AI tools is extraordinarily competitive.
    While claims of AI-driven productivity gains may be grounded in real progress, such announcements often reflect investor pressure for profitability as much as technological advancement. Many companies are adept at shaping narratives to position themselves as leaders in enterprise AI — a strategy that aligns well with broader market expectations.
    How AI is transforming engineering work
    The relationship between AI and engineering is evolving in four key ways, each representing a distinct capability that augments human engineering talent but certainly doesn’t replace it. 
    First, AI excels at summarization, helping engineers distill massive codebases, documentation and technical specifications into actionable insights. Rather than spending hours poring over documentation, engineers can get AI-generated summaries and focus on implementation.
    Second, AI’s inferencing capabilities allow it to analyze patterns in code and systems and proactively suggest optimizations. This empowers engineers to identify potential bugs and make informed decisions more quickly and with greater confidence.
    Third, AI has proven remarkably adept at converting code between languages. This capability is proving invaluable as organizations modernize their tech stacks and attempt to preserve institutional knowledge embedded in legacy systems.
    Finally, the true power of gen AI lies in its expansion capabilities — creating novel content like code, documentation or even system architectures. Engineers are using AI to explore more possibilities than they could alone, and we’re seeing these capabilities transform engineering across industries. 
    In healthcare, AI helps create personalized medical instruction systems that adjust based on a patient’s specific conditions and medical history. In pharmaceutical manufacturing, AI-enhanced systems optimize production schedules to reduce waste and ensure an adequate supply of critical medications. Major banks have invested in gen AI for longer than most people realize, too; they are building systems that help manage complex compliance requirements while improving customer service. 
    The new engineering skills landscape
    As AI reshapes engineering work, it’s creating entirely new in-demand specializations and skill sets, like the ability to effectively communicate with AI systems. Engineers who excel at working with AI can extract significantly better results.
    Similar to how DevOps emerged as a discipline, large language model operations (LLMOps) focuses on deploying, monitoring and optimizing LLMs in production environments. Practitioners of LLMOps track model drift, evaluate alternative models and help ensure consistent quality of AI-generated outputs.
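    In its simplest form, the drift tracking described above amounts to comparing an output-quality metric across time windows. A minimal sketch, assuming a hypothetical eval harness that scores each model output between 0 and 1 (the function name and threshold are illustrative, not any specific LLMOps product):

```python
from statistics import mean

def detect_drift(baseline_scores, recent_scores, tolerance=0.1):
    """Flag drift when the mean quality score of recent outputs
    falls more than `tolerance` below the baseline window."""
    return (mean(baseline_scores) - mean(recent_scores)) > tolerance

# Quality scores (e.g. from an eval harness) for two time windows.
baseline = [0.91, 0.88, 0.90, 0.93]  # mean ~0.905
recent = [0.72, 0.70, 0.75, 0.71]    # mean ~0.72, a drop of ~0.18
print(detect_drift(baseline, recent))
```

    Production systems would layer on statistical tests and per-category breakdowns, but the core loop is the same: score, window, compare.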
    Creating standardized environments where AI tools can be safely and effectively deployed is becoming crucial. Platform engineering provides templates and guardrails that enable engineers to build AI-enhanced applications more efficiently. This standardization helps ensure consistency, security and maintainability across an organization’s AI implementations.
    Human-AI collaboration ranges from AI merely providing recommendations that humans may ignore, to fully autonomous systems that operate independently. The most effective engineers understand when and how to apply the appropriate level of AI autonomy based on the context and consequences of the task at hand. 
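    That autonomy spectrum can be made concrete as a gate that decides whether an AI suggestion is applied automatically, held for approval, or merely surfaced. The levels and return strings below are illustrative assumptions, not a standard taxonomy:

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST = 1   # human may freely ignore the recommendation
    APPROVE = 2   # human must approve before it is applied
    AUTO = 3      # system acts independently

def apply_suggestion(level, suggestion, human_approved=False):
    """Return the action taken for a suggestion at a given autonomy level."""
    if level is Autonomy.AUTO:
        return f"applied: {suggestion}"
    if level is Autonomy.APPROVE and human_approved:
        return f"applied: {suggestion}"
    return f"pending review: {suggestion}"

print(apply_suggestion(Autonomy.AUTO, "bump retry limit"))
```

    Choosing the level per task, rather than globally, is what the article means by matching autonomy to context and consequences.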
    Keys to successful AI integration
    Effective AI governance frameworks — a category that ranks No. 2 on Gartner’s top trends list — establish clear guidelines while leaving room for innovation. These frameworks address ethical considerations, regulatory compliance and risk management without stifling the creativity that makes AI valuable.
    Rather than treating security as an afterthought, successful organizations build it into their AI systems from the beginning. This includes robust testing for vulnerabilities like hallucinations, prompt injection and data leakage. By incorporating security considerations into the development process, organizations can move quickly without compromising safety.
    Engineers who can design agentic AI systems create significant value. We’re seeing systems where one AI model handles natural language understanding, another performs reasoning and a third generates appropriate responses, all working in concert to deliver better results than any single model could provide.
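    The division of labor described above — one model for understanding, one for reasoning, one for generating — can be sketched as a staged pipeline. The stage functions here are hypothetical stand-ins for separate model calls, not a real framework:

```python
def understand(text: str) -> dict:
    """Stage 1: stand-in for an NLU model that extracts a crude intent."""
    intent = "refund" if "refund" in text.lower() else "other"
    return {"intent": intent, "text": text}

def reason(parsed: dict) -> dict:
    """Stage 2: stand-in for a reasoning model that decides an action."""
    action = "issue_refund" if parsed["intent"] == "refund" else "escalate"
    return {**parsed, "action": action}

def respond(decision: dict) -> str:
    """Stage 3: stand-in for a generation model producing the reply."""
    replies = {"issue_refund": "Your refund has been initiated.",
               "escalate": "Let me connect you with a specialist."}
    return replies[decision["action"]]

def pipeline(text: str) -> str:
    # In a real agentic system, each stage is a different model
    # behind an API boundary, each tuned for its specialty.
    return respond(reason(understand(text)))

print(pipeline("I want a refund for my order"))
```

    The point of the pattern is that each stage can be swapped or upgraded independently, which no single monolithic model allows.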
    As we look ahead, the relationship between engineers and AI systems will likely evolve from tool and user to something more symbiotic. Today’s AI systems are powerful but limited; they lack true understanding and rely heavily on human guidance. Tomorrow’s systems may become true collaborators, proposing novel solutions beyond what engineers might have considered and identifying potential risks humans might overlook.
    Yet the engineer’s essential role — understanding requirements, making ethical judgments and translating human needs into technological solutions — will remain irreplaceable. In this partnership between human creativity and AI, there lies the potential to solve problems we’ve never been able to tackle before — and that’s anything but a replacement.
    Rizwan Patel is head of information security and emerging technology at Altimetrik. 

    VENTUREBEAT.COM
    The future of engineering belongs to those who build with AI, not without it
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More When Salesforce CEO Marc Benioff recently announced that the company would not hire any more engineers in 2025, citing a “30% productivity increase on engineering” due to AI, it sent ripples through the tech industry. Headlines quickly framed this as the beginning of the end for human engineers — AI was coming for their jobs. But those headlines miss the mark entirely. What’s really happening is a transformation of engineering itself. Gartner named agentic AI as its top tech trend for this year. The firm also predicts that 33% of enterprise software applications will include agentic AI by 2028 — a significant portion, but far from universal adoption. The extended timeline suggests a gradual evolution rather than a wholesale replacement. The real risk isn’t AI taking jobs; it’s engineers who fail to adapt and are left behind as the nature of engineering work evolves. The reality across the tech industry reveals an explosion of demand for engineers with AI expertise. Professional services firms are aggressively recruiting engineers with generative AI experience, and technology companies are creating entirely new engineering positions focused on AI implementation. The market for professionals who can effectively leverage AI tools is extraordinarily competitive. While claims of AI-driven productivity gains may be grounded in real progress, such announcements often reflect investor pressure for profitability as much as technological advancement. Many companies are adept at shaping narratives to position themselves as leaders in enterprise AI — a strategy that aligns well with broader market expectations. How AI is transforming engineering work The relationship between AI and engineering is evolving in four key ways, each representing a distinct capability that augments human engineering talent but certainly doesn’t replace it.  
AI excels at summarization, helping engineers distill massive codebases, documentation and technical specifications into actionable insights. Rather than spending hours poring over documentation, engineers can get AI-generated summaries and focus on implementation. Also, AI’s inferencing capabilities allow it to analyze patterns in code and systems and proactively suggest optimizations. This empowers engineers to identify potential bugs and make informed decisions more quickly and with greater confidence. Third, AI has proven remarkably adept at converting code between languages. This capability is proving invaluable as organizations modernize their tech stacks and attempt to preserve institutional knowledge embedded in legacy systems. Finally, the true power of gen AI lies in its expansion capabilities — creating novel content like code, documentation or even system architectures. Engineers are using AI to explore more possibilities than they could alone, and we’re seeing these capabilities transform engineering across industries.  In healthcare, AI helps create personalized medical instruction systems that adjust based on a patient’s specific conditions and medical history. In pharmaceutical manufacturing, AI-enhanced systems optimize production schedules to reduce waste and ensure an adequate supply of critical medications. Major banks have invested in gen AI for longer than most people realize, too; they are building systems that help manage complex compliance requirements while improving customer service.  The new engineering skills landscape As AI reshapes engineering work, it’s creating entirely new in-demand specializations and skill sets, like the ability to effectively communicate with AI systems. Engineers who excel at working with AI can extract significantly better results. Similar to how DevOps emerged as a discipline, large language model operations (LLMOps) focuses on deploying, monitoring and optimizing LLMs in production environments. 
    Practitioners of LLMOps track model drift, evaluate alternative models and help to ensure consistent quality of AI-generated outputs.

    Creating standardized environments where AI tools can be safely and effectively deployed is becoming crucial. Platform engineering provides templates and guardrails that enable engineers to build AI-enhanced applications more efficiently. This standardization helps ensure consistency, security and maintainability across an organization’s AI implementations.

    Human-AI collaboration ranges from AI merely providing recommendations that humans may ignore, to fully autonomous systems that operate independently. The most effective engineers understand when and how to apply the appropriate level of AI autonomy based on the context and consequences of the task at hand.

    Keys to successful AI integration
    Effective AI governance frameworks — which rank No. 2 on Gartner’s top trends list — establish clear guidelines while leaving room for innovation. These frameworks address ethical considerations, regulatory compliance and risk management without stifling the creativity that makes AI valuable.

    Rather than treating security as an afterthought, successful organizations build it into their AI systems from the beginning. This includes robust testing for vulnerabilities like hallucinations, prompt injection and data leakage. By incorporating security considerations into the development process, organizations can move quickly without compromising safety.

    Engineers who can design agentic AI systems create significant value. We’re seeing systems where one AI model handles natural language understanding, another performs reasoning and a third generates appropriate responses, all working in concert to deliver better results than any single model could provide.

    As we look ahead, the relationship between engineers and AI systems will likely evolve from tool and user to something more symbiotic.
    Today’s AI systems are powerful but limited; they lack true understanding and rely heavily on human guidance. Tomorrow’s systems may become true collaborators, proposing novel solutions beyond what engineers might have considered and identifying potential risks humans might overlook. Yet the engineer’s essential role — understanding requirements, making ethical judgments and translating human needs into technological solutions — will remain irreplaceable. In this partnership between human creativity and AI lies the potential to solve problems we’ve never been able to tackle before — and that’s anything but a replacement.

    Rizwan Patel is head of information security and emerging technology at Altimetrik.
  • ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking


    AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models.
    Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing.
    This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications.
    The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3.
    It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature.
    According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences.
    Enhancing naturalistic speech
    A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model.
    This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems.
    By analyzing conversational cues like hesitations and filler words in real time, the agent can understand when to speak and when to listen.
    This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation.
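ElevenLabs has not published the internals of its turn-taking model, so as a purely illustrative sketch, here is a toy rule-based end-of-turn detector keyed to the same cues the article mentions (hesitations, filler words, pause length). The filler list and every threshold are invented for illustration; the real model is a trained neural component, not a rule list.

```python
# Toy end-of-turn heuristic. Illustrative only: ElevenLabs' turn-taking
# model is learned, and these fillers and thresholds are assumptions.

FILLERS = {"um", "uh", "hmm", "like", "so", "well"}

def should_agent_speak(last_words: list[str], pause_ms: int) -> bool:
    """Return True if the agent should take the conversational turn."""
    if not last_words:
        return pause_ms > 1500          # silence from the very start
    if last_words[-1].lower().strip(",.") in FILLERS:
        return pause_ms > 2000          # hesitation: wait longer
    if last_words[-1].endswith((".", "?", "!")):
        return pause_ms > 400           # finished thought: respond quickly
    return pause_ms > 1200              # mid-sentence trail-off
```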
    Multilingual support
    Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need for manual configuration.
    This capability ensures that the agent can recognize the language spoken by the user and respond accordingly within the same interaction.
    The feature caters to global enterprises seeking consistent service for diverse customer bases, removing language barriers and fostering more inclusive experiences.
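As a toy illustration of what automatic language identification involves (not ElevenLabs' detector, which would be a trained model), a stopword-overlap heuristic can route an utterance to a language:

```python
# Toy language identification by stopword overlap. Real systems use a
# trained language-ID model; the word lists here are illustrative.

STOPWORDS = {
    "en": {"the", "and", "is", "to", "of"},
    "es": {"el", "la", "y", "que", "de"},
    "fr": {"le", "et", "est", "dans", "de"},
}

def detect_language(text: str) -> str:
    """Return the language whose stopword set best overlaps the text."""
    words = set(text.lower().split())
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))
```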
    Enterprise-grade
    One of the more powerful additions is the built-in Retrieval-Augmented Generation (RAG) system. This feature allows the AI to access external knowledge bases and retrieve relevant information instantly, while maintaining minimal latency and strong privacy protections.
    For example, in healthcare settings, this means a medical assistant agent can pull up treatment guidelines directly from an institution’s database without delay. In customer support, agents can access up-to-date product details from internal documentation to assist users more effectively.
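The article doesn't detail ElevenLabs' RAG internals, but the general pattern is retrieval followed by prompt assembly. The sketch below substitutes naive keyword-overlap retrieval for the vector search a production system would use; all names are illustrative.

```python
# Generic sketch of retrieval-augmented generation (RAG), not
# ElevenLabs' implementation. Retrieval here is naive keyword overlap;
# production systems use embeddings and feed the prompt to a real LLM.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query, keep the top k."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the grounded prompt an LLM would answer from."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```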
    Multimodality and alternate personas
    In addition to these core features, ElevenLabs’ new platform supports multimodality, meaning agents can communicate via voice, text, or a combination of both. This flexibility reduces the engineering burden on developers, as agents only need to be defined once to operate across different communication channels.
    Further enhancing agent expressiveness, Conversational AI 2.0 allows multi-character mode, enabling a single agent to switch between different personas. This capability could be valuable in scenarios such as creative content development, training simulations, or customer engagement campaigns.
    Batch outbound calling
    For enterprises looking to automate large-scale outreach, the platform now supports batch calls.
    Organizations can initiate multiple outbound calls simultaneously using Conversational AI agents, an approach well-suited for surveys, alerts, and personalized messages.
    This feature aims to increase both reach and operational efficiency, offering a more scalable alternative to manual outbound efforts.
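Mechanically, batch outbound calling amounts to dispatching many calls under a concurrency cap. This hypothetical sketch is not the ElevenLabs API; place_call is a stub standing in for the provider's dial-out request, and the cap mirrors a plan's concurrency limit.

```python
# Hypothetical batch-call dispatcher with a concurrency cap.
# place_call is a stub; a real version would hit the provider's API.

from concurrent.futures import ThreadPoolExecutor

def place_call(number: str) -> str:
    return f"called {number}"  # stub result standing in for a real call

def batch_call(numbers: list[str], concurrency: int = 10) -> list[str]:
    """Run calls in parallel without exceeding the concurrency limit."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(place_call, numbers))
```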
    Enterprise-grade standards and pricing plans
    Beyond the features that enhance communication and engagement, Conversational AI 2.0 places a strong emphasis on trust and compliance. The platform is fully HIPAA-compliant, a critical requirement for healthcare applications that demand strict privacy and data protection. It also supports optional EU data residency, aligning with data sovereignty requirements in Europe.
    ElevenLabs reinforces these compliance-focused features with enterprise-grade security and reliability. Designed for high availability and integration with third-party systems, Conversational AI 2.0 is positioned as a secure and dependable choice for businesses operating in sensitive or regulated environments.
    As far as pricing is concerned, here are the subscription plans that include Conversational AI, as currently listed on ElevenLabs’ website:

    Free: $0/month, includes 15 minutes, 4 concurrency limit, requires attribution and no commercial licensing.
    Starter: $5/month, includes 50 minutes, 6 concurrency limit.
    Creator: $11/month (discounted from $22), includes 250 minutes, 6 concurrency limit, ~$0.12 per additional minute.
    Pro: $99/month, includes 1,100 minutes, 10 concurrency limit, ~$0.11 per additional minute.
    Scale: $330/month, includes 3,600 minutes, 20 concurrency limit, ~$0.10 per additional minute.
    Business: $1,320/month, includes 13,750 minutes, 30 concurrency limit, ~$0.096 per additional minute.
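As a worked example of the overage arithmetic, using ElevenLabs' published Creator-plan figures at the time of writing ($11/month, 250 included minutes, roughly $0.12 per additional minute):

```python
# Worked overage-cost example with ElevenLabs' published Creator-plan
# figures: $11/month, 250 included minutes, ~$0.12 per extra minute.

def monthly_cost(minutes_used: int, base: float = 11.0,
                 included: int = 250, per_extra_min: float = 0.12) -> float:
    """Base subscription plus per-minute overage, rounded to cents."""
    extra = max(0, minutes_used - included)
    return round(base + extra * per_extra_min, 2)
```

For example, 300 minutes on the Creator plan would run about $17: the $11 base plus 50 extra minutes at roughly $0.12 each.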

    A new chapter in realistic, naturalistic AI voice interactions
    As stated in the company’s video introducing the new release, “The potential of conversational AI has never been greater. The time to build is now.”
    With Conversational AI 2.0, ElevenLabs aims to provide the tools and infrastructure for enterprises to create truly intelligent, context-aware voice agents that elevate the standard of digital interactions.
    For those interested in learning more, ElevenLabs encourages developers and organizations to explore its documentation, visit the developer portal, or reach out to the sales team to see how Conversational AI 2.0 can enhance their customer experiences.

  • ZeniMax union reaches tentative agreement with Microsoft

    Unionized quality assurance workers at ZeniMax Media announced today that they had reached a tentative contract agreement with Microsoft, ZeniMax’s parent company. The workers, who are represented by the Communications Workers of America, have spent two years negotiating this contract, which they say “sets new standards for the industry,” including new wage agreements and policies on AI tools.
    According to the union, the new agreement includes new minimum wages, wage increases across the board, arbitrary dismissal protections, grievance procedures and a new crediting policy that ensures QA workers are included in a game’s credits. The agreement also incorporates previously agreed-upon rules about the use of AI in the workplace.
    The CWA has previously reported that the lengthy negotiations had stalled over disagreements about labor practices. The union members even voted to authorize a strike in April, noting that they hadn’t been able to come to an accord with Microsoft over a lack of remote work options and the replacement of in-house workers with outsourced contract labor.
    CWA President Claude Cummings said in a statement, “Workers in the video game industry are demonstrating once again that collective power works. This agreement shows what’s possible when workers stand together and refuse to accept the status quo. Whether it’s having a say about the use of AI in the workplace, fighting for significant wage increases and fair crediting policies, or protecting workers from retaliation, our members have raised the bar. We’re proud to support them every step of the way.”
  • QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs


    Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. This development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts.
    The challenge of long-form reasoning for AI
    Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human “slow thinking,” where they develop sophisticated strategies to tackle complex tasks.
    However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. “This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments,” the developers of QwenLong-L1 write in their paper.
    The researchers formalize these challenges into the concept of “long-context reasoning RL.” Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to retrieve and ground relevant information from lengthy inputs accurately. Only then can they generate chains of reasoning based on this incorporated information. 
    Training models for this through RL is tricky and often results in inefficient learning and unstable optimization processes. Models struggle to converge on good solutions or lose their ability to explore diverse reasoning paths.
    QwenLong-L1: A multi-stage approach
    QwenLong-L1 is a reinforcement learning framework designed to help LRMs transition from proficiency with short texts to robust generalization across long contexts. The framework enhances existing short-context LRMs through a carefully structured, multi-stage process:
    Warm-up Supervised Fine-Tuning: The model first undergoes an SFT phase, where it is trained on examples of long-context reasoning. This stage establishes a solid foundation, enabling the model to ground information accurately from long inputs. It helps develop fundamental capabilities in understanding context, generating logical reasoning chains, and extracting answers.
    Curriculum-Guided Phased RL: At this stage, the model is trained through multiple phases, with the target length of the input documents gradually increasing. This systematic, step-by-step approach helps the model stably adapt its reasoning strategies from shorter to progressively longer contexts. It avoids the instability often seen when models are abruptly trained on very long texts.
    Difficulty-Aware Retrospective Sampling: The final training stage incorporates challenging examples from the preceding training phases, ensuring the model continues to learn from the hardest problems. This prioritizes difficult instances and encourages the model to explore more diverse and complex reasoning paths.
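The three stages above can be sketched schematically. This is a paraphrase of the paper's description, not Alibaba's released code: sft_step, rl_step, and reward_fn are placeholder callables, and the difficulty cutoff (reward below 0.5) and final sample size are assumptions.

```python
# Schematic sketch of QwenLong-L1's three training stages. Not the
# released implementation: sft_step/rl_step/reward_fn are placeholders,
# and the difficulty cutoff (reward < 0.5) is an assumption.

import random

def train(model, sft_data, rl_phases, sft_step, rl_step, reward_fn):
    # Stage 1: warm-up supervised fine-tuning on long-context examples.
    for ex in sft_data:
        sft_step(model, ex)

    hard_pool = []  # difficult examples retained for the final stage
    # Stage 2: curriculum-guided phased RL; each phase raises the max length.
    for max_len, batch in rl_phases:
        for ex in batch:
            if len(ex["input"]) > max_len:
                continue  # defer longer inputs to a later phase
            r = reward_fn(model, ex)
            rl_step(model, ex, r)
            if r < 0.5:  # low reward marks the example as hard
                hard_pool.append(ex)

    # Stage 3: difficulty-aware retrospective sampling of the hardest cases.
    for ex in random.sample(hard_pool, k=min(len(hard_pool), 32)):
        rl_step(model, ex, reward_fn(model, ex))
    return hard_pool
```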
    QwenLong-L1 process (Source: arXiv)
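The staged curriculum and retrospective sampling described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the example records, the `accuracy` field (used here as a proxy for difficulty, with lower accuracy meaning harder), and the `keep_hardest` fraction are all assumptions for demonstration.

```python
def build_phase_batches(examples, phase_lengths, keep_hardest=0.25):
    """Sketch of curriculum-guided phased RL with difficulty-aware
    retrospective sampling.

    examples: dicts with 'context_len' and 'accuracy' (mean reward
      observed so far; lower accuracy = harder example).
    phase_lengths: increasing max context lengths, one per phase.
    keep_hardest: fraction of earlier-phase examples re-injected.
    """
    batches, seen, prev_len = [], [], 0
    for max_len in phase_lengths:
        # Curriculum: only examples whose length fits this phase's target.
        current = [e for e in examples
                   if prev_len < e["context_len"] <= max_len]
        # Retrospective sampling: carry forward the hardest examples
        # (lowest observed accuracy) from preceding phases.
        hardest = sorted(seen, key=lambda e: e["accuracy"])
        carry = hardest[: int(len(hardest) * keep_hardest)]
        batches.append(current + carry)
        seen.extend(current)
        prev_len = max_len
    return batches
```

Each phase thus trains mostly on inputs near its target length while the hardest earlier problems keep reappearing, which mirrors the stability-plus-exploration goal the paper describes.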
    Beyond this structured training, QwenLong-L1 also uses a distinct reward system. While training for short-context reasoning tasks often relies on strict rule-based rewards (e.g., a correct answer in a math problem), QwenLong-L1 employs a hybrid reward mechanism. This combines rule-based verification, which ensures precision by checking for strict adherence to correctness criteria, with an “LLM-as-a-judge.” This judge model compares the semantic content of the generated answer with the ground truth, allowing for more flexibility and better handling of the diverse ways correct answers can be expressed when dealing with long, nuanced documents.
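A minimal sketch of such a hybrid reward, assuming the two signals are combined by taking the more generous one. All function names are hypothetical, and `judge_reward` is a crude token-overlap stand-in for what would really be a call to a judge model scoring semantic equivalence.

```python
def rule_reward(answer: str, gold: str) -> float:
    """Strict rule-based check: exact match after whitespace/case
    normalization, as in short-context verifiable-reward setups."""
    norm = lambda s: " ".join(s.lower().split())
    return 1.0 if norm(answer) == norm(gold) else 0.0

def judge_reward(answer: str, gold: str) -> float:
    """Stand-in for an LLM-as-a-judge call. A real implementation
    would prompt a judge model for a semantic-equivalence score;
    a token-overlap (Jaccard) ratio keeps this sketch runnable."""
    a, g = set(answer.lower().split()), set(gold.lower().split())
    return len(a & g) / len(a | g) if a | g else 0.0

def hybrid_reward(answer: str, gold: str) -> float:
    # Take the more generous of the two signals, so a paraphrased
    # but correct answer is not zeroed out by the strict rule.
    return max(rule_reward(answer, gold), judge_reward(answer, gold))
```

The design point is the `max`: the rule keeps rewards precise when answers match exactly, while the judge prevents correct-but-differently-worded answers (common with long documents) from being scored as failures.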
    Putting QwenLong-L1 to the test
    The Alibaba team evaluated QwenLong-L1 using document question answering (DocQA) as the primary task. This scenario is highly relevant to enterprise needs, where AI must understand dense documents to answer complex questions.
    Experimental results across seven long-context DocQA benchmarks showed QwenLong-L1’s capabilities. Notably, the QWENLONG-L1-32B model (based on DeepSeek-R1-Distill-Qwen-32B) achieved performance comparable to Anthropic’s Claude-3.7 Sonnet Thinking, and outperformed models like OpenAI’s o3-mini and Qwen3-235B-A22B. The smaller QWENLONG-L1-14B model also outperformed Google’s Gemini 2.0 Flash Thinking and Qwen3-32B.
    Source: arXiv
    An important finding relevant to real-world applications is how RL training results in the model developing specialized long-context reasoning behaviors. The paper notes that models trained with QwenLong-L1 become better at “grounding” (linking answers to specific parts of a document), “subgoal setting” (breaking down complex questions), “backtracking” (recognizing and correcting their own mistakes mid-reasoning), and “verification” (double-checking their answers).
    For instance, while a base model might get sidetracked by irrelevant details in a financial document or get stuck in a loop of over-analyzing unrelated information, the QwenLong-L1 trained model demonstrated an ability to engage in effective self-reflection. It could successfully filter out these distractor details, backtrack from incorrect paths, and arrive at the correct answer.
    Techniques like QwenLong-L1 could significantly expand the utility of AI in the enterprise. Potential applications include legal tech (analyzing thousands of pages of legal documents), finance (deep research on annual reports and financial filings for risk assessment or investment opportunities) and customer service (analyzing long customer interaction histories to provide more informed support). The researchers have released the code for the QwenLong-L1 recipe and the weights for the trained models.

  • Bitkraft doubles down on Web3 games with added leaders

    Bitkraft Ventures, an early-stage investment firm focused on games, has announced new leaders who will double down on Web3 gaming.
  • CampFire Studio will launch Soulmask DLC on June 5

    Soulmask, a survival sandbox game developed by CampFire Studio and published by Qooland Games, has announced its first major cultural expansion, the free Golden Legend (Sanxingdui) DLC, coming on June 5.
    Inspired by ancient Sanxingdui Chinese civilization, the new DLC marks a turning point in Soulmask’s journey. What began in the primal chaos of rainforest survival pushes into uncharted territory, fusing ancient mythical symbolism with sandbox survival exploration systems.
    This DLC introduces new masks, exploration zones, and a collection of ornate bronze furnishings that allow players to shape their interpretations of a long-lost ritual civilization.
    The free Golden Legend DLC introduces The Golden Mask, an ornate ritual artifact adorned with copper eye protrusions and engraved Kui dragon patterns. Its powers include Divine Sight, which detects threats and terrain changes from a distance; Heaven’s Watch, which can analyze enemy stats, specialties, and potential; Sunbird Blessing, which yields passive buffs that enhance movement and perception; and The Sunken Altar, where deep in the ocean you find a lost branch of Eastern civilization. It’s a new submerged zone that welcomes exploration, featuring shipwreck ruins, ritual relics, and ceremonial architecture wrapped in mythological symbolism.
    The DLC also features The Golden Legend Set, a new line of Bronze Age–themed furniture, mask displays, and ornamental props that let players transform their homesteads into stylized ancestral sanctuaries.
    Smarter survival through automation
    Soulmask players can explore The Golden Legend in free DLC.
    Coinciding with the DLC, Soulmask is rolling out core gameplay upgrades to the base game, focused on advancing automation in construction and logistics.
    This includes a building planning mode, where players can now record the plans of any custom-built structure.
    Tribesmen will automatically collect resources and rebuild those structures in other locations, dramatically improving the speed and efficiency of base expansion. It also has an automated logistics system where powered ziplines can be set up between homesteads and resource points, creating a flexible transport network for streamlined material delivery across the map.
    These features allow for faster, more organized tribe management and base development, especially for players aiming to build on a larger scale.
    Looking forward: 1.0, Egypt, and the future of civilization
    Soulmask’s update will offer a lot.
    The Golden Legend DLC arrives as a special gift to mark Soulmask’s first anniversary in Early Access – a thank-you to players who have shaped the world through feedback, exploration, and creativity.
    Soulmask will exit Early Access with its 1.0 release later this year, alongside a new Egypt-themed DLC. These milestones will continue to broaden the game’s cultural inspirations and deepen its automation systems, offering players new ways to build, govern, and survive across richly imagined ancient worlds.
    Producer Zima has described his long-term vision for Soulmask as “the intelligent multi-civilization survival sandbox,” where automation and cultural diversity form the foundation of a dynamic, player-driven experience.

  • DreamPark raises $1.1M to transform real-world spaces into mixed-reality theme parks

    DreamPark, the creator of what it calls “the world’s largest downloadable mixed reality theme park,” said it has raised $1.1 million in seed funding.
    The investment will accelerate DreamPark’s mission to make Earth worth playing again by transforming ordinary spaces into extraordinary adventures through mixed reality technology. I got a demo of the game in Yerba Buena Park in San Francisco and it made me smile. It also made me think it was part of a pretty good plan to convince property owners to get more out of their entertainment venues.
    But we’ll get to that in a bit. Long Journey Ventures led the investment round, with participation from Founders Inc.
    The company is the brainchild of Aidan Wolf, CEO of DreamPark; Kevin Habich, cofounder; and cofounder Brent Bushnell. They came up with the idea while working at Two-Bit Circus, a zany entertainment venue in Los Angeles run by Bushnell. Bushnell encouraged the idea, incubated it and became a cofounder.
    The DreamPark founders: Brent Bushnell, Aidan Wolf and Kevin Habich.
    Positioned at the forefront of mixed reality innovation, DreamPark said it is capturing a significant early advantage in the global XR live event market, a multibillion-dollar market in 2024 that is projected to surge through 2034 at a 48.7% compound annual growth rate. This explosive growth trajectory presents an opportunity that DreamPark’s technology and business model are uniquely designed to address, the company said.
    “We’re building the world’s largest theme park – one that exists everywhere and is accessible to everyone. We want to make getting out to play worthwhile again,” said Bushnell. “This investment allows us to expand our footprint of access points across the country rapidly, develop partnerships with premium IP holders, and continue enhancing our technology to deliver magical experiences that bring people back to real-world spaces.”
    Bushnell is the eldest son of Atari cofounder Nolan Bushnell. And the younger Bushnell knows the costs of investing in physical properties, as he runs Two-Bit Circus in downtown Los Angeles. It’s built inside a physical warehouse, and Bushnell’s company has to pay for that property — even weathering the pandemic. But with DreamPark, he can reinvigorate a physical venue without investing anything in a new property. By contrast, a new virtual reality entertainment venue can cost more than million to open.
    Hands-on demo
    DreamPark founders in Yerba Buena Gardens park in San Francisco.
    Wolf and Habich, and Bushnell’s sister Alyssa Bushnell, showed me the DreamPark virtual theme park in San Francisco in the park near the Metreon building. There was a concert going on at the time and it was very noisy. But the game worked fine anyway.
    Looking down at my feet, Wolf said the QR code on the mat on the ground was an “access point.” That’s where you can scan and enter the virtual world. The company is still building a front end for distributing the headsets, but people will be able to bring their mixed-reality headsets from home and play the same content.
    “We’re setting these up all over,” Wolf said. “Once an area is mapped, it’s there and you just show up and play. The big difference here is that DreamParks are places. They exist in the real world.”
    Don’t be surprised if you see people doing this soon.
    The mapped area was around 50,000 square feet in the park, so it was a pretty big game space. Soon, the company will expand the game space to 100,000 square feet with another update. That’s about 10 times the play-area limit of Meta’s VR headsets.
    “We’re going way past the usual limits,” Wolf said. “I think this fundamentally changes what mixed reality means. Now it’s not this living room experience bound to the couch. It’s an actual world to walk around and explore and touch. Once we get people there, we’re gonna really see that cognitive shift, where now augmented reality is something I can go out and experience, like enjoying a concert.”
    The cofounders gave me a headset to wear. The first one didn’t work, but a second one functioned fine. It was a modified Meta Quest 3 headset that was locked down so it would play just the DreamPark game. It took a short time to load and then I looked through the headset. Thanks to the outward-facing cameras, I was able to see the park in mixed reality. That meant I didn’t trip over anything as I walked around.
I held the headset to my forehead and looked around. I could see a Mario-like set of bricks floating in the air, and floating virtual coins along the physical path. I started walking around and picking up the coins and tapping the bricks to collect points in the game. I didn’t go where there were people lying on the grass, but I did manage to navigate to some lava pits in the middle of the park. The founders pointed out that far away from me, on the Carnaval concert stage, there was a boss. If there had been no concert, I could have waltzed over to that location and engaged in a boss fight.
    DreamPark overlaid on the Third Street Promenade in Santa Monica, California.
    The graphics were rudimentary, 8-bit style, and yet I didn’t mind it at all due to the novelty of seeing them overlaid on the real world. Still, I was reluctant to go walking in the lava pits, as that was a bad idea in the virtual world and I somehow felt like it would be a bad idea to walk there in the physical world.
    “Our graphics are more cartoonish, but our Wizard theme has a more realistic look,” Wolf said. “We’re creating four theme parks.”
One of them is a sci-fi Crash Course, which is an obstacle course, and DreamPark is working with a partner on another. There’s one with a psychedelic theme and one that is ambient fun.
    It’s easy to turn the experience into a multiplayer game. You can, for instance, race around the park and complete a timed experience in competition with your friends.
    The appeal of a virtual overlay on the real world
    DreamPark mixes the virtual and real worlds.
    DreamPark transforms physical locations into immersive mixed-reality environments through its network of access points: physical markers, like QR codes, that, when scanned with a Meta Quest 3 headset or mobile device, unlock digital overlays on real-world spaces. The company has already established successful installations at Santa Monica’s Third Street Promenade and The LA County Fair, with planned expansions in Seattle, Orange County and several expos and corporate events.
It’s pretty cheap to create new locations. All DreamPark really has to do is scan an area, overlay a digital world filled with simple games, and then drop a mat with a QR code on the property so people can scan it and start playing. For property owners, this means they can draw people back to their location, getting them to re-engage with the place because people want to play a digital game at the physical site. It’s a way to enhance the value of a physical property using virtual entertainment.
Bushnell pitched the idea for DreamPark on CNBC’s Shark Tank television show. The sharks didn’t go for it, but the publicity from the show helped surface investors, Bushnell said. (The Bushnell family is going to appear at Augmented World Expo in Long Beach, California, in June).
    “As a longtime investor, I have seen countless pitches promising to merge the digital and physical worlds, and DreamPark is the first that truly delivers on the real-world metaverse,” said Cyan Banister, cofounder and general partner at Long Journey Ventures, in a statement. “Aidan is a visionary builder of immersive systems, and Brent is a pioneer in playful public spaces, making them the perfect team to make emerging tech feel human, accessible, and unforgettable. They’ve cracked the code on location-based AR, delivering a 10x experience that’s as magical as it’s scalable. This isn’t just immersive entertainment; it’s a whole new category.”
The funding comes as retail landlords and event venues seek innovative ways to drive foot traffic and increase engagement. While typical VR venues cost over $1 million to build, DreamPark says its fully immersive, multiplayer experience pays for itself in its first month of revenue.
    DreamPark in Santa Monica.
“Our capital expense is like one one-hundredth of our competitors’, which is amazing. And then this lets us move astronomically faster than everyone else. I kind of believe in a Nintendo philosophy, which is, they take antiquated technology, but they use it in a new way that makes it valuable. We’re using access points,” Wolf said.
    There’s no construction or permanent infrastructure required. It’s a radically more affordable way to turn underused spaces into high-impact destinations.
    “We’re not just creating engaging content, we’re building a platform that revitalizes communities by giving people a reason to gather, play, and connect in physical spaces in real life,” said Wolf. “DreamPark bridges the digital and physical worlds, creating a new category of play where the magic of virtual worlds enhances real-life connections. We’re reimagining what’s possible when the spaces around us become canvases for shared adventure and imagination.”
The seed funding will support DreamPark’s aggressive expansion plans, including deploying access points across new locations, launching partnerships with major IP holders to create branded theme park experiences, and expanding the company’s fleet of rental Meta Quest 3 headsets nationwide.
DreamPark is growing the development team to accelerate content creation and platform capabilities. DreamPark’s leadership team brings deep experience from companies including Two-Bit Circus, Smiley Cap, and Snap Inc., positioning them to execute their ambitious vision of creating the infrastructure for worldwide mixed-reality entertainment.
    Where it’s going
    What alien technology is this?
    Bushnell said the team has been working for around two years. But the founders have been involved with AR for more than a decade. They showed up at Two-Bit Circus and started making mixed-reality games, which take into account physical reality as a game space. There are about 10 contractors in the company working on content.
    They found that players are happy to wear the headsets for 30 minutes at a time, particularly when they are playing with friends.
“We see ourselves more as a tech company than like a location based entertainment company. We hope to stay small as a core team while still reaching millions or billions of people,” Wolf said. The games are in a private alpha testing phase now.
“I would say that the headset we currently have in our hands is the exact headset we need to bring this to the masses. So the nice part about the company we’re building is we aren’t waiting for some like watershed moment,” Wolf said. “We’re not waiting for anything now. We’re just getting it into lots of places where people already congregate.” DreamPark is coming out with an app that will let users scan their local park and then start using that space as a level, Wolf said. But DreamPark itself will create partnerships with some of the best places and get permission to run the game on those properties.
    At Two-Bit Circus, for instance, DreamPark could extend the entertainment into the outdoor parking lot, giving more square footage for entertainment.
Bushnell had a great moment when he was playing an AR game with drift racing on a racetrack in the Two-Bit Circus parking lot. He noted that mixed reality doesn’t have the Achilles’ heel of VR, which is that it makes half the people who try it nauseous.
    “That was really the moment that broke my brain for mixed reality,” he said. “We were on actual drift bikes, pedaling around collecting coins. And I went twice around that thing, chasing after somebody else on a drift bike. And, you know, my heart rate was at 150. And I was just absolutely going bananas. And I took the headset off, and all that world that had motivated me to pedal my ass off was gone. It just really felt like this is not just going to change entertainment. This is going to change therapy and fitness and learning.”
    Bushnell said so many other kinds of entertainment are based on deploying huge amounts of capital. But this kind of theme park could be up and running in a matter of minutes. Bushnell believes people will be happy to buy tickets to get a chance to play. He said his four-year-old kid loves it, as does his 82-year-old father.
    DreamPark is adding virtual entertainment to real venues.
To me, it felt a bit like the beginning of the world of Cyberpunk 2077, while Bushnell said it reminded him of the Korean drama Memories of the Alhambra, where people wear contact lens displays and have an adventure overlaid on real streets.
“These are beautiful places naturally. Let’s augment them with a little more cool storytelling, and you’re off and running,” Bushnell said. “The world is lonely and isolated. We think of this as a path to being social again, getting people out in public. And we want to invite landlords of all stripes to host DreamParks.”

    DreamPark raises $1.1M to transform real-world spaces into mixed-reality theme parks
    VENTUREBEAT.COM
    DreamPark raises $1.1M to transform real-world spaces into mixed-reality theme parks
    DreamPark, the creator of what it calls “the world’s largest downloadable mixed reality (XR) theme park,” said it has raised $1.1 million in seed funding. The investment will accelerate DreamPark’s mission to make Earth worth playing again by transforming ordinary spaces into extraordinary adventures through mixed reality technology. I got a demo of the game in Yerba Buena Park in San Francisco and it made me smile. It also made me think it was part of a pretty good plan to convince property owners to get more out of their entertainment venues. But we’ll get to that in a bit. Long Journey Ventures led the investment round, with participation from Founders Inc. The company is the brainchild of Aidan Wolf, CEO of DreamPark; Kevin Habich, cofounder; and cofounder Brent Bushnell. They came up with the idea while working at Two-Bit Circus, a zany entertainment venue in Los Angeles run by Bushnell. Bushnell encouraged the idea, incubated it and became a cofounder. The DreamPark founders (left to right): Brent Bushnell, Aidan Wolf and Kevin Habich. Positioned at the forefront of mixed reality innovation, DreamPark said it is capturing a significant early advantage in the global XR (extended reality) live event market, valued at $3.6 billion in 2024 and projected to surge to $190.3 billion by 2034 at a 48.7% compound annual growth rate (CAGR). This explosive growth trajectory presents an opportunity that DreamPark’s technology and business model are uniquely designed to address, the company said. “We’re building the world’s largest theme park – one that exists everywhere and is accessible to everyone. We want to make getting out to play worthwhile again,” said Bushnell. 
“This investment allows us to expand our footprint of access points across the country rapidly, develop partnerships with premium IP holders, and continue enhancing our technology to deliver magical experiences that bring people back to real-world spaces.” Bushnell is the eldest son of Atari cofounder Nolan Bushnell. And the younger Bushnell knows the costs of investing in physical properties, as he runs Two-Bit Circus in downtown Los Angeles. It’s built inside a physical warehouse, and Bushnell’s company has to pay for that property — even weathering the pandemic. But with DreamPark, he can reinvigorate a physical venue without investing anything in a new property. By contrast, a new virtual reality entertainment venue can cost more than $1 million to open. Hands-on demo DreamPark foundes in Yerba Buena Gardens park in San Francisco. Wolf and Habich, and Bushnell’s sister Alyssa Bushnell, showed me the DreamPark virtual theme park in San Francisco in the park near the Metreon building. There was a concert going on at the time and it was very noisy. But the game worked fine anyway. Looking down at my feet, Wolf said the QR code on the mat on the groun was an “access point.” That’s where you can scan and enter the virtual world. The company is still building a front end for distributing the headsets, but people will be able to bring their mixed-reality headsets from home and play the same content. “We’re setting these up all over,” Wolf said. “Once an area is mapped, it’s there and you just show up and play. The big difference here is that DreamParks are places. They exist in the real world.” Don’t be surprised if you see people doing this soon. The mapped area was around 50,000 square feet in the park, so it was a pretty big game space. Soon, the company will break into 100,000 square feet for the game with another update. That’s about 10 times the restricted size of Meta’s VR headsets. “We’re going way past the usual limits,” Wolf said. 
“I think this fundamentally changes what mixed reality means. Now it’s not this living room experience bound to the couch. It’s an actual world to walk around and explore and touch. Once we get people there, we’re gonna really see that cognitive shift, where now augmented reality (AR) is something I can go out and experience, like enjoying a concert.” The cofounders gave me a headset to wear. The first one didn’t work, but a second one functioned fine. It was a modified Meta Quest 3 headset that was locked down so it would play just the DreamPark game. It took a short time to load and then I looked through the headset. Thanks to the outward-facing cameras, I was able to see the park in mixed reality. That meant I didn’t trip over anything as I walked around. I held the headset to my forehead and looked around. I could see a Mario-like set of bricks floating in the air, and floating virtual coins along the physical path. I started walking around and picking up the coins and tapping the bricks to collect points in the game. I didn’t go where there were people lying on the grass, but I didn’t manage to navigate to some lava pits in the middle of the park. The founders pointed out that far away from me, on the Carnaval concert stage, there was a boss. Normally, if there was no concert, I could have waltzed over to that location and engaged in a boss fight. DreamPark overlaid on the Third Street Promenade in Santa Monica, California. The graphics were rudimentary, 8-bit style, and yet I didn’t mind it at all due to the novelty of seeing them overlaid on the real world. Still, I was reluctant to go walking in the lava pits, as that was a bad idea in the virtual world and I somehow felt like it would be a bad idea to walk there in the physical world. “Our graphics are more cartoonish, but our Wizard theme has a more realistic look,” Wolf said. “We’re creating four theme parks.” One of them is a sci-fi Crash Course, which is an obstacle course. 
And DreamPark is working with a partner as well. There’s one with a psychedelic theme and one that is ambient fun. It’s easy to turn the experience into a multiplayer game. You can, for instance, race around the park and complete a timed experience in competition with your friends. The appeal of a virtual overlay on the real world DreamPark mixes the virtual and real worlds. DreamPark transforms physical locations into immersive mixed-reality environments through its network of access points: physical markers, like QR codes, that, when scanned with a Meta Quest 3 headset or mobile device, unlock digital overlays on real-world spaces. The company has already established successful installations at Santa Monica’s Third Street Promenade and The LA County Fair, with planned expansions in Seattle, Orange County and several expos and corporate events. It’s pretty cheap to create new locations. All they really have to do is scan an area, overlay a digital game filled with simple games, and then drop a mat with a QR code on the property so people can scan it and start playing the game. For property owners, this means they can draw people back to their location, getting them to re-engage with the place because people want to play a digital game at the physical place. It’s a way to enhance the value of a physical property, using virtual entertainment. Bushnell pitched the idea for DreamPark on CNBC’s Shark Tank television show. The sharks didn’t go for it, but the publicity from the show helped surface investors, Bushnell said. (The Bushnell family is going to appear at Augmented World Expo in Long Beach, California, in June). “As a longtime investor, I have seen countless pitches promising to merge the digital and physical worlds, and DreamPark is the first that truly delivers on the real-world metaverse,” said Cyan Banister, cofounder and general partner at Long Journey Ventures, in a statement. 
“Aidan is a visionary builder of immersive systems, and Brent is a pioneer in playful public spaces, making them the perfect team to make emerging tech feel human, accessible, and unforgettable. They’ve cracked the code on location-based AR, delivering a 10x experience that’s as magical as it’s scalable. This isn’t just immersive entertainment; it’s a whole new category.” The funding comes when retail landlords and event venues seek innovative solutions to drive foot traffic and increase engagement. While typical VR venues cost over $1 million to build, DreamPark delivers a fully immersive, multiplayer experience that pays for itself in its first month of revenue. DreamPark in Santa Monica. “Our capital expense is like one of a hundredth of our competitors, which is amazing. And then this lets us move astronomically faster than everyone else. I kind of believe in a Nintendo philosophy, which is, they take antiquated technology, but they use it in a new way that makes it valuable. We’re using access points,” Wolf said. There’s no construction or permanent infrastructure required. It’s a radically more affordable way to turn underused spaces into high-impact destinations. “We’re not just creating engaging content, we’re building a platform that revitalizes communities by giving people a reason to gather, play, and connect in physical spaces in real life,” said Wolf. “DreamPark bridges the digital and physical worlds, creating a new category of play where the magic of virtual worlds enhances real-life connections. We’re reimagining what’s possible when the spaces around us become canvases for shared adventure and imagination.” The seed funding will support DreamPark’s aggressive expansion plans, including deploying access points across new locations, launching partnerships with major IP holders to create branded theme park experiences, and expanding the company’s fleet of rental Meta Quest 3 headsets units nationwide. 
DreamPark is growing the development team to accelerate content creation and platform capabilities. DreamPark’s leadership team brings deep experience from companies including Two-Bit Circus, Smiley Cap, and SNAP, Inc., positioning them to execute their ambitious vision of creating the infrastructure for worldwide mixed-reality entertainment. Where it’s going What alien technology is this? Bushnell said the team has been working for around two years. But the founders have been involved with AR for more than a decade. They showed up at Two-Bit Circus and started making mixed-reality games, which take into account physical reality as a game space. There are about 10 contractors in the company working on content. They found that players are happy to wear the headsets for 30 minutes at a time, particularly when they are playing with friends. “We see ourselves more as a tech company than like a location based entertainment company. We hope to stay small as a core team while still reaching millions or billions of people,” Wolf said.The games are in a private alpha testing phase now. “I would say that the headset we currently have in our hands is the exact headset we need to bring this to the masses. So the nice part about the company we’re building is we aren’t waiting for some like watershed moment,” Wolf said. “We’re not waiting for anything now. We’re just getting it into lots of places where people already congregate.”DreamPark is coming out with an app that will let users scan their local park and then start using that space as a level, Wolf said. But DreamPark itself will create partnerships with some of the best places itself and get permission to do the game on the properties. At Two-Bit Circus, for instance, DreamPark could extend the entertainment into the outdoor parking lot, giving more square footage for entertainment. Bushnell had a great moment when he was playing an AR game with drift racing on a racetrack in the Two-Bit Circus parking lot. 
He noted that mixed reality doesn’t have the Achilles’ heel of VR, which is that it makes half of all people nauseous.

“That was really the moment that broke my brain for mixed reality,” he said. “We were on actual drift bikes, pedaling around collecting coins. And I went twice around that thing, chasing after somebody else on a drift bike. And, you know, my heart rate was at 150. And I was just absolutely going bananas. And I took the headset off, and all that world that had motivated me to pedal my ass off was gone. It just really felt like this is not just going to change entertainment. This is going to change therapy and fitness and learning.”

Bushnell said many other kinds of entertainment are based on deploying huge amounts of capital, but this kind of theme park could be up and running in a matter of minutes. He believes people will be happy to buy tickets for a chance to play. He said his four-year-old kid loves it, as does his 82-year-old father.

DreamPark is adding virtual entertainment to real venues.

To me, it felt a bit like the beginning of the world of Cyberpunk 2077, while Bushnell said it reminded him of the Korean drama Memories of the Alhambra, where people wear contact lens displays and have an adventure overlaid on real streets.

“These are beautiful places naturally. Let’s augment them with a little more cool storytelling, and you’re off and running,” Bushnell said. “The world is lonely and isolated. We think of this as a path to being social again, getting people out in public. And we want to invite landlords of all stripes to host DreamParks.”
  • Sleepagotchi Lite launches on Sony’s Soneium blockchain via Line Mini app

    Soneium, a blockchain started by Sony and Startale Group, announced the launch of Sleepagotchi Lite on the Line Mini app.
  • Acurast raises $5.4M for global decentralized cloud using smartphones

    Acurast has raised $5.4 million to use smartphones to power a global decentralized cloud computing network.

    The company raised the money in a community-led investment round on premier cryptocurrency launchpad CoinList. The sale concluded on May 22, 2025, with Acurast’s ACU token priced at nine cents, resulting in a fully diluted valuation of $90 million.

    “Most of the newly raised capital will be used to enhance our protocol, which continues proving that compute can be verifiable, confidential, energy-efficient, and truly decentralised, powered by the phones in our pockets,” said Alessandro De Carli, president of the board and cofounder of Acurast, in a statement.

    Acurast has raised $5.4 million.

    Acurast enables users to participate in confidential compute tasks, decentralized AI, and blockchain infrastructure, all while earning rewards by leveraging the processing power of smartphones. The protocol transforms everyday phones into secure, decentralized compute nodes, powering a global network spanning over 130 countries.

    Acurast has onboarded over 72,000 smartphones, or “compute units,” worldwide onto its decentralized testnet, which has processed over 256 million transactions. The company calls it the most decentralized and verifiable compute network available today, eliminating the need for centralized data centers. Acurast utilizes the Trusted Execution Environments (TEEs) and Hardware Security Modules (HSMs) of mobile phones to ensure secure and scalable compute while maintaining confidentiality, without requiring trust in the device owner.

    “The ACU token lies at the heart of this economy. Acurast allows anyone and everyone to run compute with their mobile phones, providing real decentralisation, and become stakeholders in the network, powering a secure, scalable and decentralised computer economy while incentivising active collaboration and sustainable growth,” said De Carli.

    Over 300 websites and games were created on Acurast in the last 24 hours using just a prompt with Vibe Code and Deploy, live instantly on a decentralized network powered by phones.

    Acurast’s high-performance proof-of-stake blockchain orchestrates global demand and supply for secure decentralized compute, without centralized data centers. The chain verifies genuine hardware on-chain with smartphone manufacturer attestations and anchors confidential workloads, ensuring trustless and verifiable execution across billions of smartphones. Acurast’s Android and iOS apps offer end-to-end checks of each phone’s secure elements, delivering trustless compute at scale.

    Acurast is creating a global decentralized cloud using smartphones in 130 countries.

    Acurast delivers a decentralized confidential compute platform, purpose-built for the demands of web3, AI, and beyond. Developers can deploy and scale JavaScript, TypeScript, Node.js, and WASM workloads via a simple CLI, with access to thousands of NPM packages and deep integration across major ecosystems. Centered on openness, composability, and decentralization, Acurast aims to be the foundational layer for the decentralized compute economy.
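    To make the workload model above concrete, here is a minimal, self-contained sketch of the kind of Node.js/TypeScript job such a network could run. This is not Acurast's actual API or CLI (which the article does not detail); it only illustrates the key property the article emphasizes: a deterministic job whose output can be cross-checked across untrusted devices.

```typescript
// Illustrative only: a deterministic compute job of the kind a phone-based
// network could run and verify. No Acurast-specific APIs are used or implied.
import { createHash } from "node:crypto";

// Hash an input so any node's result can be compared against any other's.
function runJob(input: string): string {
  return createHash("sha256").update(input).digest("hex");
}

// Two independent "phones" running the same job must agree on the output,
// which is what lets a result be verified without trusting either device.
const a = runJob("acurast-demo");
const b = runJob("acurast-demo");
console.log(a === b); // prints true
console.log(a.length); // prints 64 (hex-encoded SHA-256)
```

    Determinism is the design choice that matters here: if the same input can yield different outputs on different devices, redundant execution cannot catch a dishonest node.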
    ACU tokens have a total supply of 1,000,000,000 at genesis, with token utility centered on network fees, settlement layers, staking, and governance.
