• Google Introduces Beam, an AI-Driven Communication Platform That Turns 2D Video Into 3D Experiences

    Google is rebranding Project Starline as Beam, a new 3D video communication platform, the company announced at its annual I/O developer conference on Tuesday. Beam turns 2D video streams into 3D experiences, letting users connect with each other in a more natural manner. It leverages Google Cloud along with the company's AI capabilities to deliver enterprise-grade reliability and compatibility with existing workflows. Google says Beam may receive support for real-time speech translation and will reach the market on HP devices later this year.

    Google Beam Features

    Google detailed the new platform in a blog post. Beam uses an array of webcams to capture the user from different angles; AI then merges the video streams and renders them on a 3D light field display. Google says the system also tracks head movement, with claimed accuracy down to the millimetre at 60 frames per second.

    Google Beam uses an AI volumetric video model to turn standard 2D video streams into realistic experiences that appear in 3D from any perspective. Together with the light field display, it creates a sense of depth and dimensionality, enabling you to make eye contact and read subtle cues.

    According to the company, Beam replaces Project Starline, which was first announced at Google I/O 2021 with the aim of showing participants in 3D at natural scale, with eye contact and spatially accurate audio. While that project never fully materialised, it was repurposed into the 3D video communication platform now known as Google Beam.

    Photo Credit: Google

    For enhanced communication, Google is exploring bringing real-time speech translation to Beam. The capability is also available in Google Meet starting today.

    Google says it is working with HP to introduce the first Google Beam devices to market with select customers later this year. The first Google Beam products from the original equipment manufacturer (OEM) will be showcased at InfoComm 2025, which takes place in June.



    Shaurya Tomer

    Shaurya Tomer is a Sub Editor at Gadgets 360 with 2 years of experience across a diverse spectrum of topics. With a particular focus on smartphones, gadgets and the ever-evolving landscape of artificial intelligence, he likes to explore the industry's intricacies and innovations – whether dissecting the latest smartphone release or exploring the ethical implications of AI advancements. In his free time, he often embarks on impromptu road trips to unwind and recharge.

  • Roblox open-sources Cube 3D model for AI-driven 3D object generation

    Roblox Corporation, a global platform for immersive user-generated content, has introduced Cube 3D, a foundational generative AI model for creating 3D digital content from text prompts.
    The company has made a version of the model open-source, now accessible via GitHub and Hugging Face.
    The beta version, which integrates into Roblox Studio and its Lua-based API, enables creators to produce 3D meshes directly within game environments using natural language input.
    Unlike image-based reconstruction methods that rely on limited visual data, Cube 3D is trained on native 3D assets generated and used across Roblox’s ecosystem.
    This allows it to output structurally complete digital objects compatible with game engines—objects that can be interacted with in gameplay, rather than being flat facades.
    A typical use case involves entering a command like “generate a motorcycle” into the platform’s Assistant, resulting in a full 3D mesh suitable for immediate in-game deployment.
    These objects can later be enhanced with texture and color but are generated as functionally usable meshes from the outset.
    Cube 3D applies a token-based system to understand and predict 3D shapes.
    Drawing from techniques used in large language models, the AI converts geometry into shape tokens and uses autoregressive transformers to forecast subsequent tokens in a sequence—effectively “building” a mesh piece by piece.
    This method supports both individual object completion and full scene layout generation.
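    The token-based autoregressive approach described above can be sketched in miniature. This is a conceptual illustration only, not Roblox's actual code or API: geometry is quantized into discrete "shape tokens" against a codebook, and a next-token predictor extends the sequence one token at a time, the way a language model predicts words.

    ```python
    import numpy as np

    def tokenize_shape(vertices, codebook):
        # Quantize each 3D vertex to its nearest codebook entry,
        # producing a sequence of discrete "shape tokens".
        dists = np.linalg.norm(vertices[:, None, :] - codebook[None, :, :], axis=-1)
        return dists.argmin(axis=1)

    def generate_tokens(predict_next, prompt_tokens, max_len=16, eos=0):
        # Autoregressive loop: repeatedly predict the next shape token
        # from the sequence so far, stopping at an end-of-sequence token.
        seq = list(prompt_tokens)
        while len(seq) < max_len:
            tok = predict_next(seq)
            if tok == eos:
                break
            seq.append(tok)
        return seq

    # Toy data: two vertices quantized against a three-entry codebook.
    codebook = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    verts = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0]])
    tokens = tokenize_shape(verts, codebook)  # -> [0, 1]
    ```

    In the real system the predictor is a trained transformer; here it is left as a pluggable function, which is enough to show why the same machinery can complete a single object or lay out a whole scene token by token.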
    To align multimodal inputs, Roblox engineers developed a unified transformer architecture compatible with text, image, and future data types such as audio.
    The current release focuses on object generation from text, but future updates are expected to support scene-level outputs and hybrid input modalities.
    Roblox has positioned the software as part of a broader shift toward real-time, user-augmented content creation.
    Players and developers alike will be able to generate props, environments, or interactive objects on demand.
    The longer-term aim is “4D creation,” where AI understands not only object form but also interaction logic and environmental relationships.
    This includes bounding box placement for layout, mesh fusion for multi-object environments, and context-aware alterations—such as swapping seasonal elements or adapting geometry based on narrative triggers within a game.
    Prompt: A red buggy with knobby tires.
    Image via Roblox.
    While Cube 3D does not currently support 3D printing file formats such as STL, the underlying methodology of tokenizing 3D shapes may influence emerging tools in virtual prototyping, AI-assisted design, and even CAD automation.
    Open-source releases of this nature remain rare among proprietary game development platforms, particularly those dealing with native 3D asset pipelines.
    Roblox has also co-founded ROOST, a nonprofit focused on open-source AI safety, and recently released other models tied to responsible AI development. 
    Emerging AI-generated 3D modeling
    Tencent, a Chinese multinational technology company with significant investments in gaming and cloud services, released Hunyuan3D 2.0 to streamline digital asset creation.
    The system features two specialized models—Hunyuan3D-DiT for geometry and Hunyuan3D-Paint for texture—designed to improve fidelity and responsiveness in 3D generation.
    Internal benchmarks using Condition-Model Matching Distance (CMMD) and Fréchet Inception Distance (FID) suggest close alignment between user prompts and the resulting outputs.
    A companion interface, Hunyuan3D-Studio, enables sketch-to-3D workflows and low-polygon mesh export.
    While Tencent has not formally targeted additive manufacturing, the platform’s ability to generate both high-resolution and simplified meshes may support adaptation in prototyping and multi-material 3D printing environments.
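    The FID metric cited above measures the Fréchet distance between two Gaussians fitted to feature embeddings of real and generated samples. A minimal NumPy/SciPy sketch of that distance (using toy feature vectors in place of the Inception-network embeddings used in practice):

    ```python
    import numpy as np
    from scipy.linalg import sqrtm

    def frechet_distance(feats_a, feats_b):
        # Fit a Gaussian (mean, covariance) to each feature set, then compute
        # ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * sqrt(C_a @ C_b)).
        mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
        cov_a = np.cov(feats_a, rowvar=False)
        cov_b = np.cov(feats_b, rowvar=False)
        covmean = sqrtm(cov_a @ cov_b)
        if np.iscomplexobj(covmean):
            # sqrtm can return tiny imaginary parts from numerical error.
            covmean = covmean.real
        diff = mu_a - mu_b
        return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
    ```

    Identical feature distributions give a distance near zero; the further apart the prompt-conditioned and reference distributions drift, the larger the score, which is why lower FID is read as closer alignment.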
    A year earlier, Nvidia, a leading developer of GPUs and parallel computing platforms widely used in AI and visualization, introduced Magic3D in 2023 as part of its generative AI research.
    The tool produces textured 3D meshes from natural language using a two-stage pipeline that refines coarse models into high-resolution geometry.
    Demonstrations—such as rendering a blue poison dart frog from a single prompt—illustrated the system’s capacity for mesh synthesis, text-driven editing, and stylistic transformation.
    Although primarily geared toward video games and computer-generated imagery, Nvidia researchers identified potential crossover applications in VR development and special effects, noting that streamlined model generation could lower production barriers across digital design domains.
    Hunyuan3D 2.0.
    Image via Tencent.
    Ready to discover who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to stay updated with the latest news and insights.
    Featured image prompt: A red buggy with knobby tires.
    Image via Roblox.
    Anyer Tenorio Lara
    Anyer Tenorio Lara is an emerging tech journalist passionate about uncovering the latest advances in technology and innovation.
    With a sharp eye for detail and a talent for storytelling, Anyer has quickly made a name for himself in the tech community.
    Anyer's articles aim to make complex subjects accessible and engaging for a broad audience.
    In addition to his writing, Anyer enjoys participating in industry events and discussions, eager to learn and share knowledge in the dynamic world of technology.

    Source: https://3dprintingindustry.com/news/roblox-open-sources-cube-3d-model-for-ai-driven-3d-object-generation-239504/