• NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can build simulation-ready, or SimReady, photorealistic digital twins of cities to build and test AI agents that can help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
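    The three steps above can be sketched as a simple orchestration pipeline. All function names below are hypothetical placeholders, not NVIDIA SDK calls; the sketch only mirrors the shape of the workflow, in which twin construction feeds model fine-tuning, which in turn feeds agent deployment:

```python
# Hypothetical sketch of the blueprint's three-step workflow; none of these
# function names come from NVIDIA's SDKs -- they only mirror the pipeline shape.

def build_sim_ready_twin(sources):
    """Step 1: assemble a SimReady digital twin from aerial/satellite/map data."""
    return {"kind": "sim_ready_twin", "sources": list(sources)}

def fine_tune_models(twin, synthetic_frames):
    """Step 2: curate synthetic data from the twin and fine-tune CV models/VLMs."""
    return {"base": "vlm", "tuned_on": twin["kind"], "frames": synthetic_frames}

def deploy_agents(model, streams):
    """Step 3: attach real-time agents (alert/summarize/query) to camera streams."""
    return [{"stream": s, "model": model["base"]} for s in streams]

twin = build_sim_ready_twin(["aerial", "satellite", "map"])
model = fine_tune_models(twin, synthetic_frames=10_000)
agents = deploy_agents(model, streams=["cam-01", "cam-02"])
```

    The key design point is the direction of the data flow: simulation output is training input, so agents are validated against the twin before they ever touch live city infrastructure.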
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped the operator achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
    Tapped by the city, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • Chinese Hackers Exploit Trimble Cityworks Flaw to Infiltrate U.S. Government Networks

    A Chinese-speaking threat actor tracked as UAT-6382 has been linked to the exploitation of a now-patched remote-code-execution vulnerability in Trimble Cityworks to deliver Cobalt Strike and VShell.
    "UAT-6382 successfully exploited CVE-2025-0944, conducted reconnaissance, and rapidly deployed a variety of web shells and custom-made malware to maintain long-term access," Cisco Talos researchers Asheer Malhotra and Brandon White said in an analysis published today. "Upon gaining access, UAT-6382 expressed a clear interest in pivoting to systems related to utility management."
    The network security company said it observed the attacks targeting enterprise networks of local governing bodies in the United States starting January 2025.
    CVE-2025-0944 (CVSS score: 8.6) refers to a deserialization of untrusted data vulnerability affecting the GIS-centric asset management software that could enable remote code execution. The vulnerability, since patched, was added to the Known Exploited Vulnerabilities (KEV) catalog by the U.S. Cybersecurity and Infrastructure Security Agency (CISA) in February 2025.
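    As a generic illustration of why deserializing untrusted data leads directly to code execution — this is not the actual Cityworks exploit chain, just the vulnerability class in miniature, shown here in Python — the sketch below crafts a serialized payload that runs attacker-chosen code the moment it is deserialized:

```python
import pickle

executed = []  # records the side effect so it can be observed

def side_effect(msg):
    # Stand-in for attacker-chosen code; a real exploit would spawn a shell
    # or drop a loader instead of appending to a list.
    executed.append(msg)
    return msg

class Exploit:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call side_effect(...)".
        # That call fires during deserialization, before any type check.
        return (side_effect, ("attacker code ran",))

payload = pickle.dumps(Exploit())  # what an attacker would send over the wire
result = pickle.loads(payload)     # the vulnerable "deserialize untrusted input" step
```

    The mitigation pattern is the same across languages and frameworks: never feed untrusted bytes to a general-purpose deserializer; use a data-only format such as JSON, or an allow-listed schema, instead.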

    According to indicators of compromise (IoCs) released by Trimble, the vulnerability has been exploited to deliver a Rust-based loader that launches Cobalt Strike and a Go-based remote access tool named VShell in an attempt to maintain long-term access to infected systems.
    Cisco Talos, which is tracking the Rust-based loader as TetraLoader, said it's built using MaLoader, a publicly available malware-building framework written in Simplified Chinese.

    Successful exploitation of the vulnerable Cityworks application lets the threat actors conduct preliminary reconnaissance to identify and fingerprint the server, then drop web shells like AntSword, chinatso/Chopper, and Behinder that are widely used by Chinese hacking groups.
    "UAT-6382 enumerated multiple directories on servers of interest to identify files of interest to them and then staged them in directories where they had deployed web shells for easy exfiltration," the researchers said. "UAT-6382 downloaded and deployed multiple backdoors on compromised systems via PowerShell."

    Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.
  • Beyond the Drawing Board: How Augmented Reality is Reshaping Architectural Design Review

    [Image: VARID, a VR-AR toolkit for inclusive design. © Foster + Partners]

    Over the last decade, architectural design has relied on 2D methods of representation, such as elevations, sections, and floor plans, paired with digital renderings of 3D models. While these tools are essential to convey geometry and intent, they remain limited by their two-dimensional format. Even the most realistic renderings, created through programs like SketchUp, Revit, or AutoCAD, still flatten space and distance the viewer from the lived experience of a project. Recently, architects have begun to explore immersive technologies as a way to bridge this gap between drawing and experience, offering new ways to inhabit and assess spatial proposals.

    What are AR, VR, and MR?
    Extended reality (XR) can be classified into three main types: augmented reality (AR), virtual reality (VR), and mixed reality (MR), each offering a different level of immersion in digital environments. At one end of the spectrum, AR enhances the real world with digital content, while at the other, VR fully immerses the user in a completely virtual environment, blocking out the physical world. MR lies between these extremes and is essentially a more detailed classification of AR based on the type of display used. One classification in the research literature distinguishes displays as follows: Class 1 refers to monitor-based systems, where users view the real world through a screen equipped with a camera that captures the environment and overlays digital information, as in the Apple Vision Pro, which uses passthrough cameras. In contrast, Class 2 and 3 systems use head-mounted displays with see-through lenses that superimpose 3D models onto the user's view, like the Microsoft HoloLens. In 2020, Trimble combined the HoloLens with a hard hat to create the Trimble XR10, which makes this technology usable on construction sites. For clarity, this text will refer to Class 1 systems as AR and Class 2 and 3 systems as MR moving forward.
    How do Users Perceive Space?
    Architectural design is not only about defining space, but also about anticipating how people will perceive and move through it. The way users interpret a space depends not just on geometry, but also on intuition, individual knowledge, and experience. Kevin Lynch described this as a space's "legibility," or how easily it can be understood and organized mentally, while Ittelson (1978) emphasized how users explore, categorize, and systematize spatial elements into a coherent whole. The user first explores an area to orient themselves and move around, then develops a taxonomy of the space's elements to organize it mentally, and finally puts everything together into a system that tells the brain why things are happening and how they relate to each other. Research suggests that immersive environments such as mixed reality can simulate this process faithfully, allowing architects and clients alike to engage with a design not as an abstract plan, but as a place to walk through, observe, and interpret.

    Which One Improves Design Understanding: 2D Drawings or MR?
    Building on the above, a 2021 study by National Taiwan University explored this question with an experiment in which participants were brought to a room and divided into two groups. The first group analyzed an interior design proposal for the space using printed architectural drawings and colored renderings. The second group did the same using only an explorable 3D model viewed through an MR headset, in this case the HoloLens. After the exploration, participants sat down and researchers asked questions about the space: their general understanding of the elements in the architectural program, how well they perceived the lengths and sizes of objects, their perception and understanding of textures and materials, and their knowledge of the demolition or renovation of specific elements.
    A total of 42 people participated in the study, with an average age of 26, varying levels of architectural drawing literacy, and diverse cultural backgrounds spanning Africa, the Middle East, Asia, the Americas, and Europe. The results shed light on several topics for architects looking to adopt this technology in their work.
    First, the study suggests that MR allowed users to understand around 85% of the overall design proposal, compared with around 75% for the 2D methods. At the same time, the researchers concluded that MR does not fully replace 2D; it is about balance. Both MR and 2D are suitable for identifying spaces and general layout, identifying where activities can be performed, and identifying heights. However, 2D plans are especially good for taking specific measurements of the space (length and width), understanding the demolition plan, and identifying countable elements in the design, such as the number of lamps, switches, or sockets. On the other hand, MR was better for understanding how elements in the space interact with each other (for example, whether the columns were wrapped in a specific material). MR was especially useful for quickly identifying the specific materials and textures of the design, visually understanding size in terms of width, and mentally perceiving properties of materials such as roughness, smoothness, warmth, or coldness.

    How Can We Integrate MR into our Current Design Review Workflows?
    MR has the potential to facilitate inclusive and interdisciplinary collaboration by bridging the gap between technical and non-technical stakeholders. Clients or end users with limited experience reading architectural drawings often struggle to visualize how a space will look or function. AR, especially through Mixed Reality headsets, can mitigate this by allowing them to engage with the space intuitively.
    Because MR lenses are see-through, non-architect users can experience the spatial and material qualities of a design proposal directly on site, making it easier to identify potential issues such as circulation conflicts, scale misinterpretations, or material inconsistencies. This allows them to give feedback grounded in their own perceptual experience rather than abstract interpretation, which can help democratize the design review process and lead to more informed, client-centered decisions. For architectural teams, combining MR with traditional tools might mean that their detailed technical evaluations (e.g., clearances, counts, and demolition plans) are complemented by a richer experiential understanding from the client, leading to more holistic and user-validated design outcomes.
    This article is part of the ArchDaily Topics: What Is Future Intelligence?, proudly presented by Gendo, an AI co-pilot for architects. Our mission at Gendo is to help architects produce concept images 100x faster by focusing on the core of the design process. We have built a cutting-edge AI tool in collaboration with architects from some of the most renowned firms, such as Zaha Hadid, KPF, and David Chipperfield. Every month we explore a topic in depth through articles, interviews, news, and architecture projects. We invite you to learn more about our ArchDaily Topics. And, as always, at ArchDaily we welcome the contributions of our readers; if you want to submit an article or project, contact us.
  • Searching for new architecture and design jobs? AUX, Nelson Byrd Woltz, Dumican Mosey, SITIO, and Trimble are hiring
    Look below for Archinect's latest curated selection of architecture and design firms currently hiring on Archinect Jobs.
    This week's featured employer highlight includes openings in NYC, LA, Philadelphia, and San Francisco.

    For even more opportunities, visit the Archinect job board and explore our active community of job seekers, firms, and schools.
    Landscape architecture firm Nelson Byrd Woltz Landscape Architects is hiring for a Studio & Business Coordinator in New York City.
    Candidates should possess a bachelor's degree in business administration, accounting, or a related field, be proficient with Microsoft Office Suite and accounting software, and have strong organizational and time management skills.
    Candidates should also have proven experience in an executive assistant, administrative coordinator, or similar role.
    Kinder Land Bridge and Cyvia and Melvyn Wolff Prairie at Memorial Park by Nelson Byrd Woltz Landscape Architects.
    AUX Architecture has an opening for a Designer with one t...
    Source: https://archinect.com/news/article/150480530/searching-for-new-architecture-and-design-jobs-aux-nelson-byrd-woltz-dumican-mosey-sitio-and-trimble-are-hiring