• Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
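The workflow above is easier to picture with a toy version of the loop at its core. The sketch below is a minimal, hypothetical illustration of the simulate-act-observe cycle that reinforcement learning builds on, written against the generic Gymnasium API rather than Isaac Sim or Isaac Lab; the environment name and the random stand-in policy are placeholders, not Hexagon's actual setup.

```python
# Schematic sim-interaction loop (illustrative only; not Isaac Lab code).
# Requires gymnasium[mujoco] for the Humanoid environment.
import gymnasium as gym

env = gym.make("Humanoid-v4")  # placeholder locomotion task

def policy(obs):
    # Stand-in for a learned policy network; RL would update this
    # from the rewards collected below.
    return env.action_space.sample()

obs, info = env.reset(seed=0)
episode_return = 0.0
for _ in range(1000):
    action = policy(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    if terminated or truncated:
        print(f"episode return: {episode_return:.1f}")
        episode_return = 0.0
        obs, info = env.reset()
env.close()
```

In an actual Isaac Lab pipeline, many such environments run in parallel on the GPU, which is what makes a weeks-instead-of-months training timeline plausible.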


    This simulation-first approach enabled Hexagon to fast-track its robotics development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
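For readers unfamiliar with OpenUSD, the sketch below shows, at the file level, what authoring a stage and referencing a scanned asset into it can look like, using the open-source pxr Python bindings. The stage name and asset paths are invented for illustration; Hexagon's actual pipeline is not described at this level of detail.

```python
# Minimal OpenUSD authoring example (illustrative paths and file names).
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")
world = UsdGeom.Xform.Define(stage, "/World")

# A scanned asset would typically be composed in by reference, so the
# heavy mesh data stays in its own file:
scan = stage.DefinePrim("/World/ScannedCell")
scan.GetReferences().AddReference("./scans/cell_scan.usd")

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```

Because OpenUSD composes scenes by reference, newly scanned data can be dropped into the same stage repeatedly, which is the mechanism behind the "data flywheel" idea.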
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
  • HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift

    To speed up AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas.
    The new lineup includes everything from modular AI factory infrastructure and HPE’s AI-ready RTX PRO Servers, to the next generation of HPE’s turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.
    The NVIDIA AI Computing by HPE portfolio is now among the broadest in the market.
    The portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software and HPE’s full portfolio of servers, storage, services and software. This now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack to help teams get AI into production faster.
    This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
    These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.
    HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It’s the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.
    In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption.
    The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.
    To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture’s AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk analysis.
    HPE said it’s adding 26 new partners to its “Unleash AI” ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.
    Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE’s try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.

    Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and watch the replay.
    Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.
  • Hey everyone! Let's dive into something truly fascinating today: PVA filament! You might have heard about this unique type of filament that works wonders with any FDM printer. But wait, it’s not always what it seems!

    When we first encounter PVA filament, we might think of it as just another option in our 3D printing toolbox. However, it holds secrets that can truly elevate our creations! PVA, or Polyvinyl Alcohol, is not just a filament; it’s a game-changer! It’s known for its ability to dissolve in water, which opens up a world of possibilities for our projects. Imagine creating intricate designs with support structures that simply vanish, leaving behind only the masterpiece you envisioned! Isn’t that incredible?

    One of the most inspiring aspects of using PVA filament is the encouragement it gives us to push our creative boundaries. Sometimes, we face challenges in our projects, but with PVA, we can experiment without fear! If something doesn’t turn out as planned, we can simply dissolve those support structures and start anew. This teaches us resilience and the importance of embracing our creative journey!

    Moreover, working with PVA filament encourages collaboration and innovation in the 3D printing community. It inspires us to share our experiences, tips, and techniques with one another. Just think about all the amazing designs that have emerged because of this wonderful filament! Each creation tells a story of creativity, perseverance, and teamwork!

    Now, I know what some of you might be thinking: “Isn’t PVA filament tricky to work with?” While it does come with its challenges, like maintaining the right printing conditions and ensuring proper storage to avoid moisture, the rewards are absolutely worth it! With a little practice and patience, we can master this filament and unlock its full potential. Remember, every challenge is an opportunity for growth!

    So, let’s embrace the journey of using PVA filament! Let’s encourage one another to try new techniques, share our successes, and learn from our experiences. Together, we can create a community that celebrates creativity and innovation in 3D printing!

    Let’s keep inspiring each other to create, explore, and conquer new heights in our 3D printing adventures! Who’s with me?

    #PVAFilament #3DPrinting #CreativityUnleashed #Innovation #CommunitySpirit
  • In a world where cloud computing has become the digital equivalent of air (you know, something everyone breathes in but no one really thinks about), the latest trend in datacenter technology is to send our precious data skyrocketing into the cosmos. Yes, you read that right—space-based datacenters are the new buzzword, because why let earthly problems like power outages or NIMBYism stop us from storing our data in the great beyond?

    Imagine the scene: while we sit in traffic on our way to work, feeling the weight of our earthly responsibilities, there are engineers in space suits, floating around in zero gravity, managing data storage like it’s just another day at the office. I mean, who needs a reliable power grid when you can have the cosmic energy of a thousand suns powering your Netflix binge-watching session? Talk about an upgrade!

    Of course, this leap into the stratosphere isn't without its challenges. What happens if there’s a little too much space debris? Will our precious selfies come crashing back down to Earth? Or worse, will they be lost forever among the stars? But fear not! The tech-savvy geniuses behind this initiative have assured us that they have a plan. Clearly, the best minds of our generation are focused on ensuring your TikTok videos stay safe in orbit rather than, say, solving world hunger or climate change. Priorities, am I right?

    Let’s not forget about the cost. Space travel isn’t exactly cheap. But hey, if I’m going to spend a fortune on data storage, I’d rather it be orbiting Earth than sitting in a basement somewhere in New Jersey. Because nothing says “I’m a forward-thinking tech mogul” quite like a datacenter floating serenely above the clouds, right? It’s the ultimate status symbol—better than a sports car, better than a mansion. “Look at me! My data is literally out of this world!”

    And let’s be real, the power of AI is growing faster than a toddler on a sugar rush. Our current datacenters are sweating bullets trying to keep up. So, the solution? Just toss them into orbit! Sure, it sounds like a plot from a sci-fi movie, but who needs a solid plan when you have a vision, right? The next logical step is to start launching all our problems into space. Traffic jams? Launch them! Your ex? Into orbit they go!

    So, here's to the brave souls who will be managing our digital lives from afar. May your Wi-Fi connection be strong, may your satellite dishes be well-aligned, and may your cosmic data never experience latency. Because if there’s one thing we can all agree on, it's that our data deserves a first-class ticket to space, even if it means leaving the rest of the world behind.

    #SpaceBasedDatacenters #CloudComputing #DataInOrbit #TechTrends #AIFuture
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across various channels to understand the customer journey and conversion path. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is keeping the data relevant to your business objectives: actionable data directly relates to your specific goals or KPIs. Statistical significance helps as well.
    Actionable data shows clear patterns or trends that are statistically valid, whereas noise consists of random fluctuations or outliers that may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
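As a purely illustrative example of the statistical-significance filter described above, the sketch below runs a two-sample t-test on synthetic conversion data; all numbers are invented, and in practice the choice of test depends on the metric.

```python
# Toy signal-vs-noise check: is the variant's lift statistically valid?
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=0.050, scale=0.010, size=500)  # baseline metric
variant = rng.normal(loc=0.053, scale=0.010, size=500)  # candidate change

t_stat, p_value = stats.ttest_ind(variant, control)
if p_value < 0.05:
    print(f"likely signal (p = {p_value:.4f})")
else:
    print(f"indistinguishable from noise (p = {p_value:.4f})")
```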

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
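As one hypothetical way to operationalize the segmentation analysis mentioned above, the sketch below clusters synthetic customers on a few engagement features with k-means; the features, data, and choice of four clusters are all illustrative.

```python
# Toy segmentation: cluster customers on simple engagement features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# columns: monthly sessions, average order value, support tickets
X = rng.normal(loc=[8.0, 120.0, 1.0], scale=[3.0, 40.0, 1.0], size=(500, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
for k in range(4):
    segment = X[kmeans.labels_ == k]
    print(f"segment {k}: n={len(segment)}, mean={segment.mean(axis=0).round(1)}")
```

A small or low-engagement cluster that doesn't map to any current offering is a candidate underserved segment.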

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company where we were very data-driven, and where data made a major impact on a critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product that met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization: adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    We also practice responsive social media engagement, monitoring social media sentiment and trending topics so we can quickly adapt our messaging and create timely, relevant content.

    Alongside one-to-one personalization, we do a lot of A/B testing as part of overall rapid testing of campaign elements like subject lines and CTAs, then build on the most successful variants of the campaigns.
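The automated budget reallocation mentioned in this answer is often implemented as a multi-armed bandit. The sketch below is a toy Thompson-sampling version with invented channels and click-through rates; it stands in for, rather than reproduces, the AI-driven tools described.

```python
# Toy Thompson-sampling bandit: shift impressions toward channels that
# perform, while still exploring the others. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
channels = ["email", "social", "display"]
true_ctr = [0.04, 0.06, 0.02]   # unknown in practice; used only to simulate
wins = np.ones(3)               # Beta-prior successes per channel
losses = np.ones(3)             # Beta-prior failures per channel

for _ in range(10_000):
    beliefs = rng.beta(wins, losses)   # sample a plausible CTR per channel
    arm = int(np.argmax(beliefs))      # send the next impression there
    clicked = rng.random() < true_ctr[arm]
    wins[arm] += clicked
    losses[arm] += not clicked

for name, w, l in zip(channels, wins, losses):
    print(f"{name}: {int(w + l - 2)} impressions, posterior CTR ~ {w / (w + l):.3f}")
```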

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we track each customer’s behavior in real time. The moment they move to a different channel, we know the context, the relevance, and the recent interaction points, so we can present the right offer.
    For example, if you looked at a certain offer on the website after arriving from Google, and the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer true one-to-one personalization instead of a segment-based or bulk-interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
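Mechanically, the cross-channel handoff described here comes down to every channel writing events to one profile store keyed by customer ID, so any channel can read back the latest context. A toy, in-memory sketch with invented names follows; a production CDP adds identity resolution, persistence, and much more.

```python
# Toy unified profile store: all channels write, any channel reads.
from collections import defaultdict
from datetime import datetime, timezone

profiles = defaultdict(list)  # stand-in for a real CDP profile store

def track(customer_id: str, channel: str, event: str) -> None:
    profiles[customer_id].append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "channel": channel,
        "event": event,
    })

def recent_context(customer_id: str, n: int = 3) -> list:
    # Most recent interactions, regardless of originating channel.
    return profiles[customer_id][-n:]

track("cust-42", "search", "arrived_from_google")
track("cust-42", "web", "viewed_card_offer")
# Next day, the in-branch agent pulls the same profile:
print(recent_context("cust-42"))
```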

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development. This means not just the financial products, or whatever products an organization sells, but also products like the mobile apps and websites customers use for transactions. That kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, cross-functional application of engagement improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest one is the huge amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that sheer volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and data overload, quality issues appear: inconsistent or inaccurate data can lead to incorrect insights and poor decision-making. Quality issues can also stem from data in the wrong format, or data that is stale and no longer relevant.
    As we grow and add more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, often lack the skills to properly interpret the data or apply its insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings huge volumes of data into the system. Stopping that at step one, keeping noise out of the data system, cannot be done by technical folks alone, or by people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, for tracking wherever your brand is mentioned, measuring customer sentiment on social media, or tracking the engagement of your campaigns across social platforms.

    Last is web analytics tools, which provide detailed insights into website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    As we collect data from different sources, we clean it, so the data becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
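As a minimal pandas sketch of the cleansing steps described in this answer (standardizing formats, removing duplicates, auditing quality), with hypothetical column names and records:

```python
# Toy cleansing pass: normalize, dedupe, audit. Data is invented.
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["A1", "A1", "B2", "C3"],
    "email": ["X@Ex.com", "x@ex.com", "y@ex.com", None],
    "signup_date": ["2024-01-05", "2024-01-05", "2024-02-05", "2024-03-01"],
})

df["email"] = df["email"].str.lower()                     # standardize format
df["signup_date"] = pd.to_datetime(df["signup_date"])     # consistent dtype
df = df.drop_duplicates(subset=["customer_id", "email"])  # remove duplicates
print(df.isna().mean())                                   # audit: null rate per column
```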

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, true one-on-one personalization, as we capture more engagement data and build out the systems and infrastructure to process those large volumes of data for hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    #ankur #kothari #qampampa #customer #engagement
    Ankur Kothari Q&A: Customer Engagement Book Interview
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns. But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic. This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results. Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

    Ankur Kothari Q&A Interview

    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and other app usage patterns. Second would be demographic information: age, location, income, and other relevant personal characteristics. Third would be sentiment analysis, where we derive information from social media interactions, customer feedback, or other customer reviews. Fourth would be customer journey data: we track touchpoints across various channels to understand the customer’s journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then statistical significance helps: actionable data shows clear patterns or trends that are statistically valid, whereas noise consists of random fluctuations or outliers, which may not be what you are interested in. You also want to make sure that there is consistency across sources. Actionable insights are typically corroborated by multiple data points or channels, while noise tends to be more isolated and contradictory. Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy. By applying these criteria, I can effectively filter out the noise and focus on data that drives valuable business decisions.

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs. By analyzing customer feedback, touchpoints, support interactions, or usage patterns, we can identify gaps in our current offerings or areas where customers are experiencing pain points. Second would be identifying emerging needs: monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing the company to adapt its products or services accordingly. Third would be segmentation analysis: detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into new areas and new geographies. Last is building competitive differentiation: engagement data can highlight where our company outperforms competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company where we were very data-driven, and where data made a major impact on a critical decision regarding our credit card offerings. We analyzed the customer engagement data and discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms. That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product that met that specific segment’s needs. That data-driven decision resulted in a 40% increase in new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. In the past two or three years, we have been using it for dynamic content personalization: adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences. We automate campaign optimization using AI-driven tools to continuously analyze performance metrics and automatically reallocate budget to top-performing channels or ad segments. We also build responsive social media engagement, monitoring social media sentiment and trending topics to quickly adapt our messaging and create timely and relevant content. With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of market elements like subject lines and CTAs, and building various successful variants of the campaigns.

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we track each customer’s behavior in real time. The moment they move to a different channel, we know the context, the relevance, and the recent interaction points, so we can serve the right offer. For example, if you looked at a certain offer on the website after arriving from Google, and the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer. That gives our customer or potential customer true one-to-one personalization instead of a segment-based or bulk-interaction kind of experience. We have a huge team of data scientists, data analysts, and AI model creators who help us analyze big volumes of data and bring the right insights to our marketing and sales teams so that they can provide the right experience to our customers.

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development. And not just the financial products or whatever products an organization sells, but also products like the mobile apps or websites customers use for transactions; that kind of product development gets improved. The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments. Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention. In general, the cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the huge amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that sheer volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights. Because of that data overload, we create data silos, so information often exists in separate systems across different departments, and we are not able to build a holistic view of customer engagement. Because of data silos and data overload, data quality issues appear: inconsistent and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues can also come from data being in the wrong format, or being stale and no longer relevant. As we grow and add more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack the skills to properly interpret the data or apply data insights effectively; there’s a lack of understanding of marketing and sales as domains. It’s a huge effort and can take a lot of investment, and not being able to calculate the ROI of that overall investment is a big challenge that many organizations face.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. Stopping that at step one, not bringing noise into the data system, cannot be done by technical folks alone, or by people who do not have business knowledge. Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have much exposure to that side. Similarly, marketing and business people do not have much exposure to the technical side: what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do, and giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations. The second is helping teams work more collaboratively. It’s no longer the technology team working in a silo, coming back when their work is done, and then the marketing and sales teams acting on it. Now we’re making it more like one team: you work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases that demonstrate how data-driven recommendations directly align with business objectives and potential ROI. We build compelling visualizations: easy-to-understand charts and graphs that clearly illustrate the insights and their implications for business goals. We also run a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touchpoints. Advanced analytics platforms, tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights, are of great value to us. We, like many organizations, use marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels. Another is social media listening tools, for tracking wherever your brand is mentioned, measuring customer sentiment on social media, or tracking the engagement of your campaigns across social platforms. Last is web analytics tools, which provide detailed insights into your website visitors’ behavior and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms: tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies. As we collect data from different sources, we clean it so it becomes cleaner with every stage of processing. We also conduct regular data audits, performing periodic checks to identify and rectify data quality issues, ensuring the accuracy and reliability of information. We deploy standardized data formats, and on top of that we have automated data cleansing tools: software that detects and corrects data errors, redundancies, duplicates, and inconsistencies in data sets automatically.

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The biggest trend of the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices. Somewhat related is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with greater accuracy. We also touched upon hyper-personalization: we are all striving toward hyper-personalization at scale, which is one-on-one personalization, as we capture more engagement data and build bigger systems and infrastructure to process those large volumes of data. As the world collects more data, privacy concerns and regulations come into play; I believe in the next few years there will be more innovation in how businesses can collect data ethically and in their usage practices, leading to more transparent and consent-based engagement data strategies. And lastly, I think about the integration of engagement data, which is always a big challenge. As we solve those integration challenges, we keep adding more complex data sources to the picture, so there will need to be more innovation and sophistication in data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

    This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here.
  • Time Complexity of Sorting Algorithms in Python, Java, and C++

    Posted on: June 13, 2025
    By Tech World Times
    Development and Testing

    Sorting helps organize data in a specific order. It is used in search, reports, and efficient storage. Different sorting algorithms offer different performance. In this article, we will explain the Time Complexity of Sorting Algorithms in simple words. We will cover Python, Java, and C++ examples.
    1. What Is Time Complexity?
    Time complexity tells how fast an algorithm runs. It measures the number of steps as input grows. It is written in Big-O notation. For example, O(n²) means the number of steps grows with the square of the input size.
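    To make the idea concrete, here is a small illustrative Python snippet (our addition, not from the original article) that counts the steps a doubly nested loop performs. The counts come out near n * n, which is exactly what O(n²) describes:

    def count_steps(n):
        # Count one "step" per inner-loop iteration.
        steps = 0
        for i in range(n):
            for j in range(n):
                steps += 1
        return steps

    for n in (10, 100, 1000):
        print(n, count_steps(n))  # prints 100, 10000, 1000000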
    2. Types of Time Complexity
    Here are common types:

    O(1): Constant time
    O(n): Linear time
    O(n log n): Log-linear time
    O(n²): Quadratic time

    We will now apply these to sorting.
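    To see how quickly these classes diverge, the short Python sketch below (our addition) prints approximate step counts for each class at a few input sizes:

    import math

    for n in (10, 100, 1000):
        print(f"n={n}: O(1)=1, O(n)={n}, "
              f"O(n log n)={int(n * math.log2(n))}, O(n^2)={n * n}")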
    3. Bubble Sort
    Bubble Sort compares two numbers and swaps them if needed. It repeats until the list is sorted.
    Time Complexity:

    Best Case: O(n) (if already sorted)
    Average Case: O(n²)
    Worst Case: O(n²)

    Python Example:

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n):
            for j in range(n - i - 1):
                if arr[j] > arr[j+1]:
                    arr[j], arr[j+1] = arr[j+1], arr[j]

    Java Example:

    void bubbleSort(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n-1; i++)
            for (int j = 0; j < n-i-1; j++)
                if (arr[j] > arr[j+1]) {
                    int temp = arr[j];
                    arr[j] = arr[j+1];
                    arr[j+1] = temp;
                }
    }

    C++ Example:

    void bubbleSort(int arr[], int n) {
        for (int i = 0; i < n-1; i++)
            for (int j = 0; j < n-i-1; j++)
                if (arr[j] > arr[j+1])
                    swap(arr[j], arr[j+1]);
    }
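    A quick usage check of the Python version (the sample list is ours):

    data = [5, 1, 4, 2, 8]
    bubble_sort(data)  # sorts the list in place
    print(data)        # [1, 2, 4, 5, 8]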

    4. Selection Sort
    This sort picks the smallest number and places it at the front.
    Time Complexity:

    Best Case: O(n²)
    Average Case: O(n²)
    Worst Case: O(n²)

    Python Example:

    def selection_sort(arr):
        for i in range(len(arr)):
            min_idx = i
            for j in range(i+1, len(arr)):
                if arr[j] < arr[min_idx]:
                    min_idx = j
            arr[i], arr[min_idx] = arr[min_idx], arr[i]

    5. Insertion Sort
    This algorithm builds the final list one item at a time.
    Time Complexity:

    Best Case: O(n)
    Average Case: O(n²)
    Worst Case: O(n²)

    Java Example:

    void insertionSort(int arr[]) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }
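    For readers following along in Python, here is an equivalent insertion sort sketch (our addition, mirroring the Java logic above):

    def insertion_sort(arr):
        # Grow a sorted prefix one element at a time.
        for i in range(1, len(arr)):
            key = arr[i]
            j = i - 1
            # Shift larger elements right to open a slot for key.
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key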

    6. Merge Sort
    Merge Sort splits the array into halves and merges them back in order.
    The time complexity of divide-and-conquer sorts like Merge Sort is usually better:

    Best Case: O(n log n)
    Average Case: O(n log n)
    Worst Case: O(n log n)

    Python Example:

    def merge_sort(arr):
        if len(arr) > 1:
            mid = len(arr) // 2
            left = arr[:mid]
            right = arr[mid:]
            merge_sort(left)
            merge_sort(right)
            i = j = k = 0
            while i < len(left) and j < len(right):
                if left[i] < right[j]:
                    arr[k] = left[i]
                    i += 1
                else:
                    arr[k] = right[j]
                    j += 1
                k += 1
            arr[k:] = left[i:] + right[j:]

    7. Quick Sort
    Quick Sort picks a pivot and places smaller numbers before it.
    Time Complexity:

    Best Case: O(n log n)
    Average Case: O(n log n)
    Worst Case: O(n²)

    C++ Example:

    int partition(int arr[], int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                swap(arr[i], arr[j]);
            }
        }
        swap(arr[i+1], arr[high]);
        return i + 1;
    }

    void quickSort(int arr[], int low, int high) {
        if (low < high) {
            int pi = partition(arr, low, high);
            quickSort(arr, low, pi - 1);
            quickSort(arr, pi + 1, high);
        }
    }
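    For completeness, a minimal Python sketch of the same algorithm (our addition, using the same Lomuto partition scheme as the C++ example):

    def partition(arr, low, high):
        pivot = arr[high]  # Lomuto scheme: last element is the pivot
        i = low - 1
        for j in range(low, high):
            if arr[j] < pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        return i + 1

    def quick_sort(arr, low, high):
        if low < high:
            pi = partition(arr, low, high)
            quick_sort(arr, low, pi - 1)
            quick_sort(arr, pi + 1, high)

    Call it as quick_sort(data, 0, len(data) - 1).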

    8. Built-in Sort Methods
    Languages have built-in sort functions. These are well-optimized.

    Python: sorted() or list.sort() uses TimSort (Time Complexity: O(n log n))

    Java: Arrays.sort() uses Dual-Pivot QuickSort (Time Complexity: O(n log n))

    C++: std::sort() uses IntroSort (Time Complexity: O(n log n))

    These are better for most real-world tasks.
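    A short Python demonstration of the built-ins (our addition): sorted() returns a new list while list.sort() works in place, and both accept a key function.

    nums = [3, 1, 2]
    print(sorted(nums))      # [1, 2, 3]; nums itself is unchanged
    nums.sort(reverse=True)  # sorts in place, descending
    print(nums)              # [3, 2, 1]

    words = ["banana", "fig", "apple"]
    print(sorted(words, key=len))  # ['fig', 'apple', 'banana']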
    9. Time Complexity Comparison Table
    Algorithm         Best        Average     Worst       Stable
    Bubble Sort       O(n)        O(n²)       O(n²)       Yes
    Selection Sort    O(n²)       O(n²)       O(n²)       No
    Insertion Sort    O(n)        O(n²)       O(n²)       Yes
    Merge Sort        O(n log n)  O(n log n)  O(n log n)  Yes
    Quick Sort        O(n log n)  O(n log n)  O(n²)       No
    TimSort (Python)  O(n)        O(n log n)  O(n log n)  Yes
    IntroSort (C++)   O(n log n)  O(n log n)  O(n log n)  No
    10. How to Choose the Right Algorithm?

    Use Merge Sort for large stable data.
    Use Quick Sort for faster average speed.
    Use Insertion Sort for small or nearly sorted lists.
    Use built-in sort functions unless you need control.
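    One way to sanity-check these guidelines on your own data is a quick timing comparison. Below is a rough Python sketch (our addition; it assumes the bubble_sort function defined earlier in this article is in scope):

    import random
    import timeit

    data = [random.random() for _ in range(2000)]

    # Each call sorts a fresh copy so the input stays unsorted.
    t_bubble = timeit.timeit(lambda: bubble_sort(data.copy()), number=3)
    t_builtin = timeit.timeit(lambda: sorted(data), number=3)
    print(f"bubble_sort: {t_bubble:.3f}s, sorted(): {t_builtin:.3f}s")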

    Conclusion
    Understanding the time complexity of sorting algorithms helps us pick the right tool. Bubble, Selection, and Insertion Sort are simple but slow. Merge and Quick Sort are faster and used often. Built-in functions are highly optimized. Python, Java, and C++ each have their strengths.
    Understand your problem and input size. Then pick the sorting method. This ensures better speed and performance in your code.
    Tech World Times (TWT) is a global collective focusing on the latest tech news and trends in blockchain, Fintech, Development & Testing, AI, and Startups. If you are looking to contribute a guest post, contact techworldtimes@gmail.com.
  • Monitoring and Support Engineer at Keyword Studios

    Monitoring and Support Engineer
    Keyword Studios
    Pasig City, Metro Manila, Philippines

    We are seeking an experienced Monitoring and Support Engineer to support the technology initiatives of the IT Infrastructure team at Keywords. The Monitoring and Support Engineer will be responsible for follow-the-sun monitoring of IT infrastructure, prompt reaction to all infrastructure incidents, and primary resolution of infrastructure incidents and support requests.

    Responsibilities
    Full scope of tasks, including but not limited to:
    Ensure that all incidents are handled within SLAs.
    Perform initial troubleshooting of infrastructure incidents, restore services, and escalate to level 3 experts if necessary.
    Ensure maximum network and service availability through proactive monitoring.
    Ensure all incident and alert tickets contain detailed technical information.
    Participate in problem management processes.
    Ensure that all incidents and critical alerts are documented and escalated if necessary.
    Ensure effective communication to customers about incidents and outages.
    Identify opportunities for process improvement and efficiency enhancements.
    Participate in documentation creation to reduce BAU support activities by ensuring that the Service Desks have adequate knowledge articles to close support tickets at level 1.
    Participate in reporting on monitored data and incidents on company infrastructure.
    Implement best practices and lessons learned from initiatives and projects to optimize future outcomes.

    Requirements
    Bachelor's degree in a relevant technical field or equivalent experience.
    Understanding of IT infrastructure technologies, standards, and trends.
    Technical background with 3+ years' experience in an IT operations role delivering IT infrastructure support, monitoring, and incident management.
    Technical knowledge of the Microsoft stack: Windows networking, Active Directory, Exchange.
    Technical knowledge of network, storage, and server equipment, virtualization, and production setups.
    Exceptional communication and presentation skills, with the ability to articulate technical concepts to non-technical audiences.
    Strong analytical and problem-solving skills.
    Strong customer service orientation.

    Benefits
    Great Place to Work certified for 4 consecutive years
    Flexible work arrangement
    Global exposure
  • Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour

    A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbox Ally X will cost €899.

    Posted By Joelle Daniels | On 16th, Jun. 2025

    While Microsoft and Asus have unveiled the ROG Xbox Ally and ROG Xbox Ally X handheld gaming systems, the companies have yet to confirm prices or release dates for the two systems. While the announcement mentioned that they will launch later this year, a new report, courtesy of leaker Extas1s, indicates that pre-orders for both devices will kick off in August, with the launch following in October.

    As noted by Extas1s, the lower-powered ROG Xbox Ally is expected to be priced around €599. The leaker claims to have corroborated the pricing details with two different Europe-based retailers. The more powerful ROG Xbox Ally X is expected to be priced at €899, which would put it in line with Asus's own ROG Ally X.

    Previously, Asus senior manager of marketing content for gaming, Whitson Gordon, had revealed that pricing and power use were the two biggest reasons why neither the ROG Xbox Ally nor the ROG Xbox Ally X features an OLED display. Instead, both systems will come equipped with 7-inch 1080p 120 Hz LCD displays with variable refresh rate (VRR) capabilities.

    "We did some R&D and prototyping with OLED, but it's still not where we want it to be when you factor VRR into the mix and we aren't willing to give up VRR," said Gordon. "I'll draw that line in the sand right now. I am of the opinion that if a display doesn't have variable refresh rate, it's not a gaming display in the year 2025 as far as I'm concerned, right? That's a must-have feature, and OLED with VRR right now draws significantly more power than the LCD that we're currently using on the Ally and it costs more."

    Explaining that the decision ultimately also came down to keeping pricing for both systems at reasonable levels, since buyers often get handheld gaming systems as secondary machines, Gordon noted that both handhelds would have much higher price tags if OLED displays were used.

    "That's all I'll say about price," said Gordon. "You have to align your expectations with the market and what we're doing here. Adding 32GB, OLED, Z2 Extreme, and all of those extra bells and whistles would cost a lot more than the price bracket you guys are used to on the Ally, and the vast majority of users are not willing to pay that kind of price."

    Shortly after the announcement, Microsoft and Asus released a video in which the two companies spoke about the various features of the ROG Xbox Ally and ROG Xbox Ally X. The video also shows an early hardware prototype of the handheld built inside a cardboard box.

    The ROG Xbox Ally runs on an AMD Ryzen Z2A chip and has 16 GB of LPDDR5X-6400 RAM and 512 GB of storage. The ROG Xbox Ally X runs on an AMD Ryzen Z2 Extreme chip and has 24 GB of LPDDR5X-8000 RAM and 1 TB of storage. Both systems run Windows.

    Elden Ring: Nightreign
    Publisher:Bandai Namco Developer:FromSoftware Platforms:PS5, Xbox Series X, PS4, Xbox One, PCView More
    FBC: Firebreak
    Publisher:Remedy Entertainment Developer:Remedy Entertainment Platforms:PS5, Xbox Series X, PCView More
    Death Stranding 2: On the Beach
    Publisher:Sony Developer:Kojima Productions Platforms:PS5View More
    Amazing Articles You Might Want To Check Out!

    Summer Game Fest 2025 Saw 89 Percent Growth in Live Concurrent Viewership Since Last Year This year's Summer Game Fest has been the most successful one so far, with around 1.5 million live viewers on ...
    Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbo...
    Borderlands 4 Gets New Video Explaining the Process of Creating Vault Hunters According to the development team behind Borderlands 4, the creation of Vault Hunters is a studio-wide collabo...
    The Witcher 4 Team is Tapping Into the “Good Creative Chaos” From The Witcher 3’s Development Narrative director Philipp Weber says there are "new questions we want to answer because this is supposed to f...
    The Witcher 4 is Opting for “Console-First Development” to Ensure 60 FPS, Says VP of Tech However, CD Projekt RED's Charles Tremblay says 60 frames per second will be "extremely challenging" on the Xb...
    Red Dead Redemption Voice Actor Teases “Exciting News” for This Week Actor Rob Wiethoff teases an announcement, potentially the rumored release of Red Dead Redemption 2 on Xbox Se... View More
    #asus #rog #xbox #ally #start
    Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour
    Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbox Ally X will cost €899. Posted By Joelle Daniels | On 16th, Jun. 2025 While Microsoft and Asus have unveiled the ROG Xbox Ally and ROG Xbox Ally X handheld gaming systems, the companies have yet to confirm the prices or release dates for the two systems. While the announcement  mentioned that they will be launched later this year, a new report, courtesy of leaker Extas1s, indicates that pre-orders for both devices will be kicked off in August, with the launch then happening in October. As noted by Extas1s, the lower-powered ROG Xbox Ally is expected to be priced around €599. The leaker claims to have corroborated the pricing details for the handheld with two different Europe-based retailers. The more powerful ROG Xbox Ally X, on the other hand, is expected to be priced at €899. This would put its pricing in line with Asus’s own ROG Ally X. Previously, Asus senior manager of marketing content for gaming, Whitson Gordon, had revealed that pricing and power use were the two biggest reasons why both the ROG Xbox Ally and the ROG Xbox Ally X didn’t feature OLED displays. Rather, both systems will come equipped with 7-inch 1080p 120 Hz LCD displays with variable refresh rate capabilities. “We did some R&D and prototyping with OLED, but it’s still not where we want it to be when you factor VRR into the mix and we aren’t willing to give up VRR,” said Gordon. “I’ll draw that line in the sand right now. I am of the opinion that if a display doesn’t have variable refresh rate, it’s not a gaming display in the year 2025 as far as I’m concerned, right? That’s a must-have feature, and OLED with VRR right now draws significantly more power than the LCD that we’re currently using on the Ally and it costs more.” Explaining further that the decision ultimately also came down to keeping the pricing for both systems at reasonable levels, since buyers often tend to get handheld gaming systems as their secondary machiens, Gordon noted that both handhelds would have much higher price tags if OLED displays were used. “That’s all I’ll say about price,” said Gordon. “You have to align your expectations with the market and what we’re doing here. Adding 32GB, OLED, Z2 Extreme, and all of those extra bells and whistles would cost a lot more than the price bracket you guys are used to on the Ally, and the vast majority of users are not willing to pay that kind of price.” Shortly after its announcement, Microsoft and Asus had released a video where the two companies spoke about the various features of the ROG Xbox Ally and ROG Xbox Ally X. In the video, we also get to see an early hardware prototype of the handheld gaming system built inside a cardboard box. The ROG Xbox Ally runs on an AMD Ryzen Z2A chip, and has 16 GB of LPDDR5X-6400 RAM and 512 GB of storage. The ROG Xbox Ally X, on the other hand, runs on an AMD Ryzen Z2 Extreme chip, and has 24 GB of LPDDR5X-8000 RAM and 1 TB of storage. Both systems run on Windows. Tagged With: Elden Ring: Nightreign Publisher:Bandai Namco Developer:FromSoftware Platforms:PS5, Xbox Series X, PS4, Xbox One, PCView More FBC: Firebreak Publisher:Remedy Entertainment Developer:Remedy Entertainment Platforms:PS5, Xbox Series X, PCView More Death Stranding 2: On the Beach Publisher:Sony Developer:Kojima Productions Platforms:PS5View More Amazing Articles You Might Want To Check Out! 
  • Oppo K13x 5G India Launch Date Set for June 23; Price Range, Key Features Revealed

Oppo K13x 5G will be introduced in the Indian market later this month. The launch date has been announced, and the company has revealed some key specifications of the upcoming smartphone. It will be placed in the sub-Rs. 15,000 segment in the country and will be available in 4GB and 6GB RAM variants. The handset is claimed to offer the toughest build in its segment, and it is confirmed to come with an IP65 rating, SGS Gold Drop-Resistance, SGS Military Standard, and MIL-STD 810-H durability certifications.

    Oppo K13x 5G India Launch: All We Know

    The Oppo K13x 5G will launch in India on June 23 at 12pm IST, the company confirmed in a press release, and will be priced under Rs. 15,000. It will be available exclusively via Flipkart, in Midnight Violet and Sunset Peach colour options.

    Oppo revealed that the K13x 5G will be powered by a MediaTek Dimensity 6300 chipset. It will be available in 4GB and 6GB RAM options with 128GB of onboard storage, and will ship with Android 15-based ColorOS 15. It will support Google Gemini as well as other productivity features like AI Summary, AI Recorder, and AI Studio.

    The company has equipped the Oppo K13x 5G with a 6,000mAh battery with 45W SuperVOOC charging support, it further revealed in the press release. The phone will carry a 50-megapixel AI-backed dual rear camera unit and will support AI-backed imaging features like AI Eraser, AI Unblur, AI Reflection Remover, and AI Clarity Enhancer.

    Previously, Oppo revealed that the upcoming K13x 5G will come with an AM04 high-strength aluminium alloy middle frame and a 360-degree Damage-Proof Armour Body. It is claimed to meet the IP65 rating for dust and water resistance. Alongside the MIL-STD 810-H shock resistance certification, it will also carry SGS Gold Drop-Resistance and SGS Military Standard certifications.

The Oppo K13x 5G’s build makes use of a biomimetic Sponge Shock Absorption System inspired by sea sponges, which is claimed to improve shock resistance. Its display will support Splash Touch and Glove Touch modes and will feature Crystal Shield glass protection.
  • MillerKnoll opens new design archive showcasing over one million objects from the company’s history

In a 12,000-square-foot warehouse in Zeeland, Michigan, hundreds of chairs, sofas, and loveseats rest on open storage racks. Their bold colors and elegant forms stand in striking contrast to the industrial setting. A plush recliner, seemingly made for sinking into, sits beside a mesh desk chair like those found in generic office cubicles. Nearby, a rare prototype of the Knoll Womb® Chair, gifted by Eero Saarinen to his mother, blooms open like a flower, inviting someone to sit. There’s also mahogany furniture designed by Gilbert Rohde for Herman Miller, originally unveiled at the 1933 World’s Fair; early office pieces by Florence Knoll; and a sculptural paper lamp by Isamu Noguchi. This is the newly unveiled MillerKnoll Archive, a space that honors the distinct legacies of its formerly rival brands. In collaboration with New York–based design firm Standard Issue, MillerKnoll has created a permanent display of its most iconic designs at the company’s Michigan Design Yard headquarters.

    In the early 1920s, Dutch-born businessman Herman Miller became the majority stakeholder in a Zeeland, Michigan, company where his son-in-law served as president. Following the acquisition, Star Furniture Co. was renamed the Herman Miller Furniture Company. Meanwhile, across the Atlantic in Stuttgart, Germany, Walter Knoll joined his family’s furniture business and formed close ties with modernist pioneers Ludwig Mies van der Rohe and Walter Gropius, immersing himself in the Bauhaus movement as Germany edged toward war. 
Just before the outbreak of World War II, Walter Knoll’s son Hans Knoll relocated to the United States and established his own furniture company in New York City. Around the same time, Michigan native Florence Schust was studying at the Cranbrook Academy of Art under Eliel Saarinen. There, she met Eero Saarinen and Charles Eames. Schust, who later married Hans Knoll, and Saarinen would go on to become key designers for the company, while Eames would play a similarly pivotal role at Herman Miller—setting both firms on parallel paths in the world of modern design.
Formerly competitors, Herman Miller acquired Knoll four years ago in a $1.8 billion merger that formed MillerKnoll. The deal united two of the most influential names in American furniture, merging their storied design legacies and the iconic pieces that helped define modern design. Now, MillerKnoll is honoring the distinct histories of each brand through this new archive, a permanent home for the brands’ archival collections that also exhibits the evolution of modern design. The facility is organized into three distinct areas: an exhibition space, open storage, and a reading room.

    The facility’s first exhibition, Manufacturing Modern, explores the intertwined histories of Knoll and Herman Miller. It showcases designs from the individuals who helped shape each company. The open storage area displays over 300 pieces of modern furniture, featuring both original works from Knoll and Herman Miller as well as contemporary designs. In addition to viewing the furniture pieces, visitors can kick back in the reading room, which offers access to a collection of archival materials, including correspondence, photography, drawings, and textiles.
“The debut of the MillerKnoll Archives invites our communities to experience design history – and imagine its future – in one dynamic space,” said MillerKnoll’s chief creative and product officer Ben Watson. “The ability to not only understand how iconic designs came to be, but how design solutions evolved over time, is a never-ending source of inspiration.”
    Exclusive tours of the archive will be available in July and August in partnership with the Cranbrook Art Museum and in October in partnership with Docomomo.