• NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through the lineup of available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
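    For developers curious about the hybrid quantum-classical workflow Huang described, here is a minimal sketch using the CUDA-Q Python API (the cudaq package). It samples a two-qubit Bell state on the default simulator; the commented-out GPU target selection is an assumption that requires supported NVIDIA hardware.

```python
# Minimal CUDA-Q sketch: prepare and sample a two-qubit Bell state.
# Assumes the cudaq Python package is installed.
import cudaq

# cudaq.set_target("nvidia")  # assumption: GPU-accelerated simulator target

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)     # allocate two qubits
    h(qubits[0])                  # put qubit 0 into superposition
    x.ctrl(qubits[0], qubits[1])  # entangle with a controlled-X
    mz(qubits)                    # measure both qubits

counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly even counts of "00" and "11"
```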
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
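    As a rough illustration of what access to an open model looks like in practice, the sketch below prompts a Nemotron-family checkpoint through the Hugging Face transformers library. The model ID is an example, not a statement of what NVIDIA or Perplexity deploys; substitute whichever open release you have access to.

```python
# Illustrative sketch: prompting an open Nemotron-family model with Hugging
# Face transformers. The model ID is an example; large checkpoints need
# substantial GPU memory, so swap in a smaller one to experiment.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A local-language prompt, in the spirit of multilingual deployment
prompt = "Réponds en français : quels sont les atouts de l'Europe en IA ?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```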
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as he talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang with Grek as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Hey, beautiful souls! Today, I want to shine a light on a topic that brings us hope and reminds us of the strength of justice. Recently, NSO Group, the infamous Israeli company known for its spyware Pegasus, faced a monumental verdict! They have been ordered to pay over $167 million in punitive damages to Meta for their unethical hacking campaign against WhatsApp users. Can you believe it? This is a HUGE win for all of us who value privacy and security!

    For five long years, this legal battle unfolded, shedding light on the dark practices of this tech giant. It’s a reminder that no matter how big the challenge, truth and justice will always prevail in the end. This ruling not only holds NSO Group accountable for their actions but also sends a powerful message to others in the tech industry. We must prioritize ethical practices and protect the rights of users around the globe!

    Let’s take a moment to celebrate the tireless efforts of those who fought for this victory! Every single person involved in this battle, from the lawyers to the advocates, showed that perseverance and belief in justice can lead to monumental change. Their dedication inspires us all to stand up for what is right, no matter how daunting the challenge may seem.

    This ruling is not just about money; it's about restoring faith in our digital world. It reminds us that we have the power to demand accountability from those who misuse technology. We can create a safer and more secure environment for everyone, where our privacy is respected, and our voices are heard!

    Let's keep that optimism alive! Use this moment as motivation to advocate for ethical tech practices, support companies that prioritize user security, and raise awareness about digital rights. Together, we can build a brighter future, where technology serves humanity positively and constructively!

    In conclusion, let’s celebrate this victory and continue to push for a world where every individual can feel safe in their digital interactions. Remember, every challenge is an opportunity for growth! Keep shining, keep fighting, and let your voice be heard! The future is bright, and it’s in our hands!

    #JusticeForUsers #EthicalTech #PrivacyMatters #DigitalRights #NSOGroup #Inspiration
    NSO Group ordered to pay a multimillion-dollar fine over the Pegasus spyware
    NSO Group, the Israeli company known for the Pegasus spyware, must pay more than $167 million in punitive damages to Meta over a hacking and malware campaign against WhatsApp users. So ruled…
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across our customers’ various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then you take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
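    To make the statistical-significance filter concrete, here is a minimal sketch that checks whether an observed engagement lift is signal or noise before anyone acts on it. The counts are hypothetical, and the test is a standard two-proportion z-test.

```python
# Sketch: is the engagement lift between a control and a variant significant?
# Counts are hypothetical; requires the statsmodels package.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 480]        # control, variant
impressions = [10_000, 10_000]  # sample sizes

stat, p_value = proportions_ztest(conversions, impressions)
if p_value < 0.05:
    print(f"Likely signal: lift is statistically significant (p={p_value:.4f})")
else:
    print(f"Likely noise: difference is not significant (p={p_value:.4f})")
```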

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing a company to adapt its products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
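    As one concrete reading of the segmentation analysis mentioned above, this small sketch clusters customers on a few engagement features to surface distinct, possibly underserved, segments. The features and values are hypothetical.

```python
# Sketch: k-means segmentation on simple engagement features.
# Feature values are hypothetical; requires scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: sessions/month, avg order value, support tickets
X = np.array([
    [12, 80, 0], [3, 20, 4], [15, 95, 1], [2, 15, 5],
    [11, 70, 0], [4, 25, 3], [14, 90, 1], [1, 10, 6],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
for seg in np.unique(labels):
    print(f"segment {seg}: mean profile {X[labels == seg].mean(axis=0)}")
```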

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization, adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    We also practice responsive social media engagement, monitoring sentiment and trending topics to quickly adapt the messaging and create timely, relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of market elements like subject lines and CTAs, building out the most successful variants of the campaigns.
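    Here is a minimal sketch of the automated budget reallocation described above, framed as an epsilon-greedy bandit over channels: most budget goes to the best-observed channel while a small fraction keeps exploring. The click-through rates are hypothetical; a production system would use far richer signals.

```python
# Sketch: epsilon-greedy budget reallocation across channels.
# True click-through rates are hidden from the allocator and hypothetical.
import random

channels = {"email": 0.04, "social": 0.06, "search": 0.05}
stats = {c: {"spend": 0, "clicks": 0} for c in channels}
epsilon = 0.1  # fraction of budget reserved for exploration

for _ in range(10_000):  # each iteration spends one unit of budget
    if random.random() < epsilon:
        choice = random.choice(list(channels))  # explore a random channel
    else:  # exploit the best observed click rate so far
        choice = max(stats, key=lambda c: stats[c]["clicks"] / max(stats[c]["spend"], 1))
    stats[choice]["spend"] += 1
    stats[choice]["clicks"] += int(random.random() < channels[choice])

print({c: s["spend"] for c, s in stats.items()})  # budget drifts toward "social"
```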

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
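    To illustrate the cross-channel context lookup, here is a toy in-memory event store keyed by customer ID, so any channel (web, ads, an in-person agent) can read the latest interactions. A real CDP adds identity resolution, consent handling, and durable storage; the names here are hypothetical.

```python
# Toy "CDP" sketch: unify events by customer ID for cross-channel context.
from collections import defaultdict
from datetime import datetime, timezone

events = defaultdict(list)  # customer_id -> ordered list of events

def track(customer_id: str, channel: str, action: str, item: str) -> None:
    events[customer_id].append({
        "ts": datetime.now(timezone.utc),
        "channel": channel, "action": action, "item": item,
    })

def recent_context(customer_id: str, n: int = 3) -> list:
    return events[customer_id][-n:]  # most recent interactions

track("cust-42", "web", "viewed_offer", "travel-card")
track("cust-42", "search_ad", "clicked", "travel-card")
# Next day, the in-branch agent pulls the same profile:
print(recent_context("cust-42"))
```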

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development — we have different products, not just the financial products or whatever else an organization sells, but also products like the mobile apps or websites customers use for transactions. So that kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits, by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, the cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest one is the huge amount of data we are dealing with. As customers become more digitally savvy and move to digital channels, we are getting a lot of data, and that sheer volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we grow and add more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, often lack the skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. Stopping that at step one — not bringing noise into the data system in the first place — cannot be done by technical folks alone, or by people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
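    As a small illustration of those cleansing steps, this pandas sketch standardizes formats, drops duplicates, and flags stale records. The field names and staleness cutoff are hypothetical.

```python
# Sketch: automated cleansing with pandas (standardize, dedupe, flag stale).
import pandas as pd

df = pd.DataFrame({
    "email": ["A@X.COM", "a@x.com", "b@y.com"],
    "signup": ["2024-01-05", "2024-01-05", "2019-03-02"],
})

df["email"] = df["email"].str.strip().str.lower()  # standardize format
df["signup"] = pd.to_datetime(df["signup"])        # consistent dtype
df = df.drop_duplicates(subset="email")            # remove duplicate records
stale = df["signup"] < pd.Timestamp("2020-01-01")  # flag stale entries
print(df.assign(stale=stale))
```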

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing, and the biggest trend of the past two years, is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, which is more one-on-one personalization. As we capture more and more engagement data and build bigger systems and infrastructure to process those large volumes, we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #ankur #kothari #qampampa #customer #engagement
    Ankur Kothari Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns. But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic. This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die” explores how businesses can translate raw data into actionable insights that drive real results. Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.   Ankur Kothari Q&A Interview 1. What types of customer engagement data are most valuable for making strategic business decisions? Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns. Second would be demographic information: age, location, income, and other relevant personal characteristics. Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews. Fourth would be the customer journey data. We track touchpoints across various channels of the customers to understand the customer journey path and conversion. Combining these four primary sources helps us understand the engagement data. 2. How do you distinguish between data that is actionable versus data that is just noise? First is keeping relevant to your business objectives, making actionable data that directly relates to your specific goals or KPIs, and then taking help from statistical significance. Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in. You also want to make sure that there is consistency across sources. Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory. Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy. By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions. 3. How can customer engagement data be used to identify and prioritize new business opportunities? First, it helps us to uncover unmet needs. By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points. Second would be identifying emerging needs. Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly. Third would be segmentation analysis. Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies. Last is to build competitive differentiation. Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions. 4. 
Can you share an example of where data insights directly influenced a critical decision? I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings. We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms. That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs. That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial. 5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time? When it comes to using the engagement data in real-time, we do quite a few things. In the recent past two, three years, we are using that for dynamic content personalization, adjusting the website content, email messaging, or ad creative based on real-time user behavior and preferences. We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments. Then we also build responsive social media engagement platforms like monitoring social media sentiments and trending topics to quickly adapt the messaging and create timely and relevant content. With one-on-one personalization, we do a lot of A/B testing as part of the overall rapid testing and market elements like subject lines, CTAs, and building various successful variants of the campaigns. 6. How are you doing the 1:1 personalization? We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer. So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer. That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience. We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers. 7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service? Primarily with product development — we have different products, not just the financial products or products whichever organizations sell, but also various products like mobile apps or websites they use for transactions. So that kind of product development gets improved. 
The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments. Customer service also gets helped by anticipating common issues, personalizing support interactions over the phone or email or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention. So in general, cross-functional application of engagement improves the customer-centric approach throughout the organization. 8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights? I think the huge amount of data we are dealing with. As we are getting more digitally savvy and most of the customers are moving to digital channels, we are getting a lot of data, and that sheer volume of data can be overwhelming, making it very difficult to identify truly meaningful patterns and insights. Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement. Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant. As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively. So there’s a lack of understanding of marketing and sales as domains. It’s a huge effort and can take a lot of investment. Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing. 9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data? If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge. Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side. Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important. 10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before? First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do. And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations. The second is helping teams work more collaboratively. 
So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it. Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one. 11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations? We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI. We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals. We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization. 12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data? I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points. Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us. We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels. Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms. Last is web analytical tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics, for browser apps, small browser apps, various devices, and mobile apps. 13. How do you ensure data quality and consistency across multiple channels to make these informed decisions? We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies. While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing. We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats. On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically. 14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years? The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices. Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities. We also touched upon hyper-personalization. 
We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases. As the world is collecting more data, privacy concerns and regulations come into play. I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies. And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture. So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.   This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage. #ankur #kothari #qampampa #customer #engagement
  • Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence (AI), already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding (MoU) to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower their teams.
    A chatbot built on the Azure OpenAI Service helps staff navigate technical knowledge across more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps resolve IT support tickets – the everyday but essential tasks that keep the lights on. 
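    The article doesn’t describe how the ITER chatbot is built internally. As a rough, hypothetical sketch of the general pattern (a chat model grounded in retrieved document excerpts), the code below uses the Azure OpenAI client from the openai Python SDK; the endpoint, deployment name, API version and retrieval function are placeholder assumptions, not details of the ITER system.

```python
# Hypothetical sketch of a document-grounded chatbot using the Azure OpenAI
# client from the openai Python SDK. Endpoint, deployment name, API version,
# and the retrieval step are placeholders, not details of the ITER system.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def search_documents(query: str, top_k: int = 5) -> list[str]:
    # Placeholder retrieval step: a real system would query an index of the
    # document corpus (vector or keyword search) and return matching excerpts.
    return ["<excerpt relevant to the query>"][:top_k]

def answer(question: str) -> str:
    # Ground the model's answer in retrieved excerpts rather than raw recall.
    context = "\n\n".join(search_documents(question))
    response = client.chat.completions.create(
        model="my-gpt-deployment",  # your Azure deployment name (placeholder)
        messages=[
            {"role": "system",
             "content": "Answer using only these document excerpts:\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```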
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
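    Neither model’s programming interface is described in the article, but the generate-then-simulate pattern Takeda outlines is easy to sketch. In the hypothetical Python below, generate_candidates and simulate stand in for a generative materials model and a property simulator; the property names, thresholds and numbers are illustrative only, not the real MatterGen or MatterSim interfaces.

```python
# Generic generate-then-screen loop in the spirit of pairing a generative
# materials model with a learned property simulator. All functions and numbers
# are hypothetical stand-ins.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    composition: str
    melting_k: float    # predicted melting point (kelvin)
    stability: float    # predicted stability score in [0, 1]

def generate_candidates(n: int) -> list[Candidate]:
    # Stand-in for a generative model proposing structures for a target spec;
    # here we simply fabricate toy candidates.
    return [Candidate(f"alloy_{i}", random.uniform(2000, 4000),
                      random.uniform(0.5, 1.0)) for i in range(n)]

def simulate(c: Candidate) -> Candidate:
    # Stand-in for a simulator re-evaluating each candidate more carefully;
    # here we simply perturb the generator's estimates.
    return Candidate(c.composition, c.melting_k * random.uniform(0.95, 1.05),
                     c.stability * random.uniform(0.95, 1.0))

def screen(min_melting_k: float, min_stability: float, n: int = 1000) -> list[Candidate]:
    pool = [simulate(c) for c in generate_candidates(n)]  # generate, then simulate
    return [c for c in pool
            if c.melting_k >= min_melting_k and c.stability >= min_stability]

# Illustrative first-wall-style spec: refractory and stable.
survivors = screen(min_melting_k=3000.0, min_stability=0.9)
print(f"{len(survivors)} of 1000 candidates pass the simulated screen")
```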
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C (inter-integrated circuit) protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 
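    The article doesn’t say how Atlas drives instruments under the hood. For context, much bench equipment already exposes a scriptable interface over VISA/SCPI, and the sketch below shows what that lower-level layer looks like with the pyvisa library; the resource address and the instrument-specific commands are placeholders that vary by vendor.

```python
# Illustrative only: scripting an oscilloscope over VISA/SCPI, the kind of
# software interface an AI agent could drive instead of front-panel knobs.
# The resource address and the non-* commands are vendor-specific placeholders.
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("USB0::0x1234::0x5678::SN0001::INSTR")  # placeholder

print(scope.query("*IDN?"))           # standard SCPI identification query
scope.write(":TIMebase:SCALe 1e-6")   # e.g. set 1 microsecond per division
scope.write(":TRIGger:MODE EDGE")     # e.g. trigger on signal edges
samples = scope.query_ascii_values(":WAVeform:DATA?")  # fetch captured samples
print(f"captured {len(samples)} samples")
```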

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
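    Brigantium’s tooling isn’t public, but the core of 4D planning, a join between schedule tasks and model elements, can be made concrete with a small sketch; every task name, date and element ID below is invented for illustration.

```python
# Conceptual sketch of 4D planning: joining schedule tasks to 3D model
# elements so any date maps to the set of elements to render. Task names,
# dates, and element IDs are invented for illustration.
from datetime import date

schedule = [
    {"task": "Pour bioshield base", "start": date(2025, 1, 6),
     "elements": {"slab_01", "rebar_01"}},
    {"task": "Install cryostat ring", "start": date(2025, 2, 17),
     "elements": {"ring_01", "ring_02"}},
]

def visible_elements(on: date) -> set[str]:
    """Return model elements whose installing task has started by a date."""
    shown: set[str] = set()
    for task in schedule:
        if task["start"] <= on:
            shown |= task["elements"]
    return shown

# Rendering one frame per week from visible_elements() yields the
# time-lapse-style animation described above.
print(visible_elements(date(2025, 2, 1)))  # {'slab_01', 'rebar_01'}
```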
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same one behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – not just as impressive visuals, but as critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
  • New Court Order in Stratasys v. Bambu Lab Lawsuit

    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit. 
    Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG. 
    Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a).
    On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward.
    Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases.
    This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits. 
    The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year. 
    Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.       
    A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry.
    Another twist in the Stratasys v. Bambu Lab lawsuit 
    Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities.
    Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers.
    Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice.
    It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab. 
    Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. MakerBot later merged with Ultimaker in 2022.
    In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas.
    Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.   
    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.  
    In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party.
    Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment.
    The Bambu Lab X1C 3D printer. Image via Bambu Lab.
    3D printing patent battles 
    The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit. 
    The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent.
    Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creality “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.  
    The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs. 
    In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.  
    San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer.
    3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content. Featured image shows a Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry.
    #new #court #order #stratasys #bambu
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member casesinto a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299. On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299. The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. Stratasys acquired MakerBot in June 2013. 
The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12. Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer.   
San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets. Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.Featured image shows a Stratasys Fortus 450mcand a Bambu Lab X1C. Image by 3D Printing industry. #new #court #order #stratasys #bambu
    3DPRINTINGINDUSTRY.COM
    New Court Order in Stratasys v. Bambu Lab Lawsuit
    There has been a new update to the ongoing Stratasys v. Bambu Lab patent infringement lawsuit.  Both parties have agreed to consolidate the lead and member cases (2:24-CV-00644-JRG and 2:24-CV-00645-JRG) into a single case under Case No. 2:25-cv-00465-JRG.  Industrial 3D printing OEM Stratasys filed the request late last month. According to an official court document, Shenzhen-based Bambu Lab did not oppose the motion. Stratasys argued that this non-opposition amounted to the defendants waiving their right to challenge the request under U.S. patent law 35 U.S.C. § 299(a). On June 2, the U.S. District Court for the Eastern District of Texas, Marshall Division, ordered Bambu Lab to confirm in writing whether it agreed to the proposed case consolidation. The court took this step out of an “abundance of caution” to ensure both parties consented to the procedure before moving forward. Bambu Lab submitted its response on June 12, agreeing to the consolidation. The company, along with co-defendants Shenzhen Tuozhu Technology Co., Ltd., Shanghai Lunkuo Technology Co., Ltd., and Tuozhu Technology Limited, waived its rights under 35 U.S.C. § 299(a). The court will now decide whether to merge the cases. This followed U.S. District Judge Rodney Gilstrap’s decision last month to deny Bambu Lab’s motion to dismiss the lawsuits.  The Chinese desktop 3D printer manufacturer filed the motion in February 2025, arguing the cases were invalid because its US-based subsidiary, Bambu Lab USA, was not named in the original litigation. However, it agreed that the lawsuit could continue in the Austin division of the Western District of Texas, where a parallel case was filed last year.  Judge Gilstrap denied the motion, ruling that the cases properly target the named defendants. He concluded that Bambu Lab USA isn’t essential to the dispute, and that any misnaming should be addressed in summary judgment, not dismissal.        A Stratasys Fortus 450mc (left) and a Bambu Lab X1C (right). Image by 3D Printing industry. Another twist in the Stratasys v. Bambu Lab lawsuit  Stratasys filed the two lawsuits against Bambu Lab in the Eastern District of Texas, Marshall Division, in August 2024. The company claims that Bambu Lab’s X1C, X1E, P1S, P1P, A1, and A1 mini 3D printers violate ten of its patents. These patents cover common 3D printing features, including purge towers, heated build plates, tool head force detection, and networking capabilities. Stratasys has requested a jury trial. It is seeking a ruling that Bambu Lab infringed its patents, along with financial damages and an injunction to stop Bambu from selling the allegedly infringing 3D printers. Last October, Stratasys dropped charges against two of the originally named defendants in the dispute. Court documents showed that Beijing Tiertime Technology Co., Ltd. and Beijing Yinhua Laser Rapid Prototyping and Mould Technology Co., Ltd were removed. Both defendants represent the company Tiertime, China’s first 3D printer manufacturer. The District Court accepted the dismissal, with all claims dropped without prejudice. It’s unclear why Stratasys named Beijing-based Tiertime as a defendant in the first place, given the lack of an obvious connection to Bambu Lab.  Tiertime and Stratasys have a history of legal disputes over patent issues. In 2013, Stratasys sued Afinia, Tiertime’s U.S. distributor and partner, for patent infringement. Afinia responded by suing uCRobotics, the Chinese distributor of MakerBot 3D printers, also alleging patent violations. 
Stratasys acquired MakerBot in June 2013. The company later merged with Ultimaker in 2022. In February 2025, Bambu Lab filed a motion to dismiss the original lawsuits. The company argued that Stratasys’ claims, focused on the sale, importation, and distribution of 3D printers in the United States, do not apply to the Shenzhen-based parent company. Bambu Lab contended that the allegations concern its American subsidiary, Bambu Lab USA, which was not named in the complaint filed in the Eastern District of Texas. Bambu Lab filed a motion to dismiss, claiming the case is invalid under Federal Rule of Civil Procedure 19. It argued that any party considered a “primary participant” in the allegations must be included as a defendant.    The court denied the motion on May 29, 2025. In the ruling, Judge Gilstrap explained that Stratasys’ allegations focus on the actions of the named defendants, not Bambu Lab USA. As a result, the official court document called Bambu Lab’s argument “unavailing.” Additionally, the Judge stated that, since Bambu Lab USA and Bambu Lab are both owned by Shenzhen Tuozhu, “the interest of these two entities align,” meaning the original cases are valid.   In the official court document, Judge Gilstrap emphasized that Stratasys can win or lose the lawsuits based solely on the actions of the current defendants, regardless of Bambu Lab USA’s involvement. He added that any potential risk to Bambu Lab USA’s business is too vague or hypothetical to justify making it a required party. Finally, the court noted that even if Stratasys named the wrong defendant, this does not justify dismissal under Rule 12(b)(7). Instead, the judge stated it would be more appropriate for the defendants to raise that argument in a motion for summary judgment. The Bambu Lab X1C 3D printer. Image via Bambu Lab. 3D printing patent battles  The 3D printing industry has seen its fair share of patent infringement disputes over recent months. In May 2025, 3D printer hotend developer Slice Engineering reached an agreement with Creality over a patent non-infringement lawsuit.  The Chinese 3D printer OEM filed the lawsuit in July 2024 in the U.S. District Court for the Northern District of Florida, Gainesville Division. The company claimed that Slice Engineering had falsely accused it of infringing two hotend patents, U.S. Patent Nos. 10,875,244 and 11,660,810. These cover mechanical and thermal features of Slice’s Mosquito 3D printer hotend. Creality requested a jury trial and sought a ruling confirming it had not infringed either patent. Court documents show that Slice Engineering filed a countersuit in December 2024. The Gainesville-based company maintained that Creaility “has infringed and continues to infringe” on both patents. In the filing, the company also denied allegations that it had harassed Creality’s partners, distributors, and customers, and claimed that Creality had refused to negotiate a resolution.   The Creality v. Slice Engineering lawsuit has since been dropped following a mutual resolution. Court documents show that both parties have permanently dismissed all claims and counterclaims, agreeing to cover their own legal fees and costs.  In other news, large-format resin 3D printer manufacturer Intrepid Automation sued 3D Systems over alleged patent infringement. The lawsuit, filed in February 2025, accused 3D Systems of using patented technology in its PSLA 270 industrial resin 3D printer. 
    The filing called the PSLA 270 a “blatant knock off” of Intrepid’s DLP multi-projection “Range” 3D printer. San Diego-based Intrepid Automation called this alleged infringement the “latest chapter of 3DS’s brazen, anticompetitive scheme to drive a smaller competitor with more advanced technology out of the marketplace.” The lawsuit also accused 3D Systems of corporate espionage, claiming one of its employees stole confidential trade secrets that were later used to develop the PSLA 270 printer. 3D Systems denied the allegations and filed a motion to dismiss the case. The company called the lawsuit “a desperate attempt” by Intrepid to distract from its own alleged theft of 3D Systems’ trade secrets.
  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read
    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat
    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty – it can actually boost power grid reliability and help bring more renewable energy online.
    By Johanna Mathieu & The Conversation US | depotpro/Getty Images
    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?
    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.
    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.
    Studies along these lines, using remote control of air conditioners to support the grid, have explored theoretical possibilities like this for many years. However, few approaches have been demonstrated in practice, and never for such a high-value application at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.
    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.
    Adjustable equipment
    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust how much energy it consumes at different times.
    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.
    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study.
    These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.
    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.
    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops, and a power plant boosts output. If there’s too much power being produced, the frequency rises, and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.
    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.
    Finding new options
    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.
    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.
    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.
    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.
    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.
    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz. Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.
    Testing the approach
    We ran our system in four tests, each lasting one hour.
    We found two encouraging results. First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant, showing that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.
    Home temperatures did not deviate by more than 1.6 degrees Fahrenheit from their set points. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t: For most tests, we received zero override requests, and in the worst case we received override requests from two of the 100 homes.
    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party.
    Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.
    This article was originally published on The Conversation. Read the original article.
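    The switching scheme described above can be pictured with a toy simulation. The sketch below is a simplified illustration with invented numbers (comfort band, heat-gain and cooling rates, per-unit draw), not the study’s actual controller: each unit keeps its home inside a comfort band around the set point, while a shared grid-frequency signal shifts every unit’s on/off thresholds within that band, so the fleet’s total draw rises and falls the way a regulating power plant’s output would.

```python
import random

NOMINAL_HZ = 60.0
BAND_F = 2.0        # comfort band: set point +/- 2 F (illustrative)
RATED_KW = 3.5      # assumed compressor draw per running unit (illustrative)

class AirConditioner:
    def __init__(self, set_point_f):
        self.sp = set_point_f
        self.temp = set_point_f + random.uniform(-BAND_F, BAND_F)
        self.on = random.random() < 0.5

    def step(self, grid_hz):
        # Low frequency means the grid needs less load, so shift both
        # switching thresholds upward: units turn off sooner and back on
        # later. High frequency shifts them down, absorbing surplus power.
        # The shift is capped so temperature never leaves the comfort band.
        shift = max(-1.0, min(1.0, (NOMINAL_HZ - grid_hz) / 0.05))
        on_at = self.sp + 0.5 * BAND_F + 0.5 * BAND_F * shift
        off_at = self.sp - 0.5 * BAND_F + 0.5 * BAND_F * shift
        if self.on:
            self.temp -= 0.3                  # compressor cools the house
            if self.temp <= off_at:
                self.on = False
        else:
            self.temp += 0.2                  # summer heat gain
            if self.temp >= on_at:
                self.on = True
        return RATED_KW if self.on else 0.0

fleet = [AirConditioner(72.0) for _ in range(100)]
for hz in (60.00, 59.97, 59.94, 60.03):       # sample frequency readings
    kw = sum(ac.step(hz) for ac in fleet)
    print(f"{hz:.2f} Hz -> fleet draw {kw:.0f} kW")
```

    Because the threshold shift is capped at the edges of the comfort band, the fleet trades only switching timing for grid support, never comfort, which is consistent with the small temperature deviations reported above.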
  • Tony Hawk’s Pro Skater 3 + 4 — Returning Skaters

    The roster of skaters originally featured in Tony Hawk’s™ Pro Skater™ 3 and Tony Hawk’s™ Pro Skater™ 4 helped to further catapult skateboarding culture into the mainstream as big names like Bob Burnquist, Steve Caballero, Elissa Steamer, and Chad Muska joined Tony Hawk in a stacked roster of award-winning pro skaters capable of shredding in and out of the game.
    In this feature, following the Demo announcement and the full soundtrack reveal, we’re proud to share the full roster of returning skaters in the upcoming Tony Hawk’s™ Pro Skater™ 3 + 4, arriving on PlayStation 4 and 5, Xbox Series X|S, Xbox One, Nintendo Switch, Nintendo Switch 2, and PC (Battle.net, Steam, Microsoft PC Store).
    Tony Hawk’s Pro Skater 3 + 4 launches on July 11.
    THPS 3 + 4: Returning Skaters

    From gold medalists to progenitors of some of today’s most iconic skateboarding tricks, these classic skaters were instrumental in bringing skateboarding culture to a wider audience. Mixing courage, creativity, and an iron will, they’re more than ready to tackle any obstacle put before them.
    “Being in the original games was epic!” shares Elissa Steamer, who was the first playable female skater in the original Tony Hawk’s Pro Skater game. “It was semi-life changing. I can’t say enough about how stoked I was – and am now! – to be in the games.”
    “From the moment Tony asked, it was an honor, yet I had no idea of what it would come to mean,” says Rodney Mullen, originator of the kickflip and largely considered one of the most influential skaters in the sport. “The first time I showed up on tour after the release of the game, I recall ‘em shop owners having to put me on top of the tour van roof to manage so that I could sign things in all the madness. The crowd was rocking the van back and forth! [It] blew my mind, the impact it had.”
    “The game attracted such a broader group of skaters, which has elevated our community in layered ways: from tricks to societal acceptance to the respect we get from people who often thought otherwise, like parents discouraging their kids who were simply outsiders looking for a place to belong,” Mullen continues. “Skating is integrated with a culture, a way of being, more than pretty much any other sport I can think of. The way Tony’s game shows that via the music, art, and vibe batted this home. It’s cool to be understood.”
     When Tony Hawk’s Pro Skater 3 + 4 launches this July, here are the returning skaters ready to hit the pavement once again, including skaters featured in the original Tony Hawk’s Pro Skater 3 and Tony Hawk’s Pro Skater 4 games plus other titles in the series.
    Tony Hawk

    San Diego, California
    Style: Vert / Stance: Goofy
    Tony Hawk made history by landing the first ever 900 at the 1999 X Games, skyrocketing the sport into the mainstream. Today he remains the sport’s most iconic figure.
    Bob Burnquist

    Rio de Janeiro, Brazil
    Style: Vert / Stance: Regular
    Bob Burnquist shocked the skateboarding world when he landed the first Fakie 900. His iconic “Dreamland” skatepark is home to a permanent Mega Ramp.
    Bucky Lasek

    Baltimore, Maryland
    Style: Vert / Stance: Regular
    Known for his vert skills, Bucky has won 10 gold medals at the X Games and is one of only two vert skateboarders to have won three gold medals consecutively.
    Steve Caballero

    San Jose, California
    Style: Vert / Stance: Goofy
    An iconic skateboarder responsible for inventing various vert tricks. He holds the record for the highest air ever achieved on a halfpipe.
    Kareem Campbell

    Harlem, New York
    Style: Street / Stance: Regular
    Called the godfather of smooth street style, Kareem left his mark by popularizing the skateboard trick, “The Ghetto Bird,” and founded City Stars Skateboards.
    Geoff Rowley

    Liverpool, England
    Style: Street / Stance: Regular
    Geoff Joseph Rowley Jr. is an English skateboarder and owner of Civilware Service Corporation. In 2000 he was crowned “Skater of the Year” by Thrasher Magazine.
    Andrew Reynolds

    North Hollywood, California
    Style: Street / Stance: Regular
    Co-founder and owner of Baker Skateboards, Andrew Reynolds turned pro in 1995 and won Thrasher Magazine’s “Skater of the Year” award just three years later.
    Elissa Steamer

    San Francisco, California
    Style: Street / Stance: Regular
    Elissa is a four-time X Games gold medalist, the first female skateboarder to go pro, and the first woman ever inducted into the Skateboarding Hall of Fame.
    Chad Muska

    Los Angeles, California
    Style: Street / Stance: Regular
    Artist, musician, and entrepreneur. Described by the Transworld Skateboarding editor-in-chief as “one of the most marketable pros skateboarding has ever seen.”
    Eric Koston

    Los Angeles, California
    Style: Street / Stance: Goofy
    Co-founder of Fourstar Clothing and the skate brand The Berrics, Eric is a master of street skateboarding and a two-time X Games gold medalist.
    Rodney Mullen

    Gainesville, Florida
    Style: Freestyle / Stance: Regular
    One of the most influential skateboarders of all time, Rodney Mullen is the progenitor of the Flatground Ollie, Kickflip, Heelflip, and dozens of other iconic tricks.
    Jamie Thomas

    Dothan, Alabama
    Style: Street / Stance: Regular
    Nicknamed “The Chief,” Jamie is the owner and founder of Zero Skateboards. He helped film 1996’s “Welcome to Hell,” one of the most iconic skate videos ever made.
    Rune Glifberg

    Copenhagen, Denmark
    Style: Vert / Stance: Regular
    Nicknamed “The Danish Destroyer,” Rune Glifberg is one of three skaters to have competed at every X Games, amassing over 12 medals at the competition.
    Aori Nishimura

    Tokyo, Japan
    Style: Street / Stance: Regular
    Born in Edogawa, Tokyo in Japan, Aori Nishimura started skateboarding at the age of 7 and went on to become the first athlete from Japan to win gold at the X Games.
    Leo Baker

    Brooklyn, New York
    Style: Street / Stance: Goofy
    Leo is the first non-binary and transgender professional skateboarder in the Pro Skater™ series and has won three gold medals, placing in over 32 competitions. 
    Leticia Bufoni

    São Paulo, Brazil
    Style: Street / Stance: Goofy
    Multiple world record holder and six-time gold medalist. Named the #1 women’s street skateboarder by World Cup of Skateboarding four years in a row.
    Lizzie Armanto

    Santa Monica, California
    Style: Park / Stance: Regular
    A member of the Birdhouse skate team, Lizzie has amassed over 30 skateboarding awards and was the first female skater to complete “The Loop,” a 360-degree ramp.
    Nyjah Huston

    Laguna Beach, California
    Style: Street / Stance: Goofy
    One of skateboarding’s biggest stars, Nyjah has earned over 12 X Games gold medals, 6 Championship titles, and a bronze medal at the 2024 Summer Olympics.
    Riley Hawk

    San Diego, California
    Style: Street / Stance: Goofy
    Riley Hawk decided to turn pro on his 21st birthday and became Skateboarder Magazine’s 2013 Amateur of the Year later that same day.
    Shane O’Neill

    Melbourne, Australia
    Style: Street / Stance: Goofy
    Australian skateboarder who is one of only a few skateboarders to win gold in all four major skateboarding contests, including the X Games and SLS.
    Tyshawn Jones

    Bronx, New York
    Style: Street / Stance: Regular
    A New York City native and two-time Thrasher Magazine “Skater of the Year” winner, Tyshawn Jones is the youngest skateboarder to ever achieve that accolade.

    The above skaters are far from the only icons you’ll encounter in the game’s large roster. Keep your eyes on the Tony Hawk’s Pro Skater blog found here for more info on Tony Hawk’s Pro Skater 3 + 4 as we approach its July 11 release date, including the full reveal of new skaters joining in on the fun. 

    Tony Hawk’s Pro Skater 3 + 4 rebuilds the original games from the ground up with classic and new skaters, parks, tricks, tracks, and more. Skate through a robust Career mode taking on challenges across two tours, chase high scores in Single Sessions and Speedruns, or go at your own pace in Free Skate.
    Get original with enhanced creation tools, go big in New Game+, and skate with your friends in cross-platform online multiplayer* supporting up to eight skaters at a time. New to the series? Hit up the in-game tutorial led by Tony Hawk himself to kick off your skating journey with tips on Ollies, kick flips, vert tricks, reverts, manuals, special tricks, and more.

    Don’t miss the Foundry Demo, available now, featuring playable skaters, two parks, and a selection of songs from the soundtrack. Pre-order Tony Hawk’s Pro Skater 3 + 4 on select platforms** for access to the demo and find more info here.

    Purchase the Digital Deluxe Edition and gain Early Access*** to play Tony Hawk’s™ Pro Skater™ 3 + 4 three days before the official July 11 launch date.
    Shred the parks and spread fear as the Doom Slayer and Revenant skaters, plus get extra music, skate decks, and Create-A-Skater gear:

    Doom Slayer: Play as Doom Slayer, featuring a Standard and Retro outfit plus two unique special tricks and the Unmaykr Hoverboard.
    Revenant: Get evil with the Revenant, including two unique special tricks.
    Additional Music: Headbang to a selection of classic and modern music tracks added to the in-game soundtrack.
    Skate Decks: Access additional skate decks including Doom Slayer and Revenant themed designs.
    Create-A-Skater Items: Kit out your skater with additional apparel items.

    Pre-orders are now available for Tony Hawk’s Pro Skater 3 + 4 on PlayStation 4 and 5, Xbox Series X|S, Xbox One, Nintendo Switch, and PC. For more information, visit tonyhawkthegame.com.
    * Activision account and internet required for online multiplayer and other features. Platform gaming subscription may be required for multiplayer and other features (sold separately).
    **Foundry Demo available on PlayStation 4 and 5, Xbox Series X|S, Xbox One, and PC. Not available on Nintendo Switch. Foundry Demo availability and launch date(s) subject to change. Internet connection required.
    *** Actual play time subject to possible outages and applicable time zone differences.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.
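
To see how pose-only supervision can train a matcher at all, here is a minimal toy sketch in Python (PyTorch). Everything in it is an illustrative assumption rather than FG2’s actual code: soft attention matches stand in for the paper’s sparse matching, and orientation is taken as known so each match can vote for the camera position directly.

```python
# Toy sketch of weakly supervised matching: the loss uses only the
# ground-truth camera position, never correspondence labels.
# Names, shapes, and the simple averaging step are illustrative.
import torch

def soft_camera_position(ground_feat, ground_xy, aerial_feat, aerial_xy):
    """ground_feat: (N, D) BEV descriptors; ground_xy: (N, 2) offsets from the camera.
    aerial_feat: (M, D) map descriptors; aerial_xy: (M, 2) map coordinates."""
    sim = ground_feat @ aerial_feat.T / ground_feat.shape[1] ** 0.5
    attn = torch.softmax(sim, dim=1)        # soft match distribution per ground point
    matched_xy = attn @ aerial_xy           # expected aerial location of each ground point
    # each match votes for the camera at (map location - BEV offset);
    # orientation is assumed known here to keep the toy minimal
    return (matched_xy - ground_xy).mean(0)

ground_feat = torch.randn(64, 32, requires_grad=True)
aerial_feat = torch.randn(256, 32, requires_grad=True)
ground_xy = torch.rand(64, 2) * 20 - 10     # BEV offsets in meters
aerial_xy = torch.rand(256, 2) * 100        # map coordinates in meters
gt_pos = torch.tensor([40.0, 55.0])         # ground-truth camera position

loss = torch.nn.functional.mse_loss(
    soft_camera_position(ground_feat, ground_xy, aerial_feat, aerial_xy), gt_pos)
loss.backward()  # gradients reach the matcher without any correspondence labels
```

The key property is the gradient path: the loss compares only the predicted position against ground truth, yet backpropagation still pushes the ground and aerial descriptors toward matches that are geometrically consistent.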

    Challenge: Seeing the World from Two Different Angles
The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
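
To make that last step concrete, here is a minimal sketch of weighted Procrustes alignment for 2D point sets, assuming matched points and confidence weights are already in hand. The function name, the weighting scheme, and the self-check values are illustrative assumptions, not the paper’s exact formulation.

```python
# Minimal weighted Procrustes alignment for 2D point sets (a sketch, not
# the FG2 implementation): recovers the rotation (yaw) and translation
# (x, y) that map ground-view BEV points onto aerial-map points.
import numpy as np

def procrustes_pose(ground_pts, aerial_pts, weights):
    """ground_pts, aerial_pts: (N, 2) matched points; weights: (N,) confidences."""
    w = weights / weights.sum()                   # normalize match confidences
    mu_g = (w[:, None] * ground_pts).sum(axis=0)  # weighted centroids
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)
    G = ground_pts - mu_g                         # center both point sets
    A = aerial_pts - mu_a
    H = (w[:, None] * G).T @ A                    # 2x2 weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T            # optimal proper rotation
    t = mu_a - R @ mu_g                           # translation (x, y)
    return R, t, np.arctan2(R[1, 0], R[0, 0])     # rotation, translation, yaw

# quick self-check: recover a known 30-degree rotation and a translation
rng = np.random.default_rng(0)
g = rng.uniform(-10, 10, size=(50, 2))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
a = g @ R_true.T + np.array([4.0, -2.0])
R, t, yaw = procrustes_pose(g, a, np.ones(50))
print(round(np.degrees(yaw), 1))  # 30.0
```

The determinant check matters: without it, a noisy similarity matrix can yield a reflection rather than a rotation, which would correspond to a physically impossible camera pose.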

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

Check out the Paper. All credit for this research goes to the researchers of this project.
  • Why Designers Get Stuck In The Details And How To Stop

    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar?
    In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap.
Reason #1: You’re Afraid To Show Rough Work
    We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed.
    I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them.
    The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief.
    The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem.
So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this:

Socially prescribed perfectionism: It’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den.
Self-oriented perfectionism: Where you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off.

    Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback.
    Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift:
    Treat early sketches as disposable tools for thinking and actively share them to get feedback faster.

    Reason #2: You Fix The Symptom, Not The Cause
    Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data.
    From my experience, here are several reasons why users might not be clicking that coveted button:

    Users don’t understand that this step is for payment.
    They understand it’s about payment but expect order confirmation first.
    Due to incorrect translation, users don’t understand what the button means.
Lack of trust signals (no security icons, unclear seller information).
Unexpected additional costs (hidden fees, shipping) that appear at this stage.
Technical issues (inactive button, page freezing).

    Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly.
Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously few users are clicking the button.
Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers.
    There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers.
    Reason #3: You’re Solving The Wrong Problem
    Before solving anything, ask whether the problem even deserves your attention.
During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons.
    Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned:
    Without the right context, any visual tweak is lipstick on a pig.

    Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising.
    It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours.
    Reason #4: You’re Drowning In Unactionable Feedback
    We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow.
    What matters here are two things:

    The question you ask,
    The context you give.

    That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it.
    For instance:
    “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?”

Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?”
    Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside.
    I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory.
    So, to wrap up this point, here are two recommendations:

Prepare for every design discussion. A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”
Actively work on separating feedback on your design from your self-worth. If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it.

Reason #5: You’re Just Tired
    Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing.
A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) compared to late in the day (less than 10%) simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity.
    What helps here:

Swap tasks. Trade tickets with another designer; novelty resets your focus.
Talk to another designer. If NDA permits, ask peers outside the team for a sanity check.
Step away. Even a ten‑minute walk can do more than a double‑shot espresso.

    By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit.

    And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time.
    Four Steps I Use to Avoid Drowning In Detail
    Knowing these potential traps, here’s the practical process I use to stay on track:
    1. Define the Core Problem & Business Goal
    Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
2. Choose the Mechanic (Solution Principle)
Once the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
    3. Wireframe the Flow & Get Focused Feedback
Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in ‘Reason #4’) to get actionable feedback, not just vague opinions.
4. Polish the Visuals (Mindfully)
I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution.
    Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
    Wrapping Up
Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding — maybe the fuzzy core problem, or just asking for tough feedback. Yes, that can expose an uncomfortable truth, but facing it head-on keeps the project focused on solving the right problem, not just perfecting a flawed solution.
    Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
    #why #designers #get #stuck #details
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap. Reason #1 You’re Afraid To Show Rough Work We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed. I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem. So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychologyshows there are a couple of flavors driving this: Socially prescribed perfectionismIt’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den. Self-oriented perfectionismWhere you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off. Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback. Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift: Treat early sketches as disposable tools for thinking and actively share them to get feedback faster. Reason #2: You Fix The Symptom, Not The Cause Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data. From my experience, here are several reasons why users might not be clicking that coveted button: Users don’t understand that this step is for payment. They understand it’s about payment but expect order confirmation first. Due to incorrect translation, users don’t understand what the button means. Lack of trust signals. Unexpected additional coststhat appear at this stage. 
Technical issues. Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously, few users are clicking the button. Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers— and understanding that using our product logic expertise proactively is crucial for modern designers. There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers. Reason #3: You’re Solving The Wrong Problem Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B testsshowed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned: Without the right context, any visual tweak is lipstick on a pig. Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours. Reason #4: You’re Drowning In Unactionable Feedback We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow. What matters here are two things: The question you ask, The context you give. That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. 
For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem, shared your insight, explained your solution, and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?” Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside. I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory. So, to wrap up this point, here are two recommendations: Prepare for every design discussion.A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”. Actively work on separating feedback on your design from your self-worth.If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it. Reason #5 You’re Just Tired Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing. A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?.” It described how judges deciding on release requests were far more likely to grant release early in the daycompared to late in the daysimply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity. What helps here: Swap tasks.Trade tickets with another designer; novelty resets your focus. Talk to another designer.If NDA permits, ask peers outside the team for a sanity check. Step away.Even a ten‑minute walk can do more than a double‑shot espresso. By the way, I came up with these ideas while walking around my office. I was lucky to work near a river, and those short walks quickly turned into a helpful habit. And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks — changing font weight, color, border radius — I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, “Wait, weren’t we working?” Funny how that kind of nudge saves a ton of time. 
Four Steps I Use to Avoid Drowning In Detail Knowing these potential traps, here’s the practical process I use to stay on track: 1. Define the Core Problem & Business Goal Before anything, dig deep: what’s the actual problem we’re solving, not just the requested task or a surface-level symptom? Ask ‘why’ repeatedly. What user pain or business need are we addressing? Then, state the clear business goal: “What metric am I moving, and do we have data to prove this is the right lever?” If retention is the goal, decide whether push reminders, gamification, or personalised content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream. 2. Choose the MechanicOnce the core problem and goal are clear, lock the solution principle or ‘mechanic’ first. Going with a game layer? Decide if it’s leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels. 3. Wireframe the Flow & Get Focused Feedback Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not colour. Crucially, when you share these early wires, ask specific questions and provide clear contextto get actionable feedback, not just vague opinions. 4. Polish the VisualsI only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique — again using targeted questions and clear context — instead of hiding in version 47. This ensures detailing serves the now-validated solution. Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering. Wrapping Up Next time you feel the pull to vanish into mock‑ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth. But pausing to ask what you might be avoiding — maybe the fuzzy core problem, or just asking for tough feedback — gives you the power to face the real issue head-on. It keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink. #why #designers #get #stuck #details
    SMASHINGMAGAZINE.COM
    Why Designers Get Stuck In The Details And How To Stop
    You’ve drawn fifty versions of the same screen — and you still hate every one of them. Begrudgingly, you pick three, show them to your product manager, and hear: “Looks cool, but the idea doesn’t work.” Sound familiar? In this article, I’ll unpack why designers fall into detail work at the wrong moment, examining both process pitfalls and the underlying psychological reasons, as understanding these traps is the first step to overcoming them. I’ll also share tactics I use to climb out of that trap. Reason #1 You’re Afraid To Show Rough Work We designers worship detail. We’re taught that true craft equals razor‑sharp typography, perfect grids, and pixel precision. So the minute a task arrives, we pop open Figma and start polishing long before polish is needed. I’ve skipped the sketch phase more times than I care to admit. I told myself it would be faster, yet I always ended up spending hours producing a tidy mock‑up when a scribbled thumbnail would have sparked a five‑minute chat with my product manager. Rough sketches felt “unprofessional,” so I hid them. The cost? Lost time, wasted energy — and, by the third redo, teammates were quietly wondering if I even understood the brief. The real problem here is the habit: we open Figma and start perfecting the UI before we’ve even solved the problem. So why do we hide these rough sketches? It’s not just a bad habit or plain silly. There are solid psychological reasons behind it. We often just call it perfectionism, but it’s deeper than wanting things neat. Digging into the psychology (like the research by Hewitt and Flett) shows there are a couple of flavors driving this: Socially prescribed perfectionismIt’s that nagging feeling that everyone else expects perfect work from you, which makes showing anything rough feel like walking into the lion’s den. Self-oriented perfectionismWhere you’re the one setting impossibly high standards for yourself, leading to brutal self-criticism if anything looks slightly off. Either way, the result’s the same: showing unfinished work feels wrong, and you miss out on that vital early feedback. Back to the design side, remember that clients rarely see architects’ first pencil sketches, but these sketches still exist; they guide structural choices before the 3D render. Treat your thumbnails the same way — artifacts meant to collapse uncertainty, not portfolio pieces. Once stakeholders see the upside, roughness becomes a badge of speed, not sloppiness. So, the key is to consciously make that shift: Treat early sketches as disposable tools for thinking and actively share them to get feedback faster. Reason #2: You Fix The Symptom, Not The Cause Before tackling any task, we need to understand what business outcome we’re aiming for. Product managers might come to us asking to enlarge the payment button in the shopping cart because users aren’t noticing it. The suggested solution itself isn’t necessarily bad, but before redesigning the button, we should ask, “What data suggests they aren’t noticing it?” Don’t get me wrong, I’m not saying you shouldn’t trust your product manager. On the contrary, these questions help ensure you’re on the same page and working with the same data. From my experience, here are several reasons why users might not be clicking that coveted button: Users don’t understand that this step is for payment. They understand it’s about payment but expect order confirmation first. Due to incorrect translation, users don’t understand what the button means. 
Lack of trust signals (no security icons, unclear seller information). Unexpected additional costs (hidden fees, shipping) that appear at this stage. Technical issues (inactive button, page freezing). Now, imagine you simply did what the manager suggested. Would you have solved the problem? Hardly. Moreover, the responsibility for the unresolved issue would fall on you, as the interface solution lies within the design domain. The product manager actually did their job correctly by identifying a problem: suspiciously, few users are clicking the button. Psychologically, taking on this bigger role isn’t easy. It means overcoming the fear of making mistakes and the discomfort of exploring unclear problems rather than just doing tasks. This shift means seeing ourselves as partners who create value — even if it means fighting a hesitation to question product managers (which might come from a fear of speaking up or a desire to avoid challenging authority) — and understanding that using our product logic expertise proactively is crucial for modern designers. There’s another critical reason why we, designers, need to be a bit like product managers: the rise of AI. I deliberately used a simple example about enlarging a button, but I’m confident that in the near future, AI will easily handle routine design tasks. This worries me, but at the same time, I’m already gladly stepping into the product manager’s territory: understanding product and business metrics, formulating hypotheses, conducting research, and so on. It might sound like I’m taking work away from PMs, but believe me, they undoubtedly have enough on their plates and are usually more than happy to delegate some responsibilities to designers. Reason #3: You’re Solving The Wrong Problem Before solving anything, ask whether the problem even deserves your attention. During a major home‑screen redesign, our goal was to drive more users into paid services. The initial hypothesis — making service buttons bigger and brighter might help returning users — seemed reasonable enough to test. However, even when A/B tests (a method of comparing two versions of a design to determine which performs better) showed minimal impact, we continued to tweak those buttons. Only later did it click: the home screen isn’t the place to sell; visitors open the app to start, not to buy. We removed that promo block, and nothing broke. Contextual entry points deeper into the journey performed brilliantly. Lesson learned: Without the right context, any visual tweak is lipstick on a pig. Why did we get stuck polishing buttons instead of stopping sooner? It’s easy to get tunnel vision. Psychologically, it’s likely the good old sunk cost fallacy kicking in: we’d already invested time in the buttons, so stopping felt like wasting that effort, even though the data wasn’t promising. It’s just easier to keep fiddling with something familiar than to admit we need a new plan. Perhaps the simple question I should have asked myself when results stalled was: “Are we optimizing the right thing or just polishing something that fundamentally doesn’t fit the user’s primary goal here?” That alone might have saved hours. Reason #4: You’re Drowning In Unactionable Feedback We all discuss our work with colleagues. But here’s a crucial point: what kind of question do you pose to kick off that discussion? If your go-to is “What do you think?” well, that question might lead you down a rabbit hole of personal opinions rather than actionable insights. 
While experienced colleagues will cut through the noise, others, unsure what to evaluate, might comment on anything and everything — fonts, button colors, even when you desperately need to discuss a user flow. What matters here are two things: The question you ask, The context you give. That means clearly stating the problem, what you’ve learned, and how your idea aims to fix it. For instance: “The problem is our payment conversion rate has dropped by X%. I’ve interviewed users and found they abandon payment because they don’t understand how the total amount is calculated. My solution is to show a detailed cost breakdown. Do you think this actually solves the problem for them?” Here, you’ve stated the problem (conversion drop), shared your insight (user confusion), explained your solution (cost breakdown), and asked a direct question. It’s even better if you prepare a list of specific sub-questions. For instance: “Are all items in the cost breakdown clear?” or “Does the placement of this breakdown feel intuitive within the payment flow?” Another good habit is to keep your rough sketches and previous iterations handy. Some of your colleagues’ suggestions might be things you’ve already tried. It’s great if you can discuss them immediately to either revisit those ideas or definitively set them aside. I’m not a psychologist, but experience tells me that, psychologically, the reluctance to be this specific often stems from a fear of our solution being rejected. We tend to internalize feedback: a seemingly innocent comment like, “Have you considered other ways to organize this section?” or “Perhaps explore a different structure for this part?” can instantly morph in our minds into “You completely messed up the structure. You’re a bad designer.” Imposter syndrome, in all its glory. So, to wrap up this point, here are two recommendations: Prepare for every design discussion.A couple of focused questions will yield far more valuable input than a vague “So, what do you think?”. Actively work on separating feedback on your design from your self-worth.If a mistake is pointed out, acknowledge it, learn from it, and you’ll be less likely to repeat it. This is often easier said than done. For me, it took years of working with a psychotherapist. If you struggle with this, I sincerely wish you strength in overcoming it. Reason #5 You’re Just Tired Sometimes, the issue isn’t strategic at all — it’s fatigue. Fussing over icon corners can feel like a cozy bunker when your brain is fried. There’s a name for this: decision fatigue. Basically, your brain’s battery for hard thinking is low, so it hides out in the easy, comfy zone of pixel-pushing. A striking example comes from a New York Times article titled “Do You Suffer From Decision Fatigue?.” It described how judges deciding on release requests were far more likely to grant release early in the day (about 70% of cases) compared to late in the day (less than 10%) simply because their decision-making energy was depleted. Luckily, designers rarely hold someone’s freedom in their hands, but the example dramatically shows how fatigue can impact our judgment and productivity. What helps here: Swap tasks.Trade tickets with another designer; novelty resets your focus. Talk to another designer.If NDA permits, ask peers outside the team for a sanity check. Step away.Even a ten‑minute walk can do more than a double‑shot espresso. By the way, I came up with these ideas while walking around my office. 
I was lucky to work near a river, and those short walks quickly turned into a helpful habit.
And one more trick that helps me snap out of detail mode early: if I catch myself making around 20 little tweaks (changing font weight, color, border radius), I just stop. Over time, it turned into a habit. I have a similar one with Instagram: by the third reel, my brain quietly asks, "Wait, weren't we working?" Funny how that kind of nudge saves a ton of time.
Four Steps I Use to Avoid Drowning In Detail
Knowing these potential traps, here's the practical process I use to stay on track:
1. Define the Core Problem and Business Goal
Before anything, dig deep: what's the actual problem we're solving, not just the requested task or a surface-level symptom? Ask "why" repeatedly. What user pain or business need are we addressing? Then state the clear business goal: "What metric am I moving, and do we have data to prove this is the right lever?" If retention is the goal, decide whether push reminders, gamification, or personalized content is the best route. The wrong lever, or tackling a symptom instead of the cause, dooms everything downstream.
2. Choose the Mechanic (Solution Principle)
Once the core problem and goal are clear, lock in the solution principle, or "mechanic," first. Going with a game layer? Decide if it's leaderboards, streaks, or badges. Write it down. Then move on. No UI yet. This keeps the focus high-level before diving into pixels.
3. Wireframe the Flow and Get Focused Feedback
Now open Figma. Map screens, layout, and transitions. Boxes and arrows are enough. Keep the fidelity low so the discussion stays on the flow, not color. Crucially, when you share these early wires, ask specific questions and provide clear context (as discussed in Reason #4) to get actionable feedback, not just vague opinions.
4. Polish the Visuals (Mindfully)
I only let myself tweak grids, type scales, and shadows after the flow is validated. If progress stalls, or before a major polish effort, I surface the work in a design critique, again using targeted questions and clear context, instead of hiding in version 47. This ensures the detailing serves the now-validated solution.
Even for something as small as a single button, running these four checkpoints takes about ten minutes and saves hours of decorative dithering.
Wrapping Up
Next time you feel the pull to vanish into mock-ups before the problem is nailed down, pause and ask what you might be avoiding. Yes, that can expose an uncomfortable truth: maybe the core problem is still fuzzy, or you're dodging tough feedback. But naming it gives you the power to face the real issue head-on, and it keeps the project focused on solving the right problem, not just perfecting a flawed solution. Attention to detail is a superpower when used at the right moment. Obsessing over pixels too soon, though, is a bad habit and a warning light telling us the process needs a rethink.
  • 6 Years to Make a Fan, G370A Budget Case, & Phanteks Technical Fan Discussion, ft. CTO

    Cases News | June 9, 2025 (Last Updated: 2025-06-09)
    We cover Phanteks' new G370A budget case, the XT M3, and the Evolv X2 Matrix.
    The Highlights
    Phanteks' new X2 Matrix case has 900 LEDs and is aiming to be around $200.
    Phanteks' G370A is a $60 case that includes 3x120mm fans.
    The company has a new T30-140 fan that required 6 years of engineering to make.
    Intro
    We visited Phanteks' suite at Computex 2025, where the company showed off several cases along with a fan that took roughly 6 years to make.
    Editor's note: This was originally published on May 21, 2025 as a video. This content has been adapted to written format for this article and is unchanged from the original publication.
    Credits: Host: Steve Burke; Camera, Video Editing: Mike Gaglione and Vitalii Makhnovets; Writing, Web Editing: Jimmy Thang.
    Phanteks Matrix Cases
    We've talked about Phanteks' X2 case in the past, but the company was showing off its new Matrix version, which has matrix LEDs. The X2 Matrix has 900 LEDs in a 10x90 layout. It's supposed to be about $30 to $40 more expensive than the base X2, which means it should end up around $200. The interesting thing about the case is that the LEDs wrap around the chassis. In terms of communication, the LEDs connect to the motherboard via USB 2.0 and use SATA for power. This allows Phanteks to bypass a WinRing0-type situation. Another Matrix case had 600 LEDs in a 10x60 configuration and is supposed to be about $120. Phanteks also has software that allows you to reconfigure what the LEDs display. When we got to the company's suite, it had been programmed to say, "Gamers Nexus here," which was cool to see. We also saw that the LEDs can be used to highlight CPU temperature.
    Phanteks G370A
    Phanteks also showed off its G370A, a $60 case that includes 3x120mm fans in the front, coupled with a mesh front panel that offers 38% hole porosity; the company tells us that manufacturing typically yields around 25% porosity. It has a glass side panel, while the back side panel is plain steel with no ventilation. Looking at the placement of the front fans, we asked Phanteks why they weren't mounted higher on the case so the bottom fan could get more exposure above the power supply shroud, and the answer the company gave us was simply clearance for a 360mm radiator at the top. There's not a lot of room for the air coming into the shroud; some of it will go through the cable pass-through if it's empty. The back of the case features a drive mount.
    XT M3
    The company also showed off a Micro ATX case called the XT M3. It comes with 3 fans and is $70. For its front panel, it has a unique punch-out for its fans. The top panel is part standard ventilation, but one side provides less airflow, covering where the PSU would exhaust. The side panel does have punch-outs for the PSU, however. We don't test power supplies, though that may change in the future. Power supplies can take a lot of thermal abuse, however, so we're not super concerned here. The case should be shipping in the next month or so and is 39.5 liters, which includes the feet. We appreciate that, as not a lot of companies factor that in. There's also a lot of cable-management depth on the back, and the case supports BTF. In addition, there's a panel that clamps down all of the power supply cables.
    T30 Fan
    Phanteks' T30 fan took the company 6 years to make and is a 140mm fan. The company is competing with Noctua in the high-end fan space, but is going for a grey theme instead of brown.
    Phanteks CTO Tenzin Rongen Interview
    Finally, we interviewed Phanteks CTO Tenzin Rongen to discuss the technical details behind the company's long-developed fans. Make sure to check it out in our video.