• Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interactions, customer feedback, and customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across the customer’s various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is keeping it relevant to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we lean on statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
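
    To make that filter concrete, here is a minimal sketch of the statistical-significance check in Python; the conversion counts and the 0.05 threshold are illustrative assumptions, not figures from the interview.

```python
# A minimal sketch of the significance filter described above: treat a
# metric change as "actionable" only if it is statistically valid.
# The conversion counts and 0.05 threshold are illustrative assumptions.
from math import sqrt, erf

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-CDF tail

# Last week vs. this week: is the uptick a trend or a random fluctuation?
p = two_proportion_p_value(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print("actionable" if p < 0.05 else "likely noise", f"(p = {p:.3f})")
```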

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing customer feedback, touchpoints, support interactions, or usage patterns, we can identify gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing the company to adapt its products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our company outperforms competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.
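
    The segmentation step lends itself to a short sketch. The snippet below groups customers by engagement and product holdings to surface underserved segments; all field names and thresholds are illustrative assumptions.

```python
# A rough sketch of segmentation analysis for spotting underserved
# segments: high engagement but light product penetration. All field
# names and thresholds here are illustrative assumptions.
from collections import Counter

customers = [
    {"id": 1, "age_band": "25-34", "app_sessions_30d": 42, "products_held": 1},
    {"id": 2, "age_band": "25-34", "app_sessions_30d": 38, "products_held": 1},
    {"id": 3, "age_band": "45-54", "app_sessions_30d": 5,  "products_held": 3},
]

def segment(c):
    engagement = "high" if c["app_sessions_30d"] >= 20 else "low"
    penetration = "light" if c["products_held"] <= 1 else "deep"
    return (c["age_band"], engagement, penetration)

counts = Counter(segment(c) for c in customers)
# Highly engaged but lightly served segments are candidate opportunities.
opportunities = {seg: n for seg, n in counts.items()
                 if seg[1] == "high" and seg[2] == "light"}
print(opportunities)  # {('25-34', 'high', 'light'): 2}
```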

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company where we were very data-driven, and where data made a major impact on a critical decision about our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product, with enhanced mobile features and rewards tailored to millennial spending habits. Since we also had access to a lot of transactional data, we were able to build a financial product that met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization: adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    We also practice responsive social media engagement, monitoring social media sentiment and trending topics so we can quickly adapt our messaging and create timely, relevant content.

    Alongside one-to-one personalization, we do a lot of A/B testing as part of an overall rapid experimentation program, testing elements like subject lines and CTAs and scaling the most successful variants of our campaigns.
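
    The automated campaign optimization mentioned above, reallocating budget toward top-performing channels, might look like the following minimal sketch. The channel names, ROAS figures, and tuning constants are illustrative assumptions rather than any specific AI-driven tool.

```python
# A minimal sketch of automated budget reallocation: each cycle, move
# spend partway toward a ROAS-weighted split, keeping a small floor on
# every channel for continued exploration. Channel names, ROAS figures,
# and both tuning constants are illustrative assumptions, not a vendor tool.
def reallocate(budgets, roas, floor=0.05, learning_rate=0.3):
    total = sum(budgets.values())
    weight = sum(roas.values())
    new = {}
    for channel, spend in budgets.items():
        target = total * roas[channel] / weight             # ROAS-weighted share
        blended = spend + learning_rate * (target - spend)  # move partway there
        new[channel] = max(blended, floor * total)          # exploration floor
    scale = total / sum(new.values())                       # keep total spend fixed
    return {ch: round(v * scale, 2) for ch, v in new.items()}

budgets = {"search": 40_000, "social": 30_000, "email": 30_000}
roas = {"search": 3.2, "social": 1.1, "email": 4.5}         # last cycle's returns
print(reallocate(budgets, roas))
# {'search': 38909.09, 'social': 24750.0, 'email': 36340.91}
```

    Keeping a floor on every channel preserves a little exploration, so a channel that underperformed in one cycle can still earn budget back in the next.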

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems tracking each customer’s behavior in real time. The moment they move to a different channel, we know the context, the relevance, and the recent interaction points, so we can deliver the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer, or potential customer, a one-to-one personalized experience instead of just a segment-based or bulk interaction.

    We have a huge team of data scientists, data analysts, and AI model creators who help us analyze big volumes of data and bring the right insights to our marketing and sales teams so that they can provide the right experience to our customers.
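
    As a rough illustration of that cross-channel lookup, here is a minimal in-memory sketch of a unified profile store; the event fields and helper names are invented for illustration and do not represent any particular CDP’s API.

```python
# A rough, in-memory sketch of the cross-channel context lookup described
# above: one unified profile per customer, updated from any channel, so
# the next touchpoint can pick up where the last one left off. The event
# fields and offer logic are invented and do not mirror any specific CDP.
from datetime import datetime, timezone

profiles: dict[str, list[dict]] = {}  # customer_id -> event history

def track(customer_id: str, channel: str, event: str, **attrs):
    profiles.setdefault(customer_id, []).append(
        {"ts": datetime.now(timezone.utc), "channel": channel,
         "event": event, **attrs}
    )

def latest_context(customer_id: str) -> dict | None:
    """Most recent interaction, so an agent can continue the conversation."""
    history = profiles.get(customer_id, [])
    return max(history, key=lambda e: e["ts"]) if history else None

# Yesterday: browsed a card offer on the website, arriving from Google.
track("cust-42", "web", "viewed_offer", offer="travel-card", source="google")
# Today, in the branch, the agent sees that context before saying hello.
print(latest_context("cust-42"))
```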

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development. By products I mean not just the financial products, or whatever an organization sells, but also the mobile apps and websites customers use for transactions. Engagement data improves that kind of product development.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits: the data helps teams anticipate common issues, personalize support interactions over phone, email, or chat, and proactively address potential problems, leading to improved customer satisfaction and retention.

    So in general, the cross-functional application of engagement data strengthens the customer-centric approach throughout the organization.

    8. What do you think are some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    The first is the sheer amount of data we are dealing with. As companies become more digitally savvy and most customers move to digital channels, we collect a lot of data, and that volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of that data overload, we create data silos in the process: information often exists in separate systems across different departments, so we are not able to build a holistic view of customer engagement.

    Data silos and data overload in turn create data quality issues. Inconsistent or inaccurate data can lead to incorrect insights and poor decision-making. Quality issues can also stem from data in the wrong format, or data that is stale and no longer relevant.
    As we grow and add more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, can lack the skills to properly interpret the data or apply its insights effectively.
    There is often a gap in their understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings huge volumes of data into the system. Stopping that at step one, keeping noise out of the data system, cannot be done by technical folks alone, or by people who lack business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing and business people do not have much exposure to the technical side: what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations: easy-to-understand charts and graphs that clearly illustrate the insights and their implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We, like many organizations, use marketing automation systems to improve marketing team productivity and to help track and analyze customer interactions across multiple channels.
    Another is social media listening tools, for tracking wherever your brand is mentioned, measuring customer sentiment, and following the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behavior and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    As we collect data from different sources, we clean it, so the data becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
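
    Those cleansing and audit steps can be sketched briefly with pandas; the column names and the three-year staleness window below are illustrative assumptions.

```python
# A minimal sketch of the cleansing and audit steps described above:
# standardize formats, deduplicate, and flag stale records. Column names
# and the three-year staleness window are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "email": [" A@X.COM", "a@x.com", "b@y.com"],
    "signup_date": ["2023-01-05", "2023-01-05", "2019-06-01"],
})

df["email"] = df["email"].str.strip().str.lower()      # standardized format
df["signup_date"] = pd.to_datetime(df["signup_date"])  # one canonical type
df = df.drop_duplicates(subset="email", keep="last")   # automated dedupe

# Audit step: flag records that may be stale and no longer relevant.
cutoff = pd.Timestamp.today() - pd.DateOffset(years=3)
df["stale"] = df["signup_date"] < cutoff
print(df)
```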

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The biggest trend of the past two years is AI-driven decision making, which I think will become even more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices.
    Somewhat related is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with greater accuracy.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, true one-to-one personalization, as we capture more engagement data and build bigger systems and infrastructure capable of processing those volumes, so we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. Even as we solve those integration challenges, we keep adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless those borders are specified in policy terms and coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades, most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
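
    To make the adage concrete, here is a minimal risk-scoring sketch; the scenarios and numbers are invented for illustration.

```python
# "Risk is probability times impact" as a minimal scoring sketch.
# Scenarios and numbers are invented for illustration only.
scenarios = {
    # name: (annual probability, impact in dollars)
    "provider blocks access from abroad": (0.05, 5_000_000),
    "ransomware hits primary datastore":  (0.15, 2_000_000),
    "fine for data in the wrong region":  (0.10, 1_000_000),
}

# Expected annual loss = probability x impact; sort to see what shot up.
for name, (p, impact) in sorted(scenarios.items(),
                                key=lambda kv: kv[1][0] * kv[1][1],
                                reverse=True):
    print(f"{name}: expected loss ${p * impact:,.0f}/yr")
```
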
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, we talk about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries see “in-country” as a primary goal, whereas others are adopting a risk-based approach built on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party rather than by themselves: for example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France. However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t assume everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, and it avoids turning sovereignty into yet another problem that solves nothing.
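
    As a rough sketch of that triage, the snippet below scores each data asset by classification and exposure, then sorts the highest-priority items to the top; the asset names and weights are illustrative assumptions.

```python
# A rough sketch of classification-and-risk triage: score each data
# asset and handle the highest-priority items first. Asset names,
# classification weights, and exposure scores are illustrative assumptions.
CLASS_WEIGHT = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

assets = [
    {"name": "marketing site logs", "classification": "public",       "exposure": 1},
    {"name": "customer PII store",  "classification": "restricted",   "exposure": 3},
    {"name": "payroll exports",     "classification": "confidential", "exposure": 2},
]

def priority(asset):
    # Strongest classification times greatest exposure floats to the top.
    return CLASS_WEIGHT[asset["classification"]] * asset["exposure"]

for a in sorted(assets, key=priority, reverse=True):
    print(f"{priority(a):>2}  {a['classification']:<12} {a['name']}")
```
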
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
    #reclaiming #control #digital #sovereignty
    Reclaiming Control: Digital Sovereignty in 2025
Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders. Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless those borders are specified in policy terms and coded into the infrastructure.

The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself. But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.

Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.

Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America, for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty. As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It has gone from an abstract ‘right to sovereignty’ to a clear and present issue in government thinking, corporate risk, and how we architect and operate our computer systems.

What does the digital sovereignty landscape look like today?

Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts. We’re seeing a shift from theory to practice: governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries see “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach built on trusted locales.

We’re also seeing a shift in risk priorities. The classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data? This year, however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is talked about less from a sovereignty perspective, but it is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.

How Are Cloud Providers Responding?

Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile that is right now.

We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue. Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally, for example, while Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.

What Can Enterprise Organizations Do About It?

First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders and control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience. If sovereignty isn’t treated as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications, and processes need to be treated as sovereign, and to define an architecture to support that.

This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.

It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency. Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate. Organizations shouldn’t assume everything cloud-based needs to be sovereign; they should build strategies and policies based on data classification, prioritization, and risk.
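To make that concrete, here is a minimal sketch of such a classification-and-risk triage, using the risk-equals-probability-times-impact framing from earlier in the piece. The asset names, scores, and weighting scheme are hypothetical, chosen only for illustration; this is one possible shape of the exercise, not a prescribed method.

```r
# Illustrative triage of data assets by classification and risk.
# Asset names and every score below are hypothetical examples.
assets <- data.frame(
  asset          = c("customer_pii", "hr_records", "public_site", "telemetry"),
  classification = c(3, 3, 1, 2),          # 1 = public, 2 = internal, 3 = restricted
  probability    = c(0.4, 0.2, 0.1, 0.3),  # estimated likelihood of compromise or loss of access
  impact         = c(9, 7, 2, 5)           # business impact, 1-10
)

assets$risk     <- assets$probability * assets$impact   # risk = probability x impact
assets$priority <- assets$classification * assets$risk  # weight risk by classification

# Highest-priority assets first: the data with the strongest
# classification and greatest risk, per the approach described above.
assets[order(-assets$priority), ]
```

However the scores are derived in practice, the point is the ordering: a ranked list like this is what lets you address the top of the problem space first.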
Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space and avoids making sovereignty yet another problem that solves nothing.

Where to start? Look after your own organization first.

Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once. Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario. Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.

Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.

The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • Could Iran Have Been Close to Making a Nuclear Weapon? Uranium Enrichment Explained

June 13, 2025 | 3 min read

When Israeli aircraft recently struck a uranium-enrichment complex in Iran, the country could have been days away from achieving “breakout,” the ability to quickly turn “yellowcake” uranium into bomb-grade fuel with its new high-speed centrifuges.

By Deni Ellis Béchard, edited by Dean Visser

[Image: Men work inside a uranium conversion facility just outside the city of Isfahan, Iran, on March 30, 2005. The facility in Isfahan made hexafluoride gas, which was then enriched by feeding it into centrifuges at a facility in Natanz, Iran. Credit: Getty Images]

In the predawn darkness on Friday local time, Israeli military aircraft struck one of Iran’s uranium-enrichment complexes near the city of Natanz. The warheads aimed to do more than shatter concrete; they were meant to buy time, according to news reports. For months, Iran had seemed to be edging ever closer to “breakout,” the point at which its growing stockpile of partially enriched uranium could be converted into fuel for a nuclear bomb. (Iran has denied that it has been pursuing nuclear weapons development.)

But why did the strike occur now? One consideration could involve the way enrichment complexes work. Natural uranium is composed almost entirely of uranium 238, or U-238, an isotope that is relatively “heavy” (meaning it has more neutrons in its nucleus). Only about 0.7 percent is uranium 235 (U-235), a lighter isotope that is capable of sustaining a nuclear chain reaction. That means that in natural uranium, only seven atoms in 1,000 are the lighter, fission-ready U-235; “enrichment” simply means raising the percentage of U-235.

U-235 can be used in warheads because its nucleus can easily be split. The International Atomic Energy Agency uses 25 kilograms of contained U-235 as the benchmark amount deemed sufficient for a first-generation implosion bomb. In such a weapon, the U-235 is surrounded by conventional explosives that, when detonated, compress the isotope. A separate device releases a neutron stream. (Neutrons are the neutral subatomic particles in an atom’s nucleus that add to its mass.) Each time a neutron strikes a U-235 atom, the atom fissions; it divides and spits out, on average, two or three fresh neutrons—plus a burst of energy in the form of heat and gamma radiation. The emitted neutrons in turn strike other U-235 nuclei, creating a self-sustaining chain reaction among the U-235 atoms that have been packed together into a critical mass. The result is a nuclear explosion. By contrast, the more common isotope, U-238, usually absorbs slow neutrons without splitting and cannot drive such a devastating chain reaction.

To enrich uranium so that it contains enough U-235, the “yellowcake” uranium powder that comes out of a mine must go through a lengthy process of conversions to transform it from a solid into the gas uranium hexafluoride. First, a series of chemical processes refine the uranium; then, at high temperatures, each uranium atom is bound to six fluorine atoms. The result, uranium hexafluoride, is unusual: below 56 degrees Celsius (132.8 degrees Fahrenheit) it is a white, waxy solid, but just above that temperature it sublimates into a dense, invisible gas.

During enrichment, this uranium hexafluoride is loaded into a centrifuge: a metal cylinder that spins at tens of thousands of revolutions per minute—faster than the blades of a jet engine. As the heavier U-238 molecules drift toward the cylinder wall, the lighter U-235 molecules remain closer to the center and are siphoned off. This new, slightly U-235-richer gas is then fed into the next centrifuge, and the process is repeated 10 to 20 times as ever more enriched gas is sent through a series of centrifuges.

Enrichment is a slow process, but the Iranian government has been working on it for years and already holds roughly 400 kilograms of uranium enriched to 60 percent U-235. That falls short of the 90 percent required for nuclear weapons. But whereas Iran’s first-generation IR-1 centrifuges whirl at about 63,000 revolutions per minute and do relatively modest work, its newer IR-6 models, built from high-strength carbon fiber, spin faster and produce enriched uranium far more quickly. Iran has been installing thousands of these units, especially at Fordow, an underground enrichment facility built beneath 80 to 90 meters of rock. According to a report released on Monday by the Institute for Science and International Security, the new centrifuges could produce enough 90 percent U-235 uranium for a warhead “in as little as two to three days” and enough for nine nuclear weapons in three weeks—or 19 by the end of the third month.
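To see why a 60 percent stockpile is already most of the way to weapons grade, here is a back-of-the-envelope sketch of the ideal-cascade arithmetic implied by the stage-by-stage process described above. The per-stage separation factor (alpha = 1.5) is an assumed, illustrative value; real cascade design and actual Iranian centrifuge performance are considerably more complicated.

```r
# Back-of-the-envelope ideal-cascade estimate (illustrative only).
# Abundance ratio R = x / (1 - x); an ideal stage multiplies R by a
# separation factor alpha, so the stages needed from feed fraction xf
# to product fraction xp is: n = log(Rp / Rf) / log(alpha)
stages <- function(xf, xp, alpha = 1.5) {  # alpha = 1.5 is an assumed value
  Rf <- xf / (1 - xf)
  Rp <- xp / (1 - xp)
  log(Rp / Rf) / log(alpha)
}

stages(0.007, 0.60)  # natural (0.7%) to the 60% stockpile: ~13 stages
stages(0.007, 0.90)  # natural to weapons grade (90%): ~18 stages
stages(0.60, 0.90)   # 60% to 90%: only ~4-5 further stages
```

Under these toy assumptions, reaching 60 percent takes roughly 13 ideal stages while the final climb from 60 to 90 percent takes only four or five more: most of the separative work behind a 60 percent stockpile is already done.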
  • Why I Would Choose a Steam Deck Over a Nintendo Switch 2

We may earn a commission from links on this page.

After spending about a week with the Nintendo Switch 2, I have to admit that it’s a good console. It’s priced fairly for its sleek form factor and the performance it offers, and it sets Nintendo up to stay relevant while gaming graphics only continue to get more complex. And yet, for my own personal tastes, it’s still not my handheld of choice. Instead, I’ll be sticking to Valve’s Steam Deck, the first and still overall best handheld gaming PC, at least going by value for money. And if you don’t necessarily care about Nintendo’s exclusive games, there’s a good chance it might be the better option for you, too.

The Steam Deck is cheaper than the Switch 2

Out of the gate, the most obvious reason to get a Steam Deck over a Nintendo Switch 2 is price. Starting at $400 for a new model, it’s only modestly cheaper than the Switch 2’s $450, but that’s only part of the story. Valve also runs a certified refurbished program that offers used Decks with only cosmetic blemishes for as low as $279. Restocks are infrequent, since Valve is only able to sell as much as gets sent back to it, but when they do happen, it’s a heck of a great deal.

That said, there is one catch. The Steam Deck OLED, which offers a bigger, more colorful screen and a larger battery, is more expensive than the Switch 2, starting at $549. However, it’s maybe a bit unfair to compare the two, since the Switch 2 does not use an OLED screen and comes with less storage. If all you care about is the basics (I’m perfectly happy with my LCD model), the base Steam Deck is good enough—it’s got the same performance as the more recent one. And that performance, by the way, ended up being about on par with the Switch 2 in my testing, at least in Cyberpunk 2077 (one of my go-to benchmark games).

The Steam Deck is more comfortable to hold than the Switch 2

This one is a bit of a toss-up, depending on your preferences, although I think the Steam Deck takes a slight lead here. While the Nintendo Switch 2 aims for a completely flat and somewhat compact profile, the Steam Deck instead allows itself to stretch out, and even though it’s a little bigger and a little heavier for it, I ultimately think that makes it more comfortable.

At 11.73 x 4.60 x 1.93 inches against the Switch 2’s 10.7 x 4.5 x 0.55 inches, and at 1.41 pounds against the Switch 2’s 1.18 pounds, I won’t deny that this will be a non-starter for some. But personally, I still feel like the Steam Deck comes out on top, and that’s thanks to its ergonomics. I’ve never been a big fan of Nintendo’s Joy-Con controllers, and while the Switch 2’s Joy-Con 2 controllers improve on the Switch 1’s with bigger buttons and sticks, as well as more room to hold onto them, they still pale in comparison next to the Steam Deck’s controls.

[Image: Steam Deck in profile (above) vs. Switch 2 in profile (below). Credit: Michelle Ehrhardt]

On the Switch 2, there are no grips to wrap your fingers around. On the Steam Deck, there are. The triggers also flare out more, and because the console is wider, your hands can stretch out a bit, rather than choking up on the device. It can get a bit heavy to hold a Steam Deck after a while, but I still prefer this approach overall, and if you have a surface to rest the Steam Deck against (like an airplane tray table), weight is a non-issue.

Plus, there are some extra bonuses that come with the additional space. The Steam Deck has large touchpads on either side of the device, plus four grip buttons on the back of it, giving you some extra inputs to play around with. Nice. It’s a bit less portable and a bit heavier, but for my adult hands, the Steam Deck is just better shaped to them.

The Steam Deck has a bigger, cheaper library than the Switch 2

This is the kicker. While there are cheap games that can run on the Switch 2 courtesy of backwards compatibility and third-party eShop titles, the big system draws (Nintendo-developed titles like Mario Kart World, for example) can get as pricey as $80. Not to say the Steam Deck doesn’t have expensive games as well, but on the whole, I think it’s easier to get cheap and free games on the Steam Deck than on the eShop.

That’s because, being a handheld gaming PC, the Steam Deck can take advantage of the many sales and freebies PC gaming stores love to give out. These happen a bit more frequently on PC than on console, and that’s because there’s more competition on PC. Someone on PC could download games either from Steam or Epic, for instance, while someone on the Switch 2 can only download games from the Nintendo eShop. So, even sticking to just Steam, you’ll get access to regular weekend and mid-week sales, quarterly event sales, and developer or publisher highlight sales. That’s more sales events than you’ll usually find on the Nintendo eShop, and if you’re looking for cheaper first-party games, forget about it. Nintendo’s own games hardly ever go on sale, even years after release.

But that’s just the beginning. Despite being named the Steam Deck, the device can actually run games from other stores, too. That’s thanks to an easily installed Linux program called Heroic Launcher, which is free and lets you download and play games from your Epic, GOG, and Amazon Prime Games accounts with just a few clicks.

[Image credit: Heroic Games Launcher]

This is a game changer. Epic and Amazon Prime are both underdogs in the PC gaming space, and so to bolster their numbers, they both regularly give away free games. Epic in particular offers one free PC game every week, whereas if you’re a Twitch user, you might notice a decent but more infrequent number of notifications allowing you to claim free Amazon Prime games. Some of these are big titles, too—it’s how I got Batman: Arkham Knight and Star Wars Battlefront II. With a simple install and a few months of waiting, you could have a Steam Deck filled to the brim with games that you didn’t even pay for. You just can’t do that on Nintendo.

And then there’s the elephant in the room: your backlog. If you’re anything like me, you probably already have a Steam library that’s hundreds of games large. It was maybe even like this before the Switch 1 came out—regular sales have a tendency to build up the number of games you own. By choosing the Steam Deck as your handheld, you’ll be able to play those games on the go, instantly giving you what might as well be a full library with no added cost to you. If you migrate over to the Nintendo Switch 2, you’re going to have to start with a fresh library, or at least a library that’s only as old as the Nintendo Switch 1.

Basically, while the Switch 2’s hardware is only $50 more expensive than the Steam Deck, it’ll be easier to fill your Steam Deck up with high-quality, inexpensive games than it would be on the Switch 2. If you don’t care about having access to Nintendo exclusive games, that’s a huge draw.

TV Play is a mixed bag

Finally, I want to acknowledge that the Steam Deck still isn’t necessarily a better option than the Switch 2 for everyone. That’s why I’m writing from a personal perspective here. Like all gaming PCs, it’ll take some fiddling to get some games to run, so the Switch 2 is definitely a smoother experience out of the box. It’s also got less battery life, from my testing. But the big point of departure is TV play.

Playing your portable games on a TV on the Switch 2 is as simple as plugging it into its dock. With the Steam Deck, you have to buy a dock separately (the official one is $79), and even then, you have to connect your own controller to it and manually find suitable TV graphics settings for each game on its own. It’s not nearly as easy or flexible. And yet, for folks like me, I’m willing to say that even TV play is better. Or, depending on what type of PC gamer you are, monitor play.

That’s because you’re not limited to playing your Steam Deck games on the Deck itself, dock or not. Instead, you can play on the Deck when you’re away from your home, and then swap over to your regular gaming PC when you’re back. Your Deck will upload your saves to the cloud automatically, and your PC will seamlessly download them. While not as intuitive as plugging your Switch 2 into its dock, the benefit here is that your non-portable play isn’t limited by the power of your portable device, whereas docked Switch 2 play is still held back by running on portable hardware.

The tradeoff is that maintaining a dedicated gaming PC in addition to a Steam Deck is more expensive, but maybe more importantly, requires more tinkering (there are ways to build a cheap gaming PC, after all). And I think that’s the key point here. If you want a simple-to-use, pick-up-and-play handheld, the Switch 2 is a great choice for you. But if you’re like me, and you’re not afraid to download some launchers and occasionally dive into compatibility settings or swap between two devices, the Steam Deck might still be the best handheld gaming device for you, even three years later.
  • How jam jars explain Apple’s success

We are told to customize, expand, and provide more options, but that might be a silent killer for our conversion rate. Using behavioral psychology and modern product design, this piece explains why brands like Apple use fewer, smarter choices to convert better.

Jam-packed decisions

Imagine standing in a supermarket aisle in front of the jam section. How do you decide which jam to buy? You could go for your usual jam, or maybe this is your first time buying jam. Either way, a choice has to be made. Or does it? You may have seen the vast number of choices, gotten overwhelmed, and walked away. The same scenario was reflected in the findings of a 2000 study by Iyengar and Lepper that explored how the number of choice options can affect decision-making.

Iyengar and Lepper set up two scenarios: in the first, customers in a random supermarket were offered 24 jams for a free tasting; in the second, they were offered only 6. One would expect that the first scenario would see more sales. After all, more variety means a happier customer. However:

While 60% of customers stopped by for a tasting, only 3% ended up making a purchase. On the other hand, when faced with 6 options, 40% of customers stopped by, but 30% of this number ended up making a purchase. In absolute terms, that is roughly 2% of all passersby buying jam in the 24-jam condition versus about 12% in the 6-jam condition.

The implications of the study were evident: while one may think that more choices are better, decision-makers actually prefer fewer. This phenomenon is known as the Paradox of Choice. More choice leads to less satisfaction, because one gets overwhelmed. This analysis paralysis results from humans being cognitive misers: decisions that require deeper thinking feel exhausting and seem to come at a cognitive cost. In such scenarios, we tend not to make a choice at all, or to choose a default option. Even after a decision has been made, regret, or the thought of whether you made the ‘right’ choice, can linger.

A sticky situation

However, a 2010 meta-analysis by Benjamin Scheibehenne was unable to replicate the findings. Scheibehenne questioned whether it was choice overload or information overload that was the issue. Other researchers have argued that it is the lack of meaningful choice that affects satisfaction. Additionally, Barry Schwartz, a renowned psychologist and the author of the book ‘The Paradox of Choice: Why Less Is More,’ later suggested that the paradox of choice diminishes when people know the options well and when the choices are presented well. Does that mean the paradox of choice was an overhyped notion? I conducted a mini-study to test this hypothesis.

From shelves to spreadsheets: testing the jam jar theory

I created a simple scatterplot in R using a publicly available dataset from the Brazilian e-commerce site Olist, Brazil’s largest department store on marketplaces. After delivery, customers are asked to fill out a satisfaction survey with a rating or comment option. I analysed the relationship between the number of distinct products in a category and the average customer review (a sketch of this analysis appears below).

[Scatterplot generated in R using the Olist dataset]

Based on the almost horizontal regression line on the plot, more choice does not appear to lead to more satisfaction. Furthermore, categories with fewer than 200 products tend to have average review scores between 4.0 and 4.3, whereas categories with more than 1,000 products do not have a higher average satisfaction score, with some even falling below 4.0.
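For readers who want to reproduce this kind of check, below is a rough sketch of the analysis described above. It is not the original code behind the plot, and the file and column names (those of the publicly distributed Olist CSVs) should be treated as assumptions.

```r
# Rough reconstruction of the analysis described above, assuming the
# publicly distributed Olist CSVs (file and column names are assumptions).
library(dplyr)
library(ggplot2)

products <- read.csv("olist_products_dataset.csv")
items    <- read.csv("olist_order_items_dataset.csv")
reviews  <- read.csv("olist_order_reviews_dataset.csv")

per_category <- items %>%
  inner_join(products, by = "product_id") %>%   # attach category to each sold item
  inner_join(reviews,  by = "order_id") %>%     # attach the order's review score
  group_by(product_category_name) %>%
  summarise(
    n_products = n_distinct(product_id),        # distinct products per category
    avg_review = mean(review_score, na.rm = TRUE)
  )

ggplot(per_category, aes(n_products, avg_review)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +      # near-horizontal fit = no clear link
  labs(x = "Distinct products in category", y = "Average review score")
```

Each category contributes one point to the plot, its distinct-product count against its mean review score, which is what the near-horizontal regression line summarizes.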
    This suggests that more choices do not equal more satisfaction, and may even reduce it. These findings support the Paradox of Choice and bring the theory into real-world commerce: a curated set of fewer, well-presented, differentiated options could lead to higher customer satisfaction.

    The plot also suggests a more nuanced perspective: people want choices, because choice gives them autonomy. Beyond a certain point, however, excessive choice overwhelms rather than empowers, leaving people dissatisfied. Many product strategies reflect this insight: the goal is to inspire confident decision-making rather than to limit freedom. A powerful example of this shift in thinking comes from Apple’s history.

    Simple tastes, sweeter decisions

    Image source: Apple Insider

    It was 1997, and Steve Jobs had just made his return to Apple. The company offered 40 different products at the time, yet its sales were declining. Jobs reduced the company’s mantra to one question: “What are the four products we should be building?” The following year, Apple returned to profitability after introducing the iMac G3. While that success can be attributed to a new product line and increased efficiency, one cannot deny that the slimmed-down lineup simplified the decision-making process for consumers. To this day, Apple continues this strategy with few SKUs and confident defaults. Apple does not just sell premium products; it sells a premium decision-making experience by reducing friction for the consumer.

    Furthermore, a 2015 study analyzing scenarios where fewer options led to increased sales identified four moderating factors in buying choices:

    1. Time pressure: easier and quicker choices led to more sales.
    2. Complexity of options: the easier it was to understand what a product was, the better the outcome.
    3. Clarity of preference: how easy it was to compare alternatives, and how clear one’s own preferences were.
    4. Motivation to optimize: whether the consumer wanted to put in the effort to find the ‘best’ option.

    Picking the right spread

    While the extent of the Paradox of Choice’s validity is up for debate, its impact cannot be denied. It remains a helpful model for driving sales and boosting customer satisfaction. So how can you use it as part of your business’s strategy? Remember: what people want isn’t 50 good choices. They want one confident, easy-to-understand decision they believe they will not regret. Here are some common mistakes that confuse consumers, and how the jam jar strategy curates choices instead:

    1. Too many choices lead to decision fatigue. Offering many SKUs usually overwhelms customers. Instead, curate 2–3 strong options that cover the majority of their needs.
    2. Depending on users to work through filters and specifications. When users have to compare specifications themselves, they usually end up doing nothing. Instead, replace filters with clear labels like “Best for beginners” or “Best for oily skin.”
    3. Leaving users to make comparisons by themselves. Too many options overwhelm. Instead, offer default options that show what you recommend; this instills confidence in the final decision.
    4. Assuming more transparency means more trust. Information overload rarely leads to conversions. Instead, create a thoughtful flow that guides users to the right choice.
    5. Assuming users aim to optimize. Expecting users to weigh every detail before deciding is not rooted in reality; in most cases, they go with their gut. Instead, highlight emotional outcomes, benefits, and uses rather than numbers.
    6. Skipping onboarding. Hoping that users will easily navigate a sea of products without guidance is unrealistic. Instead, use onboarding tools like starter kits, quizzes, or bundles that act as starting points.
    7. Variety for the sake of variety. Users crave clarity more than they crave variety. Instead, differentiate through simplicity.

    Lastly, remember that while the paradox of choice is a helpful tool in your business strategy arsenal, more choice is not inherently bad; the problem is a lack of structure in the decision-making process. Clear framing will always make decision-making a seamless experience for both your consumers and your business.

    How jam jars explain Apple’s success was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • I recommend the Pixel 9 to most people looking to upgrade - especially while it's $250 off

    ZDNET's key takeaways: The Pixel 9 is Google's latest baseline flagship phone, with prices starting at $800. It comes with the new Tensor G4 processor, an updated design, a bigger battery, and a slightly higher asking price. The hardware improvements over last year's model are relatively small. At Amazon, the 256GB Google Pixel 9 is on sale for $649, a $250 discount. This deal applies to all color options except Peony (pink).

    I had a chance to attend the Made by Google event back in August 2024, and after the keynote wrapped up, I was more excited to go hands-on with the baseline version of the Pixel 9 than the Pro or the Pro XL. Why? Because the Pixel 9's accessibility makes it a fascinating device, and one I recommend for a handful of reasons.

    Also: I changed 10 settings on my Pixel phone to significantly improve the user experience

    I'm spoiling this review right at the top, but it's true. Google's latest entry-level flagship, the Pixel 9, is here, with prices starting at $799. Even though its hardware is a minor improvement over the Pixel 8, it's an impressive phone overall. It offers a new design, slightly upgraded performance, slightly better cameras, a slightly bigger battery, and a host of new AI features.

    Google has positioned the Pixel 9 as the default Android alternative to the iPhone 16, partly because it looks like one. Google gave the entire Pixel 9 family flat sides with rounded corners, which makes it look like something from a design lab in Cupertino. The good news is that it makes these phones look and feel great. In fact, they're my favorite-looking Pixel phones yet. The Pixel 9 feels especially unique while still offering a premium feel that's blissfully cold to the touch when you pick it up. The sides are aluminum, while the front and back feature Corning Gorilla Glass Victus 2. The whole thing is IP68-rated for water and dust resistance, and it's just the right size for use with one hand.

    Another Pixel hallmark is a nice display, and the Pixel 9 definitely has one. It features a 6.3-inch Actua display, a tenth of an inch bigger than the Pixel 8's. The sharp 2424x1080 resolution, OLED panel, and 120Hz dynamic refresh rate give the Pixel 9 exceptional visuals, whether you're reading email or watching your favorite movie. This year, the screen can reach up to 2,700 nits of brightness, making it one of the brightest Android phones you can buy.

    Also: I replaced my Pixel 9 Pro XL with the 9a for a month - and it was pretty dang close

    Performance feels better, too. Powered by the new Tensor G4 processor, 12GB of RAM, and 128GB or 256GB of storage, the Pixel 9 is a screamer. It's one of the most responsive Android phones I've used all year, and that's just the standard version of this phone.

    The cameras are also impressive. Google kept the same 50MP main camera as last year but swapped the old 12MP ultra-wide for a new 48MP 123-degree camera. Photos are simply stunning on this phone, and Google's post-processing algorithms do a great job of retaining details and contrast. Video quality is also very good, especially with the company's Video Boost technology. This phone can easily rival any device that costs $200+ more.

    If there's a downside to the hardware, it's the lower-quality 10.5MP selfie camera, whereas the Pro phones get a new 42MP camera. There's also an extra telephoto camera on the Pro model, so you won't get the same zoom quality on the regular Pixel 9.

    As for this phone's AI features, Google has jammed quite a bit into the Pixel 9. Not only does it ship with the company's Gemini chatbot out of the box, but thanks to the Tensor G4 processor, it also comes with Gemini Live, so you can have real-life conversations with it.

    Also: I found a physical keyboard for my Pixel 9 Pro that isn't a joke

    Gemini Live requires a Google One AI Premium plan, but you'll get one for free if you buy a Pixel 9. I've asked it numerous questions that were similar to web queries ("What's the best place to live near New York City that's relatively affordable," "How many stars are in the sky -- wait, in the galaxy?") and it answered them all with ease -- even with speech interruptions. It's in the early stages, but it's exciting technology that could change how we use our phones.

    You also get features like Add Me, which lets you take a picture of your friends, then have them take a picture of you in the same place, and merge the two so no one's left out. I played around with it during my testing, and it worked surprisingly well. There are also some nice updates to Magic Editor for framing your photos.

    Google also included two new AI-powered apps on the Pixel 9 series: Pixel Screenshots and Pixel Studio. With the former, you can organize your screenshots and search through them with AI prompts, letting you easily pull up information like Wi-Fi passwords or recipes. The latter lets you generate images on the fly and customize them with text, stickers, and other effects. I've enjoyed using both apps in my limited testing time, but I'll need to play with them over the long run to see whether they're worth it.

    Also: The best Google Pixel phones to buy in 2025

    Battery life is quite good. There's a 4,700mAh cell inside that can last all day on a charge and then some, so you won't need to worry about this phone's battery after a long day. Google includes 45W charging support on the Pixel 9 series, which is awesome, but you'll need to buy a separate wall adapter to take advantage of it. In addition, there's 15W wireless charging (not Qi2, notably) and 5W reverse wireless charging called "Battery Share."

    ZDNET's buying advice: If your budget is $800, it's hard not to recommend Google's Pixel 9, especially while it's on sale at $250 off. Sure, the Samsung Galaxy S24 is a tough competitor, but I actually think this is the better buy. It gives you access to some useful new AI features, and you get all the perks of the Pixel experience, like excellent software, display quality, and cameras. The Pixel 9 Pro and Pixel 9 Pro XL may be flashier, but the baseline version of Google's flagship phone should not be overlooked.

    This article was originally published on August 22, 2024, and was updated on June 6, 2025.

    What are the tariffs in the US? The recent US tariffs on imports from countries like China, Vietnam, and India aim to boost domestic manufacturing but are likely to drive up prices on consumer electronics. Products like smartphones, laptops, and TVs may become more expensive as companies rethink global supply chains and weigh the cost of shifting production. Smartphones are among the most affected, with devices imported from China and Vietnam facing steep duties that could raise retail prices by 20% or more. Brands like Apple and Google, which rely heavily on Asian manufacturing, may either pass these costs on to consumers or absorb them at the expense of profit margins. The tariffs could also lead to delays in product launches or shifts in where and how phones are made, forcing companies to diversify production to countries with more favorable trade conditions.
  • An Astronaut Finds Symbiosis with Nature in Agus Putu Suyadnya’s Uncanny Paintings

    “Utopian Visions of Hope” (2025). All images courtesy of Sapar Contemporary, shared with permission
    An Astronaut Finds Symbiosis with Nature in Agus Putu Suyadnya’s Uncanny Paintings
    June 6, 2025
    Art / Climate
    Grace Ebert

    In Symbiotic Utopia, Agus Putu Suyadnya imagines a future in which tropical ecosystems not unlike those of Southeast Asia become sites for humanity to commune with nature.
    Surrounded by verdant foliage and moss-covered roots that seem to glow with blue and green fuzz, a recurring astronaut figure approaches each scene with comfort and ease. In one work, the suited character cradles a chimpanzee à la the noted conservationist Jane Goodall; in another, it waves a large bubble wand, leaving trails of iridescent orbs. And in “Cosmic Self Healing,” the figure sits in a comfortable chair, a large potted plant at his side. This typically domestic scene, though, is situated on the moon, with Earth’s swirling atmosphere behind him.
    “Cosmic Self Healing” (2022)

    While alluring in color and density, Suyadnya’s paintings are surreal and portend an eerie future irredeemably impacted by the climate crisis. The astronaut, after all, is fully covered in a protective capsule, a sign that people can only survive with this critical adaptation. “Humans cannot live without nature,” the artist says, “whereas the natural world without mankind will continue to survive. So why, as humans, do we think we have the upper hand?”
    Symbiotic Utopia is on view through July 7 at Sapar Contemporary in New York. Find more from Suyadnya on Instagram.
    Detail of “Cosmic Self Healing” (2022)
    “A Hug for Hope”
    “Steady Humility Wins Every Time” (2025)
    “Yearning for Home” (2024)
    “Playful Nature is the Future” (2024)
  • How to Set Up and Start Using Your New Nintendo Switch 2

    So, you’ve braved the pre-order sites, or maybe you’ve just gotten lucky while waiting in line. Either way, you’ve got yourself a Nintendo Switch 2. Congratulations! But before you start gaming, there are a few things you’ll need to keep in mind while setting up your console. Nintendo is known for being user-friendly, but also a bit particular. Case in point: you can only do a full transfer of your Switch 1 data to your Switch 2 during setup, and if you miss this opportunity, you’ll have to reset your device to try again, or manually copy over your games and save data piece by piece later on. Luckily, I’ve got your back. Read on for a quick guide on how to set up your Nintendo Switch 2, and the three other features you should set up before you start playing.

    How to start setting up a Nintendo Switch 2

    For the most part, setting up a new Switch 2 out of the box is straightforward, but you’ll still want to pay close attention to each step before moving on, especially when it comes to transferring console data. First, remove your Switch 2 and your Joy-Con controllers from their packaging. Then, plug your Joy-Cons into their respective slots. If you don’t know which Joy-Con goes where, the one with red highlights goes to the right of the screen, and the one with blue highlights goes to the left. Next, plug your Switch into power using the included charging brick and cable, and power it on. On the screens that follow, select your language and region, then read and accept the end-user license agreement.


    You’ll see a screen to connect to the internet and download the console’s day-one system update. This technically isn’t mandatory, and skipping it will instead take you to time zone settings. However, most features, including backward compatibility, stay locked until you download it, so I recommend doing it during setup if possible. If you do skip this step, you can access the update later under Settings > System > System Update.

    Once you’re connected to the internet and the update has started downloading, you can continue setup while it downloads. Next, you’ll pick your time zone and click through a couple of tutorial pages. These explain portable and TV play, show you how to use the kickstand and extra USB-C port, and walk you through detaching the Joy-Cons from the console. You can also click through an optional tutorial on connecting your Switch 2 to a TV, after which you’ll get quick guides on the included Joy-Con grip accessory and the Joy-Con wrist straps. If your console hasn’t finished updating, it will do so now, and then take you to your first big decision: do you want to transfer your Switch 1 data to your Switch 2?

    Transferring Switch 1 data to the Switch 2

    During Switch 2 setup, Nintendo will let you transfer your Switch 1 data to your Switch 2, but there are a few caveats. You’ll know you’re ready once the system update is downloaded and you’re on a screen that says “To Nintendo Switch Console Owners” above a graphic of someone holding a Switch 1 and a Switch 2. Next to the graphic are two buttons, Begin System Transfer and Don’t Transfer Data, plus a third button that explains the process to you but leaves out a few key details.

    Before you make your decision, the most important thing to remember is this: there are actually two ways to transfer data from the Switch 1 to the Switch 2, and despite what you might have read elsewhere, locally transferring your Switch 1 data to the Switch 2 during setup will not factory reset your original Switch. Unless you’ve taken extra steps beforehand, this is the option Nintendo’s setup process will recommend, so most users don’t need to worry about accidentally erasing their original console.


    If you stick with a local transfer, it simply copies your data to the Switch 2, so that it exists on both systems. In a few specific cases, some data will be removed from your original device as it makes its way over to your new one, but for the most part you can keep using your original device as usual after the transfer, and there are ways to get that data back later. Just know that save data for certain games, including some free-to-play titles, may be deleted from your Switch 1 and moved over to your Switch 2. Don’t worry: Nintendo will warn you about which software will be affected during the transfer process. Additionally, screenshots and video captures stored on a microSD card in the Switch 1 will need to be moved over manually later on.

    How to transfer your Switch 1 data locally

    With that in mind, if you want to transfer your data locally, which is what most people should do, click the Begin System Transfer button and follow the instructions. This involves signing into your Nintendo account, keeping your original Switch powered on and close to the Switch 2, and activating the transfer on your original Switch under Settings > System Settings > System Transfer to Nintendo Switch 2.

    How to transfer your Switch 1 data using Nintendo's servers

    The confusion about factory resets comes from this second option, which uses Nintendo’s servers. This method will factory reset your Switch, and it’s best if you plan to sell the old console anyway, or if you expect to be away from it during Switch 2 setup and don’t mind setting it up from scratch when you get back to it. To start this kind of transfer, power on your original Switch, navigate to the System Transfer page mentioned above, then select I don’t have a Nintendo Switch 2 yet. Take note of the Download Deadline for later. Conveniently, this points to one upside of the method: you can start it before you even have a Switch 2 in hand.

    Now click Next, then Upload Data, then OK, followed by another OK. Click Start Initialization to begin factory resetting your original Switch. From here, your original Switch reverts to how it was before you bought it, and you’ll need to move over to your Switch 2, click Begin System Transfer, and sign into your Nintendo account. If the system detects that you have transfer data to download from the cloud, it will walk you through the process. Note, however, that if you don’t download your transfer data before the deadline you jotted down earlier, you’ll lose access to it.

    If you want to skip the data transfer process...

    If you’d rather not transfer your data, that’s also fine, but you won’t get another opportunity to do so later; you’ll instead need to move games and save data over manually. Click the Don’t Transfer Data button, then Continue to move to the next step.

    Adding a user and parental controls

    With system transfers out of the way, you’re through the hardest part of setting up your new console. Now you’ll be prompted to add a user to the system. Here, you can sign in with your Nintendo Account to get access to your Switch Online subscription and your collection of downloadable games, or create a local user profile. After that, you can add more users as you like, or save that for later.

    Next up: parental controls. As with additional users, you can set these up later under System Settings > Parental Controls, but there’s no harm in setting them up now. To do so, click Set Parental Controls.


    You’ll have a few options. Most of these prompt you to use Nintendo’s Parental Controls app, but you can also press the X button on the right-hand Joy-Con to set up limited parental controls directly on the console. Doing so lets you select from a number of presets that block access to certain games and communication features, but not much else. The app, meanwhile, lets you set a daily play-time limit, bedtime settings, and restrictions on the new GameChat feature, and shows reports on play time and games played. It also doesn’t require a Switch Online subscription, so it’s worth using if you have a smart device.

    To set up parental controls using the app, first download it for either iOS or Android using the information on the screen, then click the “If You’ve Already Downloaded the App” button. Enter the registration code from your app into your Switch 2 system, then follow the instructions in the app to finish setup. Which buttons you’ll need to click depends on the controls you’d like to activate, as well as for which users and systems, but it’s fairly straightforward.

    MicroSD card limitations

    Just a couple more screens. First, a quick warning about microSD cards. Unlike the Switch 1, the Switch 2 is only compatible with microSD Express cards, which are faster, but options for them are also a bit more limited. In other words, there’s a good chance you won’t be able to use your Switch 1’s microSD card in your Switch 2. To use a microSD card with the Switch 2, it’ll need to carry one of the two microSD Express logos. A bit of a bummer, but at least a microSD card is optional.

    Credit: Michelle Ehrhardt

    Oh, and like on the Switch 1, the microSD slot is hidden under the kickstand, in case you’re having trouble finding it.

    Virtual Game Cards

    You’re technically through setup at this point, but there are still a few features you’ll probably want to configure before you start gaming. The most obvious of these is Virtual Game Cards, Nintendo’s new system for managing games purchased digitally.

    Essentially, like the name implies, these work similarly to physical game cards, but over the internet. This means that, unlike with your Steam library, you can only load a game onto one console at a time. “Loading” is a Nintendo-specific term, but for the most part, it just means your game is downloaded and ready to play. (Technically, you can still play the same game on two separate consoles at the same time, even if it isn’t loaded on one, but doing so is a bit obtuse—click through here for more details.)

    To access your Virtual Game Cards, click the Virtual Game Card icon in the bottom row of your Switch 2’s home screen—it’ll look like a game cartridge. From here, if you’ve signed into your Nintendo Account, you’ll see all your digital purchases and will be able to download and play them. If you haven’t signed into your Nintendo Account, you’ll have the option to do so.

    Credit: Michelle Ehrhardt

    Now, you’ll have a few options. First, if a game isn’t loaded onto your original Switch, you can simply download it to your Switch 2 by clicking Load to This Console. If the console isn’t set as your primary device (likely the case if you didn’t do a transfer), you might see a warning if you try to open a game, depending on how up to date your original Switch’s software is. If your original Switch doesn’t have the Virtual Game Cards update yet, you can click the If You Don’t Have That Console button to download your game anyway. It will simply cease being playable on the other console while you use it on this one, although that’s always the case when moving a Virtual Game Card between systems.

    Otherwise, you might need to link your two systems by bringing them close together and following the instructions on screen before you can load a Virtual Game Card on your new device. If you’re not able to do this, like if you’ve gotten rid of your original Switch while it’s still set as your primary device, you can remove your old Switch from your account by deregistering it. After deregistering your old console, you can set your Switch 2 as your new primary device by connecting it to the eShop. If you’re able to link your old console to your new one, this won’t be necessary for simply accessing your library, but it will extend any Nintendo Switch Online benefits to all users on your new primary device, rather than just the user associated with your Nintendo Account.

    Credit: Michelle Ehrhardt

    Alternatively, if you’ve managed to link your devices, you can use the device that currently has your Virtual Game Card (i.e. your Switch 1) on it to load it to your new one (i.e. your Switch 2). Simply open your games, click Load to Another Console, and follow the instructions on screen. This will have the same effect as the Load to This Console button. Also, if you’d like to be able to continue playing a game on a device even after moving its Virtual Game Card to another device, you can enable Use Online License under System Settings > User Settings > Online License Settings to do just that. You’ll need to be connected to the internet for this to work, whereas a Virtual Game Card can be played offline, but it’s better than nothing. Plus, this enables the workaround mentioned earlier in this section that allows you to play the same game on both devices at once.

    How to lend a Virtual Game Card to someone else

    You’ll also notice that you can lend a Virtual Game Card to members of a “Family Group.” To do this, you’ll first need to set up a Family Group online. On Nintendo’s website, log into your Nintendo Account, then click the Family Group tab on the left-hand side of your account page. Here, you can invite members to join your Family Group via email, or create a Family Group account for your child. Note that if you have a Nintendo Switch Online Family Plan subscription, members of your Family Group will be able to use its benefits (for up to eight accounts), although accounts that are part of your Family Group can also still use their individual subscriptions.

    With a Family Group set up, on the Virtual Game Card page, click the game you’d like to lend out, then Lend to a Family Group Member. Next, bring your Switch 2 in close proximity to that Family Group Member’s device—this needs to be done in person. Finally, click Select a User to Lend to.

    You can lend up to three games to three different accounts at once, and borrowers will be able to play these games for 14 days. During that time, you won’t be able to play the Game Card, and the borrower won’t get access to your save data while borrowing. However, they will keep their own save data for their next borrowing period, or if they choose to buy the game themselves. There are no limits to how often you can lend out a game, and you can re-lend games immediately once the borrowing period expires. Also, while you’ll need to lend out your games in person, they’ll return to you remotely.

    Transferring save data

    Even if you didn’t transfer your Switch 1 data to your Switch 2 during setup, you can still access its save data on your new device. You have a couple of options here.

    First, the free option. On your original Switch, go to System Settings > Data Management > Transfer Your Save Data. Click Send Data to Another Console, then pick the user whose saves you want to send to your Switch 2. Pick the saves you want to send over, then click OK. Note that these saves will be deleted from your original console once moved over. Next, with your Switch 2 in close proximity to your Switch 1 (this also needs to be done in person), navigate to System Settings > Data Management > Transfer Your Save Data. Click Receive Save Data. To move data from your Switch 2 to your Switch 1, simply perform these steps in reverse.

    Second, the paid option. If you have a Nintendo Switch Online membership, you can also use cloud saves to move save data between devices. By default, these are enabled automatically and will keep both of your systems up to date with the most recent saves. However, you can also manually download cloud saves either from a game’s software menu (press + or - while hovering over it on the Switch home screen) or from System Settings > Data Management > Save Data Cloud. You can also disable automatic save data download from here, if you like.

    Lock your home screen behind a passcode

    Finally, you can lock your Switch 2 with a PIN for some added security, kind of like a cell phone. To set this up, simply go to Settings > System > Console Lock. Click OK, then follow the instructions on the screen that pops up to enter your PIN.

    There’s plenty more to dive into with the Switch 2, which I’ll cover over the following week. For now, though, this should be enough to get you started. Happy gaming!
  • Europe’s Call for Tech Sovereignty Takes a Hit. European Commission to Adopt a More Collaborative Approach

    Key Takeaways

    The European Commission will introduce a new International Digital Strategy focused on tech collaboration with the US and other countries.
    This contradicts growing pressure within Europe for tech sovereignty and reduced dependence on US technology.
    Low venture capital funding and a fragmented regulatory landscape have been major challenges for tech innovation in Europe.

    Europe is seeing an increasing push to establish technological sovereignty and reduce the region’s reliance on US technology. However, there’s always been an undertone of acceptance that the EU is years behind the US when it comes to technological innovation and advancements.
    Now, the European Commission is planning to acknowledge this publicly. The EC will introduce a new International Digital Strategy, which will focus on collaboration with the United States and other tech-forward countries, such as Japan, South Korea, and India.
    The lawmakers believe that “decoupling” from the West is unrealistic and will instead push Europe further back in the technological race.
    Europe’s Call for Tech Sovereignty
    Several prominent political leaders and lawmakers have advocated for European sovereignty over technology and artificial intelligence. 
    For instance, Emmanuel Macron, the president of France, said in a speech in 2024 that Europe’s strategic autonomy is a conscious choice to end the region’s dependence on others. In an earlier speech, he also said that if Europe fails to build champions in areas such as digital and artificial intelligence, its choices will be dictated by others. 
    Similarly, Thierry Breton, the former EU Commissioner for Internal Markets, said that digital spending in the EU will breach the 20% target, underlining the importance of investing in European tech sovereignty.
    He focused on Europe’s declining market share in the semiconductor industry and called for the development of groundbreaking European tech.
    The Eurostack movement in the EU has also been gaining a lot of traction and support from various think tanks, academic researchers, and industry voices.
    Eurostack calls for the development of an indigenous infrastructure stack in the European region, including cloud, AI, semiconductors, digital services, and data centers. The main aim is to reduce Europe’s dependence on Chinese and US technology.
    This growing concern among EU well-wishers is quite understandable. Excessive reliance on the United States puts the US (meaning Donald Trump) in a controlling position, where he can arm-twist the European Union in matters of trade and even politics.
    We have already seen an example of this during the tariff war, where Trump imposed a 25% tariff on automobiles (and their parts) and steel and aluminum products imported from the EU.
    Also, the fact that Google, Microsoft, and Amazon account for around 69% of the cloud market in the EU is quite concerning, too.
    Europe Wakes Up to Reality
    Despite positive speeches and statements by people in power in the EU, the fact remains that Europe is still lagging behind the United States. Bulgarian lawmaker Eva Maydell said that Europe should “sober up” and accept that the train has left the station.
    Dan Nechita, the current EU director for the Transatlantic Policy Network, said that it’s not the right time to be politically absolute and say that “we are going to do everything in Europe.”
    EU tech chief Henna Virkkunen has been working hard to win the support of influential tech lobbies and has emphasized the need to continue working with the United States.
    In a nutshell, the European Commission is ready to accept that the damage is done, and that Europe now needs to play second fiddle and forge strategic partnerships with key technological leaders (primarily in the US) to stay alive in the race.
    Lack of Investments and the EU’s Regulation-First Approach
    One of the major reasons behind the European Union’s sluggish tech development is the lack of venture capital investment in the region. As per an IMF post, EU VC funds raised around $130B between 2013 and 2022. During the same time, the US raised $924B.
    Similarly, annual VC investments in the EU amount to only 0.2% of GDP, compared to 0.7% in the US. Currently, the United States accounts for a massive 52% of global VC funds, whereas the EU holds just 5%.
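
    To put those figures in perspective, here’s a quick back-of-the-envelope comparison. This is a minimal illustrative sketch in Python using only the numbers quoted above; the variable names are mine, not from the IMF post.

        # Back-of-the-envelope comparison of US vs. EU venture capital,
        # using only the figures cited in this article (IMF, 2013-2022).
        us_raised_bn, eu_raised_bn = 924, 130    # VC funds raised, in $B
        us_gdp_pct, eu_gdp_pct = 0.7, 0.2        # annual VC investment, % of GDP
        us_global_pct, eu_global_pct = 52, 5     # share of global VC funds, %

        print(f"Funds raised: the US outraised the EU {us_raised_bn / eu_raised_bn:.1f}x")   # ~7.1x
        print(f"Relative to GDP: the US invests {us_gdp_pct / eu_gdp_pct:.1f}x more")        # 3.5x
        print(f"Global share: the US holds {us_global_pct / eu_global_pct:.1f}x the EU's")   # ~10.4x

    Whichever metric you pick, the gap sits somewhere between 3.5x and roughly 10x.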

    Lack of investments has forced European startups to look elsewhere, especially the United States, for funding and support. One of the major reasons for such low venture capital interest in Europe is its diversity.
    Each of the 27 countries in the EU has its own regulatory and legal challenges, which make it difficult for multinational corporations to operate in the region. 
    Plus, the EU has adopted a regulation-first approach, which is very different from the United States. Of course, this approach has its own merits, but it has surely slowed down the speed of technological investments in the region. 
    For instance, the GDPR puts a truckload of regulatory and moral responsibility on companies to protect user data. Similarly, the AI Act pushes for more ethical development of artificial intelligence that’s aligned with human values and keeps companies from exploiting public data.
    Sure, all of these are positive tech regulations important to protect the long-term sovereignty of the public at large. Even global tech advocates have praised the EU’s efforts for the development of safe and ethical technologies. However, we cannot ignore the fact that this has come at the cost of sluggish tech investments and overall growth.
    With its back against the wall, Europe needs to reassess its strengths and focus on areas like open-source technologies, such as France’s La Suite numérique, and government-backed technological initiatives to have a say in the coming artificial intelligence and semiconductor race.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming session that keeps him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.


    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    #europes #call #tech #sovereignty #takes
    Europe’s Call for Tech Sovereignty Takes a Hit. European Commission to Adopt a More Collaborative Approach
    Key Takeaways The European Commission will introduce the new International Digital Strategy that will focus on tech collaboration with the US and other countries. This is in contradiction to the growing pressure in Europe that calls for tech sovereignty and reducing tech dependence on the US. Low venture capital funding and a diverse regulatory landscape have been major challenges for tech innovation in Europe. Europe is seeing an increasing push to establish technological sovereignty and reduce the region’s reliance on US technology. However, there’s always been an undertone of acceptance that the EU is years behind the US when it comes to technological innovation and advancements. Now, the European Commission is planning to acknowledge this publicly. The EC will introduce a new International Digital Strategy, which will focus on collaboration with the United States and other tech-forward countries, such as Japan, South Korea, and India. The lawmakers believe that “decoupling” from the West is unrealistic and will instead push Europe further back in the technological race. Europe’s Call for Tech Sovereignty Several prominent political leaders and lawmakers have advocated for European sovereignty over technology and artificial intelligence.  For instance, Emmanuel Macron, the president of France, said in a speech in 2024 that Europe’s strategic autonomy is a conscious choice to end the region’s dependence on others. In an earlier speech, he also said that if Europe fails to build champions in areas such as digital and artificial intelligence, its choices will be dictated by others.  Similarly, Thierry Breton, the former EU Commissioner for Internal Markets, said that digital spending in the EU will breach the 20% target, underlining the importance of investing in European tech sovereignty. He focused on Europe’s declining market share in the semiconductor industry and called for the development of groundbreaking European tech. The Eurostack movement in the EU has also been gaining a lot of traction and support from various think tanks, academic researchers, and industry voices. Eurostack staff calls for the development of an indigenous infrastructure stack in the European region, including cloud, AI, semiconductors, digital services, and data centers. The main aim is to reduce Europe’s dependence on Chinese and US technology. This growing concern among EU well-wishers is quite understandable. Excessive reliance on the United States puts the USin a controlling position, where he can arm-twist the European Union in matters of trade and even politics.  We have already seen an example of this during the tariff war, where Trump imposed a 25% tariff on automobilesand steel and aluminum products imported from the EU. Also, the fact that Google, Microsoft, and Amazon account for around 69% of the cloud market in the EU is quite concerning, too. Europe Wakes Up to Reality Despite positive speeches and statements by people in power in the EU, the fact remains that Europe is still lagging behind the United States. Bulgarian lawmaker Eva Maydell said that Europe should “sober up” and accept that the train has left the station. Dan Nechita, the current EU director for the Transatlantic Policy Network, said that it’s not the right time to be politically absolute and say that “we are going to do everything in Europe.” EU tech chief Virkkunen has been working hard to bring home the support of influential tech lobbies and emphasized the need to continue working with the United States. 
To put it in a nutshell, the European Commission is ready to accept the fact that the damage is done, and now Europe needs to play second fiddle and forge strategic partnerships with key technological leadersto stay alive in the race. Lack of Investments and the EU’s Regulation-First Approach One of the major reasons behind the European Union’s sluggish tech development is the lack of venture capital investments in the region. As per an IMF post, EU VC funds raised around B between 2013 and 2022. During the same time, the US raised B.  Similarly, annual VC investments in the EU are only 0.2% of the GDP when compared to 0.7% in the US. Currently, the United States accounts for a massive 52% of the global VC funds, whereas the EU holds just 5%.  Lack of investments has forced European startups to look elsewhere, especially the United States, for funding and support. One of the major reasons for such low venture capital interest in Europe is its diversity. Each of the 27 countries in the EU has its own regulatory and legal challenges, which make it difficult for multinational corporations to operate in the region.  Plus, the EU has adopted a regulation-first approach, which is very different from the United States. Of course, this approach has its own merits, but it has surely slowed down the speed of technological investments in the region.  For instance, the GDPR puts a truckload of regulatory and moral responsibility on companies to protect user data. Similarly, the AI Act focuses on the more ethical development of artificial intelligence that’s aligned with human values and refrains companies from exploiting public data. Sure, all of these are positive tech regulations important to protect the long-term sovereignty of the public at large. Even global tech advocates have praised the EU’s efforts for the development of safe and ethical technologies. However, we cannot ignore the fact that this has come at the cost of sluggish tech investments and overall growth. With its back against the wall, Europe needs to reassess its strengths and focus on areas such as open-source technologies, such as France’s La Suite numérique, and government-backed technological initiatives to have a say in the upcoming artificial intelligence and semiconductor race. Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style. He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth. Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides.  Behind the scenes, Krishi operates from a dual-monitor setupthat’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh.  Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  
When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well. View all articles by Krishi Chowdhary Our editorial process The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors. #europes #call #tech #sovereignty #takes
    TECHREPORT.COM
    Europe’s Call for Tech Sovereignty Takes a Hit. European Commission to Adopt a More Collaborative Approach
    Key Takeaways The European Commission will introduce the new International Digital Strategy that will focus on tech collaboration with the US and other countries. This is in contradiction to the growing pressure in Europe that calls for tech sovereignty and reducing tech dependence on the US. Low venture capital funding and a diverse regulatory landscape have been major challenges for tech innovation in Europe. Europe is seeing an increasing push to establish technological sovereignty and reduce the region’s reliance on US technology. However, there’s always been an undertone of acceptance that the EU is years behind the US when it comes to technological innovation and advancements. Now, the European Commission is planning to acknowledge this publicly. The EC will introduce a new International Digital Strategy, which will focus on collaboration with the United States and other tech-forward countries, such as Japan, South Korea, and India. The lawmakers believe that “decoupling” from the West is unrealistic and will instead push Europe further back in the technological race. Europe’s Call for Tech Sovereignty Several prominent political leaders and lawmakers have advocated for European sovereignty over technology and artificial intelligence.  For instance, Emmanuel Macron, the president of France, said in a speech in 2024 that Europe’s strategic autonomy is a conscious choice to end the region’s dependence on others. In an earlier speech, he also said that if Europe fails to build champions in areas such as digital and artificial intelligence, its choices will be dictated by others.  Similarly, Thierry Breton, the former EU Commissioner for Internal Markets, said that digital spending in the EU will breach the 20% target, underlining the importance of investing in European tech sovereignty. He focused on Europe’s declining market share in the semiconductor industry and called for the development of groundbreaking European tech. The Eurostack movement in the EU has also been gaining a lot of traction and support from various think tanks, academic researchers, and industry voices. Eurostack staff calls for the development of an indigenous infrastructure stack in the European region, including cloud, AI, semiconductors, digital services, and data centers. The main aim is to reduce Europe’s dependence on Chinese and US technology. This growing concern among EU well-wishers is quite understandable. Excessive reliance on the United States puts the US (meaning Donald Trump) in a controlling position, where he can arm-twist the European Union in matters of trade and even politics.  We have already seen an example of this during the tariff war, where Trump imposed a 25% tariff on automobiles (and their parts) and steel and aluminum products imported from the EU. Also, the fact that Google, Microsoft, and Amazon account for around 69% of the cloud market in the EU is quite concerning, too. Europe Wakes Up to Reality Despite positive speeches and statements by people in power in the EU, the fact remains that Europe is still lagging behind the United States. Bulgarian lawmaker Eva Maydell said that Europe should “sober up” and accept that the train has left the station. 
    Dan Nechita, the current EU director for the Transatlantic Policy Network, said that this is not the time to be politically absolute and declare that “we are going to do everything in Europe.” EU tech chief Henna Virkkunen has been working to win the support of influential tech lobbies and has emphasized the need to continue working with the United States.

    In a nutshell, the European Commission is ready to accept that the damage is done and that Europe must now play second fiddle, forging strategic partnerships with key technological leaders (primarily in the US) to stay in the race.

    Lack of Investments and the EU’s Regulation-First Approach

    One major reason behind the European Union’s sluggish tech development is the lack of venture capital investment in the region. As per an IMF post, EU VC funds raised around $130B between 2013 and 2022, while US funds raised $924B over the same period. Annual VC investment in the EU amounts to only 0.2% of GDP, compared with 0.7% in the US. The United States currently accounts for a massive 52% of global VC funds, whereas the EU holds just 5%. This shortfall has forced European startups to look elsewhere, especially to the United States, for funding and support.

    A key cause of such low venture capital interest in Europe is its diversity: each of the EU’s 27 countries has its own regulatory and legal challenges, making it difficult for multinational corporations to operate across the region. The EU has also adopted a regulation-first approach, very different from that of the United States. This approach has its merits, but it has slowed the pace of technological investment.

    For instance, the GDPR puts a truckload of regulatory and moral responsibility on companies to protect user data. Similarly, the AI Act pushes for the ethical development of artificial intelligence aligned with human values and prevents companies from exploiting public data. These are positive tech regulations, important for protecting the long-term interests of the public at large, and even global tech advocates have praised the EU’s efforts to develop safe and ethical technologies. But they have come at the cost of sluggish tech investment and overall growth.

    With its back against the wall, Europe needs to reassess its strengths and focus on open-source technologies, such as France’s La Suite numérique, and government-backed initiatives to have a say in the upcoming artificial intelligence and semiconductor race.
  • Manus has kick-started an AI agent boom in China

    Last year, China saw a boom in foundation models, the do-everything large language models that underpin the AI revolution. This year, the focus has shifted to AI agents—systems that are less about responding to users’ queries and more about autonomously accomplishing things for them. 

    There are now a host of Chinese startups building these general-purpose digital tools, which can answer emails, browse the internet to plan vacations, and even design an interactive website. Many of these have emerged in just the last two months, following in the footsteps of Manus—a general AI agent that sparked weeks of social media frenzy for invite codes after its limited-release launch in early March. 

    These emerging AI agents aren’t large language models themselves. Instead, they’re built on top of them, using a workflow-based structure designed to get things done. A lot of these systems also introduce a different way of interacting with AI. Rather than just chatting back and forth with users, they are optimized for managing and executing multistep tasks—booking flights, managing schedules, conducting research—by using external tools and remembering instructions. 
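
    The loop behind that description is worth making concrete. Below is a minimal sketch, in Python, of the workflow-based structure described above: the underlying model only chooses the next step, while a thin harness calls external tools and carries memory forward between steps. Every name here (call_llm, the tool registry) is a hypothetical illustration, not any particular product’s API.

    ```python
    # Hypothetical agent harness: the LLM plans, tools act, memory persists.

    def call_llm(context: str) -> dict:
        """Placeholder for the underlying LLM call. It should return the next
        action, e.g. {"tool": "search_flights", "args": {...}} or
        {"tool": "finish", "args": {"answer": "..."}}."""
        raise NotImplementedError

    # External tools the agent may call; real systems register many more.
    TOOLS = {
        "search_flights": lambda origin, dest: f"flights {origin}->{dest}: ...",
        "add_to_calendar": lambda title, date: f"added '{title}' on {date}",
    }

    def run_agent(task: str, max_steps: int = 10) -> str:
        memory = [f"Task: {task}"]                # instructions persist across steps
        for _ in range(max_steps):
            action = call_llm("\n".join(memory))  # the model picks the next step
            if action["tool"] == "finish":
                return action["args"]["answer"]
            result = TOOLS[action["tool"]](**action["args"])  # a tool does the work
            memory.append(f"{action['tool']} -> {result}")    # remember the outcome
        return "step budget exhausted"
    ```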

    China could take the lead on building these kinds of agents. The country’s tightly integrated app ecosystems, rapid product cycles, and digitally fluent user base could provide a favorable environment for embedding AI into daily life. 

    For now, its leading AI agent startups are focusing their attention on the global market, because the best Western models don’t operate inside China’s firewalls. But that could change soon: Tech giants like ByteDance and Tencent are preparing their own AI agents that could bake automation directly into their native super-apps, pulling data from their vast ecosystem of programs that dominate many aspects of daily life in the country. 

    As the race to define what a useful AI agent looks like unfolds, a mix of ambitious startups and entrenched tech giants are now testing how these tools might actually work in practice—and for whom.

    Set the standard

    It’s been a whirlwind few months for Manus, which was developed by the Wuhan-based startup Butterfly Effect. The company raised $75 million in a funding round led by the US venture capital firm Benchmark, took the product on an ambitious global roadshow, and hired dozens of new employees. 

    Even before registration opened to the public in May, Manus had become a reference point for what a broad, consumer‑oriented AI agent should accomplish. Rather than handling narrow chores for businesses, this “general” agent is designed to be able to help with everyday tasks like trip planning, stock comparison, or your kid’s school project. 

    Unlike previous AI agents, Manus uses a browser-based sandbox that lets users supervise the agent like an intern, watching in real time as it scrolls through web pages, reads articles, or codes actions. It also proactively asks clarifying questions and supports long-term memory that serves as context for future tasks.

    “Manus represents a promising product experience for AI agents,” says Ang Li, cofounder and CEO of Simular, a startup based in Palo Alto, California, that’s building computer use agents, AI agents that control a virtual computer. “I believe Chinese startups have a huge advantage when it comes to designing consumer products, thanks to cutthroat domestic competition that leads to fast execution and greater attention to product details.”

    In the case of Manus, the competition is moving fast. Two of the buzziest follow‑ups, Genspark and Flowith, already boast benchmark scores that match or edge past Manus’s. 

    Genspark, led by former Baidu executives Eric Jing and Kay Zhu, links many small “super agents” through what it calls multi‑component prompting. The agent can switch among several large language models, accepts both images and text, and carries out tasks from making slide decks to placing phone calls. Whereas Manus relies heavily on Browser Use, a popular open-source product that lets agents operate a web browser in a virtual window like a human, Genspark directly integrates with a wide array of tools and APIs. Launched in April, the company says that it already has over 5 million users and over $36 million in yearly revenue.
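
    The two integration styles in that comparison are easy to see in code. In the sketch below, Playwright (a real browser-automation library) stands in for the browser-operating approach that tools like Browser Use build on, and a plain HTTP call stands in for direct API integration; this is an illustration, not Browser Use’s or Genspark’s actual code.

    ```python
    import requests
    from playwright.sync_api import sync_playwright

    def read_page_like_a_human(url: str) -> str:
        """Browser-operating style: drive a real browser window, which is
        what lets a user watch the agent scroll and click in real time."""
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=False)  # visible window
            page = browser.new_page()
            page.goto(url)
            text = page.inner_text("body")  # the text the agent "reads"
            browser.close()
        return text

    def call_service_directly(url: str) -> str:
        """Direct-integration style: skip the browser and call an API."""
        return requests.get(url, timeout=10).text
    ```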

    Flowith, the work of a young team that first grabbed public attention in April 2025 at a developer event hosted by the popular social media app Xiaohongshu, takes a different tack. Marketed as an “infinite agent,” it opens on a blank canvas where each question becomes a node on a branching map. Users can backtrack, take new branches, and store results in personal or sharable “knowledge gardens”—a design that feels more like project management software (think Notion) than a typical chat interface. Every inquiry or task builds its own mind-map-like graph, encouraging a more nonlinear and creative interaction with AI. Flowith’s core agent, NEO, runs in the cloud and can perform scheduled tasks like sending emails and compiling files. The founders want the app to be a “knowledge marketbase” and aim to tap into the social aspect of AI, with the aspiration of becoming “the OnlyFans of AI knowledge creators.”
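
    Flowith’s branching canvas is essentially a tree of inquiry nodes rather than a linear chat transcript. Below is a toy data-structure sketch of that idea; the Node class and the “knowledge garden” store are invented for illustration, not Flowith’s actual implementation.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        question: str
        answer: str = ""
        children: list["Node"] = field(default_factory=list)

        def branch(self, question: str) -> "Node":
            """Start a new line of inquiry from any earlier point on the map."""
            child = Node(question)
            self.children.append(child)
            return child

    garden: dict[str, Node] = {}  # shareable store of finished maps

    root = Node("Plan a three-day trip to Kyoto")
    hotels = root.branch("Compare hotels near Gion")
    food = root.branch("Find vegetarian restaurants")  # backtrack, new branch
    garden["kyoto-trip"] = root                        # publish the whole map
    ```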

    What they also share with Manus is global ambition: both Genspark and Flowith have stated that their primary focus is the international market.

    A global address

    Startups like Manus, Genspark, and Flowith—though founded by Chinese entrepreneurs—could blend seamlessly into the global tech scene and compete effectively abroad. Founders, investors, and analysts that MIT Technology Review has spoken to believe Chinese companies are moving fast, executing well, and quickly coming up with new products. 

    Money reinforces the pull to launch overseas. Customers there pay more, and there are plenty to go around. “You can price in USD, and with the exchange rate that’s a sevenfold multiplier,” Manus cofounder Xiao Hong quipped on a podcast. “Even if we’re only operating at 10% power because of cultural differences overseas, we’ll still make more than in China.”

    But creating the same functionality in China is a challenge. Major US AI companies including OpenAI and Anthropic have opted out of mainland China because of geopolitical risks and challenges with regulatory compliance. Their absence initially created a black market as users resorted to VPNs and third-party mirrors to access tools like ChatGPT and Claude. That vacuum has since been filled by a new wave of Chinese chatbots—DeepSeek, Doubao, Kimi—but the appetite for foreign models hasn’t gone away. 

    Manus, for example, uses Anthropic’s Claude Sonnet—widely considered the top model for agentic tasks. Manus cofounder Zhang Tao has repeatedly praised Claude’s ability to juggle tools, remember contexts, and hold multi‑round conversations—all crucial for turning chatty software into an effective executive assistant.

    But the company’s use of Sonnet has made its agent functionally unusable inside China without a VPN. If you open Manus from a mainland IP address, you’ll see a notice explaining that the team is “working on integrating Qwen’s model,” a special local version that is built on top of Alibaba’s open-source model. 

    An engineer overseeing ByteDance’s work on developing an agent, who spoke to MIT Technology Review anonymously to avoid sanction, said that the absence of Claude Sonnet models “limits everything we do in China.” DeepSeek’s open models, he added, still hallucinate too often and lack training on real‑world workflows. Developers we spoke with rank Alibaba’s Qwen series as the best domestic alternative, yet most say that switching to Qwen knocks performance down a notch.

    Jiaxin Pei, a postdoctoral researcher at Stanford’s Institute for Human‑Centered AI, thinks that gap will close: “Building agentic capabilities in base LLMs has become a key focus for many LLM builders, and once people realize the value of this, it will only be a matter of time.”

    For now, Manus is doubling down on audiences it can already serve. In a written response, the company said its “primary focus is overseas expansion,” noting that new offices in San Francisco, Singapore, and Tokyo have opened in the past month.

    A super‑app approach

    Although the concept of AI agents is still relatively new, the consumer-facing AI app market in China is already crowded with major tech players. DeepSeek remains the most widely used, while ByteDance’s Doubao and Moonshot’s Kimi have also become household names. However, most of these apps are still optimized for chat and entertainment rather than task execution. This gap in the local market has pushed China’s big tech firms to roll out their own user-facing agents, though early versions remain uneven in quality and rough around the edges. 

    ByteDance is testing Coze Space, an AI agent based on its own Doubao model family that lets users toggle between “plan” and “execute” modes, so they can either directly guide the agent’s actions or step back and watch it work autonomously. It connects up to 14 popular apps, including GitHub, Notion, and the company’s own Lark office suite. Early reviews say the tool can feel clunky and has a high failure rate, but it clearly aims to match what Manus offers.
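
    The “plan”/“execute” toggle is a simple control pattern: one mode only surfaces the proposed steps for the user to approve, while the other runs them autonomously. Here is a minimal sketch, with hypothetical names rather than ByteDance’s actual interface:

    ```python
    def propose_plan(task: str) -> list[str]:
        """Placeholder for an LLM call that decomposes a task into steps."""
        return [f"research: {task}", f"draft: {task}", f"review: {task}"]

    def run_agent(task: str, mode: str = "plan") -> None:
        steps = propose_plan(task)
        if mode == "plan":  # guided: show the steps and wait for the user
            print("Proposed plan (awaiting approval):")
            for i, step in enumerate(steps, 1):
                print(f"  {i}. {step}")
            return
        for step in steps:  # "execute": run autonomously
            print(f"running: {step}")  # stand-in for real tool calls

    run_agent("compare laptops under $1,000", mode="plan")
    ```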

    Meanwhile, Zhipu AI has released a free agent called AutoGLM Rumination, built on its proprietary ChatGLM models. Shanghai‑based Minimax has launched Minimax Agent. Both products look almost identical to Manus and demo basic tasks such as building a simple website, planning a trip, making a small Flash game, or running quick data analysis.

    Despite the limited usability of most general AI agents launched within China, big companies have plans to change that. During a May 15 earnings call, Tencent president Liu Zhiping teased an agent that would weave automation directly into China’s most ubiquitous app, WeChat. 

    Considered the original super-app, WeChat already handles messaging, mobile payments, news, and millions of mini‑programs that act like embedded apps. These programs give Tencent, its developer, access to data from millions of services that pervade everyday life in China, an advantage most competitors can only envy.

    Historically, China’s consumer internet has splintered into competing walled gardens—share a Taobao link in WeChat and it resolves as plaintext, not a preview card. Unlike the more interoperable Western internet, China’s tech giants have long resisted integration with one another, choosing to wage platform war at the expense of a seamless user experience.

    But the use of mini‑programs has given WeChat unprecedented reach across services that once resisted interoperability, from gym bookings to grocery orders. An agent able to roam that ecosystem could bypass the integration headaches dogging independent startups.

    Alibaba, the e-commerce giant behind the Qwen model series, has been a front-runner in China’s AI race but has been slower to release consumer-facing products. Even though Qwen was the most downloaded open-source model on Hugging Face in 2024, it didn’t power a dedicated chatbot app until early 2025. In March, Alibaba rebranded its cloud storage and search app Quark into an all-in-one AI search tool. By June, Quark had introduced DeepResearch—a new mode that marks its most agent-like effort to date. 

    ByteDance and Alibaba did not reply to MIT Technology Review’s request for comments.

    “Historically, Chinese tech products tend to pursue the all-in-one, super-app approach, and the latest Chinese AI agents reflect just that,” says Li of Simular, who previously worked at Google DeepMind on AI-enabled work automation. “In contrast, AI agents in the US are more focused on serving specific verticals.”

    Pei, the researcher at Stanford, says that existing tech giants could have a huge advantage in bringing the vision of general AI agents to life—especially those with built-in integration across services. “The customer-facing AI agent market is still very early, with tons of problems like authentication and liability,” he says. “But companies that already operate across a wide range of services have a natural advantage in deploying agents at scale.”