• Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interactions, purchase history, and app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across the customer’s various channels to understand the journey path and conversion. Combining these four primary sources helps us understand the engagement data.
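
    Purely as an editorial illustration (not from the interview), those four buckets could be captured in a single engagement record keyed on a customer ID; the field names below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class EngagementRecord:
    """Hypothetical schema joining the four buckets on a single customer ID."""
    customer_id: str
    # 1. Behavioral data
    event_type: str                            # e.g. "page_view", "purchase", "app_open"
    event_timestamp: datetime
    # 2. Demographic information
    age: Optional[int] = None
    location: Optional[str] = None
    income_band: Optional[str] = None
    # 3. Sentiment analysis
    sentiment_score: Optional[float] = None    # -1.0 (negative) to 1.0 (positive)
    sentiment_source: Optional[str] = None     # e.g. "review", "social", "support"
    # 4. Customer journey data
    channel: Optional[str] = None              # e.g. "web", "email", "branch"
    journey_stage: Optional[str] = None        # e.g. "awareness", "consideration"
```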

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we look at statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
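
    As an illustrative sketch of how those filters might be applied programmatically (the significance threshold, minimum effect size, sample-size floor, and column layout are assumptions, not Ankur's actual process):

```python
import pandas as pd
from scipy import stats

def actionable_metrics(df: pd.DataFrame, kpi: str, alpha: float = 0.05,
                       min_sources: int = 2) -> list[str]:
    """Keep metrics that (a) correlate significantly with the KPI and
    (b) are corroborated by more than one data source.

    Expects one row per observation, a numeric KPI column, numeric metric
    columns, and a 'source' column naming where each row came from."""
    keep = []
    for col in df.columns:
        if col in (kpi, "source") or not pd.api.types.is_numeric_dtype(df[col]):
            continue
        sub = df[[col, kpi, "source"]].dropna()
        if sub["source"].nunique() < min_sources or len(sub) < 30:
            continue  # not corroborated across sources, or too little data
        r, p = stats.pearsonr(sub[col], sub[kpi])
        if p < alpha and abs(r) >= 0.2:  # statistically valid, non-trivial effect
            keep.append(col)
    return keep
```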

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing the company to adapt its products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our company outperforms competitors, helping us prioritize opportunities that leverage existing strengths and unique selling propositions.
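
    As a purely illustrative aside, the segmentation analysis described above is often approached with simple clustering; the features, sample values, and cluster count in this sketch are invented, not taken from the interview.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical engagement features per customer:
# [sessions per month, average order value, support tickets]
X = np.array([
    [12, 85.0, 0], [3, 40.0, 2], [25, 120.0, 1], [1, 15.0, 5],
    [14, 90.0, 0], [2, 35.0, 3], [30, 150.0, 0], [0, 10.0, 4],
])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

# Inspect average behavior per segment to spot underserved groups
for k in range(3):
    seg = X[labels == k]
    print(f"segment {k}: n={len(seg)}, mean={seg.mean(axis=0).round(1)}")
```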

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company where we were very data-driven, and where data made a major impact on a critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using engagement data in real time, we do quite a few things. Over the past two or three years, we have been using it for dynamic content personalization, adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
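
    A simplified sketch of that kind of automated reallocation follows; the channel names, performance figures, and proportional rule are assumptions rather than the actual AI-driven tooling he describes.

```python
def reallocate_budget(spend: dict[str, float], conversions: dict[str, int],
                      total_budget: float, floor: float = 0.05) -> dict[str, float]:
    """Shift next period's budget toward channels with better conversions per
    unit of spend, while keeping a minimum share so no channel is starved of data."""
    eff = {ch: (conversions[ch] / spend[ch]) if spend[ch] > 0 else 0.0 for ch in spend}
    total_eff = sum(eff.values()) or 1.0
    shares = {ch: eff[ch] / total_eff for ch in eff}
    # Apply a floor, then renormalize so the shares sum to 1
    floored = {ch: max(s, floor) for ch, s in shares.items()}
    norm = sum(floored.values())
    return {ch: total_budget * s / norm for ch, s in floored.items()}

# Example with hypothetical channels and last week's performance
last_spend = {"email": 2_000.0, "paid_social": 5_000.0, "search": 3_000.0}
last_conv = {"email": 120, "paid_social": 150, "search": 210}
print(reallocate_budget(last_spend, last_conv, total_budget=10_000.0))
```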
    We also practice responsive social media engagement, monitoring sentiment and trending topics so we can quickly adapt messaging and create timely, relevant content.

    Alongside one-on-one personalization, we do a lot of rapid A/B testing of elements like subject lines and CTAs, building on the most successful campaign variants.
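
    For instance, the subject-line and CTA comparisons he mentions are commonly evaluated with a two-proportion z-test; this sketch is illustrative and the numbers are made up.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates
    between variant A and variant B (e.g. two email subject lines)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: variant B converts 5.5% vs variant A's 4.8% on 10,000 sends each.
p_value = two_proportion_ztest(480, 10_000, 550, 10_000)
print(f"p-value: {p_value:.3f}")  # about 0.025, suggesting a real lift
```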

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real time. So the moment they move to different channels, we know the context, the relevance, and the recent interaction points, so we can serve the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
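
    As a toy illustration of that cross-channel handoff (the event names, fields, and offer logic below are invented for the example, not a description of the actual CDP he used):

```python
from collections import defaultdict, deque
from datetime import datetime, timezone

class MiniProfileStore:
    """Toy cross-channel profile: keeps each customer's most recent interactions
    so any channel (web, email, in-branch agent) can see the latest context."""

    def __init__(self, max_events: int = 50):
        self._events = defaultdict(lambda: deque(maxlen=max_events))

    def track(self, customer_id: str, channel: str, event: str, detail: str) -> None:
        self._events[customer_id].append({
            "ts": datetime.now(timezone.utc),
            "channel": channel,
            "event": event,
            "detail": detail,
        })

    def last_viewed_offer(self, customer_id: str):
        """What an agent's tooling might surface before an in-person conversation."""
        for e in reversed(self._events[customer_id]):
            if e["event"] == "offer_viewed":
                return e["detail"]
        return None

store = MiniProfileStore()
store.track("cust-42", "web", "offer_viewed", "digital-credit-card-promo")
store.track("cust-42", "email", "email_open", "monthly-statement")
# Next day, at the branch:
print(store.last_viewed_offer("cust-42"))  # -> digital-credit-card-promo
```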

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development. We have different products, not just the financial products or whatever an organization sells, but also the mobile apps and websites customers use for transactions, and engagement data helps improve that kind of product development too.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits: the data helps teams anticipate common issues, personalize support interactions over phone, email, or chat, and proactively address potential problems, leading to improved customer satisfaction and retention.

    So in general, cross-functional application of engagement data strengthens the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    The first is the huge amount of data we are dealing with. As we become more digitally savvy and most customers move to digital channels, we are collecting a lot of data, and that sheer volume can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    On top of the silos and the overload, data quality issues appear. Inconsistent and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues can also come from data in the wrong format, or data that is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack the skills to properly interpret the data or apply its insights effectively.
    There’s a lack of understanding of marketing and sales as domains.
    Addressing this is a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. Stopping that at step one, by keeping noise out of the data system, cannot be done by technical folks alone or by people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We, like many organizations, use marketing automation systems to improve marketing team productivity and to help track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    As we collect data from different sources, we clean it at each stage, so it becomes more reliable with every step of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
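
    To make those cleansing steps concrete, here is a minimal sketch of standardization, deduplication, and staleness flagging; the column names and the one-year staleness threshold are assumptions, not details from the interview.

```python
import pandas as pd

def clean_engagement_data(df: pd.DataFrame, stale_after_days: int = 365) -> pd.DataFrame:
    """Standardize formats, drop duplicates, and flag stale records."""
    out = df.copy()
    # Standardized formats: lowercase emails, parse timestamps into one format
    out["email"] = out["email"].str.strip().str.lower()
    out["event_time"] = pd.to_datetime(out["event_time"], errors="coerce", utc=True)
    # Drop rows whose timestamp could not be parsed (wrong format)
    out = out.dropna(subset=["event_time"])
    # Deduplicate: keep the most recent record per customer/event pair
    out = (out.sort_values("event_time")
              .drop_duplicates(subset=["customer_id", "event_type"], keep="last"))
    # Flag stale data that is no longer relevant
    cutoff = pd.Timestamp.now(tz="UTC") - pd.Timedelta(days=stale_after_days)
    out["is_stale"] = out["event_time"] < cutoff
    return out
```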

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The biggest trend of the past two years has been AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all striving toward hyper-personalization at scale, which is true one-on-one personalization, as we capture more engagement data and build bigger systems and infrastructure to process those large volumes of data and support those use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read

    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat
    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty—it can actually boost power grid reliability and help bring more renewable energy online.
    By Johanna Mathieu & The Conversation US
    Image: depotpro/Getty Images

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?

    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.

    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.

    Studies along these lines, using remote control of air conditioners to support the grid, have for many years explored theoretical possibilities like this. However, few approaches have been demonstrated in practice and never for such a high-value application and at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.

    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.

    Adjustable equipment

    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust itself to consume different amounts of energy at different times.

    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.

    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study. These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.

    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.

    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops and a power plant boosts output. If there’s too much power being produced, the frequency rises and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.

    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.

    Finding new options

    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.

    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.

    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.

    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.

    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.

    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz.

    Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.

    Testing the approach

    We ran our system in four tests, each lasting one hour. We found two encouraging results.

    First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant. Therefore, we showed that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.

    We found that home temperatures did not deviate more than 1.6 degrees Fahrenheit from their set point. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t. For most tests, we received zero override requests. In the worst case, we received override requests from two of the 100 homes in our test.

    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party.

    Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.

    This article was originally published on The Conversation. Read the original article.
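
    To make the control idea concrete, here is a rough per-unit sketch of deadband thermostat switching nudged by grid frequency. It is an illustration only: the real system coordinated switching across the fleet of 100 units, and the 2 °F deadband, 0.5 °F maximum offset, and frequency-to-offset gain below are assumptions, not the research team's actual controller.

```python
NOMINAL_HZ = 60.0
DEADBAND_F = 2.0     # thermostat keeps temperature within set_point +/- 2 deg F
MAX_OFFSET_F = 0.5   # small bias only, so comfort is never sacrificed

def compressor_command(room_temp_f: float, set_point_f: float,
                       grid_hz: float, currently_on: bool) -> bool:
    """Decide whether the AC compressor should run this control step.

    When grid frequency is low (demand exceeds supply), the switching band is
    nudged up so units tend to turn on slightly later and off slightly sooner,
    reducing aggregate load; when frequency is high, the opposite. The room
    still stays within the normal comfort band around the set point."""
    # Map frequency error to a small, clamped temperature offset (gain assumed).
    offset = max(-MAX_OFFSET_F, min(MAX_OFFSET_F, (NOMINAL_HZ - grid_hz) * 5.0))
    upper = set_point_f + DEADBAND_F + offset   # turn ON above this
    lower = set_point_f - DEADBAND_F + offset   # turn OFF below this

    if room_temp_f >= upper:
        return True
    if room_temp_f <= lower:
        return False
    return currently_on   # inside the band: keep doing what we were doing

# Example: set point 68 deg F, room at 70.1 deg F, AC currently off.
# At nominal frequency the unit would switch on (70.1 > 70.0); with the grid
# slightly under-frequency the turn-on is deferred, shedding a bit of load.
print(compressor_command(70.1, 68.0, grid_hz=60.00, currently_on=False))  # True
print(compressor_command(70.1, 68.0, grid_hz=59.95, currently_on=False))  # False
```

    Raising both thresholds when frequency is low defers cooling slightly across many homes, which lowers aggregate load; on the grid, that has the same balancing effect as a power plant boosting output.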
  • 9 menial tasks ChatGPT can handle in seconds, saving you hours

    ChatGPT is rapidly changing the world. The process is already happening, and it’s only going to accelerate as the technology improves, as more people gain access to it, and as more learn how to use it.
    What’s shocking is just how many tasks ChatGPT is already capable of managing for you. While the naysayers may still look down their noses at the potential of AI assistants, I’ve been using it to handle all kinds of menial tasks for me. Here are my favorite examples.

    Further reading: This tiny ChatGPT feature helps me tackle my days more productively

    Write your emails for you
    We’ve all been faced with the tricky task of writing an email—whether personal or professional—but not knowing quite how to word it. ChatGPT can do the heavy lifting for you, penning the (hopefully) perfect email based on whatever information you feed it.
    Let’s assume the email you need to write is of a professional nature, and wording it poorly could negatively affect your career. By directing ChatGPT to write the email with a particular structure, content, and tone of voice, you can give yourself a huge head start.
    A winning tip for this is to never accept ChatGPT’s first attempt. Always read through it and look for areas of improvement, then request tweaks to ensure you get the best possible email. You can (and should) also rewrite the email in your own voice. Learn more about how ChatGPT coached my colleague to write better emails.
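    If you would rather script this than use the chat interface, the same kind of structured prompt can go through the OpenAI Python SDK. The snippet below is a sketch only: the model name, prompt wording, and word limit are illustrative choices, not recommendations from the article.

```python
# Illustrative only: the article describes the ChatGPT app, but the same kind
# of structured prompt works through the OpenAI Python SDK (pip install openai).
# The model name and prompt details are examples, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Write a professional email to my manager requesting Friday off.\n"
    "Structure: greeting, one-sentence request, brief reason, thanks.\n"
    "Tone: polite and concise, no more than 120 words."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # review and rewrite in your own voice
```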

    Generate itineraries and schedules
    If you’re going on a trip but you’re the type of person who hates planning trips, then you should utilize ChatGPT’s ability to generate trip itineraries. The results can be customized to the nth degree depending on how much detail and instruction you’re willing to provide.
    As someone who likes to get away at least once a year but also wants to make the most of every trip, leaning on ChatGPT for an itinerary is essential for me. I’ll provide the location and the kinds of things I want to see and do, then let it handle the rest. Instead of spending days researching everything myself, ChatGPT does 80 percent of it for me.
    As with all of these tasks, you don’t need to accept ChatGPT’s first effort. Use different prompts to force the AI chatbot to shape the itinerary closer to what you want. You’d be surprised at how many cool ideas you’ll encounter this way—simply nix the ones you don’t like.

    Break down difficult concepts
    One of the best tasks to assign to ChatGPT is the explanation of difficult concepts. Ask ChatGPT to explain any concept you can think of and it will deliver more often than not. You can tailor the level of explanation you need, and even have it include visual elements.
    Let’s say, for example, that a higher-up at work regularly lectures everyone about the importance of networking. But maybe they never go into detail about what they mean, just constantly pushing the why without explaining the what. Well, just ask ChatGPT to explain networking!
    Okay, most of us know what “networking” is and the concept isn’t very hard to grasp. But you can do this with anything. Ask ChatGPT to explain augmented reality, multi-threaded processing, blockchain, large language models, what have you. It will provide you with a clear and simple breakdown, maybe even with analogies and images.

    Analyze and make tough decisions
    We all face tough decisions every so often. The next time you find yourself wrestling with a particularly tough one—and you just can’t decide one way or the other—try asking ChatGPT for guidance and advice.
    It may sound strange to trust any kind of decision to artificial intelligence, let alone an important one that has you stumped, but doing so actually makes a lot of sense. While human judgment can be clouded by emotions, AI can set that aside and prioritize logic.
    It should go without saying: you don’t have to accept ChatGPT’s answers. Use the AI to weigh the pros and cons, to help you understand what’s most important to you, and to suggest a direction. Who knows? If you find yourself not liking the answer given, that in itself might clarify what you actually want—and the right answer for you. This is the kind of stuff ChatGPT can do to improve your life.

    Plan complex projects and strategies
    Most jobs come with some level of project planning and management. Even I, as a freelance writer, need to plan tasks to get projects completed on time. And that’s where ChatGPT can prove invaluable, breaking projects up into smaller, more manageable parts.
    ChatGPT needs to know the nature of the project, the end goal, any constraints you may have, and what you have done so far. With that information, it can then break the project up with a step-by-step plan, and break it down further into phases (if required).
    If ChatGPT doesn’t initially split your project up in a way that suits you, try again. Change up the prompts and make the AI chatbot tune in to exactly what you’re looking for. It takes a bit of back and forth, but it can shorten your planning time from hours to mere minutes.

    Compile research notes
    If you need to research a given topic of interest, ChatGPT can save you the hassle of compiling that research. For example, ahead of a trip to Croatia, I wanted to know more about the Croatian War of Independence, so I asked ChatGPT to provide me with a brief summary of the conflict with bullet points to help me understand how it happened.
    After absorbing all that information, I asked ChatGPT to add a timeline of the major events, further helping me to understand how the conflict played out. ChatGPT then offered to provide me with battle maps and/or summaries, plus profiles of the main players.
    You can go even deeper with ChatGPT’s Deep Research feature, which is now available to free users (up to 5 Deep Research tasks per month). With Deep Research, ChatGPT conducts multi-step research to generate comprehensive reports (with citations!) based on large amounts of information across the internet. A Deep Research task can take up to 30 minutes to complete, but it’ll save you hours or even days.

    Summarize articles, meetings, and more
    There are only so many hours in the day, yet so many new articles published on the web day in and day out. When you come across extra-long reads, it can be helpful to run them through ChatGPT for a quick summary. Then, if the summary is lacking in any way, you can go back and plow through the article proper.
    As an example, I ran one of my own PCWorld articles (where I compared Bluesky and Threads as alternatives to X) through ChatGPT, which provided a brief summary of my points and broke down the best X alternative based on my reasons given. Interestingly, it also pulled elements from other articles. (Hmph.) If you don’t want that, you can tell ChatGPT to limit its summary to the contents of the link.
    This is a great trick to use for other long-form, text-heavy content that you just don’t have the time to crunch through. Think transcripts for interviews, lectures, videos, and Zoom meetings. The only caveat is to never share private details with ChatGPT, like company-specific data that’s protected by NDAs and the like.

    Create Q&A flashcards for learning
    Flashcards can be extremely useful for drilling a lot of information into your brain, such as when studying for an exam, onboarding in a new role, prepping for an interview, etc. And with ChatGPT, you no longer have to painstakingly create those flashcards yourself. All you have to do is tell the AI the details of what you’re studying.
    You can specify the format (such as Q&A or multiple choice), as well as various other elements. You can also choose to keep things broad or target specific sub-topics or concepts you want to focus on. You can even upload your own notes for ChatGPT to reference. You can also use Google’s NotebookLM app in a similar way.
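    If you want the cards in a machine-readable form, the same format instruction can be expressed in code. This sketch assumes the model returns bare JSON as asked; in practice you may need to strip Markdown code fences or use the SDK’s structured-output options. The topic, prompt wording, and field names are illustrative.

```python
# Sketch of format-constrained flashcard generation via the OpenAI Python SDK.
# Assumes the reply is bare JSON; real code should handle fenced or malformed output.
import json
from openai import OpenAI

client = OpenAI()
prompt = (
    "Create 5 beginner flashcards about computer networking. "
    'Return only a JSON array of objects with "question" and "answer" keys, '
    "with no surrounding text or code fences."
)
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
for card in json.loads(reply.choices[0].message.content):
    print("Q:", card["question"])
    print("A:", card["answer"])
```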

    Provide interview practice
    Whether you’re a first-time jobseeker or have plenty of experience under your belt, it’s always a good idea to practice for your interviews when making career moves. Years ago, you might’ve had to ask a friend or family member to act as your mock interviewer. These days, ChatGPT can do it for you—and do it more effectively.
    Inform ChatGPT of the job title, industry, and level of position you’re interviewing for, what kind of interview it’ll be (e.g., screener, technical assessment, group/panel, one-on-one with CEO), and anything else you want it to take into consideration. ChatGPT will then conduct a mock interview with you, providing feedback along the way.
    When I tried this out myself, I was shocked by how capable ChatGPT can be at pretending to be a human in this context. And the feedback it provides for each answer you give is invaluable for knocking off your rough edges and improving your chances of success when you’re interviewed by a real hiring manager.
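    The mock interview can also be scripted as a simple multi-turn loop, where the full conversation history is resent on every call so the “interviewer” keeps context. The role description, model name, and number of rounds below are placeholders.

```python
# Sketch of a multi-turn mock-interview loop. The system prompt, model name,
# and round count are placeholders; the point is the growing message history.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "system",
    "content": ("You are interviewing me for a mid-level backend developer "
                "role at a fintech company. Ask one question at a time and "
                "give brief feedback on each of my answers."),
}]

for _ in range(3):  # three rounds; extend as needed
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    question = reply.choices[0].message.content
    print("\nInterviewer:", question)
    answer = input("You: ")
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})
```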
    Further reading: Non-gimmicky AI apps I actually use every day
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system. (Image Credit: IBM)

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion (10⁴⁸) of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
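    As a rough illustration of that principle (a toy distance-3 repetition code with majority voting, not IBM’s qLDPC scheme), the arithmetic below shows how clustering physical qubits can suppress the logical error rate once the physical error rate is small enough.

```python
# Toy illustration: a majority-vote repetition code turns n noisy physical
# qubits into one logical qubit with a lower error rate (for small p).
# This is back-of-the-envelope arithmetic, not IBM's error-correcting codes.
from math import comb

def logical_error_rate(p: float, n: int = 3) -> float:
    """Probability that a majority of n independent qubits flip,
    given each flips with probability p (n assumed odd)."""
    return sum(comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
               for k in range(n // 2 + 1, n + 1))

for p in (1e-2, 1e-3):
    print(f"physical error {p:.0e}: "
          f"3-qubit logical {logical_error_rate(p, 3):.1e}, "
          f"7-qubit logical {logical_error_rate(p, 7):.1e}")
# Below a threshold, adding physical qubits per logical qubit suppresses errors
# rapidly; the engineering challenge is that practical codes need many such
# qubits plus real-time classical decoding, which is what IBM's papers address.
```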
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing quickly forthcoming computational breakthroughs within a few years.
    “It leads to the question of whether the U.S. government’s original PQC preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    Leave a Comment

    Click here to cancel reply.
    Please sign in to post or reply to a comment. New users create a free account.

    Related Stories

    More by John P. Mello Jr.

    view all

    More in Emerging Tech
    #ibm #plans #largescale #faulttolerant #quantum
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029
    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029 By John P. Mello Jr. June 11, 2025 5:00 AM PT IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.ADVERTISEMENT Enterprise IT Lead Generation Services Fuel Your Pipeline. Close More Deals. Our full-service marketing programs deliver sales-ready leads. 100% Satisfaction Guarantee! Learn more. IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible. The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillionof the world’s most powerful supercomputers to represent. “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.” IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del. “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.” A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time. Realistic Roadmap Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld. “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany. “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.” Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. 
This is a fast-moving industry, and success is certainly possible.” “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.” “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada. “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.” Solving the Quantum Error Correction Puzzle To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits. “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.” IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published. Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices. In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer. One paper outlines the use of quantum low-density parity checkcodes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing. According to IBM, a practical fault-tolerant quantum architecture must: Suppress enough errors for useful algorithms to succeed Prepare and measure logical qubits during computation Apply universal instructions to logical qubits Decode measurements from logical qubits in real time and guide subsequent operations Scale modularly across hundreds or thousands of logical qubits Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained. “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. 
    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges.
    “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained. “Only certain computing workloads, such as random circuit sampling (RCS), can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”
    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within a few years.
    “It leads to the question of whether the U.S. government’s original PQC (post-quantum cryptography) preparation date of 2030 is still a safe date,” he told TechNewsWorld. “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO (executive order) that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most U.S. companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”
    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
  • CIOs baffled by ‘buzzwords, hype and confusion’ around AI

    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence, according to the founder and CEO of technology company Pegasystems.
    Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a $1.5bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders.
    “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said.
    “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.”
    CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable.
    “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler.
    Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive.

    But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations.
    “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said.
    Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said.
    “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.”
    One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome.
    For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected.
    “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler.

    Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications.
    Rather than using AI agents to solve problems in real time, Pega has its agents do their thinking in advance.
    Business experts can use them to co-design business processes that handle anything from assessing a loan application to making an offer to a valued customer or sending out an invoice.
    Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow.
    As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers.
    “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said.
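    A minimal sketch of that pattern, using hypothetical names rather than Pega's actual APIs: the workflows are authored ahead of time, and the only run-time job for the language layer is to pick which pre-approved workflow applies, not to reason out a solution from scratch.

        # Hypothetical illustration of "select a pre-written workflow" routing, not Pega code.
        from typing import Callable

        WORKFLOWS: dict[str, Callable[[dict], str]] = {
            "loan_application": lambda case: f"Loan workflow started for {case['customer']}",
            "customer_offer":   lambda case: f"Offer workflow started for {case['customer']}",
            "send_invoice":     lambda case: f"Invoice workflow started for {case['customer']}",
        }

        def route(enquiry: str, case: dict) -> str:
            """Stand-in for the language-understanding step: map the enquiry to one workflow."""
            text = enquiry.lower()
            if "loan" in text:
                return WORKFLOWS["loan_application"](case)
            if "offer" in text or "discount" in text:
                return WORKFLOWS["customer_offer"](case)
            if "invoice" in text or "bill" in text:
                return WORKFLOWS["send_invoice"](case)
            return "No matching workflow, escalate to a human."

        print(route("Can I apply for a loan?", {"customer": "Acme Ltd"}))

    In a production system the keyword matcher above would be replaced by the natural language model Trefler describes, but the executed step is always one of the pre-approved workflows.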

    Large language models (LLMs) are not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly.
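    One pragmatic way to guard against that failure mode is to validate every model-suggested move against the rules before accepting it. The sketch below uses the open-source python-chess library for the legality check; it is an illustration of the idea, not the setup Trefler demonstrated.

        # Screen LLM-suggested chess moves for legality before accepting them.
        import chess

        board = chess.Board()                      # standard starting position
        suggested = ["e2e4", "e7e5", "e1e8"]       # the last move is illegal for the white king

        for uci in suggested:
            move = chess.Move.from_uci(uci)
            if move in board.legal_moves:
                board.push(move)
                print(f"accepted {uci}")
            else:
                print(f"rejected {uci}: not a legal move in this position")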
    The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler.
    “ChatGPT is inherently, enormously consumptive … as it’s answering your question, it’s firing literally hundreds of millions to trillions of nodes,” he said. “All of that takes [large quantities of] electricity.”
    Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim.
    That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.”
    “If you go down the philosophy of using a graphics processing unit [GPU] to do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler.
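    Taking the quoted ratio at face value, a rough back-of-envelope comparison shows why the difference matters at scale. The per-request energy figure below is a placeholder assumption, not a measured number; only the 200-to-1 ratio comes from Trefler's claim.

        # Back-of-envelope check of the energy claim using only the quoted 200:1 ratio.
        reasoning_wh_per_request = 1.0                  # hypothetical energy per LLM-reasoned request
        workflow_wh_per_request = reasoning_wh_per_request / 200

        requests_per_day = 1_000_000
        print("reasoning:", reasoning_wh_per_request * requests_per_day / 1000, "kWh/day")  # 1000.0
        print("workflow :", workflow_wh_per_request * requests_per_day / 1000, "kWh/day")   # 5.0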
    He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear.
    The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving.
    Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant, to deliver AI services to businesses.

    An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses.
    Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint.
    They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform.
    “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler.
    That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies.
    “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added.

    When AI agents behave in unexpected ways
    Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent.
    When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work.
    Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.”
    The developers banned Iris from sending an email to anyone other than the person who sent the original request.
    Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response.
    Iris reasoned that the email was either a joke, abusive, or that the employee was under distress, said Walker.
    She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
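    The “safety harness” Walker describes comes down to a hard policy check that sits outside the agent's control. A minimal, hypothetical sketch (not Pega's implementation) might look like this:

        # Hypothetical guardrail: the agent may only reply to the person who sent the request.
        def allowed_recipients(original_sender: str, proposed_recipients: list[str]) -> list[str]:
            """Drop every address except the original requester before any email is sent."""
            return [r for r in proposed_recipients if r.lower() == original_sender.lower()]

        drafted = ["prospect@example.com", "manager@pega.example", "exec@pega.example"]
        print(allowed_recipients("exec@pega.example", drafted))   # -> ['exec@pega.example']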
  • Biofuels policy has been a failure for the climate, new report claims

    Fewer food crops

    Biofuels policy has been a failure for the climate, new report claims

    Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions.

    Georgina Gustin, Inside Climate News



    Jun 14, 2025 7:10 am

    An ethanol production plant on March 20, 2024, near Ravenna, Nebraska. Credit: David Madison/Getty Images


    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels.
    But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits.
    The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand.
    “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.”
    The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel.
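    A quick check of that estimate, assuming the “roughly a third” share is applied to the corn acreage, which appears to be how the 30-million-acre figure is derived:

        (1/3) × 92 million corn acres ≈ 31 million acres, consistent with the report’s figure of about 30 million.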

    The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed.
    The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline.
    In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest.
    But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries.
    The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest.
    On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too.

    The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.) “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”
    New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.
    President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers but also specifically exclude calculations of emissions from land conversion when determining what qualifies as a low-emission fuel.
    The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews.
    An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist.
    They also noted that the impact of soy-based fuels in 2024 was more than $42 billion, providing over 100,000 jobs.
    “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.

    Georgina Gustin, Inside Climate News

    24 Comments
    #biofuels #policy #has #been #failure
    Biofuels policy has been a failure for the climate, new report claims
    Fewer food crops Biofuels policy has been a failure for the climate, new report claims Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions. Georgina Gustin, Inside Climate News – Jun 14, 2025 7:10 am | 24 An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here. The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels. But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits. The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand. “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.” The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel. The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed. The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline. In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest. 
But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries. The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest. On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too. The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has.“The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.” New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead. President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it specifically excludes calculations of emissions from land conversion when determining what qualifies as a low-emission fuel. The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews. An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist. They also noted that the impact of soy-based fuels in 2024 was more than billion, providing over 100,000 jobs. “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said. Georgina Gustin, Inside Climate News 24 Comments #biofuels #policy #has #been #failure
    ARSTECHNICA.COM
    Biofuels policy has been a failure for the climate, new report claims
    Fewer food crops Biofuels policy has been a failure for the climate, new report claims Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions. Georgina Gustin, Inside Climate News – Jun 14, 2025 7:10 am | 24 An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here. The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels. But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits. The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand. “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.” The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel. The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed. The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline. In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest. 
But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries. The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest. On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too. The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.) “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.” New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead. President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it specifically excludes calculations of emissions from land conversion when determining what qualifies as a low-emission fuel. The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews. An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist. They also noted that the impact of soy-based fuels in 2024 was more than $42 billion, providing over 100,000 jobs. “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said. Georgina Gustin, Inside Climate News 24 Comments
  • UMass and MIT Test Cold Spray 3D Printing to Repair Aging Massachusetts Bridge

    Researchers from the US-based University of Massachusetts Amherst (UMass), in collaboration with the Massachusetts Institute of Technology (MIT) Department of Mechanical Engineering, have applied cold spray to repair the deteriorating “Brown Bridge” in Great Barrington, built in 1949. The project marks the first known use of this method on bridge infrastructure and aims to evaluate its effectiveness as a faster, more cost-effective, and less disruptive alternative to conventional repair techniques.
    “Now that we’ve completed this proof-of-concept repair, we see a clear path to a solution that is much faster, less costly, easier, and less invasive,” said Simos Gerasimidis, associate professor of civil and environmental engineering at the University of Massachusetts Amherst. “To our knowledge, this is a first. Of course, there is some R&D that needs to be developed, but this is a huge milestone to that,” he added.
    The pilot project is also a collaboration with the Massachusetts Department of Transportation (MassDOT), the Massachusetts Technology Collaborative (MassTech), the U.S. Department of Transportation, and the Federal Highway Administration. It was supported by the Massachusetts Manufacturing Innovation Initiative, which provided essential equipment for the demonstration.
    Members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
    Tackling America’s Bridge Crisis with Cold Spray Technology
    Nearly half of the bridges across the United States are in “fair” condition, while 6.8% are classified as “poor,” according to the 2025 Report Card for America’s Infrastructure. In Massachusetts, about 9% of the state’s 5,295 bridges are considered structurally deficient. The costs of restoring this infrastructure are projected to exceed $190 billion—well beyond current funding levels.
    The cold spray method consists of propelling metal powder particles at high velocity onto the beam’s surface. Successive applications build up additional layers, helping restore its thickness and structural integrity. This method has successfully been used to repair large structures such as submarines, airplanes, and ships, but this marks the first instance of its application to a bridge.
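    To make the layering idea concrete, here is a minimal back-of-the-envelope sketch in Python. The per-pass deposition rate and the plate thicknesses used below are invented for illustration; they are not figures reported for the Brown Bridge project.

```python
import math

def passes_needed(nominal_mm: float, measured_mm: float, per_pass_mm: float = 0.25) -> int:
    """Estimate how many cold-spray passes would be needed to rebuild lost thickness.

    per_pass_mm (deposition per pass) and the example thicknesses are illustrative
    assumptions, not values from the UMass/MIT repair.
    """
    loss_mm = max(nominal_mm - measured_mm, 0.0)
    return math.ceil(loss_mm / per_pass_mm)

# Example: a beam section originally 9.5 mm thick, corroded down to 7.2 mm.
print(passes_needed(9.5, 7.2))  # -> 10 passes at 0.25 mm per pass
```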
    One of cold spray’s key advantages is its ability to be deployed with minimal traffic disruption. “Every time you do repairs on a bridge you have to block traffic, you have to make traffic controls for substantial amounts of time,” explained Gerasimidis. “This will allow us to [apply the technique] on this actual bridge while cars are going [across].”
    To enhance precision, the research team integrated 3D LiDAR scanning technology into the process. Unlike visual inspections, which can be subjective and time-consuming, LiDAR creates high-resolution digital models that pinpoint areas of corrosion. This allows teams to develop targeted repair plans and deposit materials only where needed—reducing waste and potentially extending a bridge’s lifespan.
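    As a rough illustration of how a scan-derived thickness map could be turned into a targeted repair plan, the sketch below flags grid cells that fall more than a set tolerance below nominal thickness. The grid size, nominal thickness, simulated corrosion, and 15 percent tolerance are all assumptions for illustration, not parameters from the UMass/MIT project.

```python
import numpy as np

nominal_mm = 9.5
tolerance = 0.15                                                   # flag anything >15% under nominal
rng = np.random.default_rng(0)
corrosion_loss = rng.gamma(shape=1.5, scale=0.6, size=(40, 120))   # simulated section loss (mm)
thickness_map = nominal_mm - corrosion_loss                        # what a scan might report

needs_repair = thickness_map < nominal_mm * (1 - tolerance)
print(f"{needs_repair.mean():.0%} of the scanned surface flagged for cold-spray deposition")
```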
    Next steps: Testing Cold-Sprayed Repairs
    The bridge is scheduled for demolition in the coming years. When that happens, researchers will retrieve the repaired sections for further analysis. They plan to assess the durability, corrosion resistance, and mechanical performance of the cold-sprayed steel in real-world conditions, comparing it to results from laboratory tests.
    “This is a tremendous collaboration where cutting-edge technology is brought to address a critical need for infrastructure in the commonwealth and across the United States,” said John Hart, Class of 1922 Professor in the Department of Mechanical Engineering at MIT. “I think we’re just at the beginning of a digital transformation of bridge inspection, repair and maintenance, among many other important use cases.”
    3D Printing for Infrastructure Repairs
    Beyond cold spray techniques, other innovative 3D printing methods are emerging to address construction repair challenges. For example, researchers at University College London (UCL) have developed an asphalt 3D printer specifically designed to repair road cracks and potholes. “The material properties of 3D printed asphalt are tunable, and combined with the flexibility and efficiency of the printing platform, this technique offers a compelling new design approach to the maintenance of infrastructure,” the UCL team explained.
    Similarly, in 2018, Cintec, a Wales-based international structural engineering firm, contributed to restoring the historic Government building known as the Red House in the Republic of Trinidad and Tobago. This project, managed by Cintec’s North American branch, marked the first use of additive manufacturing within sacrificial structures. It also featured the installation of what are claimed to be the longest reinforcement anchors ever inserted into a structure—measuring an impressive 36.52 meters.
    Featured image shows members of the UMass Amherst and MIT Department of Mechanical Engineering research team, led by Simos Gerasimidis. Photo via UMass Amherst.
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm

    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
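    Taking the reported figures at face value, the quoted ratio checks out with a one-line calculation:

```python
# Assumes the figures quoted above: roughly $5.6M for DeepSeek's run
# versus $500M reportedly spent on OpenAI's "Orion" training.
deepseek_cost = 5.6e6
openai_cost = 500e6
print(f"{deepseek_cost / openai_cost:.2%}")  # -> 1.12%, i.e. under the 1.2% cited
```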
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
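    One common guardrail when blending synthetic and real data is simply to cap the synthetic share of the training mix. The sketch below is a generic illustration of that idea, not DeepSeek’s pipeline; the 30 percent cap is an arbitrary placeholder, and the right value depends on the architecture and how the synthetic data was generated.

```python
import random

def build_training_mix(real_examples, synthetic_examples, max_synthetic_share=0.3, seed=0):
    """Combine real and synthetic examples while capping the synthetic share of the mix."""
    rng = random.Random(seed)
    cap = int(len(real_examples) * max_synthetic_share / (1 - max_synthetic_share))
    sampled = rng.sample(synthetic_examples, min(cap, len(synthetic_examples)))
    mix = list(real_examples) + sampled
    rng.shuffle(mix)
    return mix, len(sampled) / len(mix)

# Example: 10,000 real and 8,000 synthetic examples -> synthetic share held near 30%.
mix, share = build_training_mix(list(range(10_000)), list(range(10_000, 18_000)))
print(f"{len(mix)} examples, synthetic share = {share:.1%}")
```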
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real-time, comparing responses against core rules and quality standards.
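    In code, the described loop might look roughly like the sketch below. Here `generate()` is a hypothetical stand-in for any LLM call (it is not DeepSeek’s API), and the prompts and “SCORE:” convention are illustrative assumptions rather than details of DeepSeek-GRM.

```python
def generate(prompt: str) -> str:
    raise NotImplementedError("plug an LLM call in here")

def self_principled_answer(question: str, n_candidates: int = 4) -> str:
    # 1. The model writes its own judging principles for this particular question.
    principles = generate(
        f"List the principles a good answer to the following question must satisfy:\n{question}"
    )

    # 2. Spend inference-time compute sampling several candidate answers.
    candidates = [generate(f"Answer the question:\n{question}") for _ in range(n_candidates)]

    # 3. A built-in "judge" critiques each candidate against the principles and scores it.
    def judge(answer: str) -> float:
        critique = generate(
            "Critique the answer against these principles and end with 'SCORE: <0-10>'.\n"
            f"Principles:\n{principles}\n\nAnswer:\n{answer}"
        )
        return float(critique.rsplit("SCORE:", 1)[-1].strip())

    # 4. Return the candidate the judge prefers.
    return max(candidates, key=judge)
```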
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded,
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

  • Recipients of Public Awareness Sponsorship Program announced

    The latest recipients of the OAA’s Public Awareness Sponsorship program, held twice a year, have been announced.
    Under its five-year strategic plan, the OAA has identified public education as a key pillar with the goal to advance the public’s understanding and recognition that architecture is integral to the quality of life and well-being of society. As a result, the OAA offers Public Awareness Funding in amounts from $500 to $10,000 to applicants working to expand an awareness of the value of architecture in their communities.
    The Communications and Public Education Committee (CPEC) has agreed to fund the following applicants.

    Toronto Public Space Committee and Cyan Station – To the Loo! Toronto Toilet Design Challenge
    The “To the Loo! Toronto Toilet Design Challenge” is a global call to reimagine public washrooms as vital elements of the urban landscape. A joint effort by the Toronto Public Space Committee and Cyan Station, the initiative emphasizes accessibility, public health, and innovative design. Featuring a summer 2025 public event and exhibition, the challenge invites architects, designers, and engaged citizens to explore creative solutions that transform how we experience these essential public spaces.
    Heritage Ottawa – 2025 Heritage Ottawa Walking Tours
    Heritage Ottawa is an advocate for the preservation and appreciation of Ottawa’s built heritage. For more than 50 years, its signature guided Walking Tours, offered in both English and French, have attracted diverse audiences and have highlighted the city’s architectural and cultural history.
    Kelvin Kung – Designing Dignity: Community-Driven Insights for Better Palliative and Long-Term Care Spaces
    “Designing Dignity: Community-Driven Insights for Better Palliative and Long-Term Care Spaces” focuses on enhancing the quality of life for aging populations by reimagining care spaces through thoughtful architectural design. By leveraging online engagement tools, AI-driven analysis, and stakeholder input, this initiative will develop data-driven reports and recommendations for the public, policymakers, and design professionals. The project aims to raise awareness about architecture’s crucial role in shaping compassionate care spaces, empowering communities to advocate for better design and influence future policies and practices.
    McEwen School of Architecture, Laurentian University – Archi-North Summer Camp
    Archi·North Summer Camp, offered by Laurentian University’s McEwen School of Architecture, is a bilingual and tricultural program designed for Northern Ontario high school students entering Grades 11 and 12. The week-long, immersive camp aims to provide an affordable introduction to architectural design through hands-on experience in drafting, model-making, and digital tools with an emphasis on sustainable materials. Led by faculty and recent graduates, the Sudbury-based camp encourages youth to be agents of change and reimagine their own communities.
    Moses Structural Engineers Inc. – TimberFever 2025
    Now in its 11th year, TimberFever 2025, presented by Moses Structural Engineers, is a hands-on design-build competition that brings together architecture and engineering students from Canadian and U.S. universities to collaborate, create, and innovate. Under the guidance of professional mentors, carpenters, and industry leaders, participants tackle real-world challenges like affordable housing and climate resilience while refining both design and construction skills.
    RAW Design – Architectural and Design Summer Camp, “Diversity in Design”
    RAW Design’s “Diversity in Design” Summer Camp introduces underrepresented high school students to the architecture profession through an immersive, hands-on experience. Now in its fifth year, this free week-long mentorship program fosters creativity, critical thinking, and teamwork with activities like model-making, workshops, and urban exploration led by architects and volunteers.
    Urban Minds – 1UP Fellowship 2025-2026
    Urban Minds’ 1UP Fellowship 2025-2026 aims to empower high school students across Ontario to become urban changemakers through mentorship and hands-on projects. The Fellowship features two streams: the Design-Builders Stream, where students launch school chapters to tackle community design challenges, and the Learners Stream, which introduces students to city-building topics through structured learning activities.

    The next deadline for submissions is September 15, 2025.
    For more information, click here.
    The post Recipients of Public Awareness Sponsorship Program announced appeared first on Canadian Architect.
    #recipients #public #awareness #sponsorship #program
    Recipients of Public Awareness Sponsorship Program announced
    The latest recipients of the OAA’s Public Awareness Sponsorship program, held twice a year, have been announced. Under its five-year strategic plan, the OAA has identified public education as a key pillar, with the goal of advancing the public’s understanding and recognition that architecture is integral to the quality of life and well-being of society. As a result, the OAA offers Public Awareness Funding in amounts from $500 to $10,000 to applicants working to expand awareness of the value of architecture in their communities. The Communications and Public Education Committee (CPEC) has agreed to fund the following applicants.

    Toronto Public Space Committee and Cyan Station – To the Loo! Toronto Toilet Design Challenge
    The “To the Loo! Toronto Toilet Design Challenge” is a global call to reimagine public washrooms as vital elements of the urban landscape. A joint effort by the Toronto Public Space Committee and Cyan Station, the initiative emphasizes accessibility, public health, and innovative design. Featuring a summer 2025 public event and exhibition, the challenge invites architects, designers, and engaged citizens to explore creative solutions that transform how we experience these essential public spaces.

    Heritage Ottawa – 2025 Heritage Ottawa Walking Tours
    Heritage Ottawa is an advocate for the preservation and appreciation of Ottawa’s built heritage. For more than 50 years, its signature guided Walking Tours, offered in both English and French, have attracted diverse audiences and highlighted the city’s architectural and cultural history.

    Kelvin Kung – Designing Dignity: Community-Driven Insights for Better Palliative and Long-Term Care Spaces
    “Designing Dignity: Community-Driven Insights for Better Palliative and Long-Term Care Spaces” focuses on enhancing the quality of life for aging populations by reimagining care spaces through thoughtful architectural design. By leveraging online engagement tools, AI-driven analysis, and stakeholder input, the initiative will develop data-driven reports and recommendations for the public, policymakers, and design professionals. The project aims to raise awareness of architecture’s crucial role in shaping compassionate care spaces, empowering communities to advocate for better design and influence future policies and practices.

    McEwen School of Architecture, Laurentian University – Archi·North Summer Camp
    Archi·North Summer Camp, offered by Laurentian University’s McEwen School of Architecture, is a bilingual and tricultural program designed for Northern Ontario high school students entering Grades 11 and 12. The week-long, immersive camp aims to provide an affordable introduction to architectural design through hands-on experience in drafting, model-making, and digital tools, with an emphasis on sustainable materials. Led by faculty and recent graduates, the Sudbury-based camp encourages youth to be agents of change and reimagine their own communities.

    Moses Structural Engineers Inc. – TimberFever 2025
    Now in its 11th year, TimberFever 2025, presented by Moses Structural Engineers, is a hands-on design-build competition that brings together architecture and engineering students from Canadian and U.S. universities to collaborate, create, and innovate. Under the guidance of professional mentors, carpenters, and industry leaders, participants tackle real-world challenges like affordable housing and climate resilience while refining both design and construction skills.

    RAW Design – Architectural and Design Summer Camp, “Diversity in Design”
    RAW Design’s “Diversity in Design” Summer Camp introduces underrepresented high school students to the architecture profession through an immersive, hands-on experience. Now in its fifth year, the free week-long mentorship program fosters creativity, critical thinking, and teamwork with activities like model-making, workshops, and urban exploration led by architects and volunteers.

    Urban Minds – 1UP Fellowship 2025–2026
    Urban Minds’ 1UP Fellowship 2025–2026 aims to empower high school students across Ontario to become urban changemakers through mentorship and hands-on projects. The Fellowship features two streams: the Design-Builders Stream, where students launch school chapters to tackle community design challenges, and the Learners Stream, which introduces students to city-building topics through structured learning activities.

    The next deadline for submissions is September 15, 2025. For more information, click here.

    The post Recipients of Public Awareness Sponsorship Program announced appeared first on Canadian Architect.