• Meta’s $15 Billion Scale AI Deal Could Leave Gig Workers Behind

    Meta is reportedly set to invest $15 billion to acquire a 49% stake in Scale AI, in a deal that would make Scale CEO Alexandr Wang head of the tech giant’s new AI unit dedicated to pursuing “superintelligence.”

    Scale AI, founded in 2016, is a leading data annotation firm that hires workers around the world to label or create the data that is used to train AI systems.

    The deal is expected to greatly enrich Wang and many of his colleagues with equity in Scale AI; Wang, already a billionaire, would see his wealth grow even further. For Meta, it would breathe new life into the company’s flagging attempts to compete at the “frontier” of AI against OpenAI, Google, and Anthropic.

    However, Scale’s contract workers, many of whom earn just dollars per day via a subsidiary called RemoTasks, are unlikely to benefit at all from the deal, according to sociologists who study the sector. Typically, data workers are not formally employed, and are instead paid for the tasks they complete. Those tasks can include labeling the contents of images, answering questions, or rating which of two chatbots’ answers is better, in order to teach AI systems to better comply with human preferences.

    (TIME has a content partnership with Scale AI.)

    “I expect few if any Scale annotators will see any upside at all,” says Callum Cant, a senior lecturer at the University of Essex, U.K., who studies gig work platforms. “It would be very surprising to see some kind of feed-through. Most of these people don’t have a stake in ownership of the company.”

    Many of those workers already suffer from low pay and poor working conditions. In a recent report by Oxford University’s Internet Institute, the Scale subsidiary RemoTasks failed to meet basic standards for fair pay, fair contracts, fair management, and fair worker representation.

    “A key part of Scale’s value lies in its data work services performed by hundreds of thousands of underpaid and poorly protected workers,” says Jonas Valente, an Oxford researcher who worked on the report. “The company remains far from safeguarding basic standards of fair work, despite limited efforts to improve its practices.”

    The Meta deal is unlikely to change that. “Unfortunately, the increasing profits of many digital labor platforms and their primary companies, such as the case of Scale, do not translate into better conditions for [workers],” Valente says.

    A Scale AI spokesperson declined to comment for this story. “We're proud of the flexible earning opportunities offered through our platforms,” the company said in a statement to TechCrunch in May.

    Meta’s investment also calls into question whether Scale AI will continue supplying data to OpenAI and Google, two of its major clients. In the increasingly competitive AI landscape, observers say Meta may see value in cutting off its rivals from annotated data — an essential means of making AI systems smarter.

    “By buying up access to Scale AI, could Meta deny access to that platform and that avenue for data annotation by other competitors?” says Cant. “It depends entirely on Meta’s strategy.”

    If that were to happen, Cant says, it could put downward pressure on the wages and tasks available to workers, many of whom already struggle to make ends meet with data work.

    A Meta spokesperson declined to comment on this story.
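
    For readers unfamiliar with this kind of annotation work, here is a minimal, hypothetical sketch in Python of the pairwise-comparison records that such rating tasks can produce; the structure and field names are illustrative assumptions, not Scale AI’s actual task schema.

        # Hypothetical sketch of a pairwise preference-rating task.
        # Structure and field names are illustrative, not Scale AI's schema.
        from dataclasses import dataclass

        @dataclass
        class PreferenceTask:
            prompt: str    # question shown to both chatbots
            answer_a: str  # first chatbot's response
            answer_b: str  # second chatbot's response

        def record_rating(task: PreferenceTask, preferred: str) -> dict:
            """Package an annotator's judgment ('a' or 'b') as a training pair."""
            if preferred not in ("a", "b"):
                raise ValueError("preferred must be 'a' or 'b'")
            chosen = task.answer_a if preferred == "a" else task.answer_b
            rejected = task.answer_b if preferred == "a" else task.answer_a
            return {"prompt": task.prompt, "chosen": chosen, "rejected": rejected}

        # Example: the annotator judges answer B more helpful.
        task = PreferenceTask(
            prompt="What should I do for a mild headache?",
            answer_a="Drink water.",
            answer_b="Rest, hydrate, and consider an over-the-counter pain reliever.",
        )
        print(record_rating(task, preferred="b"))

    Pairs of a chosen and a rejected answer like these are a common input format for preference-based training methods, which is why annotators’ judgments bear directly on model quality.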
  • What DEI actually does for the economy

    Few issues in the U.S. today are as controversial as diversity, equity, and inclusion—commonly referred to as DEI.

    Although the term didn’t come into common usage until the 21st century, DEI is best understood as the latest stage in a long American project. Its egalitarian principles are seen in America’s founding documents, and its roots lie in landmark 20th-century efforts such as the 1964 Civil Rights Act and affirmative action policies, as well as movements for racial justice, gender equity, disability rights, veterans, and immigrants.

    These movements sought to expand who gets to participate in economic, educational, and civic life. DEI programs, in many ways, are their legacy.

    Critics argue that DEI is antidemocratic, that it fosters ideological conformity, and that it leads to discriminatory initiatives, which they say disadvantage white people and undermine meritocracy. Those defending DEI argue just the opposite: that it encourages critical thinking and promotes democracy—and that attacks on DEI amount to a retreat from long-standing civil rights law.

    Yet missing from much of the debate is a crucial question: What are the tangible costs and benefits of DEI? Who benefits, who doesn’t, and what are the broader effects on society and the economy?

    As a sociologist, I believe any productive conversation about DEI should be rooted in evidence, not ideology. So let’s look at the research.

    Who gains from DEI?

    In the corporate world, DEI initiatives are intended to promote diversity, and research consistently shows that diversity is good for business. Companies with more diverse teams tend to perform better across several key metrics, including revenue, profitability, and worker satisfaction.

    Businesses with diverse workforces also have an edge in innovation, recruitment, and competitiveness, research shows. The general trend holds for many types of diversity, including age, race and ethnicity, and gender.

    A focus on diversity can also offer profit opportunities for businesses seeking new markets. Two-thirds of American consumers consider diversity when making their shopping choices, a 2021 survey found. So-called “inclusive consumers” tend to be female, younger, and more ethnically and racially diverse. Ignoring their values can be costly: When Target backed away from its DEI efforts, the resulting backlash contributed to a sales decline.

    But DEI goes beyond corporate policy. At its core, it’s about expanding access to opportunities for groups historically excluded from full participation in American life. From this broader perspective, many 20th-century reforms can be seen as part of the DEI arc.

    Consider higher education. Many elite U.S. universities refused to admit women until well into the 1960s and 1970s. Columbia, the last Ivy League university to go co-ed, started admitting women in 1982. Since the advent of affirmative action, women haven’t just closed the gender gap in higher education—they outpace men in college completion across all racial groups. DEI policies have particularly benefited women, especially white women, by expanding workforce access.

    Similarly, the push to desegregate American universities was followed by an explosion in the number of Black college students—a number that has increased by 125% since the 1970s, twice the national rate. With college gates open to more people than ever, overall enrollment at U.S. colleges has quadrupled since 1965. While there are many reasons for this, expanding opportunity no doubt plays a role. And a better-educated population has had significant implications for productivity and economic growth.

    The 1965 Immigration Act also exemplifies DEI’s impact. It abolished racial and national quotas, enabling the immigration of more diverse populations, including from Asia, Africa, southern and eastern Europe, and Latin America. Many of these immigrants were highly educated, and their presence has boosted U.S. productivity and innovation.

    Ultimately, the U.S. economy is more profitable and productive as a result of immigrants.

    What does DEI cost?

    While DEI generates returns for many businesses and institutions, it does come with costs. In 2020, corporate America spent an estimated $7.5 billion on DEI programs. And in 2023, the federal government spent more than $100 million on DEI, including $38.7 million by the Department of Health and Human Services and another $86.5 million by the Department of Defense.

    The government will no doubt be spending less on DEI in 2025. One of President Donald Trump’s first acts in his second term was to sign an executive order banning DEI practices in federal agencies—one of several anti-DEI executive orders currently facing legal challenges. More than 30 states have also introduced or enacted bills to limit or entirely restrict DEI in recent years. Central to many of these policies is the belief that diversity lowers standards, replacing meritocracy with mediocrity.

    But a large body of research disputes this claim. For example, a 2023 McKinsey & Company report found that companies with higher levels of gender and ethnic diversity are 39% more likely to financially outperform those with the least diversity. Similarly, concerns that DEI in science and technology education leads to lowering standards aren’t backed up by scholarship. Instead, scholars are increasingly pointing out that disparities in performance are linked to built-in biases in courses themselves.

    That said, legal concerns about DEI are rising. The Equal Employment Opportunity Commission and the Department of Justice have recently warned employers that some DEI programs may violate Title VII of the Civil Rights Act of 1964. Anecdotal evidence suggests that reverse discrimination claims, particularly from white men, are increasing, and legal experts expect the Supreme Court to lower the burden of proof needed by complainants for such cases.

    The issue remains legally unsettled. But while the cases work their way through the courts, women and people of color will continue to shoulder much of the unpaid volunteer work that powers corporate DEI initiatives. This pattern raises important equity concerns within DEI itself.

    What lies ahead for DEI?

    People’s fears of DEI are partly rooted in demographic anxiety. Since the U.S. Census Bureau projected in 2008 that non-Hispanic white people would become a minority in the U.S. by the year 2042, nationwide news coverage has amplified white fears of displacement.

    Research indicates many white men experience this change as a crisis of identity and masculinity, particularly amid economic shifts such as the decline of blue-collar work. This perception aligns with research showing that white Americans are more likely to believe DEI policies disadvantage white men than white women.

    At the same time, in spite of DEI initiatives, women and people of color are most likely to be underemployed and living in poverty regardless of how much education they attain. The gender wage gap remains stark: In 2023, women working full time earned a median weekly salary of $1,005 compared with $1,202 for men—just 83.6% of what men earned. Over a 40-year career, that adds up to hundreds of thousands of dollars in lost earnings. For Black and Latina women, the disparities are even worse, with one source estimating lifetime losses at $976,800 and $1.2 million, respectively.

    Racism, too, carries an economic toll. A 2020 analysis from Citi found that systemic racism has cost the U.S. economy $16 trillion since 2000. The same analysis found that addressing these disparities could have boosted Black wages by $2.7 trillion, added up to $113 billion in lifetime earnings through higher college enrollment, and generated $13 trillion in business revenue, creating 6.1 million jobs annually.

    In a moment of backlash and uncertainty, I believe DEI remains a vital if imperfect tool in the American experiment of inclusion. Rather than abandon it, the challenge now, from my perspective, is how to refine it: grounding efforts not in slogans or fear, but in fairness and evidence.

    Rodney Coates is a professor of critical race and ethnic studies at Miami University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • What AI’s impact on individuals means for the health workforce and industry

    Transcript    
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”      
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak.
    You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues.
    So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.  
    To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
    Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
    Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
    Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
    Here is my interview with Ethan Mollick:
    LEE: Ethan, welcome.
    ETHAN MOLLICK: So happy to be here, thank you.
    LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
    MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it.
    And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially on education and AI and performance, I became sort of a go-to person in the field.
    And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
    LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
    MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now.
    One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things.
    And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever.
    So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
    LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
    MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus of this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect.
    So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
    LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
    MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system.
    There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
    The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way.
    The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
    LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
    MOLLICK: Yes.
    LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
    MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference of moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right.
    I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?”
    So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
    LEE: Yes. Mm-hmm.
    MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
    LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
    MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered.
    You know, as academics, we’re a little used to dead ends, right, and like, you know, some getting the lap. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete.
    What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
    Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
    LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. 
    MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills.
    Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely.
    But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
    LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company.
    And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
    MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right.
    So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains.
    And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
    LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
    MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right.
    Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the US advantage over other countries, of US firms, has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
    At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen.
    So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons.
    And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves.
    So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
    LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
    MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. They involve change. So I think part of this is embracing the idea that R&D has to happen in organizations again.
    What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies: like, Salesforce tells you how to organize your sales team, Workday tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field.
    So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab.
    So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right. But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill?
    And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
    LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones.
    And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
    MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish.
    I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space.
    But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things?
    And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is: this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to.
    So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
    LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
    MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that useful.
    A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing.
    So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right.
    I mean, there’s value in learning a little bit how the models work. There’s a value in working with these systems. A lot of it’s just hands on keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s like four things I could teach you about AI, and two of them are already starting to disappear.
    But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
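    To make those four principles concrete, here is a minimal sketch of how they might be combined into a single prompt. The build_prompt helper and all of its example strings are hypothetical, invented for illustration—they come from neither Mollick’s research nor any particular tool—but the four sections map one-to-one onto the principles above.

```python
# Illustrative only: build_prompt and every string below are hypothetical,
# composed to show the four prompting principles discussed above.

def build_prompt(task: str, context: str, steps: list[str],
                 good_example: str, bad_example: str) -> str:
    """Compose a prompt that is direct, contextual, stepwise, and example-led."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{task}\n\n"                                   # 1. Be direct about the task.
        f"Context:\n{context}\n\n"                      # 2. Provide as much context as possible.
        f"Follow these steps:\n{numbered}\n\n"          # 3. Step-by-step directions.
        f"Good output looks like:\n{good_example}\n\n"  # 4. Examples of desired output...
        f"Avoid output like:\n{bad_example}\n"          #    ...and of undesired output.
    )

if __name__ == "__main__":
    print(build_prompt(
        task="Summarize this clinical encounter note in three sentences for a referral letter.",
        context="Act as a primary care physician. The full encounter note follows: ...",
        steps=["Identify the chief complaint",
               "List the relevant history",
               "State the reason for referral"],
        good_example="Concise and factual, with no detail beyond the note.",
        bad_example="A long narrative with invented findings.",
    ))
```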
    LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”
    MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.
    LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
    MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems.
    So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is.
    But using it. Voice modes help a lot. In terms of readings, I mean, I think that there is a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
    LEE: Yeah, that’s a great one.
    MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos of use that I would recommend.
    Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
    LEE: Yeah.
    MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
    LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here.
    Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
    MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, where you say, “Let’s see how this works.” Usually the things that make medical technologies hard to adopt—unclear results; limited, you know, expensive use cases—mean they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine.
    I mean, there’s a minor point that I’d make that actually is a really important one, which is that large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is that they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast.
    So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right.
    We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI—and your AI working group that is thinking about how to solve this problem is not the right people here.
    LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
    MOLLICK: Yes. Yes.
    LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall.
    But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
    MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, it absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well, which is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right.
    There’s a lot of warnings and caveats to it, but if you start from a person—a smart person you’re talking to—your mental model will be more accurate than a smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
    LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get this question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
    MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people.
    So, like, it’s hard to imagine medicine being so upended in five to 10 years that, even if AI were better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces, than in medicine.
    But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point.
    Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not.
    Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
    LEE: Yeah.
    MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, the future gets easier to reason about, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help with everything.
    Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right.
    And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
    LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
    MOLLICK: Thank you.  
    I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
    One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.
    In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
    The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
    Here’s now my interview with Azeem Azhar:
    LEE: Azeem, welcome.
    AZEEM AZHAR: Peter, thank you so much for having me. 
    LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before.
    And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
    AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
    LEE: Oh wow.
    AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started.
    And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
    LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
    AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed.
    Now, I’d been aware of GPT-3 and GPT-2, which I had played around with, and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we’d crossed into a new domain. We’ve gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th.
    And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold.
    LEE: And who’s the we that you were experimenting with?
    AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
    LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance and manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and the ways in which, you know, efficiencies are found.
    And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
    AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. It’s hard to imagine a sector that is broader than that.
    So I think we can start to break it down, and, you know, where we’re seeing generative AI enter first is the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? They’re on the tablet computers, and they’re scribing away.
    And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload.
    And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help.
    So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced.
    So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
    LEE: Yeah.
    AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
    LEE: Yup.
    AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.
    And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathways. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, blood tests, and so on.
    It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.
    And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, are a bit … I don’t know if we’re as grateful as we should be for our clinicians, who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout.
    So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
    LEE: I love how you break that down. And I want to press on a couple of things.
    You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
    AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.
    In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different.
    I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, “Well, no one ever does this,” and I said, “Well, you know, the thing is that I kind of just want to get this thing to go away.”
    LEE: Yeah.
    AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
    LEE: Right. Yeah.
    AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
    LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution.
    Because like healthcare, as a consumer, I don’t have a choice in who delivers electricity to my house. And even though I care about it being cheap, or at least about not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons.
    And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
    AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice.
    I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good-quality information at the touch of their fingertips, or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.
    I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
    LEE: Yeah.
    AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
    LEE: Right.
    AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
    LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse who wants to be able to handle even more patients than they’re doing on a daily basis.
    And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t necessarily seem to be a desirable thing. So how can we or should we be thinking about efficiency and productivity, since obviously costs are, in most of the developed world, a huge, huge problem?
    AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before.
    We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?
    LEE: Yeah, yeah.
    AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity by orders of magnitude of people who made cars starting with Henry Ford because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system.
    So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …
    LEE: Right.
    AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for.
    And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …
    LEE: Yup.
    AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that.
    So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible.
    And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And the latter was a cardiac care unit where you couldn’t get enough heart surgeons.
    LEE: Yeah, yep.
    AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.
    LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you—something that would normally require a doctor or a pharmacist, you know, someone who is certified in some way, licensed to do it. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?
    AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.
    If I come back to my example of prescribing Ventolin: it’s really unclear to me why the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be done by someone who’s gone through 10 or 12 years of medical training, and why it couldn’t be prescribed by an algorithm or an AI system.
    LEE: Right. Yep. Yep.
    AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.
    LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.
    AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
    In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart.
    I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.
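    For a sense of just how simple that kind of printed “algorithm” is, here is a toy rendering of such a flowchart as code—a handful of branches is the entire decision aid Azhar describes self-prescribing with today. Every question, threshold, and instruction below is invented for illustration; this is not the actual UK rescue-pack protocol and must not be used for any medical decision.

```python
# A toy rendering of a printed "rescue pack" flowchart as code.
# All questions, thresholds, and instructions are invented for illustration;
# this is NOT the real UK protocol and must not guide medical decisions.

def rescue_pack_protocol(peak_flow_pct: float,
                         worsening_cough: bool,
                         discolored_sputum: bool) -> str:
    """Walk the flowchart; each branch mirrors one arrow on the printout."""
    if peak_flow_pct < 50:
        return "Seek urgent care now"             # red zone: beyond self-management
    if peak_flow_pct < 75:                        # amber zone: rescue pack applies
        if worsening_cough and discolored_sputum:
            return "Start prednisolone and the antibiotic; inform your GP"
        return "Start prednisolone; inform your GP"
    return "Continue usual inhalers and monitor"  # green zone

print(rescue_pack_protocol(peak_flow_pct=70,
                           worsening_cough=True,
                           discolored_sputum=False))
```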
    LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.
    AZHAR: Yeah, yeah. Thank god for Clippy. Yes.
    LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that.
    And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.
    AZHAR: Right.
    LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?
    AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through.
    You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience.
    So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that and what that ends up looking like. And of course, what we’re very good at doing in this sort of modern hyper-connected world is we can share that expertise, that knowledge, that experience very, very quickly.
    So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
    LEE: Yes.
    AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.
    I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or have my annual checkup, one of the first things that he asks is how I have been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s … hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be.
    LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what’s the truth?
    AZHAR: Right.
    LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.
    AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week.
    And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.
    LEE: Yeah.
    AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.
    LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.
    AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor—traditionally given to diabetics, but now lots of people will wear them, and sportspeople will wear them—you’ve probably gathered a lot of extreme tail-distribution data by reading the Reddit r/biohackers …
    LEE: Yes.
    AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.
    LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this.
    And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?
    AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.
    LEE: OK.
    AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches.
    And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.
    LEE: Yes.
    AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety.
    And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us anticipate problems before they arise. And again, I use my experience for things that I’ve tracked really, really well. I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hit me once a year, and I can take a little bit of preemptive measure. So I think that that will become progressively more common, along with that sense that we will know our baselines.
    I mean, when you think about being an athlete, which is something I think about but could never, ever do, what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive; it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians, augmented by AI systems, more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.
    LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said.
    Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues that you tend to see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much.
    AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.  
    I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
    In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.  
    Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.
    Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even in radical ways. I think a big insight I got from these conversations is how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
    Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference.
    But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.
    A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in.
    Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript     PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”       This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.     The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. 
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?

MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially around education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you had a couple of months’ head start with GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.

LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?

MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that this tells us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.

LEE: And I, you know, explained that you’re at Wharton.
Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?

MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus of this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.

LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.

MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was before the current boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There were a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter, in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level.
That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.

LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.

MOLLICK: Yes.

LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?

MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite steep. I mean, the price of GPT-4-level intelligence from, you know, when it was released has dropped 99.97% at this point, right.

LEE: Yes. Mm-hmm.

MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft’s releasing things that can almost run on, like, you know, like, it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.

LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or, will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model-based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that.
And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?

MOLLICK: I mean, you know, it’s not even dread as much as, like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and, like, you know, getting lapped. But the idea that entire fields are hitting that all at once is different. Like in medicine, there are a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute-force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it, with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.

LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.

MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where, like, ughhhh, but then there’s joy and, basically, like, also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be, like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI.
And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.

LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or a request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of the workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?

MOLLICK: So, I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there are lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about the systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there are incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but we’re not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.

LEE: You know, where are those productivity gains going, then, when you get to the organizational level?

MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage that US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal.
Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.

LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?

MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workday tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to get AI to work, not just in direct patient care, right, but also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.

LEE: So let’s shift a little bit to the patient.
You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or, from the consumer’s perspective, what … ?

MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care; they don’t have good healthcare. It feels like it’s absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking, because I’m doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.

LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?

MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that useful. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills.
No one’s been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam-dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there’s, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is, be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s, like, that’s it as far as the research telling you what to do, and the rest is building intuition.

LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”

MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.

LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?

MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …

LEE: Yeah, that’s a great one.

MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos that I would recommend. Like, on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But, like, all we can offer are hints in some ways. Like, there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.

LEE: Yeah.
MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.

LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?

MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which is, “Let’s see how this works.” Because it’s, like, the things that make medical technologies hard to do, which is, like, unclear results and limited, you know, expensive use cases, mean that they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithm aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you, as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithm aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.

LEE: You know, I think the world has been trained, because of the magic of web search, to view computers as question-answering machines. Ask a question, get an answer.

MOLLICK: Yes. Yes.

LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid, perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration.
And I thought your use of the term Co-Intelligence, really, just even in the title of the book, tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?

MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, it absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well, is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person, even though it isn’t, right. There are a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.

LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?

MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person, but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine would be so upended that even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?

LEE: Yeah.
MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes the future easier to reason about, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help with everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen, and how does that affect us, is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.

LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.

MOLLICK: Thank you.

I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.

One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.

In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.

The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
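To make Ethan’s four prompting principles a little more concrete before moving on, here is a minimal sketch of how a prompt might be assembled around them. This is purely illustrative: the function and field names are hypothetical, not drawn from the interview or from any particular product.

```python
# A minimal sketch of a prompt built around the four principles Ethan
# describes: be direct, provide context, give step-by-step directions,
# and show good and bad examples. All names here are hypothetical.

def build_prompt(task, context, steps, good_example, bad_example):
    """Assemble one prompt string from the four ingredients."""
    numbered_steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
    return (
        f"{task}\n\n"                                   # 1. Be direct.
        f"Context:\n{context}\n\n"                      # 2. Provide context.
        f"Follow these steps:\n{numbered_steps}\n\n"    # 3. Step-by-step.
        f"Good output looks like:\n{good_example}\n\n"  # 4. Examples of the
        f"Avoid output like:\n{bad_example}\n"          #    output you want.
    )

if __name__ == "__main__":
    print(build_prompt(
        task="Summarize this clinical encounter note for the patient.",
        context="The reader has no medical training. Note: [note text here]",
        steps=[
            "State the diagnoses in plain language",
            "Explain next steps and any medications",
            "Flag anything the patient should ask their clinician about",
        ],
        good_example="You were seen today for a persistent cough ...",
        bad_example="Pt presents w/ chronic URI sx ...",
    ))
```

As Ethan notes, two of these ingredients are already becoming less necessary as models improve, so treat the template as a scaffold for building intuition rather than a fixed recipe.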
Here’s now my interview with Azeem Azhar:

LEE: Azeem, welcome.

AZEEM AZHAR: Peter, thank you so much for having me.

LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?

AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …

LEE: Oh wow.

AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and, of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.

LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?

AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I had played around with, and with BERT and the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we’d crossed into a new domain. We’d gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold.

LEE: And who’s the we that you were experimenting with?

AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.

LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities.
Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance and manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and on the ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and look at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?

AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be at the, sort of, softest entry point, which is the medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built, with Microsoft Research, a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems, to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.

LEE: Yeah.

AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.

LEE: Yup.

AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, as we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and the patient pathway.
And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.

LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine is, at least in most of the world, a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?

AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because, as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.

LEE: Yeah.

AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI and its ability to potentially do this, which is to extend the clinician’s presence throughout the week.

LEE: Right. Yeah.
AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot, just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.

LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because, like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap, or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas in other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?

AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good-quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.

LEE: Yeah.

AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.

LEE: Right.

AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.

LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity, since obviously costs are, in most of the developed world, a huge, huge problem?

AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?

LEE: Yeah, yeah.

AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity, by orders of magnitude, of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …

LEE: Right.

AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …

LEE: Yup.

AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US, between the health systems and the, you know, professional bodies, might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And in the latter, they were a cardiac care unit where you couldn’t get enough heart surgeons.

LEE: Yeah, yep.

AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel.
So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.

LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?

AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin: it’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training, and why it couldn’t be prescribed by an algorithm or an AI system.

LEE: Right. Yep. Yep.

AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where you draw the line, where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line, I suspect, will start fairly low, and what we’d expect to see would be that it would rise progressively over time.

LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.

AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids.
It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now, two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning, so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world is sharing that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this.
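The rescue-pack “algorithm” Azhar describes is, in software terms, just a small hard-coded decision tree: a fixed set of yes/no questions with an action at each leaf. Here is a minimal sketch in Python of that kind of flowchart; every question, threshold, and action in it is a hypothetical stand-in invented for illustration, not the actual protocol Azhar follows and certainly not medical advice.

def rescue_pack_flowchart(answers: dict) -> str:
    # Walk a fixed set of branches, exactly as a printed flowchart would.
    if answers.get("severe_breathlessness"):
        # Any sensible protocol keeps an escape hatch to human care.
        return "stop self-treating and seek urgent care"
    if answers.get("days_of_wheeze", 0) >= 2 and answers.get("reliever_not_helping"):
        action = "start the prednisolone course"  # the "prescribe to myself" branch
        if answers.get("signs_of_chest_infection"):
            action += " and the antibiotic"
        return action
    return "keep using the reliever and monitor"

# Example: a moderate flare-up with no sign of infection.
print(rescue_pack_flowchart({
    "days_of_wheeze": 3,
    "reliever_not_helping": True,
    "signs_of_chest_infection": False,
}))  # -> "start the prednisolone course"

The point of the comparison is that every branch above is fixed in advance; the more sophisticated AI system Azhar imagines would weigh a patient’s history and current data before arriving at the same kinds of actions.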
So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s not just hearing what I’m saying; he’s actually pulling up the real data, going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. LEE: You know, actually, that brings up the point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have, like, a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor—traditionally given to diabetics, but now lots of people will wear them, and sports people will wear them—you’ve probably gathered a lot of extreme tail distribution data by reading the Reddit r/biohackers forum … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes.
AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us anticipate problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive action. So I think that that will become progressively more common, and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians, augmented by AI systems more and more, because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think the systemic issues, which you tend to see with such clarity, are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you. I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick. Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know, in a deeply personalized and effective way, leading to better care.
Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change, in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to change not only at the individual level, but at the company level and maybe even at the societal level. Since my conversations with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more, were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. The specific thing here was that in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts, typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript [MUSIC]    [BOOK PASSAGE]  PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.” [END OF BOOK PASSAGE]    [THEME MUSIC]    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.      [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society. 
[TRANSITION MUSIC] Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. [LAUGHTER] So to get started, how and why did it happen that you’ve become one of the leading experts on AI? MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was [getting] my PhD at MIT, I worked with Marvin Minsky and the MIT [Massachusetts Institute of Technology] Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially on education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. [LAUGHTER] So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question. LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been? MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much, because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is the fact that they go to my Twitter feed or ask me for advice, which is an indicator of where they are, too: they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay.
So we’re kind of all in the same boat here, which is a very unusual space for a new technology. LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now, even in business schools, that AI experts are becoming key members of the faculty? MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this, because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated. LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, of education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI. MOLLICK: Yeah. Those are great questions. So first of all, when I was at the Media Lab, that was before the current boom, in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There were a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, a, you know, it’s a miracle and a little bit of a disappointment in some ways [LAUGHTER] compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter in, sort of, the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon.
And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this was a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind. LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention. MOLLICK: Yes. LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there was, even for me early on, a sense of denial and skepticism. Did you have those initially at any point? MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use, because the first use case that everyone puts AI to, it fails at, because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4-level intelligence from, you know, when it was released has dropped 99.97% at this point, right. LEE: Yes. Mm-hmm. MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft’s releasing things that can almost run on, like, you know, like, it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either. LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time.
You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model-based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever? MOLLICK: I mean, you know, it’s not even dread as much as, like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and the talent that he was very proud of was that he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. [LAUGHTER] And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and, like, you know, sometimes getting lapped. But the idea that entire fields are getting hit that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it, with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet. LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where, like, ughhhh, but then there’s joy and basically, like, also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it.
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, we are at kind of a unique moment where, whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked those anyway. So it’s a period of both excitement and a little anxiety. LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or a request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of the workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs? MOLLICK: So, I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there are incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but we’re not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result. LEE: You know, where are those productivity gains going, then, when you get to the organizational level? MOLLICK: Well, they’re dying for a few reasons.
One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, as far as we can tell, about 40% of the advantage that US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves, because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there’s lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change. LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI? MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try to do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to [get] AI to work, not just in direct patient care, right.
But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves. LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ? MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI, because it’s cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking, because I’m doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than, you know, the level of education people actually get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that. LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching? MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, in any one case is probably not that great.
A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one’s been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI,” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience hands-on-keyboard, getting them to … there are, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition. LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.” [LAUGHS] MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. [LAUGHTER] LEE: It’s good to chuckle about that, but actually, I can’t think of a better book; like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading? MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview … LEE: Yeah, that’s a great one. MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think [Andrej] Karpathy has some really nice videos that I would recommend.
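Mollick’s four principles map directly onto the structure of a prompt. As a rough sketch, the Python template below bakes all four into one string; the clinical task, steps, and examples are invented purely for illustration, and nothing here depends on any particular AI vendor or API.

def build_prompt(task, context, steps, good_example, bad_example):
    # 1. Be direct: lead with exactly what you want.
    # 2. Provide as much context as possible (role plus background).
    # 3. Give step-by-step directions.
    # 4. Show good and bad examples of the desired output.
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{task}\n\n"
        f"Context: {context}\n\n"
        f"Follow these steps:\n{numbered}\n\n"
        f"Good output looks like: {good_example}\n"
        f"Avoid output like: {bad_example}\n"
    )

prompt = build_prompt(
    task="Draft a referral letter from this encounter note.",
    context="Act as a primary care physician. Encounter note: <note text here>",
    steps=["Summarize the key findings", "State the reason for referral",
           "Keep it under 150 words"],
    good_example="concise, factual, includes current medications",
    bad_example="speculative diagnoses or omitted allergies",
)
print(prompt)

As Mollick notes, the third element is already becoming less necessary as models improve, so any template like this is a snapshot of current practice rather than a lasting recipe.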
Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But, like, all we can offer are hints in some ways. Like, there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating, because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works. LEE: Yeah. MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right. LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, are the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME [Liaison Committee on Medical Education] accrediting body, what’s the one thing you would want them to really internalize? MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which [is], “Let’s see how this works.” Because the things that make medical technologies hard to adopt, which are, like, unclear results and limited, you know, expensive use cases, mean they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multibillion-dollar cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; medicine, more so than other places, has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the simplest thing, algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem are not the right people here. LEE: You know, I think the world has been trained, because of the magic of web search, to view computers as question-answering machines. Ask a question, get an answer. MOLLICK: Yes. Yes. LEE: Write a query, get results.
And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid, perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence, really, just even in the title of the book, tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea? MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” [LAUGHTER] Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well, is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person, even though it isn’t, right. There are a lot of warnings and caveats to it, but if you start from a smart person you’re talking to, your mental model will be more accurate than if you start from a smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right. LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens? MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine would be so upended that, even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces, than in medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference.
We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to reason about the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How that happens and how that affects us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you. [TRANSITION MUSIC] I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time.
But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me. LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day? AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I’d played around with, and of BERT and the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we’d crossed into a new domain. We’d gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold. LEE: And who’s the we that you were experimenting with? AZHAR: So I have a team of four who support me. They’re mostly researchers of different types.
I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, [LAUGHTER] or they walk into our virtual team room, and we try to solve problems.

LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance and manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and in which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and look at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?

AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is [LAUGHS] more broad than that.

So I think we can start to break it down, and, you know, where we’re seeing things with generative AI is at the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? [LAUGHTER] They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop.

But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.

LEE: Yeah.

AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.

LEE: Yup.
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.

And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and the patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.

And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. [LAUGHTER] But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.

LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?

AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.

In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.

LEE: Yeah.
AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.

LEE: Right. Yeah.

AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot, just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.

LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because, like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas in other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?

AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good-quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.

I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.

LEE: Yeah.

AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.

LEE: Right.

AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.

LEE: Yeah, one last question while we’re still on economics.
There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse who wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity, since obviously costs are, in most of the developed world, a huge, huge problem?

AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around [LAUGHTER] the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?

LEE: Yeah, yeah.

AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed, by orders of magnitude, the productivity of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system.

So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …

LEE: Right.

AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …

LEE: Yup.

AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies, that might slow things down. But I think a reimagining is possible.

And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya [now known as Narayana Health] was another.
And the latter was a cardiac care unit where you couldn’t get enough heart surgeons.

LEE: Yeah, yep.

AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.

LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a consumer medical AI might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, someone certified in some way, licensed to do it. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?

AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.

If I come back to my example of prescribing Ventolin: it’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 or 12 years of medical training, or why it couldn’t be prescribed by an algorithm or an AI system.

LEE: Right. Yep. Yep.

AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where you draw the line, where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line, I suspect, will start fairly low, and what we’d expect to see is that it would rise progressively over time.

LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.

AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing.
Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids; it includes something else I’ve just forgotten; and an antibiotic in case I get an upper respiratory tract infection. And I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. [LAUGHS] Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.

LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. [LAUGHS]

AZHAR: Yeah, yeah. Thank god for Clippy. Yes.

LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself [LAUGHS], about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.

AZHAR: Right.

LEE: And so now, two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what this should look like?

AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs [randomized control trials], and there are the classic set of processes you go through.

You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning, so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year.
So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots [very rapidly].

LEE: Yes.

AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.

I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how I have been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s … hearing what I’m saying, but he’s actually pulling up the real data, going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. [LAUGHTER]

LEE: You know, actually, that brings up a point: consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what’s the truth?

AZHAR: Right.

LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.

AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield, to have, like, a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.

LEE: Yeah.

AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. [LAUGHTER]

LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.

AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor—traditionally given to diabetics, but now lots of people will wear them, and sports people will wear them—you’ve probably gathered a lot of extreme tail-distribution data by reading Reddit’s r/biohackers …

LEE: Yes.

AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM [continuous glucose monitor]. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.

LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?

AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.

LEE: OK.
[LAUGHS]

AZHAR: As patients, we will have many, many more touch points and interaction with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.

LEE: Yes.

AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians, because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right.

And that could help us personalize, catch problems before they arise. And again, I use my experience for things that I’ve tracked really, really well. I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hit me once a year, and I can take a little bit of a preemptive measure. So I think that will become progressively more common, that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, [LAUGHTER] what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive but feels enabling.

We’ll still be engaging with clinicians, augmented by AI systems, more and more, because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements, because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.

LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues, the issues that you tend to see with such clarity, are going to be the most, kind of, profound drivers of change in the future. So thank you so much.

AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.

[TRANSITION MUSIC]

I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.

Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know, in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.

Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change, in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.

Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more, were also shown during the conference.

But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. The specific thing here was in Stanford’s cancer treatment center: when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts, typically called a tumor board. And this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.

[THEME MUSIC]

A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.

[MUSIC FADES]
  • AI Is Replacing Women's Jobs Specifically

With under three years of mass-market artificial intelligence available to consumers, businesses in nearly every industry have flocked to the tech like antivaxxers to a multi-level marketing scheme. By 2024, more than 50 percent of companies with more than 5,000 employees were using AI. For the penny-pinching boss, AI represents the promise of rising productivity and lower overhead cost — also known as wages, which were traditionally paid to pesky human employees.

Now, though, as workers around the world grow anxious at the idea of an AI future dominated by a few massive tech monopolies, the race to AI adoption is already having a noticeable effect on job markets. Thanks to AI, the number of young college grads entering the workforce hit an all-time low, full-time salary jobs are becoming gigified, and lying on resumes is now the norm as the job search becomes a nightmarish hell.

Though rich tech tycoons like Marc Andreessen would have you believe that tech gadgetry has a magical power to free us all, history has shown us that technological development often sharpens existing inequalities instead of the other way around. (That trend has been observed by scholars ranging from Albert Einstein to Stephen Hawking, well before mass-market AI hit the scene.) Indeed, AI has already shown a surprising amount of gender and race bias thanks to the data it's trained on, and experts warn that this combination of prejudiced software with a massive global rollout is already driving exploitation.

As such, it's no surprise that AI is likely to increase the gender gap in employment, according to an updated report from the United Nations' International Labour Organisation (ILO). The report builds on estimates made in 2023 on the automation risk facing different jobs thanks to AI. The new analysis found that in high-income countries like the US, women's risk for "high automation potential" rose to 9.6 percent, up from 7.8 percent just two years ago. That's nearly three times the risk faced by men today, at 3.5 percent, which also rose from 2.9 percent in 2023.

Interestingly, the study also found that one in three workers in rich countries face "some degree of exposure" to automation, compared to the world average of one in four. The ILO report also points out that the jobs frequently held by women in wealthy nations — like administrative, clerical, and data entry roles — are primed for automation by AI.

Sociologists have noted that the gender gap in labor hours worked has narrowed significantly in recent years — meaning men and women work close to the same quantity of hours. However, the gender pay gap still persists, as women's share of work goes less toward paid jobs and more into household tasks compared to men.

With AI supposedly poised to "revolutionize work," it will take substantial change in our current labor environment to safeguard women from AI-driven austerity.
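For readers who want to check the comparison, the arithmetic behind the quoted percentages works out as follows (a quick sketch in Python; the figures are the ones cited from the ILO report above, and the variable names are mine):

    # Share of jobs with "high automation potential" in high-income countries,
    # as quoted from the ILO report (2023 estimate vs. the updated analysis).
    women_2023, women_now = 7.8, 9.6
    men_2023, men_now = 2.9, 3.5

    print(f"women vs. men today: {women_now / men_now:.2f}x")                        # ~2.74x, i.e. nearly 3x
    print(f"relative rise for women: {(women_now - women_2023) / women_2023:.0%}")   # ~23%
    print(f"relative rise for men: {(men_now - men_2023) / men_2023:.0%}")           # ~21%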
  • In tiny rural towns, young entrepreneurs are using food to revitalize communities

    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery.

    Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem.

    As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods.

    One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products on the public and buyers for stores, while providing the community with local goods.

    Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America.

In my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets.

    Why entrepreneurs are embracing rural life

    An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds.

    This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas.

    The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play.

    One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health.

    Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today.

    Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship.

    Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes.

According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching $14.2 billion.

    The new ‘AgriCulture’

    A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. The study termed this new dynamic “AgriCulture.”

    We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives.

    The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there.

A thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities.

    Governments can help rural food havens thrive

    When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises.

    We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food?

Daniel Price of the Daily Loaf helps a customer pick out bread at the West Reading Farmers Market. [Photo by Susan L. Angstadt/MediaNews Group/Reading Eagle via Getty Images]

Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural, social connections, resiliency, food and entrepreneurship.

    For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal.

The USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts.

    Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies.

    Dawn Thilmany is a professor of agricultural economics at Colorado State University.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
    #tiny #rural #towns #young #entrepreneurs
    In tiny rural towns, young entrepreneurs are using food to revitalize communities
    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery. Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem. As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods. One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products on the public and buyers for stores, while providing the community with local goods. Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America. View this post on Instagram A post shared by Hope & Main: Culinary IncubatorIn my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets. Why entrepreneurs are embracing rural life An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds. This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas. The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play. One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health. Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today. Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship. Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes. According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching billion. The new ‘AgriCulture’ A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. 
The study termed this new dynamic “AgriCulture.” We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives. The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there. View this post on Instagram A post shared by Home Farm U-Pick & EventsA thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities. Governments can help rural food havens thrive When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises. We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food? West Reading Farmer's Market Daniel Price of the Daily Loaf helps a customer pick out bread.Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural, social connections, resiliency, food and entrepreneurship. For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal. View this post on Instagram A post shared by FEAST & FETTLEThe USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts. Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies. Dawn Thilmany is a professor of agricultural economics at Colorado State University. This article is republished from The Conversation under a Creative Commons license. Read the original article. #tiny #rural #towns #young #entrepreneurs
    WWW.FASTCOMPANY.COM
    In tiny rural towns, young entrepreneurs are using food to revitalize communities
    Visit just about any downtown on a weekend and you will likely happen upon a farmers market. Or, you might grab lunch from a food truck outside a local brewpub or winery. Very likely, there is a community-shared kitchen or food entrepreneur incubator initiative behind the scenes to support this growing foodie ecosystem. As rural America gains younger residents, and grows more diverse and increasingly digitally connected, these dynamics are driving a renaissance in craft foods. One food entrepreneur incubator, Hope & Main Kitchen, operates out of a school that sat vacant for over 10 years in the small Rhode Island town of Warren. Its business incubation program, with over 300 graduates to date, gives food and beverage entrepreneurs a way to test, scale and develop their products before investing in their own facilities. Its markets also give entrepreneurs a place to test their products on the public and buyers for stores, while providing the community with local goods. Food has been central to culture, community and social connections for millennia. But food channels, social media food influencers and craft brews have paved the way for a renaissance of regional beverage and food industry startups across America. View this post on Instagram A post shared by Hope & Main: Culinary Incubator (@hopemain) In my work in agriculture economics, I see connections between this boom in food and agriculture innovation and the inflow of young residents who are helping revitalize rural America and reinvigorate its Main Streets. Why entrepreneurs are embracing rural life An analysis of 2023 U.S. Census Bureau data found that more people have been moving to small towns and rural counties in recent years, and that the bulk of that population growth is driven by 25- to 44-year-olds. This represents a stark contrast to the 2000s, when 90% of the growth for younger demographics was concentrated in the largest metro areas. The COVID-19 pandemic and the shift to remote work options it created, along with rising housing prices, were catalysts for the change, but other interesting dynamics may also be at play. One is social connectedness. Sociologists have long believed that the community fabric of rural America contributes to economic efficiency, productive business activity, growth of communities and population health. Maps show that rural areas of the U.S. with higher social capital—those with strong networks and relationships among residents—are some of the strongest draws for younger households today. Another important dynamic for both rural communities and their new young residents is entrepreneurship, including food entrepreneurship. Rural food startups may be leveraging the social capital aligned with the legacy of agriculture in rural America, resulting in a renewed interest in craft and local foods. This includes a renaissance in foods made with local ingredients or linked to regional cultures and tastes. According to data from the National Agricultural Statistics Service, U.S. local sales of edible farm products increased 33% from 2017 to 2022, reaching $14.2 billion. The new ‘AgriCulture’ A 2020 study I was involved in, led by agriculture economist Sarah Low, found a positive relationship between the availability of farm-based local and organic foods and complementary food startups. 
The study termed this new dynamic “AgriCulture.” We found a tendency for these dynamics to occur in areas with higher natural amenities, such as hiking trails and streams, along with transportation and broadband infrastructure attractive to digital natives. The same dynamic drawing young people to the outdoors offers digital natives a way to experience far-reaching regions of the country and, in some cases, move there. View this post on Instagram A post shared by Home Farm U-Pick & Events (@homefarmfamily) A thriving food and beverage scene can be a pull for those who want to live in a vibrant community, or the new settlers and their diverse tastes may be what get food entrepreneurs started. Many urban necessities, such as shopping, can be done online, but eating and food shopping are local daily necessities. Governments can help rural food havens thrive When my colleagues and I talk to community leaders interested in attracting new industries and young families, or who seek to build community through revitalized downtowns and public spaces, the topic of food commonly arises. We encourage them to think about ways they can help draw food entrepreneurs: Can they increase local growers’ and producers’ access to food markets? Would creating shared kitchens help support food trucks and small businesses? Does their area have a local advantage, such as a seashore, hiking trails or cultural heritage, that they can market in connection with local food? West Reading Farmer's Market Daniel Price of the Daily Loaf helps a customer pick out bread. [Photo By Susan L. Angstadt/MediaNews Group/Reading Eagle via Getty Images] Several federal, state and local economic development programs are framing strategies to bolster any momentum occurring at the crossroads of rural, social connections, resiliency, food and entrepreneurship. For example, a recent study from a collaboration of shared kitchen experts found that there were over 600 shared-use food facilities across the U.S. in 2020, and over 20% were in rural areas. In a survey of owners, the report found that 50% of respondents identified assisting early-growth businesses as their primary goal. View this post on Instagram A post shared by FEAST & FETTLE (@feastandfettle) The USDA Regional Food Business Centers, one of which I am fortunate to co-lead, have been bolstering the networking and technical assistance to support these types of rural food economy efforts. Many rural counties are still facing shrinking workforces, commonly because of lagging legacy industries with declining employment, such as mining. However, recent data and studies suggest that in rural areas with strong social capital, community support and outdoor opportunities, younger populations are growing, and their food interests are helping boost rural economies. Dawn Thilmany is a professor of agricultural economics at Colorado State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • Trump’s big bill is terrible in all the normal Republican ways

Politics, you will notice, has gotten extremely weird.

To some degree, of course, this is Donald Trump’s fault. No other president has seen the first part of their term defined by a fight over whether the federal government can send people living in the US to a prison in El Salvador with no due process. No other modern president has decided to ignore decades of settled economic and political wisdom and institute the biggest tariffs since the Hoover administration. No other president has waged war against the entire foundation of American science.

Some weirdness is also the fault of Covid. The pandemic introduced a slew of policies that proved divisive, from mask mandates to vaccine mandates to funding for “gain of function” research to school closures. None of these were polarizing topics in 2019 because they either had never happened before or were too obscure for most people to care. And though we’re a few years past the worst days of the pandemic, the appointment of anti-vaxxer Robert F. Kennedy Jr. as health and human services secretary shows just how central many of these topics remain.

It’s this context that has made Congress’s debate over a multitrillion-dollar reconciliation bill so fascinating. The bill’s contents are still evolving, but the broad outlines are simple: trillions in tax cuts, tilted to the wealthy; hundreds of billions in spending cuts, particularly to programs for the poor like Medicaid and food stamps; over a hundred billion dollars in increased spending for defense.

I know of no better summary of its effects than the above chart from the Urban Institute, which shows that it would make poor Americans earning less than $10,000 dramatically worse off (reducing their income by 14.9 percent) while affluent households earning over $200,000 would thrive.

So, all in all, a terrible bill. But whatever else that proposal is, it’s startlingly normal for Republican politics. It represents ideas that have defined the Republican party and its economic and budgetary priorities since 1980, and which the party has strongly held to even in the face of Trump’s total takeover. The Republican party stands for lower taxes, especially on the rich; lower spending on programs for the poor; and big spending on defense. That’s what Ronald Reagan, Newt Gingrich, Paul Ryan, and other figures who defined the party have all stood for, for nearly half a century now.

The extreme weirdness of national politics has led to a temptation to see a new Republican party just over the horizon, defined by rejecting its tax-cutting and program-slashing tradition. This is stoked by strategic leaks that Trump might be open to a higher tax rate on the richest Americans; by Sen. Josh Hawley (R-MO) condemning Medicaid cuts; by party figures like Vice President JD Vance suggesting a break from the party’s hawkish foreign policy.

But the composition of the reconciliation bill suggests that when it comes to bread-and-butter economic issues, this is mostly a mirage. The essential Republican message may become blurred around the edges, the way that George W. Bush messed with it by expanding Medicare or his father did by accepting a small tax hike. But the deviations are swamped by the continuity.
It’s not, in the ludicrous phrasing of Steve Bannon, a “workers’ party.” Congressional Republicans, led by Speaker Mike Johnson, have brought back normal politics, and for them that means one thing: redistributing income upward.

The rise and persistence of Normal Republicanism

This essential pattern of Republicans standing for across-the-board tax cuts and cuts to safety net programs has not always been the norm. Nothing in politics is truly permanent. As late as Richard Nixon, Republican presidents would propose ideas like a guaranteed minimum income and universal health coverage while actually raising taxes on the rich.

The ground shifted in the late 1970s and early 1980s, when a small group of policy entrepreneurs in Washington centered around Congress member Jack Kemp (R-NY) began promoting across-the-board cuts to individual income tax rates as a solution to stagflation (the combination of slow growth and high inflation then characterizing the economy). In her history of this moment, Starving the Beast, sociologist Monica Prasad notes that major business lobbies at the time opposed this move. Their priority was corporate rate cuts and a balanced budget, and they saw individual cuts as a threat to both.

Kemp and his allies, including soon-to-be-President Reagan, overcame corporate skepticism for one simple reason: The cuts were popular, and the public mood was becoming strongly anti-tax. At this point in time, the thresholds for tax rates were not indexed for inflation, which meant that more and more middle-class people were being pushed into higher and higher tax brackets every year without actually becoming richer as inflation worsened. These pressures had forced even Democrat Jimmy Carter to sign tax cuts in 1978, and they only built as inflation rose still further.

Timeline of major Republican tax bills

Tax Reform Act of 1969 — signed by Richard Nixon, cracked down on foundations, extended a temporary across-the-board income tax hike to fund the war in Vietnam, and created the Alternative Minimum Tax, meant to target high earners claiming many deductions and credits. On net, substantially raised taxes on the rich while cutting them for the poor.

Economic Recovery Tax Act of 1981 — signed by Ronald Reagan, an across-the-board cut in tax rates for individuals, with the top rate falling from 70 percent to 50 percent. Tax thresholds were now indexed for inflation. Businesses were allowed to deduct expenses at an accelerated pace.

Tax Equity and Fiscal Responsibility Act of 1982 — also signed by Reagan, undid most of the 1981 cuts to corporate taxes, but crucially kept the cuts on individuals in place.

Tax Reform Act of 1986 — bipartisan legislation signed by Reagan that eliminated many deductions and credits and simplified the individual income tax to only two brackets (15 percent and 28 percent).

Omnibus Budget Reconciliation Act of 1990 — signed by George H.W. Bush, added a 31 percent bracket on the rich to raise revenue on top of the 1986 law. Bill Clinton would add 36 percent and 39.6 percent rates in 1993.

Taxpayer Relief Act of 1997 — signed by Bill Clinton but championed by Republicans in Congress, created a $500 child tax credit and cut the capital gains rate from 28 percent to 20 percent.

Economic Growth and Tax Relief Reconciliation Act of 2001 — signed by George W. Bush, slashed individual rates across the board, with the top rate falling from 39.6 percent to 35 percent, and gradually eliminated the estate tax. Initially set to expire in 2010.
Extended temporarily by Barack Obama in 2010 as part of an economic stimulus deal, and then in 2012 permanently, but only for couples earning under $450,000.

Jobs and Growth Tax Relief Reconciliation Act of 2003 — signed by George W. Bush, cut taxes on dividend and interest income, and limited the Alternative Minimum Tax’s effects. Set to expire in 2010, like the 2001 cuts; largely expired under the 2012 Obama deal.

Tax Cuts and Jobs Act of 2017 — signed by Donald Trump, cut individual rates, with the top rate falling from 39.6 percent to 37 percent; doubled the standard deduction and consolidated personal and dependent exemptions into a larger child tax credit; dramatically cut the corporate tax rate from 35 percent to 21 percent. While some cuts, like the corporate rate cut, were permanent, most of the bill is set to expire at the end of 2025.

At the same time, Reagan and his team in the early 1980s were convinced that the US needed a major military buildout to counteract what they claimed had been a Soviet buildout in the 1970s. That led to a big increase in defense spending, from 6.6 percent of GDP in 1981 to 7.6 percent in 1985; at today’s size of the US economy, an equivalent increase would be about $290 billion more per year.

To pay for at least some of this, Reagan’s first reconciliation bill included sweeping cuts to safety net programs, notably including Medicaid, food stamps, and Aid to Families with Dependent Children (AFDC). This, too, fit well with Reagan’s image. He had campaigned hard against establishing Medicare in the 1960s, and denunciations of “welfare queens” had been a prominent theme in his ultimately failed 1976 campaign for the presidency. This wasn’t the most popular part of the Reagan brand (he denounced “welfare queens” while trying to win the Republican primary, not the general), but it reflected both his genuine beliefs and the twin pressures of the tax cuts and defense buildout on the budget.

This combination of policies was a profound break from the Nixon/Gerald Ford years, when tax cuts were not seriously considered, the priority with the Soviets was detente and arms control, and safety net programs were largely protected. And, sure enough, some Republicans continued to push back against the new regime. Sen. Bob Dole (R-KS) successfully fought to limit food stamp cuts; Congressional Republicans worked with Democrats to expand Medicaid throughout the 1980s over Reagan’s objections; most infamously, George H.W. Bush signed a bill adding a new 31 percent tax bracket for rich Americans in 1990, violating his pledge not to raise taxes.

But for the most part, the pattern established by Reagan has persisted ever since, and deviations — like Bush’s tax hike, which contributed to his loss in 1992 — are remembered more as cautionary tales than examples to emulate.

Look at the Contract With America, the Republicans’ platform during their successful 1994 bid to retake the House for the first time in over 40 years. It included tax cuts (like introducing a child tax credit and lower capital gains rates) and cuts to welfare and other safety net programs. While Bill Clinton was able to tamp down these demands somewhat, all became law in one form or another.

A few years later, George W. Bush began his first term with sweeping across-the-board tax cuts, and his second with a failed effort to slash Social Security spending in favor of individual accounts.
In the Obama years, Congressional Republicans, led by Paul Ryan, coalesced around plans for yet more across-the-board tax cuts and sweeping cuts to Medicare, Medicaid, and other safety net programs. Under Trump, Ryan was able to pass the former, though his attempts at the latter through Obamacare repeal failed.

Occasionally, a Republican politician will gesture at breaking with this orthodoxy, and is invariably greeted with intrigued attention from the press. With George W. Bush in 2000, it was called “compassionate conservatism.” With Minnesota Gov. Tim Pawlenty in 2005, it was “Sam’s Club Republicanism.” With then-Sen. Marco Rubio and his allies in the press circa 2014, it was “reform conservatism.” These days the preferred term for Sen. Josh Hawley and Vice President JD Vance seems to be “national conservative,” which, like the Sam’s Club and Reformocon labels before it, purports to reject the tax-cutting orthodoxy of past Republicans in favor of a more communitarian vision — very little of which, of course, appears to have made its way into the budget bill. All of this has happened before, and all of this will happen again.

A very Normal Republican budget

But through each of these much-hyped fads, Normal Republicanism on the budget has survived more or less unchanged. The legacy of compassionate conservatism is a prescription drug benefit in Medicare administered by private insurers; the legacy of reform conservatism is mostly increasing the child tax credit from $1,000 to $2,000 in 2017. These are slight ripples in a pattern that has been remarkably persistent.

The 2025 reconciliation package is a perfect illustration of these dynamics. Per a helpful tally by the Committee for a Responsible Federal Budget, the centerpiece of the legislation is the extension and expansion of Trump’s 2017 Tax Cuts and Jobs Act. The cuts here will cost over $4.1 trillion over a decade. Remarkably, some of the cuts aren’t even made permanent, but temporarily extended again, to artificially make the cost look lower than it is; if they’re extended still further, the total cost of the TCJA extensions would be more like $4.8 trillion.

[Chart: Deficits would surge dramatically due to the reconciliation bill. Committee for a Responsible Federal Budget]

These are tax cuts overwhelmingly tilted toward the top. $1.4 trillion goes to repealing the Alternative Minimum Tax, which exists to prevent high earners from taking excessive deductions; $2.2 trillion goes to cutting rates, including the top rate, which disproportionately helps the rich. The pass-through deduction, which arbitrarily lets some business owners exclude 20 percent of their profit from taxation, is extended and also expanded to 23 percent for no apparent reason, at a cost of a mere $820 billion. The Tax Policy Center has estimated that extending the TCJA increases incomes for the top 1 percent by an average of 3.7 percent, which swamps the 0.6 percent increase that the poorest fifth of Americans would get.

On top of this, the House Ways and Means Committee has thrown in a potpourri of assorted other tax cuts: through 2028, for instance, tips, overtime income, and car loan interest would be tax-free, and senior citizens would get bigger standard deductions. The spending spree isn’t limited to taxes, either.
There’s $144 billion from the Armed Services Committee, focused on shipbuilding ($32 billion) and “air superiority and missile defense” ($30 billion), and $67 billion for border security, including about $50 billion for Trump’s long-promised wall.

The gross cost of all these giveaways hits around $5 trillion, before even considering the possibility that giveaways like the tips and overtime tax cuts are made permanent. But the net cost of the package, and its impact on the deficit, is “only” about $3.3 trillion, per the Committee for a Responsible Federal Budget.

What makes up the difference? $698 billion comes from cuts to Medicaid; including other cuts to Affordable Care Act subsidies, the bill will deprive about 10 million people of health insurance. $559 billion comes from undoing most of the Inflation Reduction Act’s credits for clean energy and electric vehicles. $350 billion comes from cuts to education, heavily focused on student loan programs and subsidies meant to make them more affordable for borrowers. $267 billion comes from food stamps, slashing the program by about 30 percent overall.

There’s a lot to say about this set of priorities. The Medicaid and food stamp cuts target the most vulnerable Americans and, combined with the tax cuts for the wealthy, amount to extreme upward redistribution. The Inflation Reduction Act cuts will likely substantially increase energy prices for most Americans, while also substantially increasing emissions. I know of no serious economist who thinks that many of the most expensive provisions in the bill, like the deductibility of overtime income or the pass-through business deduction, are effective ways to boost economic growth.

But, at a moment when so many assumptions about politics have been overturned, the plan is not surprising. This is not a radically different Republican party newly attuned to the interests of the working class. It’s not a party whose tax-cutting passions have been tempered now that their president is imposing new taxes left and right on foreign imports in the form of tariffs (which will be borne disproportionately by lower-income Americans). It’s not a party reflecting the fact that Medicaid recipients narrowly voted for Trump over Kamala Harris in 2024.

It’s just the normal Reagan-Gingrich-Bush-Ryan Republican party, same as it’s ever been.

Why, then, are people so eager to hear that the Republican party has changed? Part of it might be the simple fact that voters have had trouble accepting that a political party could actually be like this. When Priorities USA, a Democratic super-PAC in the 2012 election, told a focus group that Mitt Romney wanted to slash Medicare while cutting taxes on the rich, “the respondents simply refused to believe any politician would do such a thing,” per reporter Robert Draper. It does sound vaguely ridiculous: It defies common sense that cutting taxes on the wealthy and funding it with spending cuts on programs for poor and working people would ever be a compelling political message, perhaps outside the extreme inflationary environment of the 1970s that birthed Reagan’s presidency.

But ridiculous or not, that’s the world we have. The Republican Party’s budgetary views simply have not changed. They want to blow up the deficit with massive tax cuts tilted toward the rich and pay for a fraction of the cost by slashing programs for the poor. It’s really that simple. It may not be exciting or brand new. But it’s normal Republican policymaking, and it’s back with a vengeance.
  • Wikipedia picture of the day for May 20

Rhina Aguirre (20 May 1939 – 30 October 2021) was a Bolivian disability activist, politician, and sociologist. An opponent of the military dictatorships of the 1970s and 1980s, Aguirre was an early activist in the country's human rights movement. Exiled to Ecuador by the regime of Luis García Meza, she collaborated with Leonidas Proaño's indigenous ministry and worked closely with the country's peasant and social organizations. Blinded in both eyes by toxoplasmosis, Aguirre took up the cause of disability rights, joining the Departmental Council for Disabled Persons upon her return to Bolivia. In 2009, she joined the Movement for Socialism and was elected to represent the department of Tarija in the Chamber of Senators, becoming the first blind person in Bolivian history to assume a parliamentary seat. This photograph of Aguirre was taken in 2014.

    Photograph credit: Chamber of Senators; edited by Krisgabwoosh

    Recently featured:
    Chester Cathedral
    El Tatio
    Short-beaked echidna

    Archive
    More featured pictures
  • A critical fight over “quality” child care could shape millions of kids

America’s lack of affordable child care has brought a long-simmering question to a boil: What exactly makes child care “good”?

Everyone wants quality care for kids, and the need for child care or preschool to be considered “high quality” has been embraced by researchers, providers, parents, and policymakers for years. But with rising costs and uneven availability, parents, providers, and policymakers find themselves increasingly divided over whether “quality” should be measured by caregivers’ credentials or by toddlers’ happiness, by structured learning outcomes or by parent preference. Progressives generally champion credentialed and well-paid teachers, academic standards, and standardized ratings as essential for aiding children’s development. Conservatives counter that such requirements inflate costs while devaluing the nurturing care that parents and community caregivers provide.

The answer to the question of what “quality” means shapes everything from household budgets to workforce participation to children’s school readiness — yet there is no clear consensus on what exactly it entails or how to measure it.

“People know it when they see it, but it’s hard to define,” said Josh McCabe, director of social policy at the Niskanen Center think tank.

As regulations shift with political winds, the question has become more salient: Who defines quality, and at what cost to kids, families, and society?

Mixed-quality quality metrics

States have sought ways to measure, improve, and communicate the components of quality to parents and providers alike. Their solution: developing rating systems that attempt to boil aspects of child care settings down into simple metrics, much like hotel or restaurant reviews.

Over the past two decades, such Quality Rating and Improvement Systems (QRIS) have become the primary method for assessing child care quality. These systems, which vary significantly across states, award ratings based on multiple dimensions, including teacher qualifications, learning environments, administrative practices, and the caliber of child-adult interactions. By 2020, nearly all states had implemented some form of QRIS, though participation remains voluntary in many areas. These systems vary widely — some use star ratings, others use tiers or categories. States prioritize different elements: Some emphasize school readiness, others focus on health and safety, cultural responsiveness, or infant and toddler care. Financial incentives also differ, with states offering a variety of supports, technical assistance, and bonuses for higher scores.

The evidence is mixed, though, on whether these ratings actually predict better outcomes for children. “If we’re looking at what supports children’s well-being and development, it’s the quality of the interactions, the relations with the caregiver,” Steven Barnett, senior director of the National Institute for Early Education Research, told me. Yet these critical interactions carry relatively little weight in some QRIS systems, overshadowed by structural features that are easier to quantify and less costly to implement.

The QRIS ratings nonetheless drive real behavior. “The scoring matters in that providers and parents react to it,” McCabe said. “But like the US News and World Report rankings, I don’t know if it actually makes them better colleges or students.”

A 2019 Department of Education study found that children who attended programs rated higher by QRIS did not have better developmental outcomes than those attending lower-rated ones.
The QRIS ratings nonetheless drive real behavior. “The scoring matters in that providers and parents react to it,” McCabe said. “But like the US News and World Report rankings, I don’t know if it actually makes them better colleges or students.” A 2019 Department of Education study found that children who attended higher-rated programs according to QRIS did not have better developmental outcomes than those attending lower-rated ones. This echoed earlier research that found that the overall QRIS ratings were less predictive of child learning than a single measure of teacher-child interactions.

Further research found “little evidence” that adopting QRIS in Head Start improved quality when measured against the Head Start Program Performance Standards — the quality benchmarks the federal preschool programs must meet. QRIS showed no significant boost to teacher qualifications or teacher-child interactions. More troublingly, research found that QRIS adoption actually increased annual teacher turnover — potentially undermining the very stability that quality programs need.

These rating systems can also create a troubling cycle, said Hailey Gibbs, associate director of early childhood policy at the Center for American Progress. Lower-rated programs receive fewer resources, making it even harder for them to improve. Gibbs notes there’s “valid criticism” that QRIS systems often lack cultural sensitivity and tend to disregard the perspectives of enrolled families when adjudicating which programs are good or bad.

This fundamental tension — between standardized measurement and the complex, relationship-based reality of quality care — remains at the heart of ongoing debates about how best to ensure positive outcomes for America’s youngest children. As sociologist Zach Griffen notes of performance measurement across other sectors like health care and K-12 schooling, quality assessment tools can be “wildly successful in policy research at the same time as they fall apart in real-world applications.”

So what actually is quality?

While the QRIS measure of “quality” might be contentious, the stakes are high because quality itself does seem to affect kids’ life trajectories. Back in 2000, the National Academies of Sciences published a 600-page report concluding that both nurturing parenting and caregiving relationships were essential for early childhood development, and that well-designed programs could help improve the lives of kids growing up in poverty in particular. Another landmark federal study tracked more than 1,300 children from infancy through adolescence. Emerging from the contentious “day care wars” of the 1980s over whether maternal employment and non-parental care harmed children, the researchers found that “higher-quality” care predicted better cognitive and language development. The study highlighted responsive adult-child relationships as the most critical quality factor, with elements like staff ratios and group sizes also playing important roles.

These early studies established a foundation for understanding quality, but even today early childhood experts describe different visions. Ruth Friedman, who directed the Office of Child Care during the Biden administration, defines quality as care that ensures safety, engaging activities, and “nurturing, consistent, and well-compensated caregivers who support development across multiple domains — including language, early math and science, social-emotional, and physical growth,” she told me. Some advocates stress that trained caregivers are essential for building the kind of stable, high-quality workforce that children need.
“All teachers need to have a foundational knowledge of child development … [with] formal education and training in early childhood education,” the Center for American Progress wrote in a “Quality 101” report published in 2017. Other experts say the growing demand for professional training devalues the kind of care offered by parents, grandparents, and other informal community leaders. “What we realize [matters] is the quality of the relationship, and [that isn’t] typically things you can measure in a simple way, like a child-adult ratio,” said Jenet Erickson, a professor of religious education at Brigham Young University and a researcher of maternal and child well-being. “It’s just not as simple as having trained caregivers. … We need more flexibility in who can provide care, so families can say, ‘We really like this neighborhood grandmother because of the way she relates to our children, and we’re less concerned about whether or not she got a degree in human development.’”

There is perhaps more agreement, however, on what constitutes unacceptably poor care. Gibbs identified warning signs that cross cultural boundaries: “disorganized or unsafe spaces … young children wandering aimlessly.” These align with the National Academies’ findings about the lowest-quality settings: caregivers ignoring children’s bids for attention, few appropriate toys, and children spending time “unengaged with adults, other children, or materials.” Finding the balance between acceptable care and parent preferences can be tricky, especially when public funding is involved and lawmakers pursue multiple policy objectives at once, like promoting child development, supporting moms in the workforce, advancing educational equity, and even broader social concerns like reducing crime or increasing GDP.

The Center for American Progress has grown somewhat less prescriptive about quality since it published its “Quality 101” report in 2017. “Quality, in some sense, can take many different forms, and perhaps it should, because child care and early learning is not one-size-fits-all,” Gibbs told me, noting that some home-based child care options “are extremely high quality” yet nevertheless are sometimes “viewed as second tier to school-like center-based care” when it comes to state ratings.

The future of quality standards

The landscape of child care quality measurement appears poised for significant shifts. The Build Back Better Act, proposed during the Biden administration, represented perhaps the most ambitious federal effort to date to elevate child care quality standards nationwide. It would have required states to develop tiered QRIS frameworks aligned with the federal Head Start preschool standards and to mandate child care provider participation in QRIS as a condition of receiving federal money. Most significantly, payment rates would have been directly linked to quality ratings, so child care programs achieving higher QRIS scores would have received higher reimbursement rates.
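As a rough sketch of how such a linkage could work, consider the hypothetical schedule below; the tiers and dollar figures are invented for illustration and are not drawn from the bill.

    # Hypothetical tiered reimbursement of the kind Build Back Better
    # envisioned: a higher QRIS tier earns a higher payment rate.
    # The tiers and dollar figures are invented for illustration.
    RATE_PER_CHILD_PER_WEEK = {1: 250.0, 2: 280.0, 3: 315.0, 4: 355.0}

    def weekly_reimbursement(qris_tier, enrolled_children):
        return RATE_PER_CHILD_PER_WEEK[qris_tier] * enrolled_children

    # A 20-child program at the top tier vs. the bottom tier:
    print(weekly_reimbursement(4, 20))  # 7100.0
    print(weekly_reimbursement(1, 20))  # 5000.0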
However, with the transition to a new administration, a fundamentally different approach is emerging, as states move to reduce restrictions that conservatives see as driving up costs without improving outcomes or access. This regulatory rollback fits within a broader conservative vision reshaping child care — one that prioritizes less expensive home-based programs over professionally staffed centers, de-emphasizes academic credentials and curricula, and often encourages more mothers to stay home to raise their children. As Idaho Rep. Rod Furniss argued when promoting his deregulation bill, “perhaps the most important” small business is the home day care, “where moms can stay home and supplement the household income and watch a few kids.” This approach also aligns with principles outlined in the American Enterprise Institute’s 2024 “Three Principles for Conservative Early-Childhood Policy,” which advocated for subsidizing “lower-cost” options while keeping “children connected to their families.”

Conservatives specifically warn against what they call the “Bill de Blasio model” — folding early childhood education into the credentialing, unionization, and compensation structure of K-12 teachers. This approach, which many progressive advocates view as essential for stable, quality care, is viewed by many on the right as a recipe for unsustainable costs.

As control shifts in Washington, the very definitions of quality that have guided policy for decades may soon change. But the core questions remain unresolved: What matters most in determining quality, and how do we capture it? How should we balance measurable outcomes with family preferences? What trade-offs between quality, affordability, and access are Americans prepared to make? And ultimately, what kind of care do American children deserve?

This work was supported by a grant from the Bainum Family Foundation. Vox Media had full discretion over the content of this reporting.
• How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con.
    It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us.
Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google. The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI.
Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI. Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence.
    It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically.
    "It didn't spring whole cloth out of Zeus's head or anything.
    This has a longer history," Hanna said in an interview with CNET.
    "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development.
    The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing.
    And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development.
Not the first hype cycle indeed. Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1960s.
    Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon.
    Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money.
But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype. So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below.
The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.

Watch out for language that humanizes AI

Anthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype.
An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think." These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading.
AI chatbots aren't capable of seeing or thinking because they don't have brains.
Even the idea of neural nets, Hanna noted in our interview and in the book, is based on the 1950s-era human understanding of neurons, not on how neurons actually work, but it can fool us into believing there's a brain behind the machine.
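For a sense of how simple that 1950s model is, here is a minimal sketch of a perceptron-style unit, the classic artificial "neuron" of that era; it is an illustrative toy, not a description of any modern product.

    # A single 1950s-style artificial "neuron": a weighted sum of inputs
    # passed through a hard threshold. Just arithmetic, no biology.
    def perceptron(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if activation > 0 else 0

    # A unit that "fires" only when both inputs are active (logical AND):
    print(perceptron([1.0, 1.0], [0.6, 0.6], -1.0))  # -> 1
    print(perceptron([1.0, 0.0], [0.6, 0.6], -1.0))  # -> 0

Nothing in that arithmetic resembles a mind, yet the borrowed vocabulary invites us to imagine one.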
That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said.
    "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say.
    "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said.
    "And it is very hard to remind ourselves that the mind isn't there.
It's just a construct that we have produced."
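To see how fluent-looking text can be extruded with no mind at all, consider a toy bigram generator; it is a drastic simplification of how large language models predict tokens, offered purely as an illustration.

    import random
    from collections import defaultdict

    # Toy bigram generator: text "extruded" from co-occurrence counts alone.
    # A drastic simplification of next-token prediction, but it shows that
    # fluent-looking output requires no mind behind it.
    corpus = ("the model predicts the next word and the next word "
              "follows from the model").split()

    # Map each word to the words observed to follow it.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    word, output = "the", ["the"]
    for _ in range(10):
        word = random.choice(following[word])
        output.append(word)
    print(" ".join(output))

The output reads as vaguely sentence-like, but the program has no model of the world; readers supply the sense of a speaker themselves, which is exactly the reflex Bender describes.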
The authors argue that part of why AI companies try to convince us their products are human-like is that this lays the groundwork for convincing us that AI can replace humans, whether at work or as creators. It's compelling to believe that AI could be the silver-bullet fix to complicated problems in critical industries like health care and government services. But more often than not, the authors argue, AI isn't being used to fix anything.
    AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers.
As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."

Be dubious of the phrase 'super intelligence'

If a human can't do something, you should be wary of claims that an AI can do it.
    "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said.
    In "certain domains, like pattern matching at scale, computers are quite good at that.
But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up." The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence.
    Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks.
There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword. Many of these future-looking statements from AI leaders borrow tropes from science fiction.
    Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios.
    The boosters imagine an AI-powered futuristic society.
The doomers bemoan a future where AI robots take over the world and wipe out humanity. The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable.
    "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said.
    "And then there's this claim that this particular technology is a step on that path, and it's all marketing.
It is helpful to be able to see behind it." Part of why AI is so popular is that an autonomous, functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors.
    Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals.
    For better or worse, life is not science fiction.
    Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism.
Ask what goes in and how outputs are evaluated

One of the easiest ways to see through AI marketing fluff is to look at whether the company discloses how it operates.
    Many AI companies won't tell you what content is used to train their models.
    But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors.
That's where you should start looking, typically in their privacy policies. One of the top complaints and concerns from creators is how AI models are trained.
    There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm.
    "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said.
Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said. If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness.
    Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag.
    "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed.
    But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information.
    For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    #0066cc;">#how #spot #hype #and #avoid #the #con #according #two #experts #quotartificial #intelligence #we039re #being #frank #bill #goods #you #are #sold #line #someone039s #pocketsquotthat #heart #argument #that #linguist #emily #bender #sociologist #alex #hannamake #their #new #bookthe #conit039s #useful #guide #for #anyone #whose #life #has #intersected #with #technologies #artificial #who039s #questioned #real #usefulness #which #most #usbender #professor #university #washington #who #was #named #one #time #magazine039s #influential #people #hanna #director #research #nonprofit #distributed #instituteand #former #member #ethical #team #googlethe #explosion #chatgpt #late #kicked #off #cycle #aihype #authors #define #quotaggrandizementquot #technology #convinced #need #buy #invest #quotlest #miss #out #entertainment #pleasure #monetary #reward #return #investment #market #sharequot #but #it039s #not #first #nor #likely #last #scholars #government #leaders #regular #have #been #intrigued #worried #idea #machine #learning #aibender #trace #roots #back #1950s #when #mathematician #john #mccarthy #coined #term #intelligenceit #era #united #states #looking #fund #projects #would #help #country #gain #any #kind #edge #soviets #militarily #ideologically #technologicallyquotit #didn039t #spring #whole #cloth #zeus039s #head #anythingthis #longer #historyquot #said #interview #cnetquotit039s #certainly #quote #unquote #aiquottoday039s #propelled #billions #dollars #venture #capital #into #startups #like #openai #tech #giants #meta #google #microsoft #pouring #developmentthe #result #clear #all #newest #phones #laptops #software #updates #drenched #aiwashingand #there #signs #development #will #slow #down #thanks #part #growing #motivation #beat #china #developmentnot #indeedof #course #generative #much #more #advanced #than #eliza #psychotherapy #chatbot #enraptured #scientists #1970stoday039s #business #workers #inundated #heavy #dose #fomo #seemingly #complex #often #misused #jargonlistening #enthusiasts #might #seem #take #your #job #save #company #moneybut #argue #neither #wholly #reason #why #important #recognize #break #through #hypeso #these #few #telltale #share #belowthe #outline #questions #ask #strategies #busting #book #now #uswatch #language #humanizes #aianthropomorphizing #process #giving #inanimate #object #humanlike #characteristics #qualities #big #building #hypean #example #this #can #found #companies #say #chatbots #quotseequot #quotthinkquotthese #comparisons #trying #describe #ability #objectidentifying #programs #deepreasoning #models #they #also #misleadingai #aren039t #capable #seeing #thinking #because #don039t #brainseven #neural #nets #noted #our #based #human #understanding #neurons #from #actually #work #fool #believing #there039s #brain #behind #machinethat #belief #something #predisposed #humans #languagewe039re #conditioned #imagine #mind #text #see #even #know #generated #saidquotwe #interpret #developing #model #minds #speaker #wasquot #addedin #use #knowledge #person #speaking #create #meaning #just #using #words #sayquotso #encounter #synthetic #extruded #going #same #thingquot #saidquotand #very #hard #remind #ourselves #isn039t #thereit039s #construct #producedquotthe #try #convince #products #sets #foreground #them #replace #whether #creatorsit039s #compelling #believe #could #silver #bullet #fix #complicated #problems #critical #industries #health #care #servicesbut #bring #used #anythingai #goal #efficiency #services #end #replacing #qualified 
#black #box #machines #copious #amounts #babysitting #underpaid #contract #gig #workersas #put #quotai #make #shittierquotbe #dubious #phrase #039super #intelligence039if #can039t #should #wary #claims #itquotsuperhuman #super #dangerous #turn #insofar #thinks #some #superfluousquot #saidin #quotcertain #domains #pattern #matching #scale #computers #quite #good #thatbut #superhuman #poem #notion #doing #science #hypequot #added #quotand #talk #about #airplanes #flyers #rulers #measurers #seems #only #space #comes #upquotthe #quotsuper #intelligencequot #general #intelligencemany #ceos #struggle #what #exactly #agi #essentially #ai039s #form #potentially #making #decisions #handling #tasksthere039s #still #evidence #anywhere #near #future #enabled #popularbuzzwordmany #futurelooking #statements #borrow #tropes #fictionboth #boosters #doomers #those #potential #harm #rely #scifi #scenariosthe #aipowered #futuristic #societythe #bemoan #where #robots #over #world #wipe #humanitythe #connecting #thread #unshakable #smarter #inevitablequotone #things #lot #discourse #fixed #question #fast #get #therequot #then #claim #particular #step #path #marketingit #helpful #able #itquotpart #popular #autonomous #functional #assistant #mean #fulfilling #promises #worldchanging #innovation #investorsplanning #utopia #dystopia #keeps #investors #forward #burn #admit #they039ll #carbon #emission #goalsfor #better #worse #fictionwhenever #someone #claiming #product #straight #movie #sign #approach #skepticism #goes #outputs #evaluatedone #easiest #ways #marketing #fluff #look #disclosing #operatesmany #won039t #tell #content #train #modelsbut #usually #disclose #does #data #sometimes #brag #stack #against #competitorsthat039s #start #typically #privacy #policiesone #top #complaints #concernsfrom #creators #trainedthere #many #lawsuits #alleged #copyright #infringement #concerns #bias #capacity #harmquotif #wanted #system #designed #move #rather #reproduce #oppressions #past #curating #dataquot #saidinstead #grabbing #quoteverything #wasn039t #nailed #internetquot #saidif #you039re #hearing #thing #statistic #highlights #its #effectivenesslike #other #researchers #called #finding #citation #red #flagquotanytime #selling #access #evaluated #thin #icequot #saidit #frustrating #disappointing #certain #information #were #developedbut #recognizing #holes #sales #pitch #deflate #though #informationfor #check #fullchatgpt #glossary #offapple
    How to Spot AI Hype and Avoid The AI Con, According to Two Experts
    "Artificial intelligence, if we're being frank, is a con: a bill of goods you are being sold to line someone's pockets."That is the heart of the argument that linguist Emily Bender and sociologist Alex Hanna make in their new book The AI Con. It's a useful guide for anyone whose life has intersected with technologies sold as artificial intelligence and anyone who's questioned their real usefulness, which is most of us. Bender is a professor at the University of Washington who was named one of Time magazine's most influential people in artificial intelligence, and Hanna is the director of research at the nonprofit Distributed AI Research Institute and a former member of the ethical AI team at Google.The explosion of ChatGPT in late 2022 kicked off a new hype cycle in AI. Hype, as the authors define it, is the "aggrandizement" of technology that you are convinced you need to buy or invest in "lest you miss out on entertainment or pleasure, monetary reward, return on investment, or market share." But it's not the first time, nor likely the last, that scholars, government leaders and regular people have been intrigued and worried by the idea of machine learning and AI.Bender and Hanna trace the roots of machine learning back to the 1950s, to when mathematician John McCarthy coined the term artificial intelligence. It was in an era when the United States was looking to fund projects that would help the country gain any kind of edge on the Soviets militarily, ideologically and technologically. "It didn't spring whole cloth out of Zeus's head or anything. This has a longer history," Hanna said in an interview with CNET. "It's certainly not the first hype cycle with, quote, unquote, AI."Today's hype cycle is propelled by the billions of dollars of venture capital investment into startups like OpenAI and the tech giants like Meta, Google and Microsoft pouring billions of dollars into AI research and development. The result is clear, with all the newest phones, laptops and software updates drenched in AI-washing. And there are no signs that AI research and development will slow down, thanks in part to a growing motivation to beat China in AI development. Not the first hype cycle indeed.Of course, generative AI in 2025 is much more advanced than the Eliza psychotherapy chatbot that first enraptured scientists in the 1970s. Today's business leaders and workers are inundated with hype, with a heavy dose of FOMO and seemingly complex but often misused jargon. Listening to tech leaders and AI enthusiasts, it might seem like AI will take your job to save your company money. But the authors argue that neither is wholly likely, which is one reason why it's important to recognize and break through the hype.So how do we recognize AI hype? These are a few telltale signs, according to Bender and Hanna, that we share below. The authors outline more questions to ask and strategies for AI hype busting in their book, which is out now in the US.Watch out for language that humanizes AIAnthropomorphizing, or the process of giving an inanimate object human-like characteristics or qualities, is a big part of building AI hype. An example of this kind of language can be found when AI companies say their chatbots can now "see" and "think."These can be useful comparisons when trying to describe the ability of new object-identifying AI programs or deep-reasoning AI models, but they can also be misleading. AI chatbots aren't capable of seeing of thinking because they don't have brains. 
Even the idea of neural nets, Hanna noted in our interview and in the book, is based on human understanding of neurons from the 1950s, not actually how neurons work, but it can fool us into believing there's a brain behind the machine.That belief is something we're predisposed to because of how we as humans process language. We're conditioned to imagine that there is a mind behind the text we see, even when we know it's generated by AI, Bender said. "We interpret language by developing a model in our minds of who the speaker was," Bender added.In these models, we use our knowledge of the person speaking to create meaning, not just using the meaning of the words they say. "So when we encounter synthetic text extruded from something like ChatGPT, we're going to do the same thing," Bender said. "And it is very hard to remind ourselves that the mind isn't there. It's just a construct that we have produced."The authors argue that part of why AI companies try to convince us their products are human-like is that this sets the foreground for them to convince us that AI can replace humans, whether it's at work or as creators. It's compelling for us to believe that AI could be the silver bullet fix to complicated problems in critical industries like health care and government services.But more often than not, the authors argue, AI isn't bring used to fix anything. AI is sold with the goal of efficiency, but AI services end up replacing qualified workers with black box machines that need copious amounts of babysitting from underpaid contract or gig workers. As Hanna put it in our interview, "AI is not going to take your job, but it will make your job shittier."Be dubious of the phrase 'super intelligence'If a human can't do something, you should be wary of claims that an AI can do it. "Superhuman intelligence, or super intelligence, is a very dangerous turn of phrase, insofar as it thinks that some technology is going to make humans superfluous," Hanna said. In "certain domains, like pattern matching at scale, computers are quite good at that. But if there's an idea that there's going to be a superhuman poem, or a superhuman notion of research or doing science, that is clear hype." Bender added, "And we don't talk about airplanes as superhuman flyers or rulers as superhuman measurers, it seems to be only in this AI space that that comes up."The idea of AI "super intelligence" comes up often when people talk about artificial general intelligence. Many CEOs struggle to define what exactly AGI is, but it's essentially AI's most advanced form, potentially capable of making decisions and handling complex tasks. There's still no evidence we're anywhere near a future enabled by AGI, but it's a popular buzzword.Many of these future-looking statements from AI leaders borrow tropes from science fiction. Both boosters and doomers — how Bender and Hanna describe AI enthusiasts and those worried about the potential for harm — rely on sci-fi scenarios. The boosters imagine an AI-powered futuristic society. The doomers bemoan a future where AI robots take over the world and wipe out humanity.The connecting thread, according to the authors, is an unshakable belief that AI is smarter than humans and inevitable. "One of the things that we see a lot in the discourse is this idea that the future is fixed, and it's just a question of how fast we get there," Bender said. "And then there's this claim that this particular technology is a step on that path, and it's all marketing. 
It is helpful to be able to see behind it."Part of why AI is so popular is that an autonomous functional AI assistant would mean AI companies are fulfilling their promises of world-changing innovation to their investors. Planning for that future — whether it's a utopia or dystopia — keeps investors looking forward as the companies burn through billions of dollars and admit they'll miss their carbon emission goals. For better or worse, life is not science fiction. Whenever you see someone claiming their AI product is straight out of a movie, it's a good sign to approach with skepticism. Ask what goes in and how outputs are evaluatedOne of the easiest ways to see through AI marketing fluff is to look and see whether the company is disclosing how it operates. Many AI companies won't tell you what content is used to train their models. But they usually disclose what the company does with your data and sometimes brag about how their models stack up against competitors. That's where you should start looking, typically in their privacy policies.One of the top complaints and concerns from creators is how AI models are trained. There are many lawsuits over alleged copyright infringement, and there are a lot of concerns over bias in AI chatbots and their capacity for harm. "If you wanted to create a system that is designed to move things forward rather than reproduce the oppressions of the past, you would have to start by curating your data," Bender said. Instead, AI companies are grabbing "everything that wasn't nailed down on the internet," Hanna said.If you're hearing about an AI product for the first time, one thing in particular to look out for is any kind of statistic that highlights its effectiveness. Like many other researchers, Bender and Hanna have called out that a finding with no citation is a red flag. "Anytime someone is selling you something but not giving you access to how it was evaluated, you are on thin ice," Bender said.It can be frustrating and disappointing when AI companies don't disclose certain information about how their AI products work and how they were developed. But recognizing those holes in their sales pitch can help deflate hype, even though it would be better to have the information. For more, check out our full ChatGPT glossary and how to turn off Apple Intelligence.
    Source: www.cnet.com