• Hey, beautiful souls! Today, I want to shine a light on a topic that brings us hope and reminds us of the strength of justice. Recently, NSO Group, the infamous Israeli company known for its spyware Pegasus, faced a monumental verdict! They have been ordered to pay over $167 million in punitive damages to Meta for their unethical hacking campaign against WhatsApp users. Can you believe it? This is a HUGE win for all of us who value privacy and security!

    For five long years, this legal battle unfolded, shedding light on the dark practices of this surveillance-tech firm. It’s a reminder that no matter how big the challenge, truth and justice will always prevail in the end. This ruling not only holds NSO Group accountable for their actions but also sends a powerful message to others in the tech industry. We must prioritize ethical practices and protect the rights of users around the globe!

    Let’s take a moment to celebrate the tireless efforts of those who fought for this victory! Every single person involved in this battle, from the lawyers to the advocates, showed that perseverance and belief in justice can lead to monumental change. Their dedication inspires us all to stand up for what is right, no matter how daunting the challenge may seem.

    This ruling is not just about money; it's about restoring faith in our digital world. It reminds us that we have the power to demand accountability from those who misuse technology. We can create a safer and more secure environment for everyone, where our privacy is respected, and our voices are heard!

    Let's keep that optimism alive! Use this moment as motivation to advocate for ethical tech practices, support companies that prioritize user security, and raise awareness about digital rights. Together, we can build a brighter future, where technology serves humanity positively and constructively!

    In conclusion, let’s celebrate this victory and continue to push for a world where every individual can feel safe in their digital interactions. Remember, every challenge is an opportunity for growth! Keep shining, keep fighting, and let your voice be heard! The future is bright, and it’s in our hands!

    #JusticeForUsers #EthicalTech #PrivacyMatters #DigitalRights #NSOGroup #Inspiration
    www.muyseguridad.net
    NSO Group, the Israeli company known for the Pegasus spyware, will have to pay more than $167 million in punitive damages to Meta over a hacking and malware-distribution campaign against WhatsApp users. That was the finding of…
  • Meta and Yandex Spying on Android Users Through Localhost Ports: The Dying State of Online Privacy

    Published: June 4, 2025

    Key Takeaways

    Researchers have caught Meta and Yandex secretly listening on localhost ports and using them to transfer sensitive data from Android devices.
    The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it.
    A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation.

    Wake up, Android folks! A new privacy scandal has landed on your doorstep. According to a new report led by researchers at Radboud University, Meta and Yandex have been listening on localhost ports to link your web browsing data with your identity and collect personal information without your consent.
    The companies use the Meta Pixel and Yandex Metrica scripts, embedded on 5.8 million and 3 million websites respectively, to connect with their native apps on Android devices through localhost sockets.
    This creates a communication path between the cookies in your browser and the local apps, establishing a channel for transferring personal information off your device.
    And if you think your browser’s incognito mode or a VPN can protect you, you are mistaken: Zuckerberg’s latest data-harvesting method can’t be defeated by tweaking privacy or cookie settings, browsing incognito, or routing your traffic through a VPN.
    How Does It Work?
    Here’s the method used by Meta to spy on Android devices:

    As many as 22% of the top 1 million websites contain Meta Pixel – a tracking code that helps website owners measure ad performance and track user behaviour.
    When Meta Pixel loads, it creates a special cookie called _fbp, which is supposed to be a first-party cookie. That means no third party, including Meta’s own apps, should have access to it. The _fbp cookie identifies your browser whenever you visit a website, meaning it can reveal which person is accessing which websites.
    However, Meta, being Meta, found a loophole. Whenever you run Facebook or Instagram on your Android device, the apps can quietly open listening ports in the background – specifically a TCP port (12387 or 12388) and a UDP port (the first unoccupied port in 12580-12585).
    Whenever you load a website in your browser, the Meta Pixel uses WebRTC with SDP munging, which essentially hides the _fbp cookie value inside the SDP message before it is transmitted to your phone’s localhost.
    Since Facebook and Instagram are already listening on these ports, they receive the _fbp cookie value and can easily tie your identity to the website you’re visiting. Remember, Facebook and Instagram already have your identification details since you’re always logged in on these platforms.

    The report also says that Meta can link all the _fbp values received from various websites to your ID. Simply put, Meta knows which person is viewing what set of websites.
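    To make this concrete, here is a minimal, illustrative Python sketch of the loopback channel the report describes: an “app” process holds a listening socket on 127.0.0.1, and a “web” process hands it the first-party cookie value. This is a plain-TCP stand-in written for this article, not Meta’s actual code (the report says the real channel used WebRTC with SDP munging), and port 12387 is simply one of the TCP ports named above.

```python
# Illustrative stand-in for the loopback channel described in the report.
# NOT Meta's code: the real mechanism reportedly used WebRTC with SDP munging;
# a plain TCP socket is used here only to show why the channel works at all.
import socket
import threading

PORT = 12387  # one of the TCP ports the report attributes to the Meta apps
ready = threading.Event()

def app_side_listener():
    """Stand-in for a native app holding a localhost listening socket."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))  # loopback only: the traffic never
        srv.listen(1)                  # leaves the device, so a VPN sees nothing
        ready.set()
        conn, _ = srv.accept()
        with conn:
            fbp = conn.recv(1024).decode()
            # The app already knows who is logged in; pairing that identity
            # with the browser cookie deanonymizes the browsing session.
            print(f"app side received browser cookie: {fbp}")

def web_side_sender(fbp_cookie: str) -> None:
    """Stand-in for in-page script code relaying the first-party _fbp cookie."""
    with socket.create_connection(("127.0.0.1", PORT)) as c:
        c.sendall(fbp_cookie.encode())

t = threading.Thread(target=app_side_listener)
t.start()
ready.wait()
web_side_sender("_fbp=fb.1.1717500000000.123456789")  # made-up cookie value
t.join()
```

    Because the whole exchange happens on the loopback interface, nothing ever crosses the network, which is exactly why VPNs and incognito mode have nothing to intercept.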
    Yandex also uses a similar method to harvest your personal data.

    Whenever you open a Yandex app, such as Yandex Maps, Yandex Browser, Yandex Search, or Navigator, it opens up ports like 29009, 30102, 29010, and 30103 on your phone. 
    When you visit a website that contains the Yandex Metrica Script, Yandex’s version of Meta Pixel, the script sends requests to Yandex servers containing obfuscated parameters. 
    These parameters are then relayed over HTTP and HTTPS to the local host – addressed either directly as 127.0.0.1 or via the yandexmetrica.com domain, which quietly resolves to 127.0.0.1.
    The Yandex Metrica SDK inside the Yandex apps receives these parameters and responds with device identifiers, such as the Android Advertising ID, UUIDs, or device fingerprints. The entire message is encrypted to hide its contents.
    The Yandex Metrica script receives this info and sends it back to Yandex’s servers. Just like Meta, Yandex can then tie your website activity to the device information shared by the SDK.
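    Again as an illustration rather than Yandex’s real protocol, the sketch below mimics the described round trip using only Python’s standard library: a stub “Metrica SDK” answers HTTP on a loopback port, and a stub “in-page script” sends obfuscated parameters and gets back device identifiers to forward. Port 30102 is one of the ports listed above; every payload is a placeholder invented for the demo.

```python
# Illustrative stand-in for the Yandex-style HTTP-over-loopback round trip.
# All payloads are placeholders; the real SDK reportedly returns encrypted
# device identifiers (Advertising ID, UUIDs, fingerprints).
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

PORT = 30102  # one of the ports the report attributes to the Yandex apps

class MetricaStub(BaseHTTPRequestHandler):
    """Stand-in for the Metrica SDK listening inside a native app."""
    def do_GET(self):
        body = b"device_id=PLACEHOLDER-AAID"  # placeholder identifier
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence the default per-request logging

server = HTTPServer(("127.0.0.1", PORT), MetricaStub)  # bind before serving
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for the in-page script: send obfuscated parameters to the local
# port and relay whatever identifiers come back to the tracking servers.
with urlopen(f"http://127.0.0.1:{PORT}/?params=obfuscated") as resp:
    identifiers = resp.read().decode()
print(f"the script would forward to Yandex servers: {identifiers}")
server.shutdown()
```

    The yandexmetrica.com trick changes nothing in this picture: since the domain resolves to 127.0.0.1, a request to it lands on the same local socket while looking like ordinary third-party traffic from the page’s point of view.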

    Meta’s Infamous History with Privacy Norms
    This is nothing new or unthinkable for Meta. The Mark Zuckerberg-led social media giant has a long history of such privacy violations.
    For instance, in 2024, the company was accused of collecting biometric data from Texas users without their express consent. The company settled the lawsuit by paying $1.4B.
    Another of the most famous cases was the Cambridge Analytica scandal in 2018, where a political consulting firm accessed the private data of 87 million Facebook users without consent. The FTC fined Meta $5B for privacy violations, alongside a $100M settlement with the US Securities and Exchange Commission.
    Meta Pixel has also come under scrutiny before, when it was accused of collecting sensitive health information from hospital websites. In another case dating back to 2012, Meta was accused of tracking users even after they logged out of their Facebook accounts. In that case, Meta paid $90M and promised to delete the collected data.
    In 2024, South Korea also fined Meta $15M for inappropriately collecting personal data, such as sexual orientation and political beliefs, of 980K users.
    In September 2024, Meta was fined $101.6M by the Irish Data Protection Commission for inadvertently storing user passwords in plain text in a way that let employees search for them. The passwords were not encrypted and were essentially leaked internally.
    So, the latest scandal isn’t entirely out of character for Meta. It has been finding ways to collect your data ever since its incorporation, and it seems like it will continue to do so, regardless of the regulations and safeguards in place.
    That said, Meta’s recent tracking method is insanely dangerous because there’s no safeguard around it. Even if you visit websites in incognito mode or use a VPN, Meta Pixel can still track your activities. 
    The past lawsuits also reveal a clear pattern: Meta doesn’t fight a lawsuit to the end to try to win it. It either accepts the fine or settles with monetary compensation. This effectively shows that it passively accepts, and even ‘owns’, the illegitimate tracking methods it has been using for decades. It’s quite possible that top management views these fines and penalties simply as a cost of collecting data.
    Meta’s Timid Response
    Meta’s response claims that there’s some ‘miscommunication’ regarding Google policies. However, the method used in the aforementioned tracking scandal isn’t something that can simply happen due to ‘faulty design’ or miscommunication. 

    We are in discussions with Google to address a potential miscommunication regarding the application of their policies – Meta Spokesperson

    This kind of unethical tracking method has to be deliberately designed by engineers for it to work perfectly on such a large scale. While Meta is still trying to underplay the situation, it has paused the ‘feature’ (yep, that’s what they’re calling it) for now. The report also notes that, as of June 3, Facebook and Instagram are no longer actively listening on these ports.
    Here’s what will possibly happen next:

    A lawsuit may be filed based on the report.
    An investigating committee might be formed to question the matter.
    The company will come up with lame excuses, such as misinterpretation or miscommunication of policy guidelines.
    Meta will eventually settle the lawsuit or bear the fine with pride, like it has always done. 

    The regulatory authorities are apparently chasing a rat that finds new holes to hide in every day. Companies like Meta and Yandex seem to be one step ahead of these regulations and have mastered the art of finding loopholes.
    More than legislative technicalities, it’s the ethics of these companies that incidents like this lay bare. The intent of these regulations is to protect personal information, and the fact that Meta and Yandex blatantly circumvent their spirit shows the horrific state of capitalism these corporations operate in.

    Krishi is a seasoned tech journalist with over four years of experience writing about PC hardware, consumer technology, and artificial intelligence.  Clarity and accessibility are at the core of Krishi’s writing style.
    He believes technology writing should empower readers—not confuse them—and he’s committed to ensuring his content is always easy to understand without sacrificing accuracy or depth.
    Over the years, Krishi has contributed to some of the most reputable names in the industry, including Techopedia, TechRadar, and Tom’s Guide. A man of many talents, Krishi has also proven his mettle as a crypto writer, tackling complex topics with both ease and zeal. His work spans various formats—from in-depth explainers and news coverage to feature pieces and buying guides. 
    Behind the scenes, Krishi operates from a dual-monitor setup (including a 29-inch LG UltraWide) that’s always buzzing with news feeds, technical documentation, and research notes, as well as the occasional gaming sessions that keep him fresh.
    Krishi thrives on staying current, always ready to dive into the latest announcements, industry shifts, and their far-reaching impacts.  When he's not deep into research on the latest PC hardware news, Krishi would love to chat with you about day trading and the financial markets—oh! And cricket, as well.

    #meta #yandex #spying #android #users
    techreport.com
  • What AI’s impact on individuals means for the health workforce and industry

    Transcript    
    PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”      
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.
    The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak.
    You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues.
    So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.  
    To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar.
    Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence.
    Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics.
    Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.
    Here is my interview with Ethan Mollick:
    LEE: Ethan, welcome.
    ETHAN MOLLICK: So happy to be here, thank you.
    LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you’ve become one of the leading experts on AI?
    MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it.
    And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst into the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field.
    And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.
    LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?
    MOLLICK: So, I mean, I think the bigger picture is less about me than about two things this tells us about the state of AI right now.
    One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things.
    And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy and diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever.
    So one part of this is the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay. So we’re kind of all in the same boat here, which is a very unusual space for a new technology.
    LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty?
    MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect.
    So I don’t think we should be surprised that business schools matter on this because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated.
    LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.
    MOLLICK: Yeah. Those are great questions. So first of all, when I was at the media lab, that was pre-the current boom in sort of, you know, even in the old-school machine learning kind of space. So there was a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system.
    There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing?
    The fact that it was just like brute force over the corpus of all human knowledge turns out to be a little bit of like a, you know, it’s a miracle and a little bit of a disappointment in some ways compared to how elaborate some of this was. So, you know, I think that, that was sort of my first encounters in sort of the intellectual way.
    The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021 that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.
    LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.
    MOLLICK: Yes.
    LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?
    MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course, computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right.
    I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use because the first use case that everyone puts AI to, it fails at because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?”
    So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite great. I mean, the price of GPT-4 level intelligence from, you know, when it was released has dropped 99.97% at this point, right.
    LEE: Yes. Mm-hmm.
    MOLLICK: I mean, I could run a GPT-4 class system basically on my phone. Microsoft’s releasing things that can almost run on like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either.
    LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?
    MOLLICK: I mean, you know, it’s not even dread as much as like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered.
    You know, as academics, we’re a little used to dead ends, right, and, like, you know, sometimes getting lapped. But the idea that entire fields are hitting that wall. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete.
    What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one.
    Now we’ve built a system that can build teaching simulations on demand by you talking to it with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet.
    LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this. 
    MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where like, ughhhh, but then there’s joy and basically like also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills.
    Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of our stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely.
    But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety.
    LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or request for prior authorization for some reimbursement to an insurance company.
    And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?
    MOLLICK: So I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there’s lots of performance gains to be gained, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about systems that fit along with this, right.
    So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there’s incredible amounts of individual innovation in productivity and performance improvements in this field, like very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains.
    And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 and 40% performance improvements, like not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result.
    LEE: You know, where are those productivity gains going, then, when you get to the organizational level?
    MOLLICK: Well, they’re dying for a few reasons. One is, there’s a tendency for individual contributors to underestimate the power of management, right.
    Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage US firms have over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal.
When things get stuck at the individual level, right, you can’t start bringing them up to the level of how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen.
    So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons.
    And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves.
So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change.
    LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?
MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk on them. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again.
What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workday tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field.
    So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab.
So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D on how to get AI to work, not just in direct patient care, right, but also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill?
    And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.
    LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones.
    And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ?
    MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI because it’s cheap, right? You could overrule it or whatever you want, but like not asking seems foolish.
I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space.
    But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say when should you use AI and when not. Are there blind spots? What are those things?
And I worry that, like, to me, that would be the crash project I’d be invoking because I’m doing the same thing in education, which is: this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to.
    So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that.
    LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?
MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting in, you know, any one case is probably not that high.
    A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but like magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing.
    So I worry when people say teach AI skills. No one’s been able to articulate to me as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right.
I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam dunk “this is what you learn in the world of AI” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboard work, getting them to … there’s, like, four things I could teach you about AI, and two of them are already starting to disappear.
    But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s like, that’s it as far as the research telling you what to do, and the rest is building intuition.
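To make those four principles concrete, here is a minimal sketch of a prompt assembled from them. Everything in it (the clinical scenario, the wording, the formatting hints) is invented for illustration; it is one way of applying Mollick’s list, not a canonical template.

```python
# A sketch of a prompt built from the four principles: be direct, provide
# context, give step-by-step directions, and show good/bad examples.
# The scenario and wording below are invented for illustration.

direct_request = "Summarize this patient encounter as a SOAP note."

context = (
    "Act as a primary care physician. Visit transcript: 54-year-old with "
    "two weeks of productive cough, no fever, history of asthma, uses an "
    "albuterol inhaler as needed."
)

steps = (
    "Work step by step: 1) extract the subjective complaints, 2) list the "
    "objective findings, 3) state an assessment, 4) propose a plan."
)

examples = (
    "Good output: concise S/O/A/P headers, one short section each. "
    "Bad output: a single long narrative paragraph with no headers."
)

prompt = "\n\n".join([direct_request, context, steps, examples])
print(prompt)  # paste into your chat tool of choice, or send via an API
```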
LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.”

MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.

LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?
    MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems.
    So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of, in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is.
But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …
    LEE: Yeah, that’s a great one.
MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos that I would recommend.
    Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But like all we can offer are hints in some ways. Like there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works.
    LEE: Yeah.
    MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right.
LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing, for me at least, is how the accrediting bodies, which are extremely important in US medical schools, should think about what’s necessary here.
Besides the things that you’ve … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what’s the one thing you would want them to really internalize?
MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, where it’s, “Let’s see how this works.” Because the things that usually make medical technologies hard to do, which is, like, unclear results and limited, you know, expensive use cases, mean they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine.
I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast.
    So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right.
We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.
    LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer.
    MOLLICK: Yes. Yes.
LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall.
    But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really just even in the title of the book tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?
MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, it absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well, is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person even though it isn’t, right.
There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right.
LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?
    MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who like may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people.
So, like, it’s hard to imagine that in five to 10 years medicine will be so upended that, even if AI were better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine.
    But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference. We’re already seeing that for common medical questions in enough randomized controlled trials that, you know, best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point.
    Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not.
    Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get?
    LEE: Yeah.
    MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes it easier to assume the future, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help for everything.
Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right.
    And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How does that happen and how does that affect us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.
    LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining.
    MOLLICK: Thank you.  
    I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work.
    One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does.
    In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.
The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
    Here’s now my interview with Azeem Azhar:
    LEE: Azeem, welcome.
    AZEEM AZHAR: Peter, thank you so much for having me. 
    LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before.
    And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?
    AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …
    LEE: Oh wow.
    AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started.
    And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.
    LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?
    AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed.
Now, I’d been aware of GPT-3 and GPT-2, which I played around with, and with BERT, the original transformer paper about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language that really made me think we’d crossed into a new domain. We’d gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th.
And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold.
    LEE: And who’s the we that you were experimenting with?
AZHAR: So I have a team of four who support me. They’re mostly researchers of different types. I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.
LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance and manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and the ways, you know, efficiencies are found.
    And then you, sort of, balance that with risks, things that can and do go wrong. And so as you take that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?
AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is more broad than that.
So I think we can start to break it down, and, you know, where we’re seeing things start with generative AI will be the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? They’re on the tablet computers, and they’re scribing away.
    And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload.
    And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help.
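As a rough illustration of that “five times a week” heuristic, here is a minimal sketch of what such an automation might look like, assuming the OpenAI Python client; the model name, the prompts, and the draft_referral_letter helper are all placeholders I’ve made up, and the clinician stays in the loop to review the output.

```python
# A sketch of the "do it five times a week -> automate it" heuristic:
# wrap a repeated drafting task in a function, keeping a human in the loop.
# Assumes the OpenAI Python client (pip install openai) with an API key in
# the environment; the model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

def draft_referral_letter(patient_summary: str) -> str:
    """Draft a referral letter from a short summary, for clinician review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you use
        messages=[
            {
                "role": "system",
                "content": "You draft concise referral letters for a "
                           "clinician to review and edit before sending.",
            },
            {"role": "user", "content": patient_summary},
        ],
    )
    return response.choices[0].message.content

draft = draft_referral_letter(
    "54-year-old, persistent productive cough, please refer to pulmonology."
)
print(draft)  # the human stays in the loop: review and edit before it goes out
```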
    So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced.
    So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.
    LEE: Yeah.
    AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.
    LEE: Yup.
    AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to.
And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathways. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on.
    It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector.
And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, are a bit … I don’t know if we’re as grateful as we should be for our clinicians, who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout.
So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.
    LEE: I love how you break that down. And I want to press on a couple of things.
You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?
    AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example.
    In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different.
I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, “Well, no one ever does this,” and I said, “Well, you know, the thing is that I kind of just want to get this thing to go away.”
    LEE: Yeah.
    AZHAR: And I think that that’s why medicine is and healthcare is so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.
    LEE: Right. Yeah.
    AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.
    LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution.
    Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons.
    And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?
    AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice.
I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors.
    I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.
    LEE: Yeah.
    AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.
    LEE: Right.
AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.
    LEE: Yeah, one last question while we’re still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity, since obviously costs are, in most of the developed world, a huge, huge problem?
AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before.
    We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?
    LEE: Yeah, yeah.
AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed, by orders of magnitude, the productivity of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system.
    So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …
    LEE: Right.
    AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for.
And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …
    LEE: Yup.
    AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that.
    So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible.
And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And the latter was a cardiac care unit where you couldn’t get enough heart surgeons.
    LEE: Yeah, yep.
    AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.
LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, someone that is certified in some way, licensed to do that. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?
AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold.
If I come back to my example of prescribing Ventolin: it’s really unclear to me that Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 or 12 years of medical training, or why it couldn’t be prescribed by an algorithm or an AI system.
    LEE: Right. Yep. Yep.
    AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.
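One way to picture that rising line is as an explicit routing rule: decisions below some risk threshold are delegated, and everything above it requires a human who takes responsibility. The sketch below is a toy model of that idea only; the risk scores and thresholds are invented, and a real framework would also involve liability, auditing, and clinical validation.

```python
# A toy model of the delegation line: low-risk decisions are automated,
# higher-risk ones are routed to a human with legal responsibility.
# Risk scores and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high stakes), however estimated

def route(decision: Decision, autonomy_threshold: float) -> str:
    """Route a decision to the AI or to a human signer-off."""
    if decision.risk_score <= autonomy_threshold:
        return f"AUTO: {decision.description}"
    return f"HUMAN REVIEW: {decision.description}"

repeat_rx = Decision("repeat Ventolin prescription", risk_score=0.2)

# As trust and evidence accrue, the threshold rises and more is delegated.
for threshold in (0.1, 0.3, 0.5):
    print(f"threshold={threshold}: {route(repeat_rx, threshold)}")
```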
    LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.
    AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician.
In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart.
I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” has obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.
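A printed flowchart like that is, in effect, a small decision tree, which is part of why it is so easy to imagine software doing the same job with more nuance. The sketch below encodes a toy version of such a protocol; every question, threshold, and action in it is invented for illustration and is not medical guidance.

```python
# A toy encoding of a printed "rescue pack"-style protocol as a decision
# tree. All questions, thresholds, and actions are invented for
# illustration; a real protocol comes from a clinician, not from code.
def rescue_pack_protocol(peak_flow_pct: float, worsening_days: int,
                         discolored_sputum: bool) -> str:
    if peak_flow_pct < 50:
        return "Seek urgent care now."
    if peak_flow_pct < 75 and worsening_days >= 2:
        action = "Start the steroid course per the protocol."
        if discolored_sputum:
            action += " Start the antibiotic as well."
        return action
    return "Continue the reliever inhaler and keep monitoring."

print(rescue_pack_protocol(peak_flow_pct=70, worsening_days=3,
                           discolored_sputum=False))
```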
LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time.

AZHAR: Yeah, yeah. Thank god for Clippy. Yes.
    LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that.
    And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.
    AZHAR: Right.
    LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?
    AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through.
    You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience.
So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience. So I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world is we can share that expertise, that knowledge, that experience very, very quickly.
    So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what is it to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots.
    LEE: Yes.
    AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval.
I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s … hearing what I’m saying, but he’s actually pulling up the real data, going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be.

LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers always want to know is, you know, what’s the truth?
    AZHAR: Right.
    LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.
    AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week.
    And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.
    LEE: Yeah.
AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right.

LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.
AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you’ve probably gathered a lot of extreme tail-distribution data by reading the r/biohackers subreddit …
    LEE: Yes.
    AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.
    LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this.
    And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?
    AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.
LEE: OK.

AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches.
    And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.
    LEE: Yes.
    AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety.
And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us get ahead of problems before they arise. And again, I use my experience for things that I’ve tracked really, really well. I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive action. So I think that that will become progressively more common, and that sense that we will know our baselines.
I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive; it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.
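The “knowing your baseline” idea can be made concrete with even very simple statistics: compute a personal baseline from a stream of readings and flag days that deviate from it. The sketch below uses synthetic resting-heart-rate numbers and an arbitrary two-standard-deviation rule, purely to illustrate the shape of the computation.

```python
# A minimal sketch of baseline tracking: compute a personal baseline from
# wearable readings and flag readings that deviate from it. The numbers and
# the two-standard-deviation rule are arbitrary, for illustration only.
import statistics

resting_hr = [58, 57, 59, 60, 58, 57, 61, 59, 58, 72]  # synthetic data

baseline = statistics.mean(resting_hr[:-1])   # everything before today
spread = statistics.stdev(resting_hr[:-1])
today = resting_hr[-1]

if abs(today - baseline) > 2 * spread:
    print(f"Today's resting HR {today} is outside your baseline "
          f"{baseline:.1f} +/- {2 * spread:.1f}; worth a closer look.")
else:
    print("Within your normal range.")
```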
    LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said.
Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues that you tend to see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much.
    AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.  
    I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
    In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.  
    Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.
Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.
    Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference.
But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.

A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in.
    Until next time.
    #what #ais #impact #individuals #means
    What AI’s impact on individuals means for the health workforce and industry
    Transcript     PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.”       This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.     The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society.Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. 
LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. So to get started, how and why did it happen that you've become one of the leading experts on AI?

MOLLICK: It's actually an interesting story. I've been AI-adjacent my whole career. When I was doing my PhD at MIT, I worked with Marvin Minsky and the MIT Media Lab's AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn't understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially on education and AI and performance, I became sort of a go-to person in the field. And once you're in a field where nobody knows what's going on and we're all making it up as we go along—I thought it's funny that you led with the idea that you had a couple of months' head start for GPT-4, right. Like, that's all we have at this point, is a few months' head start. So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question.

LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been?

MOLLICK: So, I mean, I think the bigger picture is less about me than about two things it tells us about the state of AI right now. One, nobody really knows what's going on, right. So in a lot of ways, if it wasn't for your work, Peter, like, I don't think people would be thinking about medicine as much, because these systems weren't built for medicine. They weren't built to change education. They weren't built to write memos. They, like, they weren't built to do any of these things. They weren't really built to do anything in particular. It turns out they're just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don't think about the fact that it expresses high empathy. They don't think about its accuracy in diagnosis or where it's inaccurate. They don't think about how it's changing education forever. So one part of this is that the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they're not thinking about this. And the fact that a few months' head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren't sitting on projects for two years and then releasing them. Months after a project is complete, or sooner, it's out the door. Like, there's very little delay. So we're kind of all in the same boat here, which is a very unusual space for a new technology.

LEE: And I, you know, explained that you're at Wharton.
Are you an odd fit as a faculty member at Wharton, or is this a trend now, even in business schools, that AI experts are becoming key members of the faculty?

MOLLICK: I mean, it's a little of both, right. It's faculty, so everybody does everything. I'm a professor of innovation and entrepreneurship. I've launched startups before, and working on that and education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of problems? So medicine's always been very central to that, right. A lot of people in my MBA class have been MDs, either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don't think that's that bad a fit. But I also think this is a general-purpose technology; it's going to touch everything. The focus of this is medicine, but Microsoft does far more than medicine, right. It's … there's transformation happening in literally every field, in every country. This is a widespread effect. So I don't think we should be surprised that business schools matter on this, because we care about management. There's a long tradition of management and medicine going together. There's actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not such foreign concepts, especially as medicine continues to get more complicated.

LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, um, education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I've been doing that, both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky's lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI.

MOLLICK: Yeah. Those are great questions. So first of all, when I was at the Media Lab, that was before the current boom, sort of, you know, even in the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There was a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, like, you know, a miracle and a little bit of a disappointment in some ways, compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter, in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon. And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level.
That's really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind.

LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention.

MOLLICK: Yes.

LEE: You know, it's remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there is, even for me early on, a sense of denial and skepticism. Did you have those initially at any point?

MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn't see where this was going. It didn't seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there's a difference in moving into a world of generative AI. I think a lot of people just thought that's what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, with very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they're not always intuitive to use, because the first use case that everyone puts AI to, it fails at, because they use it like Google or some other use case. And then it's genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn't changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, "Oh my god, what does it mean that a machine could think—apparently think—like a person?" So, I mean, I see resistance now. I saw resistance then. And then, on top of all of that, there's the fact that the curve of the technology is quite steep. I mean, the price of GPT-4-level intelligence has dropped 99.97% since, you know, it was released, right.

LEE: Yes. Mm-hmm.

MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft's releasing things that can almost run on, like, you know, like it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don't think people have a sense of how fast the trajectory is moving either.

LEE: Yeah, you know, there's something that I think about often. There is this existential dread, or, will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time. You know, if you were working, let's say, in Bayesian reasoning or in traditional, let's say, Gaussian mixture model-based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I've dedicated my life to. And there is this really difficult period where you have to cope with that.
And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever?

MOLLICK: I mean, you know, it's not even dread as much as, like, you know, Tyler Cowen wrote that it's impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was that he was very good at writing limericks for birthday cards. He'd write these limericks. Everyone was always amused by them. And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we're a little used to dead ends, right, and, like, you know, sometimes getting lapped. But the idea that entire fields are hitting that way. Like, in medicine, there are a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it's like, the fact that this brute force technology is good enough to solve so many problems is weird, right. And it's not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we were very proud of them. It took years of work to build one. Now we've built a system that can build teaching simulations on demand, by you talking to it, with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there's a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven't had that displacement, I think that's a good indicator that you haven't really faced AI yet.

LEE: Yeah, what's so interesting, just listening to you, is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that's also kind of an interesting aspect of all of this.

MOLLICK: Yeah, I mean, I think there's something on the other side, right. But, like, I can't say that I haven't had moments where, like, ughhhh, but then there's joy and, basically, like, also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you're a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That's not something you're all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it. And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it's doing that you wanted to do. But it's much more uplifting to be, like, I don't have to do this stuff I'm bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you're best at, you're still better than AI.
And I think it's an ongoing question how long that lasts. But for right now, like, you're not going to say, OK, AI replaces me entirely in my job in medicine. It's very unlikely. But you will say it replaces these 17 things I'm bad at, but I never liked that anyway. So it's a period of both excitement and a little anxiety.

LEE: Yeah, I'm going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let's get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or a request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of the workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs?

MOLLICK: So, I mean, this is a case where I think we're facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there are lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it's also about the systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they're for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it the 10-year process that IT in medicine often feels like? Like, so we're in this really interesting period where there are incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but we're not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We're seeing the same kind of thing in nonmedical problems, which is, you know, we've got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn't capture it; the system doesn't capture it. Because the individuals are doing their own work and the systems don't have the ability to, kind of, learn or adapt as a result.

LEE: You know, where are those productivity gains going, then, when you get to the organizational level?

MOLLICK: Well, they're dying for a few reasons. One is, there's a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40% of the advantage of US firms over firms in other countries, as far as we can tell, has to do with management ability. Like, management is a big deal. Organizing is a big deal.
Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can't start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They're actually hiding their use for lots of reasons. And it's a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, are actually clinicians themselves, because they're experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don't want to wait for, you know, a med tech company to figure that out and then sell it back to you when it can be done by the physicians themselves. So we're just not used to a period where everybody's innovating and where the management structure isn't in place to take advantage of that. And so we're seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don't even bother trying to make change.

LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI?

MOLLICK: So I think that you need to embrace the right kind of risk, right. We don't want to put risk on our patients … like, we don't want to put uninformed risk. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What's happened over the last 20 years or so has been organizations giving that up. Partially, that's a trend to focus on what you're good at and not try to do this other stuff. Partially, it's because it's outsourced now to software companies that, like, Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and support networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D on how to get AI to work, not just in direct patient care, right, but also, fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can't just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves.

LEE: So let's shift a little bit to the patient.
You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let's call it, consumerization of healthcare. So, just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or, from the consumer's perspective, what … ?

MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI, because it's cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there's a burning, almost, you know, moral imperative are … let's say, you know, I'm in Philadelphia, I'm a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I'm lucky. I'm well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I'm, you know, pretty well educated in this space. But most people on the planet don't have access to good medical care; they don't have good health. It feels like it's absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I'd be invoking, because I'm doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it's better than not getting, you know, access to the level of education people get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren't thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that's true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here's when to use it, here's when not to use it, we don't have a right to say, don't use this system. And I think, you know, we have to be exploring that.

LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching?

MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn't, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that, for most people under most circumstances, the value of prompting in, you know, any one case is probably not that high. A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills.
No one's been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there's value in learning a little bit about how the models work. There's value in working with these systems. A lot of it's just hands-on-keyboard kind of work. But, like, we don't have an easy slam-dunk "this is what you learn in the world of AI," because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it's a hard problem to ask about, like, what do you train someone on? I think getting people experience in hands-on-keyboards, getting them to … there are, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That's very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that's becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that's, like, that's it as far as the research telling you what to do, and the rest is building intuition.

LEE: I'm really impressed that you didn't give the answer, "Well, everyone should be teaching my book, Co-Intelligence."

MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize.

LEE: It's good to chuckle about that, but actually, I can't think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading?

MOLLICK: That's a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it's, sort of, like my Twitter feed, my online newsletter. I'm just trying to, kind of … in some ways, it's about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing and saying, like, this is a general-purpose technology; let's use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it had a good overview …

LEE: Yeah, that's a great one.

MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think Karpathy has some really nice videos that I would recommend. Like, on the medical side, I think the book that you did, if you're in medicine, you should read that. I think that that's very valuable. But, like, all we can offer are hints in some ways. Like, there isn't … if you're looking for the instruction manual, I think it can be very frustrating, because it's like you want the best practices and procedures laid out, and we cannot do that, right. That's not how a system like this works.

LEE: Yeah.
MOLLICK: It's not a person, but thinking about it like a person can be helpful, right.

LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it's been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what's necessary here. Besides the things that you've … the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME accrediting body, what's the one thing you would want them to really internalize?

MOLLICK: This is both a fast-moving and vital area. This can't be viewed like a usual change, which is, "Let's see how this works." Because the things that make medical technologies hard to do (unclear results; limited, you know, expensive use cases) mean they roll out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multibillion dollars of cost, and that takes a while to diffuse out. That's not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there's a minor point that I'd make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine, more so than other places, has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithm aversion, which is a well-understood problem in medicine, right. Which is, you have a tool that could tell you, as a radiologist, you know, the chance of this being cancer; you don't like it, you overrule it, right. We don't find algorithm aversion happening with LLMs in the same way. People actually enjoy using them because it's more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other form of AI, and your AI working group that is thinking about how to solve this problem is not the right people here.

LEE: You know, I think the world has been trained, because of the magic of web search, to view computers as question-answering machines. Ask a question, get an answer.

MOLLICK: Yes. Yes.

LEE: Write a query, get results. And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that's partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that's not the sweet spot. It's this sort of deeper interaction, more of a collaboration.
And I thought your use of the term Co-Intelligence, really just even in the title of the book, tried to capture this. When I think about education, it seems like that's the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea?

MOLLICK: I think that's very powerful. You know, we've been trained over so many years, both in using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that's the classic way you defeat the evil robot in Star Trek, right. "Love does not compute." Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, it absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you're absolutely right. And that's part of what I thought your book does get at really well, is, like, this is a different thing. It's also generally applicable. Again, the model in your head should be kind of like a person, even though it isn't, right. There are a lot of warnings and caveats to it, but if you start from a person, a smart person you're talking to, your mental model will be more accurate than a smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you'll get to understand, like, I totally don't trust it for this, but I absolutely trust it for that, right.

LEE: All right. So we're getting to the end of the time we have together. And so I'd just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens?

MOLLICK: OK, so first of all, let's acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they're part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it's hard to imagine that in five to 10 years medicine would be so upended that, even if AI was better than doctors at every single thing doctors do, we'd actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces, than in medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That's the difference. We're already seeing that for common medical questions in enough randomized controlled trials that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that's just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that's kind of where things are. So let's assume, right … you're asking two questions. One is, how good will AI get?

LEE: Yeah.
MOLLICK: And we don't know the answer to that question. I will tell you that your colleagues at Microsoft and, increasingly, the labs, the AI labs themselves, are all saying they think they'll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn't happen, that makes it easier to predict the future, but let's just assume that that's the case. I think medicine starts to change with the idea that people feel obligated to use this to help with everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I think that I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there's a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How that happens, and how that affects us, is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it.

LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we've had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it's in education or in delivery, a lot to think about. So I really appreciate you joining.

MOLLICK: Thank you.

I'm a computing researcher who works with people who are right in the middle of today's bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it's all about. And so I think one of Ethan's superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That's why I rarely miss an opportunity to read up on his latest work.

One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we've all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there's a question-answering application that is really popular called UpToDate. Doctors use it all the time. But generative AI systems like ChatGPT are different. There's therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI.

The other big takeaway for me was Ethan's point that while it's easy to see productivity gains from AI at the individual level, those same gains, at least today, don't often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI.
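Mollick's four prompting principles from the conversation above (be direct, provide as much context as possible, give step-by-step directions, and show good and bad examples of the output you want) are concrete enough to sketch in code. Below is a minimal, hypothetical illustration in Python of how a single prompt might combine all four; the compose_prompt helper, the clinical scenario, and all of the wording are invented for illustration and are not drawn from the episode or from any particular product's API.

```python
# A minimal sketch of Mollick's four prompting principles:
# (1) be direct, (2) provide context, (3) give step-by-step
# directions, (4) show good and bad examples of the output.
# The helper and the scenario below are hypothetical.

def compose_prompt(task: str, context: str, steps: list[str],
                   good_example: str, bad_example: str) -> str:
    """Assemble one prompt string from the four ingredients."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{task}\n\n"                                    # 1) direct task statement
        f"Context:\n{context}\n\n"                       # 2) relevant background
        f"Follow these steps:\n{numbered}\n\n"           # 3) step-by-step directions
        f"Good example of output:\n{good_example}\n\n"   # 4) paired examples
        f"Bad example of output (avoid this):\n{bad_example}\n"
    )

prompt = compose_prompt(
    task="Summarize this clinical encounter note for the patient in plain language.",
    context=("Act as a primary care physician. The patient is 62, has type 2 "
             "diabetes, and reads at a high-school level."),
    steps=["List the diagnoses in everyday terms",
           "Explain each medication change and why it was made",
           "End with two or three follow-up actions for the patient"],
    good_example="Your blood sugar has been running high, so we increased ...",
    bad_example="Pt w/ T2DM, A1c 8.2, uptitrate metformin 1000 mg BID ...",
)
print(prompt)
```

The sketch shows only the structure Mollick describes: a direct statement of the task first, then context, then numbered steps, then paired examples of desired and undesired output. The resulting string could be sent to whichever chat system you use.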
Here now is my interview with Azeem Azhar:

LEE: Azeem, welcome.

AZEEM AZHAR: Peter, thank you so much for having me.

LEE: You know, I think you're extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day?

AZHAR: Well, I'm very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip …

LEE: Oh wow.

AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late '70s and the early '80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and, of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that's where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it's easier to explain to them, because they're the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large.

LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through?

AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I'd been away on vacation, and I came back—and I'd been off grid, in fact—and the world had really changed. Now, I'd been aware of GPT-3 and GPT-2, which I had played around with, and with BERT, the original transformer paper, about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we'd crossed into a new domain. We'd gone from AI being highly discriminative to AI that's able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we'd absolutely passed some kind of threshold.

LEE: And who's the we that you were experimenting with?

AZHAR: So I have a team of four who support me. They're mostly researchers of different types. I mean, it's almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, or they walk into our virtual team room, and we try to solve problems.

LEE: Well, so let's get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities.
Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance and manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so, as you take that background and look at all those other sectors, in what ways are the same patterns playing out, or likely to play out, in healthcare and medicine?

AZHAR: I'm sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it's highly regulated, market structure is very different country to country, and it's an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it's hard to imagine a sector that is broader than that. So I think we can start to break it down, and, you know, where we're seeing things with generative AI will be at the, sort of, softest entry point, which is medical scribing. And I'm sure many of us have been with clinicians who have a medical scribe running alongside—they're all on Surface Pros, I noticed, right? They're on the tablet computers, and they're scribing away. And what that's doing is, in the words of my friend Eric Topol, it's giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is, if you do something five times in a week, you should be writing an automation for it. And if you're a doctor, you're probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways, just within the clinic, that things can help. So one of my friends, my friend from my junior school—I've known him since I was 9—is an oncologist who's also deeply into machine learning, and he's in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that's another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems, to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that's often been very, very heavily centralized.

LEE: Yeah.

AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.

LEE: Yup.

AZHAR: That is building a data stream that we'll be able to apply more and more AI to. I mean, right now, it's applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, as we get more used to measuring ourselves, we create this sort of pot, a personal asset, that we can turn AI to. And there's still another category. And that other category is one of the completely novel ways in which we can enable patient care and the patient pathway.
And there's a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It's hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we've seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician's life cycle. I think we, as patients, we're a bit … I don't know if we're as grateful as we should be for our clinicians who are putting in 90-hour weeks. But you can imagine a world where AI is able to support not just the clinicians' workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform, over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.

LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different, because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?

AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you're not in the consumer space but you're in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That's what you have to go out and do. And I think if you're in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated, because, as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn't, there would never be a need for a differential diagnosis. There'd never be a need for, you know, Let's try azithromycin first, and then if that doesn't work, we'll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they're showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He'd say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.

LEE: Yeah.

AZHAR: And I think that that's why medicine is, and healthcare is, so different and more complex. But I also think that's why AI can be really, really helpful. I mean, we didn't talk about, you know, AI in its ability to potentially do this, which is to extend the clinician's presence throughout the week.

LEE: Right. Yeah.
AZHAR: The idea that maybe some part of what the clinician would do, if you could talk to them on Wednesday, Thursday, and Friday, could be delivered through an app or a chatbot, just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.

LEE: You know, just staying on the regulatory thing, as I've thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because, like healthcare, as a consumer, I don't have choice in who delivers electricity to my house. And even though I care about it being cheap, or at least not being overcharged, I don't have an abundance of choice. I can't do price comparisons. And there's something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas in other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who's really much more expert in how economic systems work?

AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it's slightly different is that, as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I'll give you an example from the United Kingdom. So I have had asthma all of my life. That means I've been taking my inhaler, Ventolin, and maybe a steroid inhaler, for nearly 50 years. That means that I know … actually, I've got more experience, and I—in some sense—know more about it than a general practitioner.

LEE: Yeah.

AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I've been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.

LEE: Right.

AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few differences: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.

LEE: Yeah, one last question while we're still on economics. There seems to be a conundrum about productivity and efficiency in healthcare delivery, because I've never encountered a doctor or a nurse that wants to be able to handle even more patients than they're doing on a daily basis.
And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn't seem necessarily to be a desirable thing. So how can we, or should we, be thinking about efficiency and productivity, since obviously costs are, in most of the developed world, a huge, huge problem?

AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could "visit more patients." Right?

LEE: Yeah, yeah.

AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed the productivity, by orders of magnitude, of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it's much easier for us to imagine it as the pendant light that just has them working later …

LEE: Right.

AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I'm not sure. I don't think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale-out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there's a lower level of training and cost and expense that's required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It's what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …

LEE: Yup.

AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US, between the health systems and the, you know, professional bodies, that might slow things down. But I think a reimagining is possible. And if I may, I'll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya was another. And in the latter, they had a cardiac care unit where you couldn't get enough heart surgeons.

LEE: Yeah, yep.

AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel.
So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can't expect a single bright algorithm to do it on its own.

LEE: Yeah, really, really interesting. So now let's get into regulation. And let me start with this question. You know, there are several startup companies I'm aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, that is certified in some way, licensed to do. Do you think we'll get to a point where, for certain regulated activities, humans are more or less cut out of the loop?

AZHAR: Well, humans would have been in the loop, because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I'm sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin: it's really unclear to me why Ventolin, this incredibly benign bronchodilator that is only used by people who've been through the asthma process, needs to be prescribed by someone who's gone through 10 or 12 years of medical training, and why it couldn't be prescribed by an algorithm or an AI system.

LEE: Right. Yep. Yep.

AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can't really see what the objections are. And the real issue is where you draw the line, where you say, "Listen, this is too important," or "The cost is too great," or "The side effects are too high," and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line, I suspect, will start fairly low, and what we'd expect to see is that it would rise progressively over time.

LEE: What you just said, that scenario of your personal asthma medication, is really interesting, because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let's say, the next prescription to be more personalized and more tailored specifically for you.

AZHAR: Yes. Well, let's dig into this, because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it's very difficult to get an appointment. I would have had to see someone privately who didn't know me at all, because I'd just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I've been miserable for a couple of days with severe wheezing. Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids.
It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI,” is … it’s obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that. LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. AZHAR: Yeah, yeah. Thank god for Clippy. Yes. LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself, about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway. AZHAR: Right. LEE: And so now, two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like? AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs, and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning, so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience. So I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year. So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots. LEE: Yes. AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this.
So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how I have been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s not just hearing what I’m saying; he’s actually pulling up the real data, going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. LEE: You know, actually, that brings up the point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. Because one of the things that consumers always want to know is, you know, what’s the truth? AZHAR: Right. LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow. AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield, to have, like, a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician. LEE: Yeah. AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah. AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now something lots of people will wear—and sports people will wear them—you probably gathered a lot of extreme tail-distribution data by reading Reddit’s r/biohackers … LEE: Yes. AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next. LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions? AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in. LEE: OK. AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time. LEE: Yes.
AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us anticipate problems before they arise, and again, I use my experience: for things that I’ve tracked really, really well, I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, and I can take a little bit of preemptive measure. So I think that that will become progressively more common, and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive; it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health. LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues that you tend to see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much. AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.   I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies. In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.   Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know, in a deeply personalized and effective way, leading to better care.
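Azeem’s printed rescue-pack protocol is, at heart, a small decision tree. To make that concrete, here is a minimal sketch of what such a protocol could look like as code. Every question, threshold, and action below is an invented illustration, not a real clinical guideline, and the `Answers` and `rescue_pack_decision` names are hypothetical.

```python
# Illustrative sketch of a printed "rescue pack" protocol as a decision tree.
# All questions, thresholds, and actions are invented for illustration; they
# are NOT a real clinical guideline.

from dataclasses import dataclass

@dataclass
class Answers:
    severe_wheezing_days: int       # days of severe wheezing so far
    reliever_doses_per_day: int     # e.g., reliever-inhaler uses per day
    signs_of_chest_infection: bool  # e.g., fever plus discolored sputum

def rescue_pack_decision(a: Answers) -> list[str]:
    """Walk the flowchart and return the self-care actions it permits."""
    actions = []
    if a.severe_wheezing_days >= 2 and a.reliever_doses_per_day >= 4:
        actions.append("start steroid course from rescue pack")
    if a.signs_of_chest_infection:
        actions.append("start antibiotic from rescue pack")
    if not actions:
        actions.append("continue reliever inhaler; rescue pack not needed")
    if a.severe_wheezing_days >= 5:
        actions.append("contact a clinician: outside protocol limits")
    return actions

if __name__ == "__main__":
    print(rescue_pack_decision(Answers(3, 6, False)))
```

The point is the one Azeem makes himself: logic this simple fits on a PowerPoint flowchart, which is exactly why a more nuanced AI system could plausibly improve on it.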
Azeem’s relentless optimism about our AI future was also so heartening to hear. Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change, in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level. Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, a travel agent, and more, were also shown during the conference. But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. The specific scenario was Stanford’s cancer treatment center: when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts, typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and assist in the decision-making around a patient’s cancer treatment. It was pretty amazing. A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.
    What AI’s impact on individuals means for the health workforce and industry
    Transcript [MUSIC]    [BOOK PASSAGE]  PETER LEE: “In American primary care, the missing workforce is stunning in magnitude, the shortfall estimated to reach up to 48,000 doctors within the next dozen years. China and other countries with aging populations can expect drastic shortfalls, as well. Just last month, I asked a respected colleague retiring from primary care who he would recommend as a replacement; he told me bluntly that, other than expensive concierge care practices, he could not think of anyone, even for himself. This mismatch between need and supply will only grow, and the US is far from alone among developed countries in facing it.” [END OF BOOK PASSAGE]    [THEME MUSIC]    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.      [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 4: Trust but Verify,” which was written by Zak. You know, it’s no secret that in the US and elsewhere shortages in medical staff and the rise of clinician burnout are affecting the quality of patient care for the worse. In our book, we predicted that generative AI would be something that might help address these issues. So in this episode, we’ll delve into how individual performance gains that our previous guests have described might affect the healthcare workforce as a whole, and on the patient side, we’ll look into the influence of generative AI on the consumerization of healthcare. Now, since all of this consumes such a huge fraction of the overall economy, we’ll also get into what a general-purpose technology as disruptive as generative AI might mean in the context of labor markets and beyond.   To help us do that, I’m pleased to welcome Ethan Mollick and Azeem Azhar. Ethan Mollick is the Ralph J. Roberts Distinguished Faculty Scholar, a Rowan Fellow, and an associate professor at the Wharton School of the University of Pennsylvania. His research into the effects of AI on work, entrepreneurship, and education is applied by organizations around the world, leading him to be named one of Time magazine’s most influential people in AI for 2024. He’s also the author of the New York Times best-selling book Co-Intelligence. Azeem Azhar is an author, founder, investor, and one of the most thoughtful and influential voices on the interplay between disruptive emerging technologies and business and society. In his best-selling book, The Exponential Age, and in his highly regarded newsletter and podcast, Exponential View, he explores how technologies like AI are reshaping everything from healthcare to geopolitics. Ethan and Azeem are two leading thinkers on the ways that disruptive technologies—and especially AI—affect our work, our jobs, our business enterprises, and whole industries. As economists, they are trying to work out whether we are in the midst of an economic revolution as profound as the shift from an agrarian to an industrial society. 
[TRANSITION MUSIC] Here is my interview with Ethan Mollick: LEE: Ethan, welcome. ETHAN MOLLICK: So happy to be here, thank you. LEE: I described you as a professor at Wharton, which I think most of the people who listen to this podcast series know of as an elite business school. So it might surprise some people that you study AI. And beyond that, you know, that I would seek you out to talk about AI in medicine. [LAUGHTER] So to get started, how and why did it happen that you’ve become one of the leading experts on AI? MOLLICK: It’s actually an interesting story. I’ve been AI-adjacent my whole career. When I was [getting] my PhD at MIT, I worked with Marvin Minsky and the MIT [Massachusetts Institute of Technology] Media Lab’s AI group. But I was never the technical AI guy. I was the person who was trying to explain AI to everybody else who didn’t understand it. And then I became very interested in, how do you train and teach? And AI was always a part of that. I was building games for teaching, teaching tools that were used in hospitals and elsewhere, simulations. So when LLMs burst onto the scene, I had already been using them and had a good sense of what they could do. And between that and, kind of, being practically oriented and getting some of the first research projects underway, especially under education and AI and performance, I became sort of a go-to person in the field. And once you’re in a field where nobody knows what’s going on and we’re all making it up as we go along—I thought it’s funny that you led with the idea that you have a couple of months’ head start for GPT-4, right. Like, that’s all we have at this point, is a few months’ head start. [LAUGHTER] So being a few months ahead is good enough to be an expert at this point. Whether it should be or not is a different question. LEE: Well, if I understand correctly, leading AI companies like OpenAI, Anthropic, and others have now sought you out as someone who should get early access to really start to do early assessments and gauge early reactions. How has that been? MOLLICK: So, I mean, I think the bigger picture is less about me than about two things that tell us about the state of AI right now. One, nobody really knows what’s going on, right. So in a lot of ways, if it wasn’t for your work, Peter, like, I don’t think people would be thinking about medicine as much, because these systems weren’t built for medicine. They weren’t built to change education. They weren’t built to write memos. They, like, they weren’t built to do any of these things. They weren’t really built to do anything in particular. It turns out they’re just good at many things. And to the extent that the labs work on them, they care about their coding ability above everything else and maybe math and science secondarily. They don’t think about the fact that it expresses high empathy. They don’t think about its accuracy in diagnosis or where it’s inaccurate. They don’t think about how it’s changing education forever. So one part of this is that the fact that they go to my Twitter feed or ask me for advice is an indicator of where they are, too, which is they’re not thinking about this. And the fact that a few months’ head start continues to give you a lead tells you that we are at the very cutting edge. These labs aren’t sitting on projects for two years and then releasing them. Months after a project is complete or sooner, it’s out the door. Like, there’s very little delay.
So we’re kind of all in the same boat here, which is a very unusual space for a new technology. LEE: And I, you know, explained that you’re at Wharton. Are you an odd fit as a faculty member at Wharton, or is this a trend now even in business schools that AI experts are becoming key members of the faculty? MOLLICK: I mean, it’s a little of both, right. It’s faculty, so everybody does everything. I’m a professor of innovation and entrepreneurship. I’ve launched startups before, and working on that and on education means I think about, how do organizations redesign themselves? How do they take advantage of these kinds of things? So medicine’s always been very central to that, right. A lot of people in my MBA class have been MDs, either switching, you know, careers or else looking to advance from being sort of individual contributors to running teams. So I don’t think that’s that bad a fit. But I also think this is a general-purpose technology; it’s going to touch everything. The focus on this is medicine, but Microsoft does far more than medicine, right. It’s … there’s transformation happening in literally every field, in every country. This is a widespread effect. So I don’t think we should be surprised that business schools matter on this, because we care about management. There’s a long tradition of management and medicine going together. There’s actually a great academic paper that shows that teaching hospitals that also have MBA programs associated with them have higher management scores and perform better. So I think that these are not as foreign concepts, especially as medicine continues to get more complicated. LEE: Yeah. Well, in fact, I want to dive a little deeper on these issues of management, of entrepreneurship, of education. But before doing that, if I could just stay focused on you. There is always something interesting to hear from people about their first encounters with AI. And throughout this entire series, I’ve been doing that both pre-generative AI and post-generative AI. So you, sort of, hinted at the pre-generative AI. You were in Minsky’s lab. Can you say a little bit more about that early encounter? And then tell us about your first encounters with generative AI. MOLLICK: Yeah. Those are great questions. So first of all, when I was at the Media Lab, that was pre the current boom in, sort of, you know, even the old-school machine learning kind of space. So there were a lot of potential directions to head in. While I was there, there were projects underway, for example, to record every interaction small children had. One of the professors was recording everything their baby interacted with in the hope that maybe that would give them a hint about how to build an AI system. There were a bunch of projects underway that were about labeling every concept and how they relate to other concepts. So, like, it was very much the Wild West of, like, how do we make an AI work—which has been this repeated problem in AI, which is, what is this thing? The fact that it was just, like, brute force over the corpus of all human knowledge turns out to be a little bit of, you know, a miracle and a little bit of a disappointment in some ways [LAUGHTER] compared to how elaborate some of this was. So, you know, I think that that was sort of my first encounter in sort of the intellectual way. The generative AI encounters actually started with the original, sort of, GPT-3, or, you know, earlier versions. And it was actually game-based. So I played games like AI Dungeon.
And as an educator, I realized, oh my gosh, this stuff could write essays at a fourth-grade level. That’s really going to change the way, like, middle school works, was my thinking at the time. And I was posting about that back in, you know, 2021, that this is a big deal. But I think everybody was taken by surprise, including the AI companies themselves, by, you know, ChatGPT, by GPT-3.5. The difference in degree turned out to be a difference in kind. LEE: Yeah, you know, if I think back, even with GPT-3, and certainly this was the case with GPT-2, it was, at least, you know, from where I was sitting, it was hard to get people to really take this seriously and pay attention. MOLLICK: Yes. LEE: You know, it’s remarkable. Within Microsoft, I think a turning point was the use of GPT-3 to do code completions. And that was actually productized as GitHub Copilot, the very first version. That, I think, is where there was widespread belief. But, you know, in a way, I think there was, even for me early on, a sense of denial and skepticism. Did you have those initially at any point? MOLLICK: Yeah, I mean, it still happens today, right. Like, this is a weird technology. You know, the original denial and skepticism was, I couldn’t see where this was going. It didn’t seem like a miracle because, you know, of course computers can complete code for you. Like, what else are they supposed to do? Of course computers can give you answers to questions and write fun things. So there’s a difference in moving into a world of generative AI. I think a lot of people just thought that’s what computers could do. So it made the conversations a little weird. But even today, faced with these, you know, very strong reasoner models that operate at the level of PhD students, I think a lot of people have issues with it, right. I mean, first of all, they seem intuitive to use, but they’re not always intuitive to use, because the first use case that everyone puts AI to, it fails at, because they use it like Google or some other use case. And then it’s genuinely upsetting in a lot of ways. I think, you know, I write in my book about the idea of three sleepless nights. That hasn’t changed. Like, you have to have an intellectual crisis to some extent, you know, and I think people do a lot to avoid having that existential angst of, like, “Oh my god, what does it mean that a machine could think—apparently think—like a person?” So, I mean, I see resistance now. I saw resistance then. And then on top of all of that, there’s the fact that the curve of the technology is quite steep. I mean, the price of GPT-4-level intelligence has dropped 99.97% from, you know, when it was released, right. LEE: Yes. Mm-hmm. MOLLICK: I mean, I could run a GPT-4-class system basically on my phone. Microsoft’s releasing things that can almost run on, like, you know … it fits in almost no space, that are almost as good as the original GPT-4 models. I mean, I don’t think people have a sense of how fast the trajectory is moving either. LEE: Yeah, you know, there’s something that I think about often. There is this existential dread, or will this technology replace me? But I think the first people to feel that are researchers—people encountering this for the first time.
You know, if you were working, let’s say, in Bayesian reasoning or in traditional, let’s say, Gaussian mixture model based, you know, speech recognition, you do get this feeling, Oh, my god, this technology has just solved the problem that I’ve dedicated my life to. And there is this really difficult period where you have to cope with that. And I think this is going to be spreading, you know, in more and more walks of life. And so this … at what point does that sort of sense of dread hit you, if ever? MOLLICK: I mean, you know, it’s not even dread as much as, like, you know, Tyler Cowen wrote that it’s impossible to not feel a little bit of sadness as you use these AI systems, too. Because, like, I was talking to a friend, just as the most minor example, and his talent that he was very proud of was he was very good at writing limericks for birthday cards. He’d write these limericks. Everyone was always amused by them. [LAUGHTER] And now, you know, GPT-4 and GPT-4.5, they made limericks obsolete. Like, anyone can write a good limerick, right. So this was a talent, and it was a little sad. Like, this thing that you cared about mattered. You know, as academics, we’re a little used to dead ends, right, and, like, you know, sometimes getting lapped. But the idea that entire fields are hitting that way. Like in medicine, there’s a lot of support systems that are now obsolete. And the question is how quickly you change that. In education, a lot of our techniques are obsolete. What do you do to change that? You know, it’s like the fact that this brute force technology is good enough to solve so many problems is weird, right. And it’s not just the end of, you know, of our research angles that matter, too. Like, for example, I ran this, you know, 14-person-plus, multimillion-dollar effort at Wharton to build these teaching simulations, and we’re very proud of them. It took years of work to build one. Now we’ve built a system that can build teaching simulations on demand by you talking to it, with one team member. And, you know, you literally can create any simulation by having a discussion with the AI. I mean, you know, there’s a switch to a new form of excitement, but there is a little bit of, like, this mattered to me, and, you know, now I have to change how I do things. I mean, adjustment happens. But if you haven’t had that displacement, I think that’s a good indicator that you haven’t really faced AI yet. LEE: Yeah, what’s so interesting just listening to you is you use words like sadness, and yet I can see the—and hear the—excitement in your voice and your body language. So, you know, that’s also kind of an interesting aspect of all of this.  MOLLICK: Yeah, I mean, I think there’s something on the other side, right. But, like, I can’t say that I haven’t had moments where, like, ughhhh, but then there’s joy, and basically, like, also, you know, freeing stuff up. I mean, I think about doctors or professors, right. These are jobs that bundle together lots of different tasks that you would never have put together, right. If you’re a doctor, you would never have expected the same person to be good at keeping up with the research and being a good diagnostician and being a good manager and being good with people and being good with hand skills. Like, who would ever want that kind of bundle? That’s not something you’re all good at, right. And a lot of the stress of our job comes from the fact that we suck at some of it.
And so to the extent that AI steps in for that, you kind of feel bad about some of the stuff that it’s doing that you wanted to do. But it’s much more uplifting to be like, I don’t have to do this stuff I’m bad at anymore, or I get the support to make myself good at it. And the stuff that I really care about, I can focus on more. Well, because we are at kind of a unique moment where whatever you’re best at, you’re still better than AI. And I think it’s an ongoing question about how long that lasts. But for right now, like, you’re not going to say, OK, AI replaces me entirely in my job in medicine. It’s very unlikely. But you will say it replaces these 17 things I’m bad at, but I never liked that anyway. So it’s a period of both excitement and a little anxiety. LEE: Yeah, I’m going to want to get back to this question about in what ways AI may or may not replace doctors or some of what doctors and nurses and other clinicians do. But before that, let’s get into, I think, the real meat of this conversation. In previous episodes of this podcast, we talked to clinicians and healthcare administrators and technology developers that are very rapidly injecting AI today to do various forms of workforce automation, you know, automatically writing a clinical encounter note, automatically filling out a referral letter or a request for prior authorization for some reimbursement to an insurance company. And so these sorts of things are intended not only to make things more efficient and lower costs but also to reduce various forms of drudgery, cognitive burden on frontline health workers. So how do you think about the impact of AI on that aspect of the workforce, and, you know, what would you expect will happen over the next few years in terms of impact on efficiency and costs? MOLLICK: So, I mean, this is a case where I think we’re facing the big bright problem in AI in a lot of ways, which is that this is … at the individual level, there are lots of performance gains to be had, right. The problem, though, is that we as individuals fit into systems, in medicine as much as anywhere else or more so, right. Which is that you could individually boost your performance, but it’s also about the systems that fit along with this, right. So, you know, if you could automatically, you know, record an encounter, if you could automatically make notes, does that change what you should be expecting for notes or the value of those notes or what they’re for? How do we take what one person does and validate it across the organization and roll it out for everybody without making it a 10-year process that it feels like IT in medicine often is? Like, so we’re in this really interesting period where there are incredible amounts of individual innovation in productivity and performance improvements in this field, like, very high levels of it, but not necessarily seeing that same thing translate to organizational efficiency or gains. And one of my big concerns is seeing that happen. We’re seeing that in nonmedical problems, the same kind of thing, which is, you know, we’ve got research showing 20 to 40% performance improvements, like, not uncommon to see those things. But then the organization doesn’t capture it; the system doesn’t capture it. Because the individuals are doing their own work and the systems don’t have the ability to, kind of, learn or adapt as a result. LEE: You know, where are those productivity gains going, then, when you get to the organizational level? MOLLICK: Well, they’re dying for a few reasons.
One is, there’s a tendency for individual contributors to underestimate the power of management, right. Practices associated with good management increase happiness, decrease, you know, issues, increase success rates. In the same way, about 40%, as far as we can tell, of the advantage of US firms over firms in other countries has to do with management ability. Like, management is a big deal. Organizing is a big deal. Thinking about how you coordinate is a big deal. At the individual level, when things get stuck there, right, you can’t start bringing them up to how systems work together. It becomes, How do I deal with a doctor that has a 60% performance improvement? We really only have one thing in our playbook for doing that right now, which is, OK, we could fire 40% of the other doctors and still have a performance gain, which is not the answer you want to see happen. So because of that, people are hiding their use. They’re actually hiding their use for lots of reasons. And it’s a weird case because the people who are able to figure out best how to use these systems, for a lot of use cases, they’re actually clinicians themselves, because they’re experimenting all the time. Like, they have to take those encounter notes. And if they figure out a better way to do it, they figure that out. You don’t want to wait for, you know, a med tech company to figure that out and then sell that back to you when it can be done by the physicians themselves. So we’re just not used to a period where everybody’s innovating and where the management structure isn’t in place to take advantage of that. And so we’re seeing things stalled at the individual level, and people are often, especially in risk-averse organizations or organizations where there are lots of regulatory hurdles, people are so afraid of the regulatory piece that they don’t even bother trying to make change. LEE: If you are, you know, the leader of a hospital or a clinic or a whole health system, how should you approach this? You know, how should you be trying to extract positive success out of AI? MOLLICK: So I think that you need to embrace the right kind of risk, right. We don’t want to put risk on our patients … like, we don’t want to put uninformed risk. But innovation involves risk to how organizations operate. It involves change. So I think part of this is embracing the idea that R&D has to happen in organizations again. What’s happened over the last 20 years or so has been organizations giving that up. Partially, that’s a trend to focus on what you’re good at and not try and do this other stuff. Partially, it’s because it’s outsourced now to software companies that, like … Salesforce tells you how to organize your sales team. Workforce tells you how to organize your organization. Consultants come in and will tell you how to make change based on the average of what other people are doing in your field. So companies and organizations and hospital systems have all started to give up their ability to create their own organizational change. And when I talk to organizations, I often say they have to have two approaches. They have to think about the crowd and the lab. So the crowd is the idea of how to empower clinicians and administrators and supporter networks to start using AI and experimenting in ethical, legal ways and then sharing that information with each other. And the lab is, how are we doing R&D about the approach of how to [get] AI to work, not just in direct patient care, right.
But also fundamentally, like, what paperwork can you cut out? How can we better explain procedures? Like, what management role can this fill? And we need to be doing active experimentation on that. We can’t just wait for, you know, Microsoft to solve the problems. It has to be at the level of the organizations themselves. LEE: So let’s shift a little bit to the patient. You know, one of the things that we see, and I think everyone is seeing, is that people are turning to chatbots, like ChatGPT, actually to seek healthcare information for, you know, their own health or the health of their loved ones. And there was already, prior to all of this, a trend towards, let’s call it, consumerization of healthcare. So just in the business of healthcare delivery, do you think AI is going to hasten these kinds of trends, or from the consumer’s perspective, what … ? MOLLICK: I mean, absolutely, right. Like, all the early data that we have suggests that for most common medical problems, you should just consult AI, too, right. In fact, there is a real question to ask: at what point does it become unethical for doctors themselves to not ask for a second opinion from the AI, because it’s cheap, right? You could overrule it or whatever you want, but, like, not asking seems foolish. I think the two places where there’s a burning, almost, you know, moral imperative is … let’s say, you know, I’m in Philadelphia, I’m a professor, I have access to really good healthcare through the Hospital of the University of Pennsylvania system. I know doctors. You know, I’m lucky. I’m well connected. If, you know, something goes wrong, I have friends who I can talk to. I have specialists. I’m, you know, pretty well educated in this space. But for most people on the planet, they don’t have access to good medical care, they don’t have good health. It feels like it’s absolutely imperative to say, when should you use AI and when not? Are there blind spots? What are those things? And I worry that, like, to me, that would be the crash project I’d be invoking, because I’m doing the same thing in education, which is, this system is not as good as being in a room with a great teacher who also uses AI to help you, but it’s better than, you know, the level of education people actually get in many cases. Where should we be using it? How do we guide usage in the right way? Because the AI labs aren’t thinking about this. We have to. So, to me, there is a burning need here to understand this. And I worry that people will say, you know, everything that’s true—AI can hallucinate, AI can be biased. All of these things are absolutely true, but people are going to use it. The early indications are that it is quite useful. And unless we take the active role of saying, here’s when to use it, here’s when not to use it, we don’t have a right to say, don’t use this system. And I think, you know, we have to be exploring that. LEE: What do people need to understand about AI? And what should schools, universities, and so on be teaching? MOLLICK: Those are, kind of, two separate questions in a lot of ways. I think a lot of people want to teach AI skills, and I will tell you, as somebody who works in this space a lot, there isn’t, like, an easy, sort of, AI skill, right. I could teach you prompt engineering in two to three classes, but every indication we have is that for most people under most circumstances, the value of prompting, you know, any one case is probably not that useful.
A lot of the tricks are disappearing because the AI systems are just starting to use them themselves. So asking good questions, being a good manager, being a good thinker tend to be important, but, like, magic tricks around making, you know, the AI do something because you use the right phrase used to be something that was real but is rapidly disappearing. So I worry when people say teach AI skills. No one’s been able to articulate to me, as somebody who knows AI very well and teaches classes on AI, what those AI skills that everyone should learn are, right. I mean, there’s value in learning a little bit about how the models work. There’s value in working with these systems. A lot of it’s just hands-on-keyboard kind of work. But, like, we don’t have an easy slam-dunk “this is what you learn in the world of AI,” because the systems are getting better, and as they get better, they get less sensitive to these prompting techniques. They get better at prompting themselves. They solve problems spontaneously and start being agentic. So it’s a hard problem to ask about, like, what do you train someone on? I think getting people hands-on-keyboard experience, getting them to … there are, like, four things I could teach you about AI, and two of them are already starting to disappear. But, like, one is be direct. Like, tell the AI exactly what you want. That’s very helpful. Second, provide as much context as possible. That can include things like acting as a doctor, but also all the information you have. The third is give it step-by-step directions—that’s becoming less important. And the fourth is good and bad examples of the kind of output you want. Those four, that’s, like, that’s it as far as the research telling you what to do, and the rest is building intuition. LEE: I’m really impressed that you didn’t give the answer, “Well, everyone should be teaching my book, Co-Intelligence.” [LAUGHS] MOLLICK: Oh, no, sorry! Everybody should be teaching my book Co-Intelligence. I apologize. [LAUGHTER] LEE: It’s good to chuckle about that, but actually, I can’t think of a better book. Like, if you were to assign a textbook in any professional education space, I think Co-Intelligence would be number one on my list. Are there other things that you think are essential reading? MOLLICK: That’s a really good question. I think that a lot of things are evolving very quickly. I happen to, kind of, hit a sweet spot with Co-Intelligence to some degree because I talk about how I used it, and I was, sort of, an advanced user of these systems. So, like, it’s, sort of, like my Twitter feed, my online newsletter. I’m just trying to, kind of … in some ways, it’s about trying to make people aware of what these systems can do by just showing a lot, right. Rather than picking one thing, and, like, this is a general-purpose technology. Let’s use it for this. And, like, everybody gets a light bulb for a different reason. So more than reading, it is using, you know, and that can be Copilot or whatever your favorite tool is. But using it. Voice modes help a lot. In terms of readings, I mean, I think that there are a couple of good guides to understanding AI that were originally blog posts. I think Tim Lee has one called Understanding AI, and it has a good overview … LEE: Yeah, that’s a great one. MOLLICK: … of that topic that I think explains how transformers work, which can give you some mental sense. I think [Andrej] Karpathy has some really nice videos that I would recommend.
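Mollick’s four durable practices are easy to see gathered in one place. Here is a minimal sketch that assembles a prompt along those lines; the wording, the `build_prompt` helper, and the example snippets are illustrative assumptions, not a template taken from Mollick or the research he cites.

```python
# Illustrative only: a prompt assembled from the four practices Mollick lists
# (be direct; give context; give step-by-step directions; show good and bad
# examples). All the text here is invented for demonstration.

def build_prompt(task: str, context: str, steps: list[str],
                 good_example: str, bad_example: str) -> str:
    step_lines = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"{task}\n\n"                                    # 1. Be direct.
        f"Context:\n{context}\n\n"                       # 2. Provide context.
        f"Follow these steps:\n{step_lines}\n\n"         # 3. Step-by-step.
        f"Good output looks like:\n{good_example}\n\n"   # 4. Good and bad
        f"Avoid output like:\n{bad_example}\n"           #    examples.
    )

prompt = build_prompt(
    task="Draft a plain-language discharge summary for the patient below.",
    context="Act as an experienced hospitalist. Patient: 58-year-old admitted "
            "for community-acquired pneumonia, improving on oral antibiotics.",
    steps=["List the diagnoses", "Explain each medication", "Give warning signs"],
    good_example="'You were treated for a lung infection (pneumonia)...'",
    bad_example="'Dx: CAP. Rx: abx. F/u PRN.'",
)
print(prompt)
```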
Like on the medical side, I think the book that you did, if you’re in medicine, you should read that. I think that that’s very valuable. But, like, all we can offer are hints in some ways. Like, there isn’t … if you’re looking for the instruction manual, I think it can be very frustrating, because it’s like you want the best practices and procedures laid out, and we cannot do that, right. That’s not how a system like this works. LEE: Yeah. MOLLICK: It’s not a person, but thinking about it like a person can be helpful, right. LEE: One of the things that has been sort of a fun project for me for the last few years is I have been a founding board member of a new medical school at Kaiser Permanente. And, you know, that medical school curriculum is being formed in this era. But it’s been perplexing to understand, you know, what this means for a medical school curriculum. And maybe even more perplexing for me, at least, is the accrediting bodies, which are extremely important in US medical schools; how accreditors should think about what’s necessary here. Besides the, kind of, four key ideas you mentioned, if you were talking to the board of directors of the LCME [Liaison Committee on Medical Education] accrediting body, what’s the one thing you would want them to really internalize? MOLLICK: This is both a fast-moving and vital area. This can’t be viewed like a usual change, which [is], “Let’s see how this works.” Because it’s, like … the things that make medical technologies hard to do, which is, like, unclear results, limited, you know, expensive use cases where it rolls out slowly. So one or two, you know, advanced medical facilities get access to, you know, proton beams or something else at multi-billion dollars of cost, and that takes a while to diffuse out. That’s not happening here. This is all happening at the same time, all at once. This is now … AI is part of medicine. I mean, there’s a minor point that I’d make that actually is a really important one, which is large language models, generative AI overall, work incredibly differently than other forms of AI. So the other worry I have with some of these accreditors is they blend together algorithmic forms of AI, which medicine has been trying for a long time—decision support, algorithmic methods; like, medicine more so than other places has been thinking about those issues. Generative AI, even though it uses the same underlying techniques, is a completely different beast. So, like, even just take the most simple thing of algorithmic aversion, which is a well-understood problem in medicine, right. Which is, so you have a tool that could tell you as a radiologist, you know, the chance of this being cancer; you don’t like it, you overrule it, right. We don’t find algorithmic aversion happening with LLMs in the same way. People actually enjoy using them because it’s more like working with a person. The flaws are different. The approach is different. So you need to both view this as universally applicable today, which makes it urgent, but also as something that is not the same as your other forms of AI, and your AI working group that is thinking about how to solve this problem is not the right people here. LEE: You know, I think the world has been trained because of the magic of web search to view computers as question-answering machines. Ask a question, get an answer. MOLLICK: Yes. Yes. LEE: Write a query, get results.
And as I have interacted with medical professionals, you can see that medical professionals have that model of a machine in mind. And I think that’s partly, I think psychologically, why hallucination is so alarming. Because you have a mental model of a computer as a machine that has absolutely rock-solid perfect memory recall. But the thing that was so powerful in Co-Intelligence, and we tried to get at this in our book also, is that’s not the sweet spot. It’s this sort of deeper interaction, more of a collaboration. And I thought your use of the term Co-Intelligence really, even in the title of the book, tried to capture this. When I think about education, it seems like that’s the first step, to get past this concept of a machine being just a question-answering machine. Do you have a reaction to that idea? MOLLICK: I think that’s very powerful. You know, we’ve been trained over so many years at both using computers but also in science fiction, right. Computers are about cold logic, right. They will give you the right answer, but if you ask it what love is, they explode, right. Like, that’s the classic way you defeat the evil robot in Star Trek, right. “Love does not compute.” [LAUGHTER] Instead, we have a system that makes mistakes, is warm, beats doctors in empathy in almost every controlled study on the subject, right. Like, absolutely can outwrite you in a sonnet but will absolutely struggle with giving you the right answer every time. And I think our mental models are just broken for this. And I think you’re absolutely right. And that’s part of what I thought your book does get at really well is, like, this is a different thing. It’s also generally applicable. Again, the model in your head should be kind of like a person, even though it isn’t, right. There’s a lot of warnings and caveats to it, but if you start from person, smart person you’re talking to, your mental model will be more accurate than smart machine, even though both are flawed examples, right. So it will make mistakes; it will make errors. The question is, what do you trust it on? What do you not trust it on? As you get to know a model, you’ll get to understand, like, I totally don’t trust it for this, but I absolutely trust it for that, right. LEE: All right. So we’re getting to the end of the time we have together. And so I’d just like to get now into something a little bit more provocative. And I get the question all the time. You know, will AI replace doctors? In medicine and other advanced knowledge work, project out five to 10 years. What do you think happens? MOLLICK: OK, so first of all, let’s acknowledge systems change much more slowly than individual use. You know, doctors are not individual actors; they’re part of systems, right. So not just the system of a patient who, like, may or may not want to talk to a machine instead of a person, but also legal systems and administrative systems and systems that allocate labor and systems that train people. So, like, it’s hard to imagine that in five to 10 years medicine would be so upended that even if AI was better than doctors at every single thing doctors do, we’d actually see as radical a change in medicine as you might in other fields. I think you will see faster changes happen in consulting and law and, you know, coding, other spaces than medicine. But I do think that there is good reason to suspect that AI will outperform people while still having flaws, right. That’s the difference.
We’re already seeing, for common medical questions, in enough randomized controlled trials, that, you know, the best doctors beat AI, but the AI beats the mean doctor, right. Like, that’s just something we should acknowledge is happening at this point. Now, will that work in your specialty? No. Will that work with all the contingent social knowledge that you have in your space? Probably not. Like, these are vignettes, right. But, like, that’s kind of where things are. So let’s assume, right … you’re asking two questions. One is, how good will AI get? LEE: Yeah. MOLLICK: And we don’t know the answer to that question. I will tell you that your colleagues at Microsoft and increasingly the labs, the AI labs themselves, are all saying they think they’ll have a machine smarter than a human at every intellectual task in the next two to three years. If that doesn’t happen, that makes the future easier to predict, but let’s just assume that that’s the case. I think medicine starts to change with the idea that people feel obligated to use this to help with everything. Your patients will be using it, and it will be your advisor and helper at the beginning phases, right. And I expect people to be better at empathy. I expect better bedside manner. I expect management tasks to become easier. I think administrative burden might lighten if we handle this the right way, or get much worse if we handle it badly. Diagnostic accuracy will increase, right. And then there’s a set of discovery pieces happening, too, right. One of the core goals of all the AI companies is to accelerate medical research. How that happens and how that affects us is a, kind of, unknown question. So I think clinicians are in both the eye of the storm and surrounded by it, right. Like, they can resist AI use for longer than most other fields, but everything around them is going to be affected by it. LEE: Well, Ethan, this has been really a fantastic conversation. And, you know, I think in contrast to all the other conversations we’ve had, this one gives especially the leaders in healthcare, you know, people actually trying to lead their organizations into the future, whether it’s in education or in delivery, a lot to think about. So I really appreciate you joining. MOLLICK: Thank you. [TRANSITION MUSIC]   I’m a computing researcher who works with people who are right in the middle of today’s bleeding-edge developments in AI. And because of that, I often lose sight of how to talk to a broader audience about what it’s all about. And so I think one of Ethan’s superpowers is that he has this knack for explaining complex topics in AI in a really accessible way, getting right to the most important points without making it so simple as to be useless. That’s why I rarely miss an opportunity to read up on his latest work. One of the first things I learned from Ethan is the intuition that you can, sort of, think of AI as a very knowledgeable intern. In other words, think of it as a persona that you can interact with, but you also need to be a manager for it and to always assess the work that it does. In our discussion, Ethan went further to stress that there is, because of that, a serious education gap. You know, over the last decade or two, we’ve all been trained, mainly by search engines, to think of computers as question-answering machines. In medicine, in fact, there’s a question-answering application that is really popular called UpToDate. Doctors use it all the time.
But generative AI systems like ChatGPT are different. There’s therefore a challenge in how to break out of the old-fashioned mindset of search to get the full value out of generative AI. The other big takeaway for me was that Ethan pointed out that while it’s easy to see productivity gains from AI at the individual level, those same gains, at least today, don’t often translate automatically to organization-wide or system-wide gains. And one, of course, has to conclude that it takes more than just making individuals more productive; the whole system also has to adjust to the realities of AI. Here’s now my interview with Azeem Azhar: LEE: Azeem, welcome. AZEEM AZHAR: Peter, thank you so much for having me.  LEE: You know, I think you’re extremely well known in the world. But still, some of the listeners of this podcast series might not have encountered you before. And so one of the ways I like to ask people to introduce themselves is, how do you explain to your parents what you do every day? AZHAR: Well, I’m very lucky in that way because my mother was the person who got me into computers more than 40 years ago. And I still have that first computer, a ZX81 with a Z80 chip … LEE: Oh wow. AZHAR: … to this day. It sits in my study, all seven and a half thousand transistors and Bakelite plastic that it is. And my parents were both economists, and economics is deeply connected with technology in some sense. And I grew up in the late ’70s and the early ’80s. And that was a time of tremendous optimism around technology. It was space opera, science fiction, robots, and of course, the personal computer and, you know, Bill Gates and Steve Jobs. So that’s where I started. And so, in a way, my mother and my dad, who passed away a few years ago, had always known me as someone who was fiddling with computers but also thinking about economics and society. And so, in a way, it’s easier to explain to them because they’re the ones who nurtured the environment that allowed me to research technology and AI and think about what it means to firms and to the economy at large. LEE: I always like to understand the origin story. And what I mean by that is, you know, what was your first encounter with generative AI? And what was that like? What did you go through? AZHAR: The first real moment was when Midjourney and Stable Diffusion emerged in that summer of 2022. I’d been away on vacation, and I came back—and I’d been off grid, in fact—and the world had really changed. Now, I’d been aware of GPT-3 and GPT-2, which I’d played around with, and with BERT, the original transformer paper, about seven or eight years ago, but it was the moment where I could talk to my computer, and it could produce these images, and it could be refined in natural language, that really made me think we’d crossed into a new domain. We’d gone from AI being highly discriminative to AI that’s able to explore the world in particular ways. And then it was a few months later that ChatGPT came out—November the 30th. And I think it was the next day or the day after that I said to my team, everyone has to use this, and we have to meet every morning and discuss how we experimented the day before. And we did that for three or four months. And, you know, it was really clear to me in that interface at that point that, you know, we’d absolutely passed some kind of threshold. LEE: And who’s the we that you were experimenting with? AZHAR: So I have a team of four who support me. They’re mostly researchers of different types.
I mean, it’s almost like one of those jokes. You know, I have a sociologist, an economist, and an astrophysicist. And, you know, they walk into the bar, [LAUGHTER] or they walk into our virtual team room, and we try to solve problems.

LEE: Well, so let’s get now into brass tacks here. And I think I want to start maybe just with an exploration of the economics of all this and economic realities. Because I think in a lot of your work—for example, in your book—you look pretty deeply at how automation generally and AI specifically are transforming certain sectors like finance, manufacturing, and you have a really, kind of, insightful focus on what this means for productivity and in which ways, you know, efficiencies are found. And then you, sort of, balance that with risks, things that can and do go wrong. And so, taking that background and looking at all those other sectors, in what ways are the same patterns playing out or likely to play out in healthcare and medicine?

AZHAR: I’m sure we will see really remarkable parallels but also new things going on. I mean, medicine has a particular quality compared to other sectors in the sense that it’s highly regulated, market structure is very different country to country, and it’s an incredibly broad field. I mean, just think about taking a Tylenol and going through laparoscopic surgery. Having an MRI and seeing a physio. I mean, this is all medicine. I mean, it’s hard to imagine a sector that is [LAUGHS] more broad than that. So I think we can start to break it down, and, you know, where we’re seeing things with generative AI will be at the, sort of, softest entry point, which is medical scribing. And I’m sure many of us have been with clinicians who have a medical scribe running alongside—they’re all on Surface Pros, I noticed, right? [LAUGHTER] They’re on the tablet computers, and they’re scribing away. And what that’s doing is, in the words of my friend Eric Topol, it’s giving the clinician time back, right. They have time back from days that are extremely busy and, you know, full of administrative overload. So I think you can obviously do a great deal with reducing that overload. And within my team, we have a view, which is if you do something five times in a week, you should be writing an automation for it. And if you’re a doctor, you’re probably reviewing your notes, writing the prescriptions, and so on several times a day. So those are things that can clearly be automated, and the human can be in the loop. But I think there are so many other ways just within the clinic that things can help. So, one of my friends, my friend from my junior school—I’ve known him since I was 9—is an oncologist who’s also deeply into machine learning, and he’s in Cambridge in the UK. And he built with Microsoft Research a suite of imaging AI tools from his own discipline, which they then open sourced. So that’s another way that you have an impact, which is that you actually enable the, you know, generalist, specialist, polymath, whatever they are in health systems to be able to get this technology, to tune it to their requirements, to use it, to encourage some grassroots adoption in a system that’s often been very, very heavily centralized.

LEE: Yeah.

AZHAR: And then I think there are some other things that are going on that I find really, really exciting. So one is the consumerization of healthcare. So I have one of those sleep tracking rings, the Oura.

LEE: Yup.
AZHAR: That is building a data stream that we’ll be able to apply more and more AI to. I mean, right now, it’s applying traditional, I suspect, machine learning, but you can imagine that as we start to get more data, we start to get more used to measuring ourselves, we create this sort of pot, a personal asset that we can turn AI to. And there’s still another category. And that other category is one of the completely novel ways in which we can enable patient care and patient pathway. And there’s a fantastic startup in the UK called Neko Health, which, I mean, does physicals, MRI scans, and blood tests, and so on. It’s hard to imagine Neko existing without the sort of advanced data, machine learning, AI that we’ve seen emerge over the last decade. So, I mean, I think that there are so many ways in which the temperature is slowly being turned up to encourage a phase change within the healthcare sector. And last but not least, I do think that these tools can also be very, very supportive of a clinician’s life cycle. I think we, as patients, we’re a bit … I don’t know if we’re as grateful as we should be for our clinicians who are putting in 90-hour weeks. [LAUGHTER] But you can imagine a world where AI is able to support not just the clinicians’ workload but also their sense of stress, their sense of burnout. So just in those five areas, Peter, I sort of imagine we could start to fundamentally transform over the course of many years, of course, the way in which people think about their health and their interactions with healthcare systems.

LEE: I love how you break that down. And I want to press on a couple of things. You also touched on the fact that medicine, at least in most of the world, is a highly regulated industry. I guess finance is the same way, but they also feel different because the, like, finance sector has to be very responsive to consumers, and consumers are sensitive to, you know, an abundance of choice; they are sensitive to price. Is there something unique about medicine besides being regulated?

AZHAR: I mean, there absolutely is. And in finance, as well, you have much clearer end states. So if you’re not in the consumer space, but you’re in the, you know, asset management space, you have to essentially deliver returns against the volatility or risk boundary, right. That’s what you have to go out and do. And I think if you’re in the consumer industry, you can come back to very, very clear measures, net promoter score being a very good example. In the case of medicine and healthcare, it is much more complicated because as far as the clinician is concerned, people are individuals, and we have our own parts and our own responses. If we didn’t, there would never be a need for a differential diagnosis. There’d never be a need for, you know, Let’s try azithromycin first, and then if that doesn’t work, we’ll go to vancomycin, or, you know, whatever it happens to be. You would just know. But ultimately, you know, people are quite different. The symptoms that they’re showing are quite different, and also their compliance is really, really different. I had a back problem that had to be dealt with by, you know, a physio and extremely boring exercises four times a week, but I was ruthless in complying, and my physio was incredibly surprised. He’d say, well, no one ever does this, and I said, well, you know, the thing is that I kind of just want to get this thing to go away.

LEE: Yeah.
AZHAR: And I think that that’s why medicine and healthcare are so different and more complex. But I also think that’s why AI can be really, really helpful. I mean, we didn’t talk about, you know, AI in its ability to potentially do this, which is to extend the clinician’s presence throughout the week.

LEE: Right. Yeah.

AZHAR: The idea that maybe some part of what the clinician would do if you could talk to them on Wednesday, Thursday, and Friday could be delivered through an app or a chatbot just as a way of encouraging the compliance, which is often, especially with older patients, one reason why conditions, you know, linger on for longer.

LEE: You know, just staying on the regulatory thing, as I’ve thought about this, the one regulated sector that I think seems to have some parallels to healthcare is energy delivery, energy distribution. Because like healthcare, as a consumer, I don’t have choice in who delivers electricity to my house. And even though I care about it being cheap or at least not being overcharged, I don’t have an abundance of choice. I can’t do price comparisons. And there’s something about that, just speaking as a consumer of both energy and a consumer of healthcare, that feels similar. Whereas other regulated industries, you know, somehow, as a consumer, I feel like I have a lot more direct influence and power. Does that make any sense to someone, you know, like you, who’s really much more expert in how economic systems work?

AZHAR: I mean, in a sense, one part of that is very, very true. You have a limited panel of energy providers you can go to, and in the US, there may be places where you have no choice. I think the area where it’s slightly different is that as a consumer or a patient, you can actually make meaningful choices and changes yourself using these technologies, and people used to joke about, you know, asking Dr. Google. But Dr. Google is not terrible, particularly if you go to WebMD. And, you know, when I look at long-range change, many of the regulations that exist around healthcare delivery were formed at a point before people had access to good quality information at the touch of their fingertips or when educational levels in general were much, much lower. And many regulations existed because of the incumbent power of particular professional sectors. I’ll give you an example from the United Kingdom. So I have had asthma all of my life. That means I’ve been taking my inhaler, Ventolin, and maybe a steroid inhaler for nearly 50 years. That means that I know … actually, I’ve got more experience, and I—in some sense—know more about it than a general practitioner.

LEE: Yeah.

AZHAR: And until a few years ago, I would have to go to a general practitioner to get this drug that I’ve been taking for five decades, and there they are, age 30 or whatever it is. And a few years ago, the regulations changed. And now pharmacies can … or pharmacists can prescribe those types of drugs under certain conditions directly.

LEE: Right.

AZHAR: That was not to do with technology. That was to do with incumbent lock-in. So when we look at the medical industry, the healthcare space, there are some parallels with energy, but there are a few little things: the ability that the consumer has to put in some effort to learn about their condition, but also the fact that some of the regulations that exist just exist because certain professions are powerful.

LEE: Yeah, one last question while we’re still on economics.
There seems to be a conundrum about productivity and efficiency in healthcare delivery because I’ve never encountered a doctor or a nurse that wants to be able to handle even more patients than they’re doing on a daily basis. And so, you know, if productivity means simply, well, your rounds can now handle 16 patients instead of eight patients, that doesn’t seem necessarily to be a desirable thing. So how can we or should we be thinking about efficiency and productivity since obviously costs, in most of the developed world, are a huge, huge problem?

AZHAR: Yes, and when you described doubling the number of patients on the round, I imagined you buying them all roller skates so they could just whizz around [LAUGHTER] the hospital faster and faster than ever before. We can learn from what happened with the introduction of electricity. Electricity emerged at the end of the 19th century, around the same time that cars were emerging as a product, and car makers were very small and very artisanal. And in the early 1900s, some really smart car makers figured out that electricity was going to be important. And they bought into this technology by putting pendant lights in their workshops so they could “visit more patients.” Right?

LEE: Yeah, yeah.

AZHAR: They could effectively spend more hours working, and that was a productivity enhancement, and it was noticeable. But, of course, electricity fundamentally changed, by orders of magnitude, the productivity of people who made cars, starting with Henry Ford, because he was able to reorganize his factories around the electrical delivery of power and to therefore have the moving assembly line, which 10xed the productivity of that system. So when we think about how AI will affect the clinician, the nurse, the doctor, it’s much easier for us to imagine it as the pendant light that just has them working later …

LEE: Right.

AZHAR: … than it is to imagine a reconceptualization of the relationship between the clinician and the people they care for. And I’m not sure. I don’t think anybody knows what that looks like. But, you know, I do think that there will be a way that this changes, and you can see that scale out factor. And it may be, Peter, that what we end up doing is we end up saying, OK, because we have these brilliant AIs, there’s a lower level of training and cost and expense that’s required for a broader range of conditions that need treating. And that expands the market, right. That expands the market hugely. It’s what has happened in the market for taxis or ride sharing. The introduction of Uber and the GPS system …

LEE: Yup.

AZHAR: … has meant many more people now earn their living driving people around in their cars. And at least in London, you had to be reasonably highly trained to do that. So I can see a reorganization is possible. Of course, entrenched interests, the economic flow … and there are many entrenched interests, particularly in the US between the health systems and the, you know, professional bodies that might slow things down. But I think a reimagining is possible. And if I may, I’ll give you one example of that, which is, if you go to countries outside of the US where there are many more sick people per doctor, they have incentives to change the way they deliver their healthcare. And well before there was AI of this quality around, there were a few cases of health systems in India—Aravind Eye Care was one, and Narayana Hrudayalaya [now known as Narayana Health] was another.
And in the latter, they were a cardiac care unit where you couldn’t get enough heart surgeons.

LEE: Yeah, yep.

AZHAR: So specially trained nurses would operate under the supervision of a single surgeon who would supervise many in parallel. So there are ways of increasing the quality of care, reducing the cost, but it does require a systems change. And we can’t expect a single bright algorithm to do it on its own.

LEE: Yeah, really, really interesting. So now let’s get into regulation. And let me start with this question. You know, there are several startup companies I’m aware of that are pushing on, I think, a near-term future possibility that a medical AI for consumers might be allowed, say, to prescribe a medication for you, something that would normally require a doctor or a pharmacist, you know, someone certified in some way, licensed to do that. Do you think we’ll get to a point where for certain regulated activities, humans are more or less cut out of the loop?

AZHAR: Well, humans would have been in the loop because they would have provided the training data, they would have done the oversight, the quality control. But to your question in general, would we delegate an important decision entirely to a tested set of algorithms? I’m sure we will. We already do that. I delegate less important decisions, like what time I should leave for the airport, to Waze. I delegate more important decisions to the automated braking in my car. We will do this at certain levels of risk and threshold. If I come back to my example of prescribing Ventolin, it’s really unclear to me that the prescription of Ventolin, this incredibly benign bronchodilator that is only used by people who’ve been through the asthma process, needs to be prescribed by someone who’s gone through 10 years or 12 years of medical training, or why it couldn’t be prescribed by an algorithm or an AI system.

LEE: Right. Yep. Yep.

AZHAR: So, you know, I absolutely think that that will be the case and could be the case. I can’t really see what the objections are. And the real issue is where do you draw the line of where you say, “Listen, this is too important,” or “The cost is too great,” or “The side effects are too high,” and therefore this is a point at which we want to have some, you know, human taking personal responsibility, having a liability framework in place, having a sense that there is a person with legal agency who signed off on this decision. And that line I suspect will start fairly low, and what we’d expect to see would be that that would rise progressively over time.

LEE: What you just said, that scenario of your personal asthma medication, is really interesting because your personal AI might have the benefit of 50 years of your own experience with that medication. So, in a way, there is at least the data potential for, let’s say, the next prescription to be more personalized and more tailored specifically for you.

AZHAR: Yes. Well, let’s dig into this because I think this is super interesting, and we can look at how things have changed. So 15 years ago, if I had a bad asthma attack, which I might have once a year, I would have needed to go and see my general physician. In the UK, it’s very difficult to get an appointment. I would have had to see someone privately who didn’t know me at all because I’d just walked in off the street, and I would explain my situation. It would take me half a day. Productivity lost. I’ve been miserable for a couple of days with severe wheezing.
Then a few years ago the system changed, a protocol changed, and now I have a thing called a rescue pack, which includes prednisolone steroids. It includes something else I’ve just forgotten, and an antibiotic in case I get an upper respiratory tract infection, and I have an “algorithm.” It’s called a protocol. It’s printed out. It’s a flowchart. I answer various questions, and then I say, “I’m going to prescribe this to myself.” You know, UK doctors don’t prescribe prednisolone, or prednisone as you may call it in the US, at the drop of a hat, right. It’s a powerful steroid. I can self-administer, and I can now get that repeat prescription without seeing a physician a couple of times a year. And the algorithm, the “AI” … it’s obviously been done in PowerPoint, naturally, and it’s a bunch of arrows. [LAUGHS] Surely, surely, an AI system is going to be more sophisticated, more nuanced, and give me more assurance that I’m making the right decision around something like that.

LEE: Yeah. Well, at a minimum, the AI should be able to make that PowerPoint the next time. [LAUGHS]

AZHAR: Yeah, yeah. Thank god for Clippy. Yes.

LEE: So, you know, I think in our book, we had a lot of certainty about most of the things we’ve discussed here, but one chapter where I felt we really sort of ran out of ideas, frankly, was on regulation. And, you know, what we ended up doing for that chapter is … I can’t remember if it was Carey’s or Zak’s idea, but we asked GPT-4 to have a conversation, a debate with itself [LAUGHS], about regulation. And we made some minor commentary on that. And really, I think we took that approach because we just didn’t have much to offer. By the way, in our defense, I don’t think anyone else had any better ideas anyway.

AZHAR: Right.

LEE: And so now two years later, do we have better ideas about the need for regulation, the frameworks around which those regulations should be developed, and, you know, what should this look like?

AZHAR: So regulation is going to be in some cases very helpful because it provides certainty for the clinician that they’re doing the right thing, that they are still insured for what they’re doing, and it provides some degree of confidence for the patient. And we need to make sure that the claims that are made stand up to quite rigorous levels, where ideally there are RCTs [randomized controlled trials], and there are the classic set of processes you go through. You do also want to be able to experiment, and so the question is: as a regulator, how can you enable conditions for there to be experimentation? And what is experimentation? Experimentation is learning so that every element of the system can learn from this experience. So finding that space where there can be a bit of experimentation, I think, becomes very, very important. And a lot of this is about experience, so I think the first digital therapeutics have received FDA approval, which means there are now people within the FDA who understand how you go about running an approvals process for that, and what that ends up looking like—and of course what we’re very good at doing in this sort of modern hyper-connected world—is we can share that expertise, that knowledge, that experience very, very quickly. So you go from one approval a year to a hundred approvals a year to a thousand approvals a year.
So we will then actually, I suspect, need to think about what it is to approve digital therapeutics because, unlike big biological molecules, we can generate these digital therapeutics at the rate of knots [very rapidly].

LEE: Yes.

AZHAR: Every road in Hayes Valley in San Francisco, right, is churning out new startups who will want to do things like this. So then, I think about, what does it mean to get approved if indeed it gets approved? But we can also go really far with things that don’t require approval. I come back to my sleep tracking ring. So I’ve been wearing this for a few years, and when I go and see my doctor or I have my annual checkup, one of the first things that he asks is how have I been sleeping. And in fact, I even sync my sleep tracking data to their medical record system, so he’s hearing what I’m saying, but he’s actually pulling up the real data going, This patient’s lying to me again. Of course, I’m very truthful with my doctor, as we should all be. [LAUGHTER]

LEE: You know, actually, that brings up a point that consumer-facing health AI has to deal with pop science, bad science, you know, weird stuff that you hear on Reddit. And because one of the things that consumers want to know always is, you know, what’s the truth?

AZHAR: Right.

LEE: What can I rely on? And I think that somehow feels different than an AI that you actually put in the hands of, let’s say, a licensed practitioner. And so the regulatory issues seem very, very different for these two cases somehow.

AZHAR: I agree, they’re very different. And I think for a lot of areas, you will want to build AI systems that are first and foremost for the clinician, even if they have patient extensions, that idea that the clinician can still be with a patient during the week. And you’ll do that anyway because you need the data, and you also need a little bit of a liability shield to have like a sensible person who’s been trained around that. And I think that’s going to be a very important pathway for many AI medical crossovers. We’re going to go through the clinician.

LEE: Yeah.

AZHAR: But I also do recognize what you say about the, kind of, kooky quackery that exists on Reddit. Although on creatine, Reddit may yet prove to have been right. [LAUGHTER]

LEE: Yeah, that’s right. Yes, yeah, absolutely. Yeah.

AZHAR: Sometimes it’s right. And I think that it serves a really good role as a field of extreme experimentation. So if you’re somebody who makes a continuous glucose monitor, traditionally given to diabetics but now lots of people will wear them—and sports people will wear them—you probably gathered a lot of extreme tail distribution data by reading the Reddit r/biohackers …

LEE: Yes.

AZHAR: … for the last few years, where people were doing things that you would never want them to really do with the CGM [continuous glucose monitor]. And so I think we shouldn’t understate how important that petri dish can be for helping us learn what could happen next.

LEE: Oh, I think it’s absolutely going to be essential and a bigger thing in the future. So I think I just want to close here then with one last question. And I always try to be a little bit provocative with this. And so as you look ahead to what doctors and nurses and patients might be doing two years from now, five years from now, 10 years from now, do you have any kind of firm predictions?

AZHAR: I’m going to push the boat out, and I’m going to go further out than closer in.

LEE: OK.
[LAUGHS]

AZHAR: As patients, we will have many, many more touch points and interactions with our biomarkers and our health. We’ll be reading how well we feel through an array of things. And some of them we’ll be wearing directly, like sleep trackers and watches. And so we’ll have a better sense of what’s happening in our lives. It’s like the moment you go from paper bank statements that arrive every month to being able to see your account in real time.

LEE: Yes.

AZHAR: And I suspect we’ll have … we’ll still have interactions with clinicians because societies that get richer see doctors more, societies that get older see doctors more, and we’re going to be doing both of those over the coming 10 years. But there will be a sense, I think, of continuous health engagement, not in an overbearing way, but just in a sense that we know it’s there, we can check in with it, it’s likely to be data that is compiled on our behalf somewhere centrally and delivered through a user experience that reinforces agency rather than anxiety. And we’re learning how to do that slowly. I don’t think the health apps on our phones and devices have yet quite got that right. And that could help us preempt problems before they arise, and again, I use my experience for things that I’ve tracked really, really well. And I know from my data and from how I’m feeling when I’m on the verge of one of those severe asthma attacks that hits me once a year, so I can take a little preemptive measure. So I think that that will become progressively more common, and that sense that we will know our baselines. I mean, when you think about being an athlete, which is something I think about but could never ever do, [LAUGHTER] what happens is you start with your detailed baselines, and that’s what your health coach looks at every three or four months. For most of us, we have no idea of our baselines. You know, we get our blood pressure measured once a year. We will have baselines, and that will help us on an ongoing basis to better understand and be in control of our health. And then if the product designers get it right, it will be done in a way that doesn’t feel invasive, but it’ll be done in a way that feels enabling. We’ll still be engaging with clinicians augmented by AI systems more and more because they will also have gone up the stack. They won’t be spending their time on just “take two Tylenol and have a lie down” type of engagements because that will be dealt with earlier on in the system. And so we will be there in a very, very different set of relationships. And they will feel that they have different ways of looking after our health.

LEE: Azeem, it’s so comforting to hear such a wonderfully optimistic picture of the future of healthcare. And I actually agree with everything you’ve said. Let me just thank you again for joining this conversation. I think it’s been really fascinating. And I think somehow the systemic issues that you tend to just see with such clarity are going to be the most, kind of, profound drivers of change in the future. So thank you so much.

AZHAR: Well, thank you, it’s been my pleasure, Peter, thank you.

[TRANSITION MUSIC]

I always think of Azeem as a systems thinker. He’s always able to take the experiences of new technologies at an individual level and then project out to what this could mean for whole organizations and whole societies.
In our conversation, I felt that Azeem really connected some of what we learned in a previous episode—for example, from Chrissy Farr—on the evolving consumerization of healthcare to the broader workforce and economic impacts that we’ve heard about from Ethan Mollick.

Azeem’s personal story about managing his asthma was also a great example. You know, he imagines a future, as do I, where personal AI might assist and remember decades of personal experience with a condition like asthma and thereby know more than any human being could possibly know in a deeply personalized and effective way, leading to better care. Azeem’s relentless optimism about our AI future was also so heartening to hear.

Both of these conversations leave me really optimistic about the future of AI in medicine. At the same time, it is pretty sobering to realize just how much we’ll all need to change in pretty fundamental and maybe even radical ways. I think a big insight I got from these conversations is that how we interact with machines is going to have to be altered not only at the individual level, but at the company level and maybe even at the societal level.

Since my conversation with Ethan and Azeem, there have been some pretty important developments that speak directly to this. Just last week at Build, which is Microsoft’s yearly developer conference, we announced a slew of AI agent technologies. Our CEO, Satya Nadella, in fact, started his keynote by going online in a GitHub developer environment and then assigning a coding task to an AI agent, basically treating that AI as a full-fledged member of a development team. Other agents, for example, a meeting facilitator, a data analyst, a business researcher, travel agent, and more were also shown during the conference.

But pertinent to healthcare specifically, what really blew me away was the demonstration of a healthcare orchestrator agent. And the specific thing here was in Stanford’s cancer treatment center, when they are trying to decide on potentially experimental treatments for cancer patients, they convene a meeting of experts. That is typically called a tumor board. And so this AI healthcare orchestrator agent actually participated as a full-fledged member of a tumor board meeting to help bring data together, make sure that the latest medical knowledge was brought to bear, and to assist in the decision-making around a patient’s cancer treatment. It was pretty amazing.

[THEME MUSIC]

A big thank-you again to Ethan and Azeem for sharing their knowledge and understanding of the dynamics between AI and society more broadly. And to our listeners, thank you for joining us. I’m really excited for the upcoming episodes, including discussions on medical students’ experiences with AI and AI’s influence on the operation of health systems and public health departments. We hope you’ll continue to tune in. Until next time.

[MUSIC FADES]
  • How to push back on an unethical request at work

    A few years ago, a sales executive I worked with found himself in a difficult position. His company was under review for a potential buyout, and his director asked him to present a version of the company’s story that, while technically true, left out critical details. The omission would make the company look healthier than it was, protecting its valuation and the leadership team’s positions post-acquisition.

    He knew this wasn’t an outright lie, but it didn’t feel honest either. Was this just strategic messaging or something more ethically concerning? And how could he navigate this without jeopardizing his reputation or future at the company?

    A third path

    He chose a third path. Instead of outright refusal, which might have been career-limiting, he started by asking clarifying questions. What was the real outcome that the leadership team wanted? Was there a way to tell a fuller, more balanced story that acknowledged challenges while highlighting future opportunities?

    In the end, he was able to get leadership buy-in to reframe the story to focus on how the company had learned from its struggles and was taking steps to improve. It wasn’t spin. It was honest, forward-looking, and hopeful. The CEO praised the approach, and the executive maintained his integrity without derailing his career.

    The Institute of Business Ethics found in a study that one in three employees felt pressured to compromise the business’s ethical standards. Many comply out of fear—worried they’ll face retaliation, be labeled “difficult,” or lose opportunities. But there are ways to push back without risking your career.

    UNDERSTAND BEFORE OBJECTING

    When confronted with a questionable request, most people respond in one of two ways: They comply out of fear or they push back immediately, putting their job security at risk.

    There’s a better first step: Push to understand.

    Not all uncomfortable requests are unethical. Some are simply poorly communicated or misaligned with your values. 

    Clarify: Start by seeking to fully understand the request. You may find the issue is one of discomfort rather than unethical intent.

    Question: Explore the outcomes they want and whether the request achieves those goals in the best way. Asking thoughtful questions often makes leaders rethink their approach on their own.

    Redirect: If appropriate, propose a solution that meets the same business objectives without compromising integrity. For example, rather than omitting challenges, highlight how those challenges spurred innovation or improved future outcomes.

    These conversations can reveal that the person making the request is open to alternatives; they just hadn’t thought of them yet.

    UNETHICAL VERSUS ILLEGAL

    If you’ve clarified, questioned, and still feel uncomfortable, it’s important to assess whether the request is merely unethical or actually illegal. That distinction determines your next move.

    If the request is illegal, you will want to tread carefully. If you feel psychologically safe, it can be helpful to start communicating via email to keep a digital trail (although it is possible that your manager will cover their trail by refusing to engage on email). Further, if your company has an HR department, you can share the request with them and express your discomfort.

    One friend who works in compliance found himself in this exact situation. His manager asked him to manipulate data, a clear violation of regulations. He responded by email, explicitly stating why the request was illegal and citing the relevant regulatory code. He was never asked to do it again. Sometimes, simply stating the facts is the most powerful shield you have.

    However, if the request is unethical but not necessarily illegal, your next move should be a personal decision that minimizes future regret.

    REGRET MINIMIZATION FRAMEWORK

    If you’re facing this kind of dilemma, it’s already a bad situation. There’s no playbook that guarantees success or protection. Sometimes, doing everything “right” still results in backlash or career limitations. This is why I recommend applying what’s called the “regret minimization framework.”

    Ask yourself: If I look back on this 10 years from now, will I regret how I handled it?

    This is the core of the regret minimization framework, a decision-making tool made famous by Jeff Bezos. It doesn’t promise a perfect outcome. But it helps you act in a way that minimizes long-term regret, even if it leads to short-term discomfort.

    When you apply this framework, you’re not just considering whether you’ll keep your job next month. You’re asking which version of yourself—today’s self or your future self—you want to protect more. Do you want to be someone who went along to keep the peace? Or someone who held the line when it mattered?

    This doesn’t mean you have to become a whistleblower or burn bridges. It simply means choosing the actions that leave you at peace with yourself, knowing you did what you could with the power and information you had at the time.
    Source: www.fastcompany.com
  • Fantasy Author Called Out for Using AI After Leaving Prompt in Published Book: 'So Embarrassing'

    "Author Lena McDonald is blatantly using AI to mimic other popular author's writing styles"
    Reddit
    A fantasy romance author is facing backlash after readers discovered an AI-generated prompt accidentally left in the published version of her book, sparking renewed criticism of AI use in self-published fiction.

    With the rise of generative AI tools, more authors have turned to software for brainstorming, editing, or even drafting entire scenes. But when remnants of AI prompts make it into the final books, fans and fellow writers see it as both careless and unethical.

    Author Lena McDonald's AI slip-up came to light when readers noticed an editing note embedded in chapter three of her book "Darkhollow Academy: Year 2," referencing the style of another author. "I've rewritten the passage to align more with J. Bree's style, which features more tension, gritty undertones, and raw emotional subtext beneath the supernatural elements," the passage read.

    The sentence, seemingly left over from an AI prompt, appeared in the middle of a romantic scene. While the book has since been quietly updated on Amazon to remove the passage, screenshots of the gaffe continue circulating on Reddit, where fans have dubbed the incident "so embarrassing."

    Additionally, the discovery sparked swift backlash from Goodreads commenters accusing the author of deceiving fans with "AI generated slop," dropping her rating drastically. "Is this the author using AI to 'write' books? Because it seems she is. I urge people to do the research, people are posting screenshots of an AI prompt left in the text," one commenter said. "This author is a blatant thief who uses generative AI to mimic other authors' voices," another added.

    McDonald, who also publishes under the name Sienna Patterson, has not responded publicly and appears to have no active online presence, making her difficult to reach for comment.

    © 2025 Latin Times. All rights reserved. Do not reproduce without permission.
    Source: www.latintimes.com
  • What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design

    I think we, as engineers and designers, have a lot to gain by stepping outside of our worlds. That’s why in previous pieces I’ve been drawn towards architecture, newspapers, and the occasional polymath. Today, we stumble blindly into the world of philosophy. Bear with me. I think there’s something to it.
    In 1974, the American philosopher Robert M. Pirsig published a book called Zen and the Art of Motorcycle Maintenance. A flowing blend of autobiography, road trip diary, and philosophical musings, the book’s ‘chautauqua’ is an interplay between art, science, and self. Its outlook on life has stuck with me since I read it.
    The book often feels prescient, at times surreal to read given it’s now 50 years old. Pirsig’s reflections on arts vs. sciences, subjective vs. objective, and systems vs. people translate seamlessly to the digital age. There are lessons there that I think are useful when trying to navigate — and build — the web. Those lessons are what this piece is about.
    I feel obliged at this point to echo Pirsig and say that what follows should in no way be associated with the great body of factual information about Zen Buddhist practice. It’s not very factual in terms of web development, either.
    Buddha In The Machine
    Zen is written in stages. It sets a scene before making its central case. That backdrop is important, so I will mirror it here. The book opens with the start of a motorcycle road trip undertaken by Pirsig and his son. It’s a winding journey that takes them most of the way across the United States.
    Despite the trip being in part characterized as a flight from the machine, from the industrial ‘death force’, Pirsig takes great pains to emphasize that technology is not inherently bad or destructive. Treating it as such actually prevents us from finding ways in which machinery and nature can be harmonious.
    Granted, at its worst, the technological world does feel like a death force. In the book’s 1970s backdrop, it manifests as things like efficiency, profit, optimization, automation, growth — the kinds of words that, when we read them listed together, make a part of our soul want to curl up in the fetal position.
    In modern tech, those same forces apply. We might add things like engagement and tracking to them. Taken to the extreme, these forces contribute to the web feeling like a deeply inhuman place. Something cold, calculating, and relentless, yet without a fire in its belly. Impersonal, mechanical, inhuman.
    Faced with these forces, the impulse is often to recoil. To shut our laptops and wander into the woods. However, there is a big difference between clearing one’s head and burying it in the sand. Pirsig argues that “Flight from and hatred of technology is self-defeating.” To throw our hands up and step away from tech is to concede to the power of its more sinister forces.
    “The Buddha, the Godhead, resides quite as comfortably in the circuits of a digital computer or the gears of a cycle transmission as he does at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha — which is to demean oneself.”— Robert M. Pirsig

    Before we can concern ourselves with questions about what we might do, we must try our best to marshal how we might be. We take our heads and hearts with us wherever we go. If we characterize ourselves as powerless pawns, then that is what we will be.

    Where design and development are concerned, that means residing in the technology without losing our sense of self — or power. Technology is only as good or evil, as useful or as futile, as the people shaping it. Be it the internet or artificial intelligence, to direct blame or ire at the technology itself is to absolve ourselves of the responsibility to use it better. It is better not to demean oneself, I think.
    So, with the Godhead in mind, to business.
    Classical And Romantic
    A core concern of Zen and the Art of Motorcycle Maintenance is the tension between the arts and sciences. The two worlds have a long, rich history of squabbling and dysfunction. There is often mutual distrust, suspicion, and even hostility. This, again, is self-defeating. Hatred of technology is a symptom of it.
    “A classical understanding sees the world primarily as the underlying form itself. A romantic understanding sees it primarily in terms of immediate appearance.”— Robert M. Pirsig

    If we were to characterize the two as bickering siblings, familiar adjectives might start to appear:

    Classical     Romantic
    Dull          Frivolous
    Awkward       Irrational
    Ugly          Erratic
    Mechanical    Untrustworthy
    Cold          Fleeting

    Anyone in the world of web design and development will have come up against these kinds of standoffs. Tensions arise between testing and intuition, best practices and innovation, structure and fluidity. Is design about following rules or breaking them?
    Treating such questions as binary is a fallacy. In doing so, we place ourselves in adversarial positions, whatever we consider ourselves to be. The best work comes from these worlds working together — from recognising they are bound.
    Steve Jobs was a famous advocate of this.
    “Technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing.”— Steve Jobs

    Whatever you may feel about Jobs himself, I think this sentiment is watertight. No one field holds all the keys. Leonardo da Vinci was a shining example of doing away with this needless siloing of worlds. He was a student of light, anatomy, art, architecture, everything and anything that interested him. And they complemented each other. Excellence is a question of harmony.
    Is a motorcycle a romantic or classical artifact? Is it a machine or a symbol? A series of parts or a whole? It’s all these things and more. To say otherwise does a disservice to the motorcycle and deprives us of its full beauty.

    Just by reframing the relationship in this way, the kinds of adjectives that come to mind naturally shift toward more harmonious territory.

    Classical     Romantic
    Organized     Vibrant
    Scalable      Evocative
    Reliable      Playful
    Efficient     Fun
    Replicable    Expressive

    And, of course, when we try thinking this way, the distinction itself starts feeling fuzzier. There is so much that they share.
    Pirsig posits that the division between the subjective and objective is one of the great missteps of the Greeks, one that has been embraced wholeheartedly by the West in the millennia since. That doesn’t have to be the lens, though. Perhaps monism, not dualism, is the way.
    In a sense, technology marks the ultimate interplay between the arts and the sciences, the classical and the romantic. It is the human condition brought to you with ones and zeros. To separate those parts of it is to tear apart the thing itself.

    The same is true of the web. Is it romantic or classical? Art or science? Structured or anarchic? It is all those things and more. Engineering at its best is where all these apparent contradictions meet and become one.
    What is this place? Well, that brings us to a core concept of Pirsig’s book: Quality.
    Quality
    The central concern of Zen and the Art of Motorcycle Maintenance is the ‘Metaphysics of Quality’. Pirsig argues that ‘Quality’ is where subjective and objective experience meet. Quality is at the knife edge of experience.
    “Quality is the continuing stimulus which our environment puts upon us to create the world in which we live. All of it. Every last bit of it.”— Robert M. Pirsig

    Pirsig’s writings overlap a lot with Taoism and Eastern philosophy, to the extent that he likens Quality to the Tao. Quality is similarly undefinable, with Pirsig himself making a point of not defining it. Like the Tao, Plato’s Form of the Good, or the ‘good taste’ to which GitHub cofounder Scott Chacon recently attributed the platform’s success, it simply is.

    Despite its nebulous nature, Quality is something we recognise when we see it. Any given problem or question has an infinite number of potential solutions, but we are drawn to the best ones as water flows toward the sea. When in a hostile environment, we withdraw from it, responding to a lack of Quality around us.
    We are drawn to Quality, to the point at which subjective and objective, romantic and classical, meet. There is no map, there isn’t a bullet point list of instructions for finding it, but we know it when we’re there.
    A Quality Web
    So, what does all this look like in a web context? How can we recognize and pursue Quality for its own sake and resist the forces that pull us away from it?
    There are a lot of ways in which the web is not what we’d call a Quality environment. When we use social media sites with algorithms designed around provocation rather than communication, when we’re assailed with ads to such an extent that content feels (and often is) secondary, and when AI-generated slop replaces artisanal craft, something feels off. We feel the absence of Quality.
    Here are a few habits that I think work in the service of more Quality on the web.
    Seek To Understand How Things Work
    I’m more guilty than anyone of diving into projects without taking time to step back and assess what I’m actually dealing with. As you can probably guess from the title, a decent amount of time in Zen and the Art of Motorcycle Maintenance is spent with the author as he tinkers with his motorcycle. Keeping it tuned up and in good repair makes it work better, of course, but the practice has deeper, more understated value, too. It lends itself to understanding.
    To maintain a motorcycle, one must have some idea of how it works. To take an engine apart and put it back together, one must know what each piece does and how it connects. For Pirsig, this process becomes almost meditative, offering perspective and clarity. The same is true of code. Rushing to the quick fix, be it due to deadlines or lethargy, will, at best, lead to a shoddy result and, in all likelihood, make things worse.
    “Black boxes” are as much a choice not to learn as they are something innately mysterious or unknowable. One of the reasons the web feels so ominous at times is that we don’t know how it works. Why am I being recommended this? Why are ads about ivory backscratchers following me everywhere? The inner workings of web tracking or AI models may not always be available, but just about any concept can be understood in principle.
    So, in concrete terms:

    Read the documentation, for the love of god. Sometimes we don’t understand how things work because the manual’s bad; more often, it’s because we haven’t looked at it.
    Follow pipelines from their start to their finish. How does data get from point A to point Z? What functions does it pass through, and how do they work? (See the sketch just after this list.)
    Do health work. Changing the oil in a motorcycle and bumping project dependencies amount to the same thing: a caring and long-term outlook. Shiny new gizmos are cool, but old ones that still run like a dream are beautiful.
    Always be studying. We are all works in progress, and clinging on to the way things were won’t make the brave new world go away. Be open to things you don’t know, and try not to treat those areas with suspicion.
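
    To make the second habit concrete, here is a minimal TypeScript sketch of following a pipeline from its start to its finish. Everything in it is hypothetical: the endpoint, the types, and the field names are stand-ins for whatever your own project fetches. The point is being able to say what each stage does and what shape it hands to the next.

        // A hypothetical three-stage pipeline, traced end to end.
        // Stage 1: where the data enters the system.
        type RawPost = { id: string; title: string; published_at: string };

        async function fetchPosts(url: string): Promise<RawPost[]> {
          const response = await fetch(url);
          if (!response.ok) {
            throw new Error(`Fetch failed with status ${response.status}`);
          }
          return response.json();
        }

        // Stage 2: the shape each function hands to the next.
        type Post = { id: string; title: string; publishedAt: Date };

        function normalize(raw: RawPost): Post {
          return {
            id: raw.id,
            title: raw.title.trim(),
            publishedAt: new Date(raw.published_at),
          };
        }

        // Stage 3: where the data finally surfaces to the user.
        function render(posts: Post[]): string {
          return posts
            .map((post) => `${post.publishedAt.toISOString().slice(0, 10)}: ${post.title}`)
            .join("\n");
        }

        // Point A to point Z: network -> normalization -> screen.
        async function main(): Promise<void> {
          const raw = await fetchPosts("https://example.com/api/posts"); // hypothetical endpoint
          console.log(render(raw.map(normalize)));
        }

        main().catch(console.error);

    Nothing about this sketch is clever, and that is the point: once every stage has a name and a type, the pipeline stops being a black box and becomes something you can reason about and repair.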

    Bound up with this is nurturing a love for what might easily be mischaracterized as the ‘boring’ bits. Motorcycles are for road trips, and code powers products and services, but understanding how they work and tending to their inner workings will bring greater benefits in the long run.
    Reframe The Questions
    Much of the time, our work is understandably organized in terms of goals. OKRs, metrics, milestones, and the like help keep things organized and stuff happening. We shouldn’t get too hung up on them, though. Looking at the things we do in terms of Quality helps us reframe the process.
    The highest Quality solution isn’t always the same as the solution that performed best in A/B tests. The Dark Side of the Moon doesn’t exist because of focus groups. The test screenings for Se7en were dreadful. Reducing any given task to a single metric — or even a handful of metrics — hamstrings the entire process.
    Rory Sutherland suggests much the same thing in Are We Too Impatient to Be Intelligent? when he talks about looking at things as open-ended questions rather than reducing them to binary metrics to be optimized. Instead of fixating on making trains faster, wouldn’t it be more useful to ask, How do we improve their Quality?
    Challenge metrics. Good ones — which is to say, Quality ones — can handle the scrutiny. The bad ones deserve to crumble. Either way, you’re doing the world a service. With any given action you take on a website — from button design to database choices — ask yourself, Does this improve the Quality of what I’m working on? Not the bottom line. Not the conversion rate. Not egos. The Quality. Quality pulls us away from dark patterns and towards the delightful.
    The will to Quality is itself a paradigm shift. Aspiring to Quality removes a lot of noise from what is often a deafening environment. It may make things that once seemed big appear small.
    Seek To Wed Art With Science (And Whatever Else Fits The Bill)
    None of the above is to say that rules, best practices, conventions, and the like don’t have their place or are antithetical to Quality. They aren’t. To think otherwise is to slip into the kind of dualities Pirsig rails against in Zen.
    In a lot of ways, the main underlying theme in my What X Can Teach Us About Web Design pieces over the years has been how connected seemingly disparate worlds are. Yes, Vitruvius’s 1st-century tenets about architecture are useful to web design. Yes, newspapers can teach us much about grid systems and organising content. And yes, a piece of philosophical fiction from the 1970s holds many lessons about how to meet the challenges of artificial intelligence.
    Do not close your work off from atypical companions. Stuck on a highly technical problem? Perhaps a piece of children’s literature will help you to make the complicated simple. Designing a new homepage for your website? Look at some architecture.
    The best outcomes are harmonies of seemingly disparate worlds. Cling to nothing and throw nothing away.
    Make Time For Doing Nothing
    Here’s the rub. Just as Quality itself cannot be defined, the way to attain it is also not reducible to a neat bullet point list. Neither waterfall, agile, nor any other management framework holds the keys.
    If we are serious about putting Buddha in the machine, then we must allow ourselves time and space to not do things. Distancing ourselves from the myriad distractions of modern life puts us in states where the drift toward Quality is almost inevitable. In the absence of distracting forces, that’s where we head.

    Get away from the screen. We all have those moments where the solution to a problem appears as if out of nowhere. We may be on a walk or doing chores, then pop!
    Work on side projects. I’m not naive. I know some work environments are hostile to anything that doesn’t look like relentless delivery. Pet projects are ideal spaces for you to breathe. They’re yours, and you don’t have to justify them to anyone.

    As I discuss in more detail in “An Ode to Side Project Time,” there is immense good in non-doing, in letting the water clear. There is so much urgency, so much of the time. Stepping away from that is vital not just for well-being; it actually leads to better-quality work, too.
    From time to time, let go of your sense of urgency.
    Spirit Of Play
    Despite appearances, the web remains a deeply human experiment. The very best and very worst of our souls spill out into this place. It only makes sense, therefore, to think of the web — and how we shape it — in spiritual terms. We can’t leave those questions at the door.
    Zen and the Art of Motorcycle Maintenance has a lot to offer the modern web. It’s not a manifesto or a way of life, but it articulates an outlook on technology, art, and the self that many of us recognise on a deep, fundamental level. For anyone even vaguely intrigued by what’s been written here, I suggest reading the book. It’s much better than this article.
    Be inspired. So much of the web is beautiful. The highest-rated Awwwards profiles are just a fraction of the amazing things being made every day. Allow yourself to be delighted. Aspire to be delightful. Find things you care about and make them the highest form of themselves you can. And always do so in a spirit of play.
    We can carry those sentiments to the web. Do away with artificial divides between arts and science and bring out the best in both. Nurture a taste for Quality and let it guide the things you design and engineer. Allow yourself space for the water to clear in defiance of the myriad forces that would have you do otherwise.
    The Buddha, the Godhead, resides quite as comfortably in a social media feed or the inner machinations of cloud computing as at the top of a mountain or in the petals of a flower. To think otherwise is to demean the Buddha, which is to demean oneself.
    Other Resources

    Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
    The Beauty of Everyday Things by Soetsu Yanagi
    Tao Te Ching
    The Creative Act by Rick Rubin
    “Robert Pirsig & His Metaphysics of Quality” by Anthony McWatt
    “Dark Patterns in UX: How to Identify and Avoid Unethical Design Practices” by Daria Zaytseva

    Further Reading on Smashing Magazine

    “Three Approaches To Amplify Your Design Projects,” Olivia De Alba
    “AI’s Transformative Impact On Web Design: Supercharging Productivity Across The Industry,” Paul Boag
    “How A Bottom-Up Design Approach Enhances Site Accessibility,” Eleanor Hecks
    “How Accessibility Standards Can Empower Better Chart Visual Design,” Kent Eisenhuth
  • Meet the Author: Robert Bird

    SSRN

    Robert Bird is a professor of business law and the Eversource Energy Chair in Business Ethics at the University of Connecticut. He conducts research in legal strategy, business ethics, compliance, employment law, and related fields. Bird has authored over eighty academic publications, including articles in the American Business Law Journal, Journal of Law and Economics, Law and Society Review, Boston University Law Review, Boston College Law Review, and the Harvard Journal of Law and Public Policy. Robert has received sixteen research-related awards – including the Academy of Legal Studies in Business best international paper award, distinguished proceedings award, and the Holmes-Cardozo best overall conference paper award – and various teaching-related awards, such as the outstanding article of the year award two years in a row from the Journal of Legal Studies Education, and the student-selected Alpha Kappa Psi Teacher of the Year award. Robert is also a manuscript reviewer for several journals and is a past president of the Academy of Legal Studies in Business, the international academic organization for professors of law in schools of business. He spoke with SSRN about the importance of legal education within business schools and how legal knowledge provides value to organizations, both for their bottom line and for creating businesses for good.
    Q: Your main research and teaching focus has surrounded the intersection of business and law. What is it about the relationship between these two that led you to explore it further?
    A: As an undergraduate management information systems major at Fairfield University, I became interested in how legal issues impacted the development of new technologies. I remember writing a paper in the early 1990s on the legal and ethical implications of expert systems, a predecessor to today’s artificial intelligence. When I began my dual JD/MBA degree at Boston University, I found both fields fascinating. Business rewards people who have effective problem-solving skills, strong communication skills, and the ability to lead. Legal studies emphasize thinking on your feet, clear and persuasive legal writing, and an enduring sense of justice and fairness. Law school helped me to connect disparate ideas in a novel and creative way. Business school helped me solve complex problems and connect thought to action. The disciplines, at least to me, seemed to naturally work together, and I found it irresistible to explore more deeply.
    I do not teach in a law school. I’m a lawyer in a business school. For years I felt like a cat in a dog show. The disciplines think in a fundamentally different manner. Traditionally, business faculty research as social scientists, while law faculty emphasize the humanist side of knowledge. Business faculty are skilled at statistical analysis and modeling, while law faculty are adept with abstract ideas and interpretation of textual knowledge. This has its challenges and its opportunities. In a law school, I doubt any professor questions the importance of law in business. In a business school, I initially had to address fundamental questions: “Why do our business students need to know the law? Can’t they just call a lawyer?”
    Having to respond to these kinds of questions has made me a better teacher and scholar. Initially, my standard answer was that “lawyers are important because they keep our students out of trouble, and they prevent companies from facing regulatory investigations that result in costly penalties.” No less important, however, is that lawyers can’t be present for every decision a manager makes, and some bad business decisions result in irreversible liability. Businesspeople need to know how the law works in order to minimize their legal risks. Today, I very much value my business school affiliation.
    Legal knowledge can also be used as a source of strategic value for the company. If business people see law as a domain that is just as value-creating as finance and marketing and operations, they will take the law more seriously. They will increase their respect for the rule of law. As a result, you’ll have a company that is inherently primed to act with integrity, follow ethical values, and be socially responsible. That is valuable because what is unethical today is often illegal tomorrow.
    I think studying business is interesting because company operations are intertwined with some of the most important issues in society. Companies are making money, but they’re also impacting the societies in which they sell products. Law focuses on justice, equity and fairness, and I was interested in how companies can not only add value to their bottom line but also help build a better world: business that respects human rights, business that aspires to ethical and sustainable goals. Business schools that know the importance of legal knowledge will give their students a legal education, and those students are more likely to graduate as moral agents for change.
    Q: In your new book, “Legal Knowledge in Organizations,”you discuss how legal knowledge can greatly benefit firms by providing them with a distinct competitive advantage. In doing so, you lay out five pathways that firms use to pursue legal strategies. So now going back a ways, in your paper “Pathways of Legal Strategy,” which was included in the Stanford Journal of Law, Business, and Finance in 2008 and was later posted on SSRN, you talk similarly about five pathways. How have you developed the concept of these pathways over time, leading now to your recent book?
    A: I have been interested in how legal knowledge can be a source of value since 2001, when I just started full-time teaching. I still have an inexplicably pink sheet of paper upon which I scribbled the date and title “Ideas for Managerial Law for Strategic Advantage.” Because legal knowledge is so important in the organization, I was interested in how legal knowledge is used by companies, how it can be a source of value for companies, and how legal experts within organizations can deploy legal knowledge. So much of what is written about law in business relates to litigation and conflict. I wanted to learn more about how legal and business experts can work together successfully. Those ideas scribbled on that sheet of paper later became the foundation for my recently published book.
    However, twenty years ago there was limited research on how to use legal knowledge as a source of value for organizations. I looked at a number of companies and how they behaved, and I noticed that there were five different pathways – or patterns – that companies seemed to follow. There’s an avoidance pathway, where companies ignore legal rules and circumvent enforcement. A firm following the conformance pathway perceives law as little more than a box to be checked, after which you move on to the more important aspects of business. In the prevention pathway, firms take business steps to head off legal problems, such as implementing policies that keep legal liability from arising in the first place. This is where most companies believe best practice lies, supported by their legal and compliance experts.
    However, there are two additional pathways: the value pathway perceives law as a source of competitive advantage and shows how legal knowledge can help you open up new markets, manage legal risk more efficiently and have more resources than your competitors do. Then finally, the transformative pathway uses legal knowledge to fundamentally change how the organization works. That means building a culture of integrity in the organization, enduring respect for legal rules, and supporting a close partnership between legal experts and businesspeople that generates a long-term competitive advantage that rivals cannot easily match. These pathways are explored in more detail in my book.
    The book also highlights how legal knowledge helps managers better understand and manage legal risk in a dynamic fashion. This can result in a first mover advantage in a new market. Companies can use the law to capture value in a way their rivals haven’t, and sustain that advantage, because they’re more versed in how the laws work than their competitors. I have applied these pathways of legal strategy to business challenges ranging from whistleblower laws to cannabis regulation.
    There’s a rich volume of information in this book. I also focus on legal risk management, applying the acronym VUCA, which stands for volatility, uncertainty, complexity and ambiguity. Each of those four dimensions presents a distinct risk but also a distinct opportunity, and attending to them can help companies assess risk, effectively avoid legal liability, and generate value through a well-coordinated response. The VUCA method perceives legal risk in a novel way, enabling firms to manage legal risk better than their rivals.
    Q: Of the five pathways you just discussed, the fifth, transformation, can bring significant benefits, but it is one that you’ve said few companies can successfully achieve, as it requires the company to rethink the way the entire organization works. What are some of the barriers that might prevent well-meaning companies from following the transformation pathway?
    A: The transformation pathway requires a fundamental change in the culture of the organization that fully embraces the value of legal knowledge as a strategic asset. However, there are two primary barriers that prevent this from happening. One barrier is that lawyers will sometimes be too risk averse and will focus on the technical nature of law rather than integrating their decisions into the strategy of the firm. The other barrier is that managers do not receive sufficient legal education to appreciate the importance of law or productively communicate with their legal team. If a manager does not know what the law is or how the law works, the manager can’t ask questions of their legal counsel such as, “Can you build me a legal strategy? Can you work with me as a strategic partner?”
    One of my key missions is to highlight the critical importance of legal education in business schools. At schools like the University of Connecticut, we are committed to that legal education. Every student that earns an undergraduate degree or an MBA receives at least one course in business law and ethics. These students understand what law is, how law works, and why it’s important to companies.
    Some business schools don’t require legal knowledge to get an MBA. Their students graduate with their business degree, and even though they have this elite pedigree, they don’t understand the law. They have not learned how to read a contract, legally hire and fire an employee, avoid insider trading, deal with regulators, negotiate with counterparties, protect the environment, or handle the variety of other legal issues that companies face every day. No business school should bestow a business degree on a student whose entire legal knowledge comes from watching reruns of Law & Order.
    Law is a critical part of business education. If students don’t receive a legal education, they’re not going to think of their lawyers as anything more than litigators. Lawyers can be so much more valuable than that: they can be strategic partners, they can be thought leaders, and they can help change the culture of the organization to one that’s committed to integrity, which is not only good for society but also improves the bottom line. Legal knowledge is the last great untapped source of competitive advantage in organizations. My recently published book helps unlock that value for anyone who wants to read it.
    Q: What aspects of the book do you think are especially timely now?
    A: Right now, regulations are more complex, more comprehensive, and more punitive than at any other time in business history. Changes in presidential administrations, and radical shifts in how legal rules are enforced, do not create a steady state for companies. All of that creates turbulence for firms and increases their cost of operations. Companies can’t take efficient risks, and they can’t optimally plan for the future. Managers need legal knowledge now more than ever in order to handle legal standards that are in a state of almost constant change.
    In addition, law is critical for the global economy. Today, respect for and adherence to the rule of law is being challenged in a way that it hasn’t been in decades. Companies need the rule of law to survive. Unwise firms see the rule of law as just another burden, another box that needs to be checked. In fact, legal mandates establish the rules of global markets. Legal rules provide companies with certainty about their regulatory obligations, especially when they’re well written. Legal knowledge also helps companies understand how to manage their workforce and how to protect the environment.
    Q: You’ve said in previous discussions, regarding why rules and regulations are so complex, that “words are finite and imprecise tools that are trying to govern and account for an infinite number of situations.” How would you suggest laws be structured in order to be succinct while still managing to capture an array of scenarios and account for possible loopholes? Where is that balance?
    A: Legal regulation needs to be as simple as it needs to be, and no simpler. What does that mean? It means there are several ways to make laws functional and effective. First, legislators need to be careful to draft rules that do not have deliberate opacity. The more specificity you can provide, the more guidance you give firms. That said, if there is so much specificity that firms lack the flexibility to respond to mandates, then the law becomes convoluted. Complex laws aren’t necessarily bad. Sometimes laws have to be complex to meet their goals. Convoluted laws, however, are unnecessarily complex, and that’s the kind of law that drafters need to avoid.
    This may sound counterintuitive, but Americans enjoy significant freedom where there is strong, consistent, and well-written regulation. For example, almost every city and town in the U.S. has traffic lights. Laws that require people to obey traffic lights restrict freedom because they stop you from getting where you need to go so that others can cross the road. But if everyone simply ignored traffic lights, there would be more traffic jams, accidents, and even deaths. Getting from one place to another would be much harder. Delivering goods and services would be more difficult. So, while traffic lights restrict freedom on one level, they actually increase the freedom of people overall to get where they need to go safely and quickly.
    The same goes for markets. Without strong and well-written regulation, you create chaos. Law is an accelerant for commercial transactions. Laws help prevent corruption in markets. Laws enable companies to make and enforce contracts. Laws protect intellectual property rights. Laws enable free and fair global trade. Laws keep the peace so that business can flourish. You want strong business; you need strong legal rules. You want an efficient market; you need efficient regulation. Law and business go hand-in-hand to make a functioning society and global market thrive.
    Q: You have many papers on SSRN, which have been frequently downloaded over the past twenty-plus years. Are there any in particular that you’d like to highlight?
    A: I am currently studying the harmful impact that corporate tax avoidance has on society, and how tax avoidance can be more effectively prevented. I have an article talking about the moral economy against tax avoidance. A moral economy in this context is a network of beliefs that society has about certain economic practices. These beliefs arise from collectively held notions of fairness and equitable opportunity. When a wealthy taxpayer uses aggressive tax avoidance to squeeze through loopholes of legal rules and avoid paying taxes, that hurts everyone in society. A moral economy against tax avoidance would empower individuals in society to condemn the practice, thereby discouraging all but the most aggressive avoiders from circumventing their obligations to contribute to the public good.
    I have also co-authored an article on an organization-centered approach to whistleblowing law. Most scholars study whistleblowing law from a legal perspective and how it applies to the organization. This article focuses on how organizations can engage in self-regulation in order to better manage whistleblowing risks. Whistleblowers may be perceived as just another cost of doing business, but whistleblowing laws can be a source of competitive advantage. Applying the five pathways of legal strategy described earlier, this article shows how companies can turn employees who are potential whistleblowers into valuable allies who create value for the organization.
    Q: In addition to business law, you also have an interest in business ethics. What are the essential principles that companies must know in order to be ethical?
    A: There are four principles of values-driven management that every organization should fully embrace. These four principles serve as four legs of a platform of responsible business practice. The first principle is business ethics, the internal values of the firm. Business ethics focuses on the individual decision-making in the organization that shapes how the organization functions on a day-to-day basis. Strong ethical principles can be the basis of a culture of integrity. By a culture of integrity, I mean ethical values that are so strong that employees comply with those values because they believe in them and not because they have to be asked to do so. The absence of ethical principles leaves companies morally adrift and prone to mistakes that can hurt the company’s reputation or trigger legal liability.
    The second principle is corporate social responsibility, which focuses on a company’s obligation to its stakeholders. These include shareholders, employees, neighbors, community, society, regulators, suppliers, creditors, the environment, and others. The third principle is sustainability. Sustainability focuses on management of collective resources over time. What responsibility do organizations have not just today, but also next year, 20 years, and 50 years from now? How do we sustain an environment that will be preserved for our grandchildren and our great grandchildren?
    The fourth principle is business and human rights. This focuses on inalienable rights that all persons have regardless of wealth or nationality. All people have a right to life, a right to education, a right to a fair wage, and a right to be able to raise a family in a safe environment, free from war and conflict. These rights are so strong they override the economic interests of corporations. Human rights are the vanguard of these values-driven principles.
    Finally, my colleagues and I have developed a new program at UConn, a Master’s in Social Responsibility & Impact in Business. The program trains students not only in business principles, but also in how companies can be a source of good for society. They can also be agents for cultural change, promoting sustainability and human rights, and advancing goals of business ethics. We’re training both change makers and change accelerators in organizations. We can show them not only that acting responsibly is good for society, but also that it’s good for business in the long term. An ethical company is a profitable company. A sustainable company is a profitable company. That’s something that we are showing our students now.
    Q: How do you think SSRN fits into the broader research and scholarship landscape?
    A: I joined SSRN over 20 years ago, and it has been my go-to mechanism for distributing scholarship to the wider academic community. Virtually every manuscript that I’ve drafted goes on SSRN before it gets submitted for publication. I find it to be a key vector for disseminating my research before it eventually gets published. SSRN is a living embodiment of tomorrow’s research today. Instead of waiting potentially years to be formally published, through SSRN my working paper is shared with a wide audience in a short time.
    I’m just a few quick clicks away from sharing my work with interested colleagues who will be able to easily find it. I don’t have to wait until publication. When you’re on SSRN, you’re in an ecosystem where people are looking for current knowledge, and if they find your work, they’re going to cite it before it gets to publication.
    SSRN also brings research to me. The eJournals are one of the primary ways that I learn about research that’s not within my immediate network. SSRN is also excellent for accessing international scholarship. There are scholars in other countries whom I might never hear about except on SSRN. SSRN is a gateway to world scholarship.
    You can see more work by Robert Bird on his SSRN Author page here
    #meet #author #robert #bird
  • The Preview Paradox: How Early RTX 5060 Review Restrictions Reshape GPU Coverage (and What it Means for Buyers)

    We never thought we’d utter the phrase RTX 5060 review restrictions, but here we are. From YouTube channels to review sites, independent tech media has always played a huge role in the launch cycle of a new graphics card. With early access to hardware and drivers, these outlets conduct their own thorough tests and give buyers an objective view of performance – the full picture, so to speak.
    With the launch of NVIDIA’s GeForce RTX 5060, that could all change.
    According to a report from VideoCardz, NVIDIA has switched up its preview model ahead of the card’s launch. Where it used to provide pre-release drivers to media outlets in exchange for comprehensive reviews, it now limits early access to outlets that agree to publish ‘previews’.
    Adding insult to injury, NVIDIA imposes a set of conditions that these outlets must agree to, meaning the company, rather than the outlets themselves, is in charge of what information consumers receive.

    NVIDIA ‘has apparently handpicked media who are willing to share the preview, and that itself was apparently the only way to obtain the drivers.’

    This selective approach could mean we as consumers can expect less diverse perspectives prior to launch. Tom’s Hardware explains that this means day-one impressions ‘will largely be based on NVIDIA’s first-party metrics and the few reviewers who aren’t traveling.’
    NVIDIA’s RTX 5060 Review Restrictions Limit Game Choices and Graphics Settings
    So, what are NVIDIA’s parameters for the early testing and reporting during the ‘previews’? They want to:

    Limit the games allowed for benchmarking
    Only permit the RTX 5060 to be compared to specific other graphics cards, and 
    Specify individual graphics settings

    Though we don’t have a full list of the games allowed by NVIDIA, judging from already-published previews from Tom’s Guide and Techradar, the approved titles include Cyberpunk 2077, Avowed, Marvel Rivals, Hogwarts Legacy, and Doom: The Dark Ages – all games which have been optimized for NVIDIA GPUs.
    According to Tom’s Hardware, NVIDIA won’t allow the RTX 5060 to be compared to the RTX 4060, only permitting comparisons with older cards such as the RTX 2060 Super and RTX 3060. 
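    To see why the choice of comparison baseline matters, here’s a toy calculation in Python – the FPS figures are invented for illustration only, not real benchmark results. The further back the permitted baseline sits, the more dramatic the same card’s ‘uplift’ appears.

    # Toy illustration – invented FPS numbers, not real measurements
    new_card_fps = 100  # hypothetical average FPS for the new card
    baselines = {
        "direct predecessor (comparison disallowed)": 85,
        "two generations old (comparison allowed)": 62,
        "three generations old (comparison allowed)": 48,
    }
    for card, fps in baselines.items():
        print(f"vs {card}: +{(new_card_fps / fps - 1):.0%}")
    # Against the oldest baseline the gain prints as +108%; against the
    # direct predecessor it would print as a far more modest +18%.

    Same product, very different story – which is exactly what restricting the comparison set achieves.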
    Speaking to VideoCardz, GameStar Tech explained: “What’s particularly crucial is that we weren’t able to choose which graphics cards and games we would measure and with which settings for this preview.”
    Should a card’s manufacturer really have such control over this type of content? Anyone who values independent journalism says a resounding ‘No.’ 

    Credit: HardwareLuxx
    First-Party “Tests” Can’t Always Be Trusted
    Taking control of the testing environment in this way and dictating points for comparison means NVIDIA is steering the narrative. It wants these early previews to highlight the strengths of its latest card, while keeping under wraps any areas where it may fall short or fail to provide significant improvements over the last generation.
    Cards are typically tested by playing a diverse array of game titles and at different graphical settings and resolutions, with many factors such as thermal performance, power consumption, and more taken into account to provide a balanced overview that should help consumers decide if the latest release is worth an upgrade.
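    As a rough sketch of how that aggregation works, reviewers commonly summarize a suite with the geometric mean of per-game results – which is exactly why narrowing the suite changes the verdict. The snippet below uses invented FPS figures and made-up game names (not real data) to show how a curated, vendor-friendly subset can produce a rosier average than the full suite.

    # Sketch of benchmark-suite aggregation – all numbers are invented
    from statistics import geometric_mean

    fps = {  # game: (new_card_fps, previous_gen_fps)
        "VendorShowcaseTitle": (140, 95),
        "EsportsShooter": (240, 195),
        "OpenWorldRPG": (84, 75),
        "CpuBoundSim": (60, 57),
        "RayTracedHorror": (66, 50),
    }

    def suite_uplift(games):
        # Geometric mean of per-game performance ratios, as a fractional gain
        return geometric_mean([fps[g][0] / fps[g][1] for g in games]) - 1

    curated = ["VendorShowcaseTitle", "EsportsShooter", "RayTracedHorror"]
    print(f"curated suite: +{suite_uplift(curated):.0%}")    # about +34%
    print(f"full suite:    +{suite_uplift(list(fps)):.0%}")  # about +23%

    The average a buyer sees depends heavily on who picked the games – which is the whole problem with a manufacturer-approved list.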
    NVIDIA has come under suspicion from tech outlets for shady behavior in the past. During a previous round of reviews, the manufacturer chose to seed reviewers with only the 16GB RTX 5060 Ti, not its 8GB sibling. It was thought this was done to promote and receive positive reviews for the 16GB variant, while quietly putting the 8GB variant onto store shelves.
    Overly positive early glimpses of the latest NVIDIA products could prompt consumers to purchase if they’re desperate to upgrade, but for those who want more in-depth analysis, the RTX 5060 review restrictions are stifling independent media coverage.
    Consumers Deserve Comprehensive Reviews and Competitor Comparisons
    Constraints put in place by a manufacturer mean we’re not getting a full, comprehensive review of a product’s pros and cons. The ‘preview’ of the RTX 5060’s capabilities is distorted by these constraints, meaning we’ll never see how the card really compares to competitors from rival AMD, or previous-generation cards from NVIDIA itself. Any negatives, like performance bottlenecks when playing specific titles, also won’t be initially apparent.
    Furthermore, NVIDIA’s latest move opens up a can of worms surrounding ‘access journalism.’ This is where media outlets feel they need to comply with demands from manufacturers so they can keep receiving samples for future reviews, exclusive interviews, and so on. It’s a valid and growing concern, according to a report by NotebookCheck.
    NVIDIA seems to be trying to turn independent journalism into a PR effort for its own purposes. Controlling reviews in this way has many asking the question: Why doesn’t NVIDIA simply take a more ethical approach by paying for coverage and marking it as sponsored?
    Gamers Nexus Raises Ethical Concerns Over NVIDIA Pressure
    In the NotebookCheck report, Gamers Nexus claims NVIDIA pressured them for over six months to include Multi-Frame Generation 4X (MFG4X) performance figures in their reviews, even when the graphics cards being tested didn’t support this feature. Understandably, Gamers Nexus found the request unethical and misleading for its viewers and declined to comply.
    Gamers Nexus then says that NVIDIA threatened to remove access to interviews with its engineers. Since GN isn’t paid by NVIDIA for its coverage, this is the most effective way to penalize the outlet: its unique, expert content and technical insight helps it stand out from the competition and has proven popular with subscribers.

    According to the report, ‘their continued availability was apparently made conditional on GN complying with NVIDIA’s editorial demands.’

    Stephen Burke of GN spoke about this in more detail on a recent YouTube video, likening NVIDIA’s demands to ‘extortion.’
    The alleged behavior is shocking, if true. Manufacturers behaving in this way bring the integrity of the entire review process into question and raise serious ethical concerns. Should manufacturers be using sanctions to influence how their products are covered?
    Making this the norm could mean other media outlets are afraid to stray from the approved narrative and may not publish honest analysis, which is the whole point of reviews in the first place.
    Part of the appeal of independent testing is just that: it’s independent. Some feel that makes it more credible than testing carried out by companies that have a financial stake in the matter. Whatever your views on it, there’s no denying that these controlled previews only benefit the chosen outlets and have the potential to harm the credibility and reputation of others.
    FTC and Google Would Disagree with NVIDIA’s Review Restrictions
    Not to mention the fact that controlling coverage in this way expressly goes against Google’s EEAT guidelines for publishers. The EEAT guidelines, standing for Experience, Expertise, Authoritativeness, and Trustworthiness, are designed to ensure content is helpful – but most importantly, that it can be trusted. NVIDIA’s move to influence reviews goes directly against this.
    Moreover, the FTC in the US also has strict guidelines surrounding reviews, prohibiting businesses from “providing compensation or other incentives conditioned on the writing of consumer reviews expressing a particular sentiment, either positive or negative.” The incentive doesn’t have to be monetary – and could apply to NVIDIA providing drivers only to outlets that comply with its demands.
    It’s not the first time GN has raised questions about the way NVIDIA does business. In May 2024, they posted a video about the manufacturer’s entrenched market dominance and how the ‘mere exposure effect’ could subconsciously influence consumers to buy NVIDIA products.
    Consumers May Need to Wait For Trusted, Independent Reviews
    This move by NVIDIA could mean we all take a more critical view of the first wave of reviews when a new GPU is launched. If other manufacturers follow NVIDIA’s lead, we will likely all need to wait a week – or more – for independent reviews from trusted sources, carried out without any restrictions imposed by manufacturers. It’s that or rely on previews that don’t provide a full picture.
    This ‘preview paradox’ surrounding the launch of the RTX 5060 is undoubtedly concerning. It’s something new – a dangerous shift towards a less transparent product launch. 
    Influencing independent coverage at launch raises ethical questions and places a greater onus on consumers to ensure the reporting they’re reading is unbiased and comprehensive. 
    There’s also pressure on media outlets to remain committed to providing the full, honest picture, even when faced with the risk of losing access to products or interviews in the future.
    This practice has the potential to harm publishers’ ability to operate – particularly smaller independent outlets. There’s enough evidence available for a consumer to claim an outlet is going against best practices for reviews, as laid out by Google and the US FTC, opening those outlets up to legal ramifications.
    Ultimately, consumers deserve to be able to make informed choices. This puts that right at risk.

    Paula has been a writer for over a decade, starting off in the travel industry for brands like Skyscanner and Thomas Cook. She’s written everything from a guide to visiting Lithuania’s top restaurants to how to survive a zombie apocalypse and also worked as an editor/proofreader for indie authors and publishing houses, focusing on mystery, gothic, and crime fiction.
    She made the move to tech writing in 2019 and has worked as a writer and editor for websites such as Android Authority, Android Central, XDA, Megagames, Online Tech Tips, and Xbox Advisor. These days as well as contributing articles on all-things-tech for Techreport, you’ll find her writing about mobile tech over at Digital Trends.
    She’s obsessed with gaming, PC hardware, AI, and the latest and greatest gadgets and is never far from a screen of some sort. Her attention to detail, ability to get lost in a rabbit hole of research, and obsessive need to know every fact ensures that the news stories she covers and features she writes are (hopefully) as interesting and engaging to read as they are to write.
    When she’s not working, you’ll usually find her gaming on her Xbox Series X or PS5. As well as story-driven games like The Last of Us, Firewatch, and South of Midnight she loves anything with a post-apocalyptic setting. She’s also not averse to being absolutely terrified watching the latest horror films, when she feels brave enough!


    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
  • The Best Phones for Every Budget - 2025 Update

    Choosing the right phone in 2025 involves more than just deciding between Android and iPhone. There are well-rounded options across various market segments, whether you are a budget-conscious buyer, a dedicated iPhone fan, or an Android enthusiast.
    We've structured our smartphone buying guide to reflect the most relevant categories for tech enthusiasts, with clear-cut recommendations if you're looking for top-tier performance, reliable midrange versatility, or affordable essentials.
    Additionally, upgrade cycles vary among consumers – some prefer to upgrade every year to keep up with the latest technology, while others may opt for longer intervals, prioritizing durability and value over cutting-edge features. Explore our top picks below to find the phone that best fits your needs.

    The Best Value Phones

    Google Pixel 9a | OnePlus 13R | iPhone 16e

    In numbers

    Price: $499

    Google guarantees seven years of OS upgrades and security patches for its Pixel phones. As the brain behind Android, Google's updates are as prompt as Apple's iOS patches. The Google Pixel 9a has been hailed for being almost as good as the Pixel 9 at $300 less, but in reality the difference is often just $100. In this case, it helps to know what the other differences are.
    The 6.3-inch, 1080p, 120Hz display is almost identical, offering the same 20:9 aspect ratio. The Pixel 9a's battery is slightly bigger at a typical 5,100mAh versus 4,700mAh, but the Pixel 9 offers wireless charging that's twice as fast, and the ability to charge other devices wirelessly.
    The Pixel 9a's 48MP main camera (taking 12MP photos) sounds similar to the Pixel 9's 50MP, but the pixels are significantly smaller and capture less light. For the same reason, the Pixel 9a's 13MP front camera isn't better than the Pixel 9's 10.5MP, especially as it doesn't offer auto-focus. However, the biggest difference is the ultrawide camera, where the Pixel 9 uses a 48MP sensor to capture 12MP photos, and the Pixel 9a offers a much smaller 13MP sensor.

    Both phones include the Tensor G4 processor and 256GB of storage, but the Pixel 9 has 12GB of RAM rather than just 8GB, which allows for two extra AI features: Pixel Screenshots scans your screen captures for information you may need later, and Call Notes can transcribe and summarize your phone conversations. The Pixel 9 also supports Wi-Fi 7 and 5G mmWave.
    What makes the Pixel line stand out is the editing tools offered by the Tensor SoC. Audio Magic Eraser is useful for removing background noise from videos. Magic Editor allows you to move, resize, and remove people and objects in photos.
    'Best Take' allows you to combine faces from different times into the same photo – a feature that was actually introduced by BlackBerry in 2013. Add Me does the same, but with people's entire bodies.
    Auto Frame can not only crop photos, but also expand them using AI, and Reimagine completely replaces the photo's background. All of this may sound unethical, but the era of photos being more reliable than drawings is over anyway.
    OnePlus 13R

    If you prefer raw horsepower and a bigger display over optimized software and AI photo features, then the OnePlus 13R is a good alternative. For $600 you get a Snapdragon 8 Gen 3 SoC, 12GB of RAM, and 256GB of storage.
    The 6.8-inch display features a non-standard 1264p width and a 120Hz refresh rate. The phone offers a large 6,000mAh battery and exceptional 80W wired charging (55W with the standard, included charger). Four years of OS updates and six years of security updates are long enough at this price point.
    The main camera can save 50MP photos, but those can look under-exposed compared to the default 12.5MP. The same is true for the telephoto lens with 2x optical zoom. The selfie camera takes 16MP photos. The 8MP ultrawide lens is functional, but nothing more. Other than that, the phone's main drawbacks are mediocre water resistance and lack of mmWave support.
    iPhone 16e

    In our previous phone buying guides, we'd recommend two-year-old iPhone models over the outdated iPhone SE as the cheapest option for most Apple fans. In 2025, Apple made things simpler, discontinuing both in favor of the iPhone 16e.
    The 16e is cheaper than the iPhone 16 with the same storage, but how does it compare?

    It has the same display as the iPhone 14, with a bigger notch that's attached to the top of the 6.1-inch screen. The ring/silent switch has been replaced with the programmable Action button. The iPhone 16e has the same A18 processor with 8GB of RAM and Apple Intelligence support, but with four active graphics units instead of five.

    The most visible difference is the lack of the ultrawide lens. Other than that, it doesn't offer mmWave, and the iPhone 16e supports Qi wireless charging rather than the faster MagSafe.


    Best Phones for Most People

    iPhone 16 | OnePlus 13 | More Alternatives


    The iPhone 16 offers a newer and brighter display than the iPhone 16e. The notch is smaller and integrated into the "dynamic island," which is useful for displaying key information from apps running in the background.
    As an overall package, it delivers the full iPhone experience with a solid balance of well-built hardware and polished software features, except for the 60Hz screen, which is at a disadvantage compared to competing Android handsets in the same price range and even cheaper ones.
    With the touch-sensitive Camera Control button, you can finally focus on your subject while taking a photo. The main camera can capture 48MP images but defaults to 24MP for better dynamic range and faster shutter speed. The ultrawide lens offers 0.5x optical zoom and, like the front camera, captures 12MP photos.

    If you want a larger screen and battery, you can get the 6.7-inch iPhone 16 Plus at extra cost.

    OnePlus 13

    With improvements across the board, the OnePlus 13 makes it hard to justify buying more expensive Android phones, like the Samsung Galaxy S25 Ultra and the Google Pixel 9 Pro XL. The Qualcomm Snapdragon 8 Elite processor rivals the Apple A18 in single-core performance, and beats it in multi-core.
    The OnePlus 13 has an IP68/IP69 rating, meaning it's resistant not only to immersion but also to high-temperature water jets – making it safe to use even if you work at a car wash. With a 6,000mAh battery, the OnePlus 13 provides great battery life despite the 1440p, 6.8-inch display. Unlike the OnePlus 13R, it's a dual-cell battery, so it will charge faster with the same charger.
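    The dual-cell advantage comes down to simple arithmetic: with two cells in series, the charger can push roughly twice the voltage at the same current, so total wattage doubles without exceeding the safe charge current of either cell. Here's a rough, idealized sketch – the per-cell current limit is a hypothetical figure, and real charging tapers off as the battery fills:

    # Idealized comparison – ignores charge taper, conversion losses, and heat
    NOMINAL_CELL_VOLTAGE = 3.7  # volts, typical Li-ion nominal voltage
    SAFE_CELL_CURRENT = 11.0    # amps, hypothetical per-cell charging limit

    single_cell_watts = NOMINAL_CELL_VOLTAGE * SAFE_CELL_CURRENT      # ~41 W
    dual_cell_watts = 2 * NOMINAL_CELL_VOLTAGE * SAFE_CELL_CURRENT    # ~81 W
    print(f"single cell: {single_cell_watts:.0f} W, dual cell: {dual_cell_watts:.0f} W")
    # Same current through each cell, double the voltage across the pack –
    # which is how 80W-class charging is possible without stressing either cell.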

    The main, ultrawide, and telephoto cameras all provide a 50MP resolution. The selfie camera shoots at 32MP, but its fixed focus makes it less than ideal for use with a selfie stick.
    The OnePlus 13 starts with 12GB of RAM and 256GB of storage, and for a bit more you can increase that to 16GB of RAM and 512GB of storage. Many phones are cheaper, but none of them offer similar hardware specifications.
    Samsung Galaxy S25

    At this point, the Samsung Galaxy S has remained mostly the same on the outside for several years, while competitors from OnePlus and Google have kept improving. So why do we recommend the Galaxy S25 for some? Because it's the only Android phone that's as powerful as the OnePlus 13 and as compact as the Pixel 9 with a 6.2-inch display.
    With the compact size comes a smaller 4,000mAh battery, but the smaller 1080p display somewhat makes up for that in battery life. The 50MP main camera remains, and the rest of the setup is well-rounded but basic, with 12MP front and ultrawide lenses and a 10MP telephoto camera with 3x optical zoom.
    The Galaxy S25 comes with 128GB of storage, and matching the OnePlus 13's 256GB costs extra. Samsung's advantage over OnePlus is the promise of seven years of OS and security updates.

    For those who favor larger screens, the Galaxy S25+ offers a 6.7-inch display, a bigger battery to compensate, and 45W wired charging compared to the base model's 25W. Additionally, it features ultra-wideband (UWB) support, ideal for pinpointing Bluetooth-linked items such as Galaxy SmartTags.
    The Galaxy S25+ provides 256GB of base storage, so if you were planning to get that amount anyway, the premium over the S25 is modest. On the other hand, the Galaxy S25 Ultra adds too little for too much money, especially now that the S Pen no longer supports Bluetooth functionality.
    To fold, or not to fold?

    Foldable phones, like the Samsung Galaxy Z Flip 6, might seem tempting. However, their design restricts battery and camera configurations. With a previous-generation SoC and a noticeable crease when unfolded, the Flip 6 is priced higher than the Galaxy S25+, and it's more scratch-prone and less dust-resistant.
    If you really need the unique form factor, you should wait for the reviews of the Motorola Razr, named after the legendary Razr V3 and promising a higher-quality external display that can more often fully replace the main one. Otherwise, you should probably look elsewhere.


    Best Budget Phones

    Samsung Galaxy A16 and A26


    If you're in the market for an affordable device that can handle the tasks most users demand from their phones – albeit not always as proficiently – the Samsung Galaxy A16 5G is a compelling option. Six years of Android and security updates are more than you are going to get anywhere else for this price.
    The 6.7-inch 1080p AMOLED display offers great contrast, and it also runs at 90Hz. Additionally, the phone supports NFC for contactless payments.
    The Galaxy A16 features a 13MP front-facing camera, and on the rear three cameras: a 50MP primary lens, along with a 5MP ultrawide camera and a 2MP macro sensor. The phone is available with 4GB, 6GB or 8GB of RAM, and 128GB or 256GB of storage, which can be expanded with microSD.
    Thanks to the 5,000mAh battery, the phone features good battery life, and it also supports 25W charging. On the other hand, the mono speaker is as basic as you can imagine. For a bit more, the Galaxy A26 features Gorilla Glass on the front and back, IP67 dust/water resistance, an always-on 120Hz display, and an upgraded 8MP ultrawide camera.
    Motorola Moto G Power

    If you replace your phone often, you can check out the Motorola Moto G Power. Just make sure you are getting the 2024 version, as the more expensive 2025 model has a slower CPU. The older phone won't receive OS updates beyond Android 15, but it will get security updates until 2027.
    The LCD display runs at 120Hz. The main and ultrawide cameras are similar to the Galaxy A26's, and the 16MP front camera is equivalent with slightly smaller pixels. The differentiating features are wireless charging, stereo speakers and a headphone jack. The main problem is the amount of bloatware that Lenovo installs on the phone.


    Best of the Best

    Apple iPhone 16 Pro Max


    The iPhone 16 Pro Max sports a 6.9-inch OLED display that supports a 120Hz refresh rate. The always-on display provides users with glanceable information without waking the device. The camera system includes a 48MP main sensor, a 48MP ultrawide lens with autofocus for improved macro photography, and a 12MP telephoto lens offering 5x optical zoom.
    The upgraded ultrawide sensor delivers enhanced detail and macro capabilities. Video recording is also enhanced with support for 4K at 120 frames per second, and the inclusion of four studio-quality microphones improves audio capture.
    The iPhone Pro line differentiates itself with the A18 Pro SoC and support for USB 3.1 speeds via USB-C. On paper, Apple's top-tier smartphone may appear to offer similar features to those found in mainstream products from other companies. However, thanks to iOS and its finely-tuned apps, its performance is notably superior. The hardware is also top notch and more carefully built than most, using titanium instead of aluminum.

    The iPhone 16 Pro Max is the only model that's not available with 128GB of storage. Its starting price with 256GB is steep, especially when contrasted with an iPhone 16e carrying the same 256GB.
    If resale value factors into your equation, a well-preserved iPhone Pro Max can typically still be traded in or sold for a substantial sum after two years.
    If you're inclined towards a more compact device, the regular iPhone 16 Pro will save you some money, though the Max is where it's at for the most pixels and the biggest battery.


    The Best ePaper Phone

    Bigme HiBreak Pro | Mudita Kompakt


    Until recently, if you wanted to remain reachable on a camping trip lasting several days, your main option was an outdated and limited feature phone. Now, you can also opt for a phone with an efficient monochrome e-paper display. These phones remain perfectly usable in direct sunlight and use a front light to stay readable in the dark.
    Except for the 21Hz display, the Bigme HiBreak Pro is a fully modern smartphone, with Android 14 and 5G support. The 6.1-inch, 824p display may not be ideal for watching video, but for reading it's arguably better than any other. Combined with a 4,500mAh battery, it's built to last for days between charges.
    While it's not designed for media consumption, the HiBreak Pro can still shoot color photos and videos with its 20MP rear camera and 5MP front camera. For the price, 8GB of RAM and 256GB of storage is solid. Unusually, it includes an infrared sensor, so it can double as a remote control. The main thing missing is an official IP rating.
    Mudita Kompakt

    If you don't want a full-featured smartphone, the Mudita Kompakt offers a de-Googled version of Android, with 13 apps optimized for its monochrome display. It doesn't support 5G and lacks a front-facing camera to accompany the 8MP one on the back. The smaller 4.3-inch, 480p screen helps balance out the modest 3,300mAh battery.

    Due to its custom software, the Kompakt costs nearly as much as the HiBreak Pro. It only includes 3GB of RAM and 32GB of storage but does offer a microSD card slot and a headphone jack.
    If you're buying a phone because you want just a phone, it may be your best option.


    Masthead credit: Amanz
    #best #phones #every #budget #update
    The Best Phones for Every Budget - 2025 Update
    Choosing the right phone in 2025 involves more than just deciding between Android and iPhone. There are well-rounded options across various market segments, whether you are a budget-conscious buyer, a dedicated iPhone fan, or an Android enthusiast. We've structured our smartphone buying guide to reflect the most relevant categories for tech enthusiasts, with clear-cut recommendations if you're looking for top-tier performance, reliable midrange versatility, or affordable essentials. Additionally, upgrade cycles vary among consumers – some prefer to upgrade every year to keep up with the latest technology, while others may opt for longer intervals, prioritizing durability and value over cutting-edge features. Explore our top picks below to find the phone that best fits your needs. The Best Value Phones Google Pixel 9a | OnePlus 13R | iPhone 16e In numbers Price: Google guarantees seven years of OS upgrades and security patches for its Pixel phones. As the brain behind Android, Google's updates are as prompt as Apple's iOS patches. The Google Pixel 9a has been hailed for being almost as good as the Pixel 9 at less, but in reality the difference is often just In this case, it helps to know what the other differences are. The 6.3-inch, 1080p, 120Hz display is almost identical, offering the same 20:9 aspect ratio. The Pixel 9a's battery is slightly bigger at a typical 5,100mAh versus 4,700mAh, but the Pixel 9 offers wireless charging that's twice as fast, and the ability to charge other devices wirelessly. The Pixel 9a's 48MP main camerasounds similar to the Pixel 9's 50MP, but the pixels are significantly smaller and capture less light. For the same reason, the Pixel 9a's 13MP front camera isn't better than the Pixel 9's 10.5MP, especially as it doesn't offer auto-focus. However, the biggest difference is the ultrawide camera, where the Pixel 9 uses a 48MP sensor to capture 12MP photos, and the Pixel 9a offers a much smaller 13MP sensor. Both phones include the Tensor G4 processor and 256GB of storage, but the Pixel 9 has 12GB of RAM rather than just 8GB, which allows for two extra AI features: Pixel Screenshots scans your screen captures for information you may need later, and Call Notes can transcribe and summarize your phone conversations. The Pixel 9 also supports Wi-Fi 7, and G5 mmWave. What makes the Pixel line stand out is the editing tools offered by the Tensor SoC. Audio Magic Eraser is useful for removing background noise from videos. Magic Editor allows to move, resize and remove people and objects in photos. 'Best Take' allows you to combine faces from different times into the same photo – a feature that was actually introduced by BlackBerry in 2013. Add Me does the same, but with people's entire body. Auto Frame can not only crop photos, but also expand them using AI, and Reimagine completely replaces the photo's background. All of this may sound unethical, but the era of photos being more reliable than drawings is over anyway. OnePlus 13R If you prefer raw horsepower and a bigger display over optimized software and AI photo features, then the OnePlus 13R is a good alternative. For you get a Snapdragon 8 Gen 3 SoC, 12GB of RAM and 256GB of storage. The 6.8-inch display features a non-standard width of 1264p and a 120Hz refresh rate. The phone offers a large 6,000mAh battery and exceptional 80W wired charging. Four years of OS updates and six years of security updates are long enough at this price point. 
    The Best Phones for Every Budget - 2025 Update
    www.techspot.com
Choosing the right phone in 2025 involves more than just deciding between Android and iPhone. There are well-rounded options across various market segments, whether you are a budget-conscious buyer, a dedicated iPhone fan, or an Android enthusiast. We've structured our smartphone buying guide to reflect the most relevant categories for tech enthusiasts, with clear-cut recommendations if you're looking for top-tier performance, reliable midrange versatility, or affordable essentials. Additionally, upgrade cycles vary among consumers – some prefer to upgrade every year to keep up with the latest technology, while others may opt for longer intervals, prioritizing durability and value over cutting-edge features. Explore our top picks below to find the phone that best fits your needs.

The Best Value Phones

Google Pixel 9a | OnePlus 13R | iPhone 16e

In numbers Price: $499

Google guarantees seven years of OS upgrades and security patches for its Pixel phones. As the brain behind Android, Google's updates are as prompt as Apple's iOS patches. The Google Pixel 9a has been hailed for being almost as good as the Pixel 9 at $300 less, but in reality the difference is often just $100. In this case, it helps to know what the other differences are.

The 6.3-inch, 1080p, 120Hz display is almost identical, offering the same 20:9 aspect ratio. The Pixel 9a's battery is slightly bigger at a typical 5,100mAh versus 4,700mAh, but the Pixel 9 offers wireless charging that's twice as fast, and the ability to charge other devices wirelessly.

The Pixel 9a's 48MP main camera (taking 12MP photos) sounds similar to the Pixel 9's 50MP, but the pixels are significantly smaller and capture less light. For the same reason, the Pixel 9a's 13MP front camera isn't better than the Pixel 9's 10.5MP, especially as it doesn't offer autofocus. However, the biggest difference is the ultrawide camera, where the Pixel 9 uses a 48MP sensor to capture 12MP photos, while the Pixel 9a makes do with a much smaller 13MP sensor.

Both phones include the Tensor G4 processor and 256GB of storage, but the Pixel 9 has 12GB of RAM rather than just 8GB, which allows for two extra AI features: Pixel Screenshots scans your screen captures for information you may need later, and Call Notes can transcribe and summarize your phone conversations. The Pixel 9 also supports Wi-Fi 7 and 5G mmWave.

What makes the Pixel line stand out is the editing tools offered by the Tensor SoC. Audio Magic Eraser is useful for removing background noise from videos. Magic Editor allows you to move, resize and remove people and objects in photos. Best Take allows you to combine faces from different times into the same photo – a feature that was actually introduced by BlackBerry in 2013. Add Me does the same, but with people's entire bodies. Auto Frame can not only crop photos, but also expand them using AI, and Reimagine completely replaces the photo's background. All of this may sound unethical, but the era of photos being more reliable than drawings is over anyway.

OnePlus 13R

If you prefer raw horsepower and a bigger display over optimized software and AI photo features, then the OnePlus 13R is a good alternative. For $600 you get a Snapdragon 8 Gen 3 SoC, 12GB of RAM and 256GB of storage. The 6.8-inch display features a non-standard 1264p width and a 120Hz refresh rate. The phone offers a large 6,000mAh battery and exceptional 80W wired charging (55W with the standard, included charger).
Four years of OS updates and six years of security updates are long enough at this price point. The main camera can save 50MP photos, but those can look underexposed compared to the default 12.5MP. The same is true for the telephoto lens with 2x optical zoom. The selfie camera takes 16MP photos. The 8MP ultrawide lens is functional, but nothing more. Other than that, the phone's main drawbacks are mediocre water resistance and a lack of mmWave support.

iPhone 16e

In our previous phone buying guides, we'd recommend two-year-old iPhone models over the outdated iPhone SE as the cheapest option for most Apple fans. In 2025, Apple made things simpler, discontinuing both in favor of the iPhone 16e. The 16e is $200 cheaper than the iPhone 16 with the same storage (starting at 128GB), but how does it compare?

It has the same display as the iPhone 14, with a bigger notch attached to the top of the 6.1-inch screen. The ring/silent switch has been replaced with the programmable Action button. The iPhone 16e has the same A18 processor with 8GB of RAM and Apple Intelligence support, but with four active graphics units instead of five. The most visible difference is the lack of an ultrawide lens. Beyond that, it doesn't offer mmWave, and the iPhone 16e supports Qi wireless charging rather than the faster MagSafe.

Back to top ▵

Best Phones for Most People

iPhone 16 | OnePlus 13 | More Alternatives

In numbers Price: $799

The iPhone 16 offers a newer and brighter display than the iPhone 16e. The notch is smaller and integrated into the "Dynamic Island," which is useful for displaying key information from apps running in the background. As an overall package, it delivers the full iPhone experience with a solid balance of well-built hardware and polished software features – except for the 60Hz screen, which is at a disadvantage compared to competing Android handsets in the same price range and even cheaper ones.

With the touch-sensitive Camera Control button, you can finally focus on your subject while taking a photo. The main camera can capture 48MP images but defaults to 24MP for better dynamic range and faster shutter speed. The ultrawide lens offers 0.5x optical zoom and, like the front camera, captures 12MP photos. If you want a larger screen and battery, you can get the 6.7-inch iPhone 16 Plus for an extra $100.

OnePlus 13

With improvements across the board, the OnePlus 13 makes it hard to justify buying more expensive Android phones, like the Samsung Galaxy S25 Ultra and the Google Pixel 9 Pro XL. The Qualcomm Snapdragon 8 Elite processor rivals the Apple A18 in single-core performance, and beats it in multi-core. The OnePlus 13 has an IP68/IP69 rating, meaning it's resistant not only to immersion but also to high-temperature water jets – making it safe to use even if you work at a car wash.

With a 6,000mAh battery, the OnePlus 13 provides great battery life despite the 1440p, 6.8-inch display. Unlike the OnePlus 13R, it uses a dual-cell battery, so it will charge faster with the same charger. The main, ultrawide and telephoto (with 3x zoom) cameras all provide a 50MP resolution. The selfie camera shoots at 32MP, but its fixed focus makes it less than ideal for use with a selfie stick.

The OnePlus 13 starts at $899 with 12GB of RAM and 256GB of storage, and for another $100 you can increase that to 16GB of RAM and 512GB of storage. Many phones are cheaper, but none of them offer similar hardware specifications.
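A recurring pattern in this guide is a high-megapixel sensor saving lower-resolution photos by default: 48MP saved as 24MP on the iPhone 16, 50MP saved as 12.5MP on the OnePlus 13R. That's pixel binning – combining adjacent photosites into one effective pixel to gather more light. The sketch below shows the arithmetic; the sensor dimensions are illustrative assumptions for a large main sensor and a small ultrawide-class sensor, not manufacturer figures.

```python
import math

def pixel_pitch_um(sensor_w_mm: float, sensor_h_mm: float, megapixels: float) -> float:
    """Approximate side length of one photosite in micrometers,
    assuming square pixels filling the whole sensor area."""
    area_um2 = (sensor_w_mm * 1000) * (sensor_h_mm * 1000)  # sensor area in um^2
    return math.sqrt(area_um2 / (megapixels * 1e6))

# Assumed, illustrative sensor sizes – not official specs:
main_48mp = pixel_pitch_um(9.8, 7.3, 48)   # large main-camera-class sensor
small_13mp = pixel_pitch_um(4.7, 3.5, 13)  # small ultrawide-class sensor

print(f"48MP on a large sensor: {main_48mp:.2f} um/pixel")
print(f"13MP on a small sensor: {small_13mp:.2f} um/pixel")

# 4-to-1 binning: each 2x2 group acts as one output pixel, doubling the
# pitch and quadrupling light per output pixel (hence 48MP -> 12MP defaults).
print(f"48MP binned to 12MP:    {2 * main_48mp:.2f} um/pixel")
```

Light gathered per output pixel scales with the square of the pitch, which is why a 13MP photo from a small sensor can trail a 12MP binned photo from a big one – the Pixel 9a versus Pixel 9 ultrawide situation in a nutshell.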
Samsung Galaxy S25

At this point, the Samsung Galaxy S has remained mostly the same on the outside for several years, while competitors from OnePlus and Google have kept improving. So why do we recommend the Galaxy S25 for some? Because it's the only Android phone that's as powerful as the OnePlus 13 and as compact as the Pixel 9, with a 6.2-inch display. With the compact size comes a smaller 4,000mAh battery, but the smaller, 1080p display somewhat makes up for that in battery life. The 50MP main camera (saving 12MP photos by default) remains, and the rest of the setup is well-rounded but basic, with 12MP front and ultrawide lenses, and a 10MP telephoto camera with 3x optical zoom.

The Galaxy S25 starts at $800 (although frequent discounts bring the price closer to $700) with 128GB of storage, and to match the OnePlus 13's 256GB you'll need to add $60. Samsung's advantage over OnePlus is the promise of seven years of OS and security updates.

For those who favor larger screens, the Galaxy S25+ offers a 6.7-inch display (with a higher 1440p resolution), a bigger battery to compensate, and 45W wired charging compared to the base model's 25W. Additionally, it features ultra-wideband (UWB) support, ideal for pinpointing Bluetooth-linked items such as Galaxy SmartTags. The Galaxy S25+ starts at $1,000 (though it's often available for ~$850), providing 256GB of base storage, so if you were planning to get that amount anyway, it's only $140 more expensive than the storage-matched S25. On the other hand, the Galaxy S25 Ultra adds too little for too much money, especially now that the S Pen no longer supports Bluetooth functionality.

To fold, or not to fold?

Foldable phones, like the Samsung Galaxy Z Flip 6, might seem tempting. However, their design restricts battery and camera configurations. With a previous-generation SoC and a noticeable crease when unfolded, the Flip 6's $1,100 price point (often discounted to $900) is higher than the Galaxy S25+'s, and it's more scratch-prone and less dust-resistant. If you really need the unique form factor, you should wait for the reviews of the Motorola Razr (2025), named after the legendary Razr V3 and promising a higher-quality external display that can fully replace the main one more often. Otherwise, you should probably look elsewhere.

Back to top ▵

Best Budget Phones

Samsung Galaxy A16 and A26

In numbers Price: $176 on Amazon

If you're in the market for an affordable $200 device that can handle the tasks most users demand from their phones – albeit not always as proficiently – the Samsung Galaxy A16 5G is a compelling option. Six years of Android and security updates are more than you are going to get anywhere else for this price. The 6.7-inch 1080p AMOLED display offers great contrast, and it also runs at 90Hz. Additionally, the phone supports NFC for contactless payments.

The Galaxy A16 features a 13MP front-facing camera and three cameras on the rear: a 50MP primary lens, along with a 5MP ultrawide camera and a 2MP macro sensor. The phone is available with 4GB, 6GB or 8GB of RAM, and 128GB or 256GB of storage, which can be expanded with microSD. Thanks to the 5,000mAh battery, the phone features good battery life, and it also supports 25W charging. On the other hand, the mono speaker is as basic as you can imagine.

For $100 more, the Galaxy A26 adds Gorilla Glass on the front and back, IP67 dust/water resistance, an always-on 120Hz display and an upgraded 8MP ultrawide camera.
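The pricing comparisons above are easier to follow once everything is normalized to the same storage tier. A quick sketch, using only the list prices quoted in this guide (before the discounts mentioned):

```python
# Storage-matched list prices as quoted in this guide (USD, 256GB tier).
# The base S25 needs a $60 upgrade from 128GB to reach 256GB.
phones_256gb = {
    "Samsung Galaxy S25":  800 + 60,  # $800 base + $60 storage upgrade
    "OnePlus 13":          899,       # 256GB standard
    "Samsung Galaxy S25+": 1000,      # 256GB standard
}

# Print cheapest-first, storage held constant.
for name, price in sorted(phones_256gb.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} ${price}")

# The S25+'s premium over the storage-matched S25:
premium = phones_256gb["Samsung Galaxy S25+"] - phones_256gb["Samsung Galaxy S25"]
print(f"S25+ premium: ${premium}")
```

Normalized this way, the $140 figure in the text falls out directly ($1,000 minus $860), and the OnePlus 13's positioning between the two Galaxy models becomes obvious.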
Motorola Moto G Power (2024 model)

If you replace your phone often, you can check out the Motorola Moto G Power. Just make sure you are getting the 2024 version, as the more expensive 2025 model has a slower CPU. The older phone won't receive OS updates beyond Android 15, but it will get security updates until 2027. The LCD display runs at 120Hz. The main and ultrawide cameras are similar to the Galaxy A26's, and the 16MP front camera is equivalent, with slightly smaller pixels. The differentiating features are wireless charging, stereo speakers and a headphone jack. The main problem is the amount of bloatware that Lenovo installs on the phone.

Back to top ▵

Best of the Best

Apple iPhone 16 Pro Max

In numbers Price: $1,199

The iPhone 16 Pro Max sports a 6.9-inch OLED display that supports a 120Hz refresh rate. The always-on display provides users with glanceable information without waking the device. The camera system includes a 48MP main sensor, a 48MP ultrawide lens with autofocus for improved macro photography, and a 12MP telephoto lens offering 5x optical zoom. The upgraded ultrawide sensor delivers enhanced detail and macro capabilities. Video recording is also enhanced with support for 4K at 120 frames per second, and the inclusion of four studio-quality microphones improves audio capture.

The iPhone Pro line differentiates itself with the A18 Pro SoC (with six active graphics units and double the cache), and support for USB 3.1 (or "3.2 Gen 2") speeds via USB-C. On paper, Apple's top-tier smartphone may appear to offer similar features to those found in mainstream products from other companies. However, thanks to iOS and its finely tuned apps, its performance is notably superior. The hardware is also top-notch and more carefully built than most, using titanium instead of aluminum.

The iPhone 16 Pro Max is the only model that's not available with 128GB of storage. Starting at $1,200 with 256GB, its price is steep, especially when contrasted with a $700 iPhone 16e with the same 256GB. If resale value plays any kind of role in your equation, a well-preserved iPhone Pro Max can typically be traded in or sold for around $400 – $500 after two years. If you're inclined towards a more compact device, the regular iPhone 16 Pro will save you $100, though the Max is where it's at for the most pixels and the biggest battery.

Back to top ▵

The Best ePaper Phone

Bigme HiBreak Pro | Mudita Kompakt

In numbers Price: $439

Until recently, if you wanted to remain reachable on a camping trip lasting several days, your main option was an outdated and limited feature phone. Now, you can also opt for a phone with an efficient monochrome e-paper display. These phones remain perfectly usable in direct sunlight, and use a front light to work in the dark.

Except for the 21Hz display, the Bigme HiBreak Pro is a fully modern smartphone, with Android 14 and 5G support. The 6.1-inch, 824p display may not be ideal for watching video, but for reading it's arguably better than any other. Combined with a 4,500mAh battery, it's built to last for days between charges (see the sketch below for a rough sense of how that math works out). While it's not designed for media consumption, the HiBreak Pro can still shoot color photos and videos with its 20MP rear camera and 5MP front camera. For the price, 8GB of RAM and 256GB of storage is solid. Unusually, it includes an infrared sensor, so it can double as a remote control. The main thing missing is an official IP rating.
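How do "days between charges" fall out of a 4,500mAh battery? A rough back-of-the-envelope sketch – the nominal cell voltage and average power draws below are illustrative assumptions, not measured figures for the HiBreak Pro:

```python
def days_between_charges(capacity_mah: float, avg_draw_w: float,
                         nominal_v: float = 3.85) -> float:
    """Estimate runtime in days from battery capacity and average power draw."""
    energy_wh = capacity_mah / 1000 * nominal_v  # mAh -> Wh at nominal voltage
    return energy_wh / avg_draw_w / 24           # hours of runtime -> days

# Assumed average draws: e-paper only consumes power when the screen
# updates, so light use sits far below a typical LCD/OLED phone.
print(f"e-paper, light use (~0.2W):    {days_between_charges(4500, 0.2):.1f} days")
print(f"OLED phone, mixed use (~1.5W): {days_between_charges(4500, 1.5):.1f} days")
```

At these assumed draws, the same cell lasts three to four days of light use on e-paper but well under a day on a conventional panel – which is the entire pitch of this category.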
Mudita Kompakt

If you don't want a full-featured smartphone, the Mudita Kompakt offers a de-Googled version of Android, with 13 apps optimized for its monochrome display (yes, including chess). It doesn't support 5G and lacks a front-facing camera to accompany the 8MP one on the back. The smaller 4.3-inch, 480p screen helps balance out the modest 3,300mAh battery.

Due to its custom software, the Kompakt costs nearly as much as the HiBreak Pro. It only includes 3GB of RAM and 32GB of storage, but does offer a microSD card slot and a headphone jack. If you're buying a phone because you want just a phone, it may be your best option.

Back to top ▵

Masthead credit: Amanz