How to use AI for good
Social media was mankind’s first run-in with AI, and we failed that test horribly, according to tech ethicist Tristan Harris, whom The Atlantic called “the closest thing Silicon Valley has to a conscience.” A recent survey found nearly half of Gen Z respondents wished social media had never been invented.
Yet, 60% still spend at least four hours daily on these platforms.
Bullying, social anxiety, addiction, polarization, and misinformation—social media has become a cocktail of disturbing discourse.
With GenAI, we have a second chance to ensure technology is used responsibly.
But this is proving difficult.
Major AI companies are now adopting collaborative approaches to address governance challenges.
Recently, OpenAI announced it would implement Anthropic’s Model Context Protocol (MCP), a standard for connecting AI models to data sources that is rapidly becoming an industry norm, with Google following suit.
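For readers curious what “connecting an AI model to a data source” looks like in practice, here is a minimal sketch of an MCP server written with the protocol’s official Python SDK. The server name, the get_forecast tool, and its canned response are illustrative assumptions for this article, not anything OpenAI, Anthropic, or Google actually ships.

    from mcp.server.fastmcp import FastMCP

    # Name of this demo server; purely illustrative.
    mcp = FastMCP("weather-demo")

    @mcp.tool()
    def get_forecast(city: str) -> str:
        """Return a canned forecast for a city, standing in for a real data source."""
        # A real connector would query a database or API the model is permitted to use.
        return f"Forecast for {city}: sunny and 72°F"

    if __name__ == "__main__":
        # Serve over stdio so an MCP-capable client (such as an LLM app) can call the tool.
        mcp.run()

An MCP-aware assistant can discover and call get_forecast the same way it would any other approved connector, which is exactly the kind of interoperability the protocol aims to standardize.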
With any new technology, there are unexpected benefits and consequences.
As Harris put it, “whatever our power is as a species, AI amplifies it to an exponential degree.”
While GenAI helps us accomplish more than ever before, dangers exist.
A seemingly safe large language model (LLM) can be manipulated by bad actors to create harmful content or be jailbroken to write malicious code.
How do we avoid these harmful use cases while benefiting from this powerful technology? Three approaches are possible, each with its own merits and drawbacks.
3 ways to benefit from AI while avoiding harm
Option #1: Government regulation
The automobile brought both convenience and tragedy.
We responded with speed limits, seatbelts, and regulations—a process spanning over a century.
Legislators worldwide are attempting similar safeguards with AI.
The European Union leads with its AI Act, which entered into force in August 2024.
Implementation is phased, with some provisions active since February 2025, banning systems that pose “unacceptable risk,” such as social scoring and the untargeted scraping of facial images to build facial recognition databases.
However, these regulations present challenges.
European tech leaders worry that punitive EU measures could trigger backlash from the Trump administration.
Meanwhile, U.S. regulation develops as a patchwork of state and federal initiatives, with states like Colorado enacting their own comprehensive AI laws.
The EU AI Act’s implementation timeline illustrates this complexity: Some bans started in February 2025, codes of practice follow nine months after entry into force, rules on general-purpose AI at the 12-month mark, while high-risk systems have 36 months to comply.
A real concern exists: Excessive regulation might simply shift development elsewhere.
Building a functional LLM costs only hundreds of millions of dollars—within reach for many countries.
While regulation has its place, the process is currently too flawed to produce good rules.
AI evolves too quickly, and the industry attracts too much investment.
Resulting regulations risk either stifling innovation or lacking meaningful impact.
So, if government regulation isn’t the panacea for AI’s dangers, what will help?
Option #2: Social discourse
Educators are struggling with GenAI and academic honesty.
Some want to block AI entirely, while others see opportunities to empower students who struggle with traditional pedagogy.
Imagine having a perpetually available tutor answering any question—but one that can also complete your assignments.
As Satya Nadella put it recently on the Dwarkesh Podcast, his new workflow is to “think with AI and work with my colleagues.” This collaborative approach to AI usage could be a model for educational settings, where AI serves as a thinking partner rather than a replacement for learning.
In homes, schools, online forums, and government, society must reckon with this technology and decide what’s acceptable.
Everyone deserves a voice in these conversations.
Unfortunately, internet discussions often devolve into trading sound bites without context or nuance.
For meaningful conversations, we must educate ourselves.
We need effective channels for public input, perhaps through grassroots movements guiding people toward safe and effective AI usage.
Option #3: Third-party evaluators
Before the 2008 financial crisis, credit rating agencies assigned AAA ratings to subprime mortgages, contributing to economic disaster.
The problem? Industry-wide self-interest.
When it comes to AI regulators, of course, we run the risk of an incestuous revolving door that does more harm than good.
That doesn’t have to be the case.
Meaningful and thoughtful research is going into AI certifications and third-party evaluators.
In the paper AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries, Peter Cihon et al. make several points.
First, because AI technology is advancing so quickly, AI certification should emphasize evergreen principles, such as ethics for AI developers.
Second, AI certification today lacks nuance for particular circumstances, geographies, or industries.
Not only is certification homogeneous, but many programs treat AI as a “monolithic technology” rather than acknowledging its diverse types, such as facial recognition, LLMs, and anomaly detection.
Finally, to see good results, customers must demand high-quality certifications.
They have to be educated about the technology and the associated ethics and safety concerns.
The path forward
The way forward requires multistakeholder, multifaceted conversations about our societal goals and how to prevent AI’s dangers.
If government becomes the default regulator, we risk an uninvestable marketplace or meaningless rubber-stamping.
Independent third-party evaluators, combined with informed social discourse, offer the best path forward.
But we must educate ourselves about this powerful technology’s dangers and realities, or we’ll repeat social media’s errors on a grander scale.
Peter Wang is chief AI and innovation officer at Anaconda.
Source:
https://www.fastcompany.com/91333607/how-to-use-ai-for-good