
Poetry And Guarding Humanity In The AI Age
Joy Buolamwini speaks onstage during the NAACP Image Awards Dinner at Hollywood Palladium on March 14, 2024 in Los Angeles, California. (Photo by Leon Bennett/Getty Images for NAACP)

Do a Google search on the major risks of AI, and you're likely to come up with a lot of concern about deepfakes and singularity events in which we fail to control the technology in fundamental ways.

But there's already another very real threat for many people around the world: those who belong to marginalized groups face additional harm when AI feeds bias and discrimination.

Take a look at this CNBC article in which Rumman Chowdhury, Twitter's former head of machine learning ethics, talked about the similarities between traditional redlining and new algorithmic operations.

"There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans," Chowdhury points out. "Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone's race, it is implicitly picked up."

Many others have echoed these concerns and raised questions about how to address them. We saw a version of this with prior waves of technology, too: in the age of the blockchain, as everyone learned about the new technology over the last decade or so, the conversation was about the unbanked and underbanked. The difference, in that case, was that the blockchain was supposed to be the great equalizer and bring people together. It's debatable whether that ever happened in any given society, but with AI the concern is different: as people cede decision-making power to automation, machines might drive people further apart.

A Personal Journey

Few people understand the discrimination that can result from AI better than Joy Buolamwini, an early researcher on facial recognition technology. Where was she working? Buolamwini was working on her thesis at MIT's Media Lab, which marks its anniversary this year.

In any case, Buolamwini noticed that the algorithm did not recognize her darker skin, so she put on a white mask in order to get the computer vision engine to see her face.

Eventually Buolamwini started the Algorithmic Justice League, which is pretty much what it sounds like: the organization looks at the impact of AI on people and societies, and at how to keep algorithmic decisions equitable. She's written a book about the need to protect the "excoded," people who don't get correct recognition and acknowledgment from AI systems.

"You can be excoded when you are denied a loan based on algorithmic decision-making," Buolamwini explained in a press statement around her work. "You can be excoded when your resume is automatically screened out. You can be excoded when a tenant screening algorithm denies you access to housing. These examples are real. No one is immune from being excoded, and those already marginalized are at greater risk."

She also knows the unrealistic expectations that people may have about AI in general, saying this about our struggle to integrate artificial intelligence:
"(We) swap fallible human gatekeepers for machines that are also flawed, but assumed to be objective. Just like algorithms confronted with individuals who do not fit prior assumptions, the human gatekeeper stands in the way of opportunity, supposedly for the safety of those deemed credible enough or worthy to enter the building."

Even the word "bias" itself was part of early machine learning analysis, through bias and variance assessments, and we are still grappling with it as a fundamental part of AI research.

A Poem for Its Day

At the Imagination in Action event at Davos in January, Buolamwini took the stage to recite an original work called "Unstable Desire," which speaks to these risks and to protecting the value of humanity in the AI age.

"Prompted to competition," she began. "Where be the guardrails now? Threat in sight, will might make right? Hallucinations taken as prophecy, destabilized on a middling journey to outpace, to open-chase, to claim supremacy, to reign indefinitely, haste and paste control altering deletion, unstable desire remains undefeated. The fate of AI still uncompleted, responding with fear: responsible AI beware. Prophets do snare, and people still dare to believe our humanity is more than neural nets and transformations of collected muses, more than data and errata."

Noting that humans are more than "data and errata" illuminates the reductive nature of seeing people only in a digital light.

The poet continues:

"Are we not transcendent beings, more than transactional diffusions, bound in transient forms? Can this power be guided with care, augmenting the light alongside economic destitution? Temporary band-aids cannot hold the wind, when the task ahead is to transform the atmosphere of innovation - the Android dreams entice, the nightmare schemes of vice, point of code, certified, human made."

It's a strong reminder, from someone with real experience as a watchdog on AI's impact, that we need to keep thinking about all the ramifications of our AI colleagues as they evolve. There will be talk about parity between humans and machines, but there also has to be talk about parity between humans and other humans. All of this is going to be critical as we move forward.