WWW.FASTCOMPANY.COM
An ex-OpenAI exec and futurist talks about AI in 2025 and beyond
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here. This week, I'm dedicating the newsletter to a conversation I had recently with the futurist Zack Kass about some of the risks and myths that will come with the advent of AI across business and society. Kass, who was head of go-to-market at OpenAI, is the author of the upcoming book The Next Renaissance: AI and the Expansion of Human Potential.

What types of risks do you think we might be facing from AI in the next decade?

I give it a very low percentage, but I think there is a reasonable chance that we actually build systems that are so smart that we start to devalue critical thinking and decline cognitively. This seems very unlikely, because every generation is smarter than the last, but it's worth calling out.

More likely is that, at some point, a percentage of the population will be more interested in the virtual reality than the physical one, and that percentage may grow and actually become dominant, which would obviously be catastrophic for population growth and quality of life. You can see this trend with Gen Z, the anxious generation: the attachment to the device, the addiction of the device era.

Do you think that job losses to AI and automation are more of a near-term problem that we'll have to deal with, along with the effects on the economy?

This is the thing that I would love if people spent more time talking about: The risk is not an economic one. I think in a world where we actually automate all our work, something profoundly positive will happen economically. If you can actually figure out how to automate everything, and the cost of everything declines so far that you can live freely, it's more that people may not know what their purpose is in a world where their work changes so frequently and so much.
I think that the future is incredibly optional in all sorts of interesting ways, and I really do caution that the risk in all this is simply that people will lack purpose, at least for a couple of generations. It will be our generation, and maybe the next, that bears figuring out what we do in a world where our work is just so dynamic, and maybe relatively less meaningful because the world is so much more robust.

That being said, there are also incredible new opportunities. For every job that goes away, there will probably be a new job created in some interesting new way that we just cannot imagine. And I caution people to consider how they would imagine the economy looking before the internet, or before electricity, for that matter. How could you fathom the economy in 1900 or 1800?

What about other things like the use of AI to flood the information space with misinformation and disinformation?

I don't even list it as one of my primary concerns, because misinformation is one of these things that will have an incredible counterbalance: for every article and every photo that is generated by AI, we will have a system to actually determine its validity. And we will have much more robust truth-telling in the future. This has just been true forever.

And by the way, I remember going to the grocery store with my mom and looking at magazine covers of women and my mom saying, "Oh God, Cindy Crawford is so beautiful," because for a long time they were Photoshopping photos and just not telling us. Now, of course, we all know that every photo is Photoshopped. We have this lens with which we view the world. I think, and this is what I say to publicists, we will have this return to traditional media if we do it right. We need the institutions to recapture trust; otherwise it will be very hard for people to know what to believe, because in a world where people are more interested in Reddit and Quora, this could go a little strangely.
In a world where people don't trust traditional media, and they don't, the institution has just lost so much trust.

And we didn't even really need AI for that to happen.

That's exactly right. So I think now presents an opportunity for us to find ways, and there's a lot of historical precedent. The printing press introduced all sorts of incredible ways for people to behave as charlatans, and you don't have to go back that far. We studied a bunch of people who sold early Ponzi schemes. There was an incredible amount of financial fraud in the late 19th and early 20th century, because people could just print fake securities and sell them, and there was just no way to actually validate things. And obviously, there's this incredible new way now that we can actually score things. I basically never talk about blockchain, but I do think blockchain will serve as a means to keep an official record of lots of things, a place that cannot be tampered with.

What are your thoughts about longer-term AI risks, the existential risks people like Geoff Hinton and Eliezer Yudkowsky talk about?

The existential risk has two parts. The first is: Is this machine going to unwittingly do something untoward? Are we building something that is going to do something really bad on its own? That presents the alignment problem. The real risk in all this is not that the machine wakes up one day and says, "I'm going to kill us all." The theory of the alignment problem basically says we need to make sure that it cares about its unintended consequences, because we [humans] may not fully appreciate what we're doing.

And then there's the bad actor. And this, I think, is also misunderstood, because the real concern around bad actors, in my opinion, is not high-resource bad actors. I don't spend time worrying about North Korea with AI. They already have plenty of tools at their disposal to be bad actors, and the reality is we get better at managing high-resource bad actors all the time.
The low-resource bad actor problem is a risk. In a world where we embolden anyone to do interesting things with this technology, we should create very punitive measures to police bad acting with it. We should make bad actors terrified to use AI to do bad things: financial crime, deepfakes, etc. And this is something that we could do really easily, like we did with mail theft. We could say, hey, we built a system that's really fragile, and if we let people steal mail, domestic commerce will collapse. We need to make it a felony offense.

What needs to be done to address these risks over the next five years?

We should try to figure out how to come up with international standards by which all models are measured, and companies that use models that don't meet these measures are penalized. We should just make sure that everyone honors alignment standards.

Second is explainability standards. The expectation that a model can be perfectly explainable is inherently dangerous, because there's plenty in a model that cannot be explained. But we should set standards by which tasks that require explainability meet explainability standards. For example, if you're going to use a model to write an insurance policy, it should meet an explainability standard.

And then the third thing is bad acting: We just have to make it scary for low-resource bad actors to use this stuff. The market will figure itself out. Europe, I think, is going to have some really serious economic suffocation pretty soon, because they've passed a bunch of really strange policies that I don't even know protect consumers as much as they give the policymakers a reason to celebrate. If we can get these things right, the market will behave in a way that serves us, the constituents.

Was Biden's executive order on AI constructive?

It was passed at a time when basically no one working on it knew much about what they were talking about. So it's less that it's lip service and more that it didn't actually change behavior.
So it really is one of these things like, you know, are you just doing this to appease voters?

A lot of people in Congress have the perspective that we missed the boat on social media and we sure don't want to miss it again on AI.

All progress has a cost . . . the cost of social media, the cost of the internet, is pretty great. The cost of social media on young children's minds is terrible. It is also now something that we as individuals are identifying and working through. Passing policy on these things has potentially very dangerous consequences that you cannot unwind: economic consequences, massive learning and development consequences. It's not that the government missed the boat on social media, so to speak. They just weren't even paying attention. And no one went into this thing with eyes wide open, because there was no one in Congress, if you recall, who knew anything about what the internet was. So you basically had Mark Zuckerberg going on stage in front of a bunch of people who were like, "I don't know."

I've written about California's AI bill that was vetoed by the governor. What are your thoughts on that approach?

I fully support the regulation of AI. I'm not asking for this to be the Wild West. This is the most important technology that we will build in our lifetime, maybe except for quantum. It's really scary when people celebrate policy for the sake of policy, especially when it comes at the cost of what could be truly society-improving progress. Massive amounts of progress are probably going to be found on the other side of this. And that's not a hot take, because that's what technology does for the world. People spend so much time fixated on what the government will do to solve their problems that they've forgotten that technology is basically doing all the things that have been promised to us. You know, the utopias that we build in our minds may actually come to pass.
I think they will, for what it's worth, and not because of government intervention, but because of technological progress; because what one person can do today will pale in comparison to what one person can do [tomorrow].

More AI coverage from Fast Company:

We used Google's AI to analyze 188 predictions of what's in store for tech in 2025
Andrew Ng is betting big on agentic AI
We called 1-800-ChatGPT to see if OpenAI would ruin Christmas
As Bible sales boom, so does Christian tech

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.