Oct 11, 2024 | Sigal Samuel
AI companies are trying to build god. Shouldn't they get our permission first?
AI companies are on a mission to radically change our world. They're working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all. Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he's trying to build a god, or "magic intelligence in the sky," as he puts it. OpenAI's official term for this is artificial general intelligence, or AGI. Altman says that AGI will not only "break capitalism" but also that it's "probably the greatest threat to the continued existence of humanity."

Sep 29, 2024 | Sigal Samuel, Kelsey Piper, and 1 more
California's governor has vetoed a historic AI safety bill
Photo: California Gov. Gavin Newsom speaks during a press conference with the California Highway Patrol announcing new efforts to boost public safety in the East Bay, in Oakland, California, July 11, 2024. (Stephen Lam/San Francisco Chronicle via Getty Images)
Advocates said it would be a modest law setting "clear, predictable, common-sense safety standards" for artificial intelligence. Opponents argued it was a dangerous and arrogant step that will "stifle innovation." In any event, SB 1047, California state Sen. Scott Wiener's proposal to regulate advanced AI models offered by companies doing business in the state, is now kaput, vetoed by Gov. Gavin Newsom. The proposal had garnered wide support in the legislature, passing the California State Assembly by a margin of 48 to 16 in August. Back in May, it passed the Senate by 32 to 1.

Sep 26, 2024 | Sigal Samuel
OpenAI as we knew it is dead
OpenAI, the company that brought you ChatGPT, just sold you out. Since its founding in 2015, its leaders have said their top priority is making sure artificial intelligence is developed safely and beneficially. They've touted the company's unusual corporate structure as a way of proving the purity of its motives. OpenAI was a nonprofit controlled not by its CEO or by its shareholders, but by a board with a single mission: keep humanity safe.

Sep 14, 2024 | Sigal Samuel
The new follow-up to ChatGPT is scarily good at deception
OpenAI, the company that brought you ChatGPT, is trying something different. Its newly released AI system isn't just designed to spit out quick answers to your questions; it's designed to "think" or "reason" before responding. The result is a product, officially called o1 but nicknamed Strawberry, that can solve tricky logic puzzles, ace math tests, and write code for new video games. All of which is pretty cool.

Aug 18, 2024 | Sigal Samuel
People are falling in love with (and getting addicted to) AI voices
"This is our last day together." It's something you might say to a lover as a whirlwind romance comes to an end. But could you ever imagine saying it to software?

Aug 5, 2024 | Sigal Samuel
It's practically impossible to run a big AI company ethically
Anthropic was supposed to be the good AI company. The ethical one. The safe one. It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic's founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly.

Jul 19, 2024 | Sigal Samuel
Traveling this summer? Maybe don't let the airport scan your face.
Photo: Passengers enter the departure hall through face recognition at Xiaoshan International Airport in China in 2022. (Future Publishing via Getty Images)
Here's something I'm embarrassed to admit: Even though I've been reporting on the problems with facial recognition for half a dozen years, I have allowed my face to be scanned at airports. Not once. Not twice. Many times. There are lots of reasons for that. For one thing, traveling is stressful. I feel time pressure to make it to my gate quickly and social pressure not to hold up long lines. (This alone makes it feel like I'm not truly consenting to the face scans so much as being coerced into them.) Plus, I'm always getting randomly selected for additional screenings, maybe because of my Middle Eastern background. So I get nervous about doing anything that might lead to extra delays or interrogations.

Jun 5, 2024 | Sigal Samuel
OpenAI insiders are demanding a "right to warn" the public
Employees from some of the world's leading AI companies published an unusual proposal on Tuesday, demanding that the companies grant them "a right to warn about advanced artificial intelligence." Whom do they want to warn? You. The public. Anyone who will listen.

May 22, 2024 | Sigal Samuel
The double sexism of ChatGPT's flirty "Her" voice
Photo: Scarlett Johansson attends the Clooney Foundation for Justice's 2023 Albie Awards on September 28, 2023, in New York City. (Getty Images)
If a guy told you his favorite sci-fi movie is Her, then released an AI chatbot with a voice that sounds uncannily like the voice from Her, then tweeted the single word "her" moments after the release, what would you conclude? It's reasonable to conclude that the AI's voice is heavily inspired by Her.

May 18, 2024 | Sigal Samuel
"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded
Photo: Sam Altman is the CEO of ChatGPT maker OpenAI, which has been losing its most safety-focused researchers. (Joel Saget/AFP via Getty Images)
Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents. For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

May 8, 2024 | Sigal Samuel
Some say AI will make war more humane. Israel's war in Gaza shows the opposite.
Photo: A December 2023 photo shows a Palestinian girl injured as a result of the Israeli bombing on Khan Yunis in the southern Gaza Strip. (Saher Alghorra/Middle East Images/AFP via Getty Images)
Israel has reportedly been using AI to guide its war in Gaza, and treating its decisions almost as gospel. In fact, one of the AI systems being used is literally called "The Gospel." According to a major investigation published last month by the Israeli outlet +972 Magazine, Israel has been relying on AI to decide whom to target for killing, with humans playing an alarmingly small role in the decision-making, especially in the early stages of the war. The investigation, which builds on a previous exposé by the same outlet, describes three AI systems working in concert.

Mar 21, 2024 | Sigal Samuel
Elon Musk wants to merge humans with AI. How many brains will be damaged along the way?
Of all Elon Musk's exploits (the Tesla cars, the SpaceX rockets, the Twitter takeover, the plans to colonize Mars), his secretive brain chip company Neuralink may be the most dangerous. What is Neuralink for? In the short term, it's for helping people with paralysis, people like Noland Arbaugh, a 29-year-old who demonstrated in a livestream this week that he can now move a computer cursor using just the power of his mind after becoming the first patient to receive a Neuralink implant.

Jan 18, 2024 | Adam Clark Estes
How copyright lawsuits could kill OpenAI
Photo: Police officers stand outside the New York Times headquarters in New York City. (Drew Angerer/Getty Images)
If you're old enough to remember watching the hit kids show Animaniacs, you probably remember Napster, too. The peer-to-peer file-sharing site, which made it easy to download music for free in an era before Spotify and Apple Music, took college campuses by storm in the late 1990s. This did not escape the notice of the record companies, and in 2001, a federal court ruled that Napster was liable for copyright infringement. The content producers fought back against the technology platform and won. But that was 2001, before the iPhone, before YouTube, and before generative AI. This generation's big copyright battle is pitting journalists against artificially intelligent software that has learned from and can regurgitate their reporting.

Jan 11, 2024 | Pranav Dixit
There are too many chatbots
On Wednesday, OpenAI announced an online storefront called the GPT Store that lets people share custom versions of ChatGPT. It's like an app store for chatbots, except that unlike the apps on your phone, these chatbots can be created by almost anyone with a few simple text prompts. Over the past couple of months, people have created more than 3 million chatbots thanks to the GPT creation tool OpenAI announced in November. At launch, for example, the store features a chatbot that builds websites for you, and a chatbot that searches through a massive database of academic papers. And like the developers for smartphone app stores, the creators of these new chatbots can make money based on how many people use their product. The store is only available to paying ChatGPT subscribers for now, and OpenAI says it will soon start sharing revenue with the chatbot makers.

Jan 4, 2024 | Adam Clark Estes
You thought 2023 was a big year for AI? Buckle up.
Photo: 2024 will be the biggest election year in history. (Moor Studio/Getty Images)
Every new year brings with it a gaggle of writers, analysts, and gamblers trying to tell the future. When it comes to tech news, that used to amount to some bloggers guessing what the new iPhone would look like. But in 2024, the technology most people are talking about is not a gadget, but rather an alternate future, one that Silicon Valley insiders say is inevitable. This future is powered by artificial intelligence, and lots of people are predicting that it's going to be inescapable in the months to come. That AI will be ascendant is not the only big prediction experts are making for next year. I've spent the past couple of days reading every list of predictions I can get my hands on, including this very good one from my colleagues at Future Perfect. A few big things show up on most of them: social media's continued fragmentation, Apple's mixed-reality goggles, spaceships, and of course AI. What's interesting to me is that AI also seems to link all these things together in much the same way that the rise of the internet basically connected all of the big predictions of 2004.
Nov 22, 2023 | Sigal Samuel
OpenAI's board may have been right to fire Sam Altman, and to rehire him, too
Photo: Sam Altman, the poster boy for AI, was ousted from his company OpenAI. (Andrew Caballero-Reynolds/AFP via Getty Images)
The seismic shake-up at OpenAI, involving the firing and, ultimately, the reinstatement of CEO Sam Altman, came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that eventually, stability would give way to chaos. That fault line was OpenAI's dual mission: to build AI that's smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There's an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI's mandate appears to have helped precipitate the tech industry's biggest earthquake in decades.

Sep 19, 2023 | Sigal Samuel
AI that's smarter than humans? Americans say a firm "no thank you."
Photo: Sam Altman, CEO of OpenAI, the company that made ChatGPT. For Altman, the chatbot is just a stepping stone on the way to artificial general intelligence. (SeongJoon Cho/Bloomberg via Getty Images)
Major AI companies are racing to build superintelligent AI, for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that? Americans, by and large, don't want it.

Sep 19, 2023 | Sara Morrison
Google's free AI isn't just for search anymore
Photo: Google's new Bard extensions might get more eyes on its generative AI offerings. (Leon Neal/Getty Images)
The buzz around consumer generative AI has died down since its early 2023 peak, but Google and Microsoft's battle for AI supremacy may be heating up again. Both companies are releasing updates to their AI products this week. Google's additions to Bard, its generative AI tool, are live now (but just for English speakers for the time being). They include the ability to integrate Bard into Google apps and use it across any or all of them. Microsoft is set to announce AI innovations on Thursday, though it hasn't said much more than that.

Aug 18, 2023 | Sigal Samuel
What normal Americans (not AI companies) want for AI
Five months ago, when I published a big piece laying out the case for slowing down AI, it wasn't exactly mainstream to say that we should pump the brakes on this technology. Within the tech industry, it was practically taboo. OpenAI CEO Sam Altman has argued that Americans would be foolish to slow down OpenAI's progress. "If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI" rather than authoritarian governments, he told the Atlantic. Microsoft's Brad Smith has likewise argued that we can't afford to slow down lest China race ahead on AI.

Jul 21, 2023 | Sara Morrison
Biden sure seems serious about not letting AI get out of control
Photo: President Biden is trying to make sure AI companies are being as safe and responsible as they say they are. (Fatih Aktas/Anadolu Agency via Getty Images)
In its continuing efforts to try to do something about the barely regulated, potentially world-changing generative AI wave, the Biden administration announced today that seven AI companies have committed to developing products that are "safe, secure, and trustworthy." Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI are the companies making this voluntary commitment, which doesn't come with any government monitoring or enforcement provisions to ensure that companies are keeping up their end of the bargain (and punish them if they aren't). It shows how the government is aware of its responsibility to protect citizens from potentially dangerous technology, as well as the limits on what it can actually do.

Jul 7, 2023 | Sigal Samuel
AI is a "tragedy of the commons." We've got solutions for that.
Photo: OpenAI CEO Sam Altman speaks at an event in Tokyo in June 2023. (Tomohiro Ohsumi/Getty Images)
You've probably heard AI progress described as a classic "arms race." The basic logic is that if you don't race forward on making advanced AI, someone else will, probably someone more reckless and less safety-conscious. So, better that you should build a superintelligent machine than let the other guy cross the finish line first! (In American discussions, the other guy is usually China.) But as I've written before, this isn't an accurate portrayal of the AI situation. There's no one "finish line," because AI is not just one thing with one purpose, like the atomic bomb; it's a more general-purpose technology, like electricity. Plus, if your lab takes the time to iron out some AI safety issues, other labs may take those improvements on board, which would benefit everyone.

Jul 4, 2023 | Aja Romano
No, AI can't tell the future
Photo: AI "oracles" are all the rage on TikTok. (John Lund/Getty Images)
Can an AI predict your fate? Can it read your life and draw trenchant conclusions about who you are? Hordes of people on TikTok and Snapchat seem to think so. They've started using AI filters as fortunetellers and fate predictors, divining everything from the age of their crush to whether their marriage is meant to last.

Jun 14, 2023 | Kelsey Piper
Four different ways of understanding AI, and its risks
Photo: Sam Altman, CEO of OpenAI, testifies in Washington, DC, on May 16, 2023. (Aaron Schwartz/Xinhua via Getty Images)
I sometimes think of there being two major divides in the world of artificial intelligence. One, of course, is whether the researchers working on advanced AI systems in everything from medicine to science are going to bring about catastrophe. But the other one, which may be more important, is whether artificial intelligence is a big deal or another ultimately trivial piece of tech that we've somehow developed a societal obsession over. "So we have some improved chatbots," goes the skeptical perspective. "That won't end our world, but neither will it vastly improve it."

Jun 14, 2023 | A.W. Ohlheiser
AI automated discrimination. Here's how to spot it.
Part of the discrimination issue of The Highlight. This story was produced in partnership with Capital B.
Say a computer and a human were pitted against each other in a battle for neutrality. Who do you think would win? Plenty of people would bet on the machine. But this is the wrong question.
Jun 3, 2023 | Shirin Ghaffary
What will stop AI from flooding the internet with fake images?
On May 22, a fake photo of an explosion at the Pentagon caused chaos online. Within a matter of minutes of being posted, the realistic-looking image spread on Twitter and other social media networks after being retweeted by some popular accounts. Reporters asked government officials all the way up to the White House press office what was going on.