
TECHREPORT.COM
OpenAI Queries Musk’s Link to Bill Threatening Its For-Profit Restructure Plans
Key takeaways:
- OpenAI is querying Musk's ties to a group behind a bill that could upend its for-profit restructuring.
- Experts are calling on regulators to block the move.
- The dispute feeds into broader concerns over how restructuring could affect the development, ethics, and security of generative AI across the board.

OpenAI is querying a group behind a bill that had the potential to threaten its for-profit restructuring plans, asking whether it has ties to former co-founder Elon Musk. According to OpenAI's lawyer, and as reported by POLITICO, arguments expressed in a letter by the Coalition for AI Nonprofit Integrity resembled those made by Musk in court.

The group, founded by nonprofits, researchers, and consumer advocates, campaigns to preserve nonprofit integrity in the AI sector. The coalition denies receiving financing from Musk. According to a spokesperson, it's a 'grassroots' organization funded from various sources, including relatives of Suchir Balaji, the late OpenAI engineer and whistleblower who died last year.

Where's all of this going, and what's going to happen? Let's dig in!

Sam Altman's Letter Goes on the Attack

OpenAI's letter makes two demands:
- A meeting with the coalition's president, Jeffrey Mark Gardner
- A disclosure of the coalition's sources of funding, which are unpublished in tax documents due to its recent incorporation

This follows the group's move in February to support the original bill, which could have halted OpenAI's for-profit restructuring plans then and there. However, the bill's author has since amended it, citing the need for more time to work on it.

To date, no other indications of a Musk connection have appeared, and the group has gathered support from prominent industry figures, including the 'godfather of AI', Geoffrey Hinton.

The letter is the latest addition to an ongoing court battle between Elon Musk and Sam Altman, CEO of OpenAI.
Musk, who co-founded OpenAI with Altman before leaving to set up rival xAI, is challenging the company's plans to switch to a for-profit model. His argument? The change would mean OpenAI deviates from its original mission to benefit the public.

Musk isn't alone in his views, either. A letter to the Attorneys General of California and Delaware, signed by more than 30 former OpenAI staff members and Nobel Prize winners, urged the regulators to prevent OpenAI from restructuring into a for-profit entity. This led the Attorney General of California to launch a probe.

Why Are So Many Experts Against OpenAI's For-Profit Restructuring?

Moving from a nonprofit to a for-profit structure often means de-prioritizing safety, research, and development. The aforementioned letter, made public on April 23, claims that the move would 'eliminate essential safeguards' protecting against the risks of AGI.

More on AGI

Artificial General Intelligence (AGI), meaning systems that can outperform humans in understanding, learning, and applying knowledge across a wide range of tasks, could be the most powerful technology on the planet when it arrives. It's a world-changing technology, but it also comes with serious risks, which experts argue are potentially more severe under a for-profit company's control.

A for-profit entity is focused on generating profit for its shareholders, and profit could take priority over safety, ethical concerns, or benefits to society. Added pressure to bring AGI to market first could also mean cutting corners in its development, particularly with regard to safety. OpenAI could even one day decide to develop military-grade AGI purely for profit, abandoning its initial society-first goals.

Experts also argue that any AGI technology will likely remain proprietary to its creator, making it harder for the public, researchers, and regulators to develop safeguards and scrutinize its development closely.
Top talent is also likely to be drawn to for-profit companies by the promise of more competitive compensation, so nonprofits will find it harder to compete (unfair competition, anyone?).

We could also posit that industry norms would shift if the move to a for-profit structure were validated. Other AI startups could follow suit, which could mean for-profit firms come under less ethical scrutiny if the switch is seen as commonplace in the sector. It could also leave fewer investors for nonprofit entities as more resources flow toward for-profit firms, making it harder for nonprofits to compete in an expensive and competitive industry.

On the other hand, some say that safety and ethical concerns won't suddenly disappear in a for-profit organization, with public pressure and the long-term risks of AI keeping those concerns in the spotlight. AI research organizations and advocacy groups would continue to push for the responsible development of AI across the market, regardless of whether companies are nonprofit or for-profit. While that's certainly true, will profit be too much of a temptation?

OpenAI Should Be Held Accountable to Its Initial Purpose: Benefiting Humanity

There's an inherent risk that OpenAI, or any AI startup moving to a for-profit structure, could become a monopoly with vast control over society and the economy. That makes the irony palpable, considering OpenAI was founded to combat this very kind of monopoly.

In December 2015, the AI startup was launched when Google seemed on the brink of creating AGI. CEO Sam Altman believed that if a commercial company developed AGI, it would harm the public. The company's founding announcement stated:

Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
Since our research is free from financial obligations, we can better focus on a positive human impact. – OpenAI Mission Statement

As President Greg Brockman put it, OpenAI's initial purpose wasn't to develop AGI at all:

The true mission isn't for OpenAI to build AGI. The true mission is for AGI to go well for humanity… our goal isn't to be the ones to build it; our goal is to make sure it goes well for the world. – President Greg Brockman

According to OpenAI, gaining a competitive advantage is the main reason behind the move. But the company's recent track record shows worrying patterns of apparent carelessness. Reports indicate that OpenAI rushed safety testing to ensure a product met its release schedule. It also slashed its safety testing time, making testing processes 'less thorough, with insufficient time and resources dedicated to identifying and mitigating risks.'

Is It Time for Regulators to Step In?

So, should regulators step in to prevent OpenAI's proposed restructuring? Many experts agree that they should. Speaking to Time, UC Berkeley computer science professor Stuart Russell says that as a nonprofit, OpenAI can be shut down if it deviates from its mission; this works as a kind of off-switch, he explains. Moving to a for-profit entity means, 'basically, they're proposing to disable that off-switch.' If the restructuring went ahead, board members would need to weigh the company's original mission against profits for its shareholders.

Then there's OpenAI's argument that everyone would benefit if it builds AGI before a Chinese company. While there's some truth to that, other organizations are also at the forefront of AI.
If OpenAI retains its nonprofit status, there's no reason it couldn't offer its resources, compute, and talent to the US government or other US-based organizations to ensure America stays ahead of Chinese competition.

The letter proposes that regulators protect OpenAI's charitable purpose at a 'potentially pivotal moment' in the development of AGI, safeguarding the public. It also demands answers to fundamental questions, such as how a for-profit restructuring benefits humanity and the greater good.

The proposed plan of action suggests that, if and when OpenAI creates AGI, it 'should belong to the nonprofit entity. Their sole responsibility is to ensure it is used responsibly for the benefit of humanity. It should not be owned or controlled by a commercial entity or its investors.'

It's easy to see why tensions are running high in the court case between Musk and OpenAI, and even higher between the ChatGPT creator and those staunchly opposing the company's planned restructuring. With OpenAI now querying Musk's ties to the coalition challenging its for-profit plans, we'll have to wait and see how the debate unfolds.