Are We Ready for Artificial General Intelligence?
www.informationweek.com
The artificial intelligence evolution is well underway. AI technology is changing how we communicate, do business, manage our energy grid, and even diagnose and treat illnesses. And it is evolving more rapidly than we could have predicted. Both the companies that produce the models driving AI and the governments attempting to regulate this frontier environment have struggled to institute appropriate guardrails.

In part, this is due to how poorly we understand how AI actually functions. Its decision-making is notoriously opaque and difficult to analyze. Thus, regulating its operations in a meaningful way presents a unique challenge: How do we steer a technology away from making potentially harmful decisions when we don't exactly understand how it makes its decisions in the first place?

This is becoming an increasingly pressing problem as artificial general intelligence (AGI) and its successor, artificial superintelligence (ASI), loom on the horizon. AGI is AI whose capabilities match or surpass those of humans across a broad range of tasks; ASI is AI that exceeds human intelligence entirely. Until recently, AGI was believed to be a distant possibility, if it was achievable at all. Now, an increasing number of experts believe that it may be only a matter of years until AGI systems are operational.

As we grapple with the unintended consequences of current AI applications -- understood to be less intelligent than humans because of their typically narrow and limited functions -- we must simultaneously attempt to anticipate and obviate the potential dangers of AI that might match or outstrip our capabilities.

AI companies are approaching the issue with varying degrees of seriousness -- sometimes leading to internal conflicts. National governments and international bodies are attempting to impose some order on the digital Wild West, with limited success. So, how ready are we for AGI? Are we ready at all?

InformationWeek investigates these questions, with insights from Tracy Jones, associate director of digital consultancy Guidehouse's data and AI practice; May Habib, CEO and co-founder of generative AI company Writer; and Alexander De Ridder, chief technology officer of AI developer SmythOS.

What Is AGI and How Do We Prepare Ourselves?

The boundaries between narrow AI, which performs a specified set of functions, and true AGI, which is capable of broader cognition in the same way that humans are, remain blurry. As Miles Brundage, whose recent departure as senior advisor of OpenAI's AGI Readiness team has spurred further discussion of how to prepare for the phenomenon, says, AGI is "an overloaded phrase."

"AGI has many definitions, but regardless of what you call it, it is the next generation of enterprise AI," Habib says. Current AI technologies function within pre-determined parameters, but AGI can handle much more complex tasks that require a deeper, contextual understanding. In the future, AI will be capable of learning, reasoning, and adapting across any task or work domain, not just those pre-programmed or trained into it.

AGI will also be capable of creative thinking and action that is independent of its creators. It will be able to operate in multiple realms, completing numerous types of tasks. It is possible that AGI may, in its general effect, function much like a person.
There is some suggestion that personality qualities may be successfully encoded into a hypothetical AGI system, leading it to act in ways that align with certain sorts of people, with particular personality qualities that influence their decision-making.

However it is defined, AGI appears to be a distinct possibility in the near future. We simply do not know what it will look like.

"AGI is still technically theoretical. How do you get ready for something that big?" Jones asks. "If you can't even get ready for the basics -- you can't tie your shoe -- how do you control the environment when it's 1,000 times more complicated?"

Such a system, which will approach sentience, may be capable of human failings, whether through simple malfunction, misdirection caused by hacking, or even intentional disobedience of its own. If any human personality traits are encoded, intentionally or not, they ought to be benign or at least beneficial -- a highly subjective and difficult determination to make. AGI needs to be designed with the idea that it can ultimately be trusted with its own intelligence -- that it will act with the interests of its designers and users in mind. Its goals must be closely aligned with our own goals and values.

"AI guardrails are and will continue to come down to self-regulation in the enterprise," Habib says. "While LLMs can be unreliable, we can get nondeterministic systems to do mostly deterministic things when we're specific with the outcomes we want from our generative AI applications. Innovation and safety are a balancing act. Self-regulation will continue to be key for AI's journey."

Disbandment of OpenAI's AGI Readiness Team

Brundage's departure from OpenAI in late October, following the disbandment of its AGI Readiness team, sent shockwaves through the AI community. He joined the company in 2018 as a researcher and had led its policy research since 2021, serving as a key watchdog for potential issues created by the company's rapidly advancing products. The dissolution of his team and his departure followed on the heels of the implosion of its Superalignment team in May, which had served a similar oversight purpose.

Brundage said that he would either join a nonprofit focused on monitoring AI concerns or start his own. While both he and OpenAI claimed that the split was amicable, observers have read between the lines, speculating that his concerns had not been taken seriously by the company. The members of the team who stayed with the company have been shuffled to other departments. Other significant figures at the company have also left in the past year.

Though the Substack post in which he extensively described his reasons for leaving and his concerns about AGI was largely diplomatic, Brundage stated that no one was ready for AGI -- fueling the hypothesis that OpenAI and other AI companies are disregarding the guardrails their own employees are attempting to establish. A June 2024 open letter from employees of OpenAI and other AI companies warns of exactly that.

Brundage's exit is seen as a sign that the old guard of AI has been sent to the hinterlands -- and that unbridled excess may follow in their absence.

Potential Risks of AGI

As with the risks of narrow AI, those posed by AGI range from the mundane to the catastrophic.

"One underappreciated reason there are so few generative AI use cases at scale in the enterprise is fear -- but it's fear of job displacement, loss of control, privacy erosion and cultural adjustments -- not the end of mankind," Habib notes.
The biggest ethical concerns right now are data privacy, transparency and algorithmic bias.

"You don't just build a super-intelligent system and hope it behaves; you have to account for all sorts of unintended consequences, like AI following instructions too literally without understanding human intent," De Ridder adds. "We're still figuring out how to handle that. There's just not enough emphasis on these problems yet. A lot of the research is still missing."

An AGI system with negative personality traits, encoded by its designer intentionally or unintentionally, would likely amplify those traits in its actions. The Big Five personality trait model, for example, characterizes human personalities according to openness, conscientiousness, extraversion, agreeableness, and neuroticism. If a model is particularly disagreeable, it might act against the interests of the humans it is meant to serve whenever it judges that to be the best course of action. If it is highly neurotic, it might dither over issues that are ultimately inconsequential. There is also concern that AGI models may consciously evade attempts to modify their actions -- essentially, being dishonest with their designers and users.

Such failings could have serious consequences if AGI systems are entrusted with moral and ethical decisions, and biased or unfair reasoning could cause widespread harm if those systems are given large-scale decision-making power.

Decisions based on inferences from information about individuals may also lead to dangerous effects, essentially stereotyping people on the basis of data -- some of which may have originally been harvested for entirely different purposes. Further, data harvesting itself could increase exponentially if the system decides it is useful. This intersects with privacy concerns: data fed into or harvested by these models may not have been collected with consent. The consequences could unfairly impact certain individuals or groups.

Untrammeled AGI might also have society-wide effects. Because AGI will have human-level capabilities, it could wipe out entire employment sectors, leaving people with certain skill sets without a means of gainful employment and leading to social unrest and economic instability.

"AGI would greatly increase the magnitude of cyber-attacks and have the potential to be able to take out infrastructure," Jones adds. "If you have a bunch of AI bots that are emotionally intelligent and that are talking with people constantly, the ability to spread disinformation increases dramatically. Weaponization becomes a big issue -- the ability to control your systems." Large-scale cyber-attacks that target infrastructure or government databases, or massive misinformation campaigns, could be devastating.

The autonomy of these systems is particularly concerning. Such events might unfold without any human oversight if the AGI is not properly designed to consult with or respond to its human controllers. And the ability of malicious human actors to infiltrate an AGI system and redirect its power is of equal concern.
It has even been proposed that AGI might assist in the production of bioweapons. The 2024 International Scientific Report on the Safety of Advanced AI articulates a host of other potential effects -- and there are almost certainly others that have not yet been anticipated.

What Companies Need To Do To Be Ready

There are a number of steps that companies can take to ensure that they are at least marginally ready for the advent of AGI.

"The industry needs to shift its focus toward foundational safety research, not just faster innovation. I believe in designing AGI systems that evolve with constraints -- think of them having lifespans or offspring models, so we can avoid long-term compounding misalignment," De Ridder advises.

Above all, rigorous testing is necessary to catch dangerous capabilities and vulnerabilities prior to deployment. Ensuring that the model is amenable to correction is also essential: if it resists efforts to redirect its actions while it is still in the development phase, it will likely become even more resistant as its capabilities advance. It is also important to build models whose actions can be understood -- already a challenge in narrow AI. Tracing the origins of erroneous reasoning is crucial if that reasoning is to be effectively modified.

Limiting an AGI's curiosity to specific domains may prevent it from taking autonomous action in areas where it does not understand the unintended consequences -- detonating weapons, for example, or cutting off the supply of essential resources because those actions appear to solve a problem. Models can be coded to detect when a course of action is too dangerous and to stop before executing such tasks.

Ensuring that products are resistant to penetration by outside adversaries during their development is also imperative. If an AGI technology proves susceptible to external manipulation, it is not safe to release into the wild. Any data used in the creation of an AGI must be harvested ethically and protected from potential breaches.

Human oversight must be built into the system from the start -- while the goal is to facilitate autonomy, it must be limited and targeted. Coding for conformal procedures, which request human input when more than one solution is suggested, may help rein in potentially damaging decisions and train models to understand when they are out of line. Such procedures are one instance of a system being designed so that humans know when to intervene. There must also be mechanisms that allow humans to step in and stop a potentially dangerous course of action -- variously referred to as kill switches and failsafes. (A simplified sketch of such an oversight loop appears below.)

And ultimately, AI systems must be aligned with human values in a meaningful way. If they are encoded to perform actions that do not align with fundamental ethical norms, they will almost certainly act against human interests.

Engaging with the public on their concerns about the trajectory of these technologies may be a significant step toward establishing a good-faith relationship with those who will inevitably be affected. So too, transparency about where AGI is headed and what it might be capable of can help build trust in the companies that are developing its precursors.
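To make the oversight mechanisms described above a bit more concrete, here is a minimal, hypothetical sketch of that kind of control loop in Python: candidate actions are screened against prohibited categories, escalated to a human reviewer when more than one solution remains plausible or confidence is low (the conformal-style check mentioned earlier), and gated behind a kill-switch flag. Every function name, category, and threshold here is an illustrative assumption, not any vendor's actual API or an established AGI safety framework.

```python
# Hypothetical oversight loop: screen, escalate, fail safe.
# All names, categories, and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    category: str      # e.g., "information", "critical_infrastructure", "weapons"
    confidence: float  # model's own confidence in this solution, 0.0-1.0


# Categories the system is never allowed to act on autonomously.
PROHIBITED_CATEGORIES = {"weapons", "critical_infrastructure"}

# Escalate to a human when confidence is low or multiple solutions compete.
CONFIDENCE_THRESHOLD = 0.9

# A human-controlled failsafe flag; in practice this would be an external control.
KILL_SWITCH_ENGAGED = False


def request_human_review(candidates: list[Action]) -> Action | None:
    """Placeholder for a human-in-the-loop decision; here it simply defers."""
    print(f"Escalating {len(candidates)} candidate action(s) for human review.")
    return None  # no autonomous action until a person decides


def choose_action(candidates: list[Action]) -> Action | None:
    """Select an action only when it is unambiguous, permitted, and confident."""
    if KILL_SWITCH_ENGAGED:
        print("Kill switch engaged: halting all autonomous actions.")
        return None

    # Hard stop: never execute actions in prohibited categories.
    allowed = [a for a in candidates if a.category not in PROHIBITED_CATEGORIES]
    if not allowed:
        print("All candidate actions are prohibited; doing nothing.")
        return None

    # Conformal-style check: if zero or several options clear the confidence
    # threshold, the decision is ambiguous and a human must weigh in.
    confident = [a for a in allowed if a.confidence >= CONFIDENCE_THRESHOLD]
    if len(confident) != 1:
        return request_human_review(allowed)

    return confident[0]


if __name__ == "__main__":
    candidates = [
        Action("Reroute traffic around congestion", "information", 0.95),
        Action("Cut power to the affected district", "critical_infrastructure", 0.97),
    ]
    chosen = choose_action(candidates)
    if chosen:
        print("Executing:", chosen.description)
    else:
        print("No autonomous action taken.")
```

In a production system, the human-review path and the kill switch would be wired to external monitoring and control channels rather than flags and print statements; the point of the sketch is simply that escalation and shutdown are explicit branches of the decision logic, not afterthoughts.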
Some have suggested that open source code might allow for peer review and critique. Ultimately, anyone designing systems that may result in AGI needs to plan for a multitude of outcomes and be prepared to manage each of them if they arise.

How Ready Are AI Companies?

Whether the developers of the technology leading to AGI are actually ready to manage its effects is, at this point, anyone's guess. The larger AI companies -- OpenAI, DeepMind, Meta, Adobe, and upstart Anthropic, which focuses on safe AI -- have all made public commitments to maintaining safeguards. Their statements and policies range from vague gestures toward AI safety to elaborate theses on the obligation to develop thoughtful, safe AI technology. DeepMind, Anthropic, and OpenAI have released detailed frameworks describing how they plan to align their AI models with human values.

One survey found that 98% of respondents from AI labs agreed that labs should conduct pre-deployment risk assessments, dangerous-capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.

Even in their public statements, it is clear that these organizations are struggling to balance rapid advancement with responsible alignment, the development of models whose actions can be interpreted, and the monitoring of potentially dangerous capabilities.

"Right now, companies are falling short when it comes to monitoring the broader implications of AI, particularly AGI. Most of them are spending only 1-5% of their compute budgets on safety research, when they should be investing closer to 20-40%," says De Ridder. They do not seem to know whether debiasing their models or subjecting them to human feedback is actually sufficient to mitigate the risks those models might pose down the line.

But other organizations have not even gotten that far. "A lot of organizations that are not AI companies -- companies that offer other products and services that utilize AI -- do not have AI security teams yet," Jones says. "They haven't matured to that place."

However, she thinks that is changing. "We're starting to see a big uptick across companies and government in general in focusing on security," she observes, adding that in addition to dedicated safety and security teams, there is a movement to embed safety monitoring throughout the organization. "A year ago, a lot of people were just playing with AI without that, and now people are reaching out. They want to understand AI readiness and they're talking about AI security."

This suggests a growing realization among both AI developers and their customers that serious consequences are a near inevitability. "I've seen organizations sharing information -- there's an understanding that we all have to move forward and that we can all learn from each other," Jones claims.

Whether the leadership and the actual developers behind the technology are taking the recommendations of any of these teams seriously is a separate question. The exodus of multiple OpenAI staffers -- and the letter of warning they signed earlier this year -- suggests that at least in some cases, safety monitoring is being ignored or downplayed.

"It highlights the tension that is going to be there between really fast innovation and ensuring that it is responsible," Jones adds.