Meta promises it won't release dangerous AI systems
www.computerworld.com
According to a new policy document from Meta, the Frontier AI Framework, the company might not release AI systems developed in-house in certain risky scenarios.

The document defines two risk classifications for AI systems: high risk and critical risk. In both cases, these are systems that could help carry out cyber, chemical, or biological attacks. Systems classified as high risk might facilitate such an attack, though not to the same extent as a critical-risk system, which could lead to catastrophic outcomes, such as the takeover of a corporate environment or the deployment of powerful biological weapons.

In the document, Meta states that if a system is high risk, the company will restrict internal access to it and will not release it until measures have been taken to reduce the risk to moderate levels. If, instead, the system is critical risk, security protections will be put in place to prevent it from spreading, and development will stop until the system can be made safer.