Biden-Harris Administration Secures Voluntary Safety Commitments from Eight Leading AI Companies
The Biden-Harris Administration has announced that eight prominent AI companies have made voluntary safety commitments to promote the development of safe, secure, and trustworthy artificial intelligence (AI).
Companies Pledge Allegiance to Safety, Security, and Trust
Representatives from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI gathered at the White House for the announcement. These eight companies have pledged to play a pivotal role in responsible AI development.
Government Pursues Executive Order and Legislation for Responsible AI Development
The Biden-Harris Administration is actively working on an Executive Order and bipartisan legislation to ensure the US leads the way in responsible AI development, unlocking its potential while managing its risks.
Three Fundamental Principles: Safety, Security, and Trust
The commitments made by these companies revolve around three fundamental principles: safety, security, and trust. They have committed to:
Rigorous Security Testing
They will conduct rigorous internal and external security testing of their AI systems before releasing them to the public. This includes assessments by independent experts, helping guard against significant risks in areas such as biosecurity, cybersecurity, and broader societal effects.
Information Sharing
They will actively share information on AI risk management with governments, civil society, academia, and across the industry. This collaborative approach includes sharing best practices for safety, information on attempts to circumvent safeguards, and technical cooperation.
Cybersecurity Investment
They will invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Recognizing the critical importance of these model weights, they will release them only when intended and when security risks are adequately addressed.
Third-Party Vulnerability Reporting
They will facilitate third-party discovery and reporting of vulnerabilities in their AI systems. This proactive approach ensures that issues can be identified and resolved promptly even after an AI system is deployed.
Transparency and Accountability
To enhance transparency and accountability, they will develop robust technical mechanisms, such as watermarking systems, to indicate when content is AI-generated. This step aims to foster creativity and productivity while reducing the risks of fraud and deception.
Public Reporting on Capabilities, Limitations, and Use
They will publicly report on their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use. This includes addressing both security risks and societal risks such as fairness and bias. Furthermore, these companies are committed to prioritizing research on the societal risks posed by AI systems.
Industry Leaders Addressing Societal Challenges
These leading AI companies will also develop and deploy advanced AI systems to address significant societal challenges, from cancer prevention to climate change mitigation.
Global Collaboration on Responsible AI Development
The Biden-Harris Administration's engagement with these commitments extends beyond the US, with consultations involving numerous international partners and allies. These commitments complement global initiatives such as the European Union's AI regulation proposals, Japan's leadership of the G-7 Hiroshima Process, and India's leadership as Chair of the Global Partnership on AI.
A Milestone in Responsible AI Development
The announcement marks a significant milestone in the journey towards responsible AI development, with industry leaders and the government coming together to ensure that AI technology benefits society while mitigating its inherent risks.