The United States, United Kingdom, Australia, and 15 other nations have jointly released international guidelines to safeguard AI models from tampering, urging companies to make their models "secure by design."
On November 26, the 18 countries released a 20-page document outlining cybersecurity recommendations for firms developing and deploying AI models, noting that "security can often be a secondary consideration" in the fast-moving industry.
The guidelines are mostly general recommendations, such as maintaining tight control over an AI model's infrastructure, monitoring for tampering both before and after release, and training staff on cybersecurity risks.
Notably absent were some of the industry's most contentious issues, such as controls on image-generating models and deepfakes, or on data collection methods and their use in model training, practices over which several AI firms have been sued for copyright infringement.
U.S. Secretary of Homeland Security Alejandro Mayorkas emphasized the pivotal moment in AI advancement, labeling it potentially “the most consequential technology of our time.” He stressed the importance of cybersecurity in crafting AI systems that are safe, secure, and reliable.
These guidelines align with various other government initiatives addressing AI, including an AI Safety Summit in London where governments and AI firms convened to coordinate agreements on AI development.
Meanwhile, the European Union is hammering out details of its AI Act to oversee the sector, and in October, U.S. President Joe Biden issued an executive order setting standards for AI safety and security. Both have drawn pushback from parts of the AI industry, which argues they could stifle innovation.
Other signatories to the "secure by design" guidelines include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. AI firms including OpenAI, Microsoft, Google, Anthropic, and Scale AI also contributed to developing the guidelines.