AI giants pledge to ensure the technology’s safety, security, and trustworthiness

July 27, 2023

Representatives from leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI) gathered at the White House on July 21 for the announcement of their voluntary commitment to “help move toward safe, secure, and transparent development of AI technology.” According to a White House statement, the companies have committed to ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.

Among the steps they will take are:

1) Internal and external red teaming of models or systems. Red teaming is a procedure in which systems are tested for vulnerabilities without the knowledge of the team that created them, thereby testing the creators' ability to respond effectively to an attack. This method of ensuring safety is critical for national security. It also protects against biological, chemical, and radiological risks, as well as societal risks such as bias and discrimination. To further help ensure the safety of AI, the companies committed to advancing ongoing research and publicly disclosing their red-teaming and safety procedures.

2) Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards. The companies committed to establishing or joining a forum or mechanism through which they can develop, advance, and adopt shared standards and best practices. This forum would allow them to engage with governments, including the U.S. government, civil society, and academia.

3) Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Weights can be altered to change the output of the AI model. Companies making this commitment will treat unreleased AI model weights as core intellectual property for their business. This commitment includes limiting access to model weights on a need-to-know basis and establishing an insider threat detection program. In addition, they have committed to storing and working with the weights in a secure environment to reduce the risk of unsanctioned release.

4) Incentivize third-party discovery and reporting of issues and vulnerabilities. AI systems may continue to have weaknesses and vulnerabilities even after robust red teaming. The companies committed to establishing methods to encourage third parties to disclose flaws.

5) Develop and deploy mechanisms enabling users to understand whether audio or visual content is AI-generated.

6) Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias. Companies making this commitment will publish reports for all new significant model public releases.

7) Prioritize research on societal risks posed by AI systems, including avoiding harmful bias and discrimination and protecting privacy. The companies generally commit to empowering trust and safety teams, advancing AI safety research, advancing privacy, protecting children, and working to proactively manage the risks of AI so that its benefits can be realized.

8) Develop and deploy frontier AI systems to help address society’s greatest challenges. These challenges include climate change, early cancer detection and prevention, and cyber threats. The companies will also support the education and training of students and workers to benefit from AI and help citizens understand the technology's nature, capabilities, limitations, and impact. 

artificial intelligence, technology, cybersecurity