18 countries, including the United States, the United Kingdom, and Australia, signed "secure by design" artificial intelligence guidelines
The United States, the United Kingdom, Australia, and 15 other countries have released global guidelines intended to help protect artificial intelligence models from tampering and to urge companies to make their models "secure by design." The guidelines consist mainly of general recommendations, such as tightly controlling the infrastructure on which AI models run, monitoring models for tampering both before and after release, and training employees on cybersecurity risks. They recommend cybersecurity practices that AI companies should follow when designing, developing, launching, and monitoring AI models. The other signatories include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore. AI companies such as OpenAI, Microsoft, Google, Anthropic, and Scale AI also contributed to the development of the guidelines. (Cointelegraph)