The United States, the United Kingdom, Australia and 15 other countries have issued global guidelines aimed at helping protect artificial intelligence models from tampering, urging companies to make their models "secure by design."
The guidance covers cybersecurity practices that AI companies should implement when designing, developing, launching, and monitoring AI models. Its recommendations are largely general, such as tightly controlling the infrastructure around AI models, monitoring models for tampering before and after release, and training employees on cybersecurity risks.
Other countries that have signed up to the new guidelines include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea and Singapore. Artificial intelligence companies such as OpenAI, Microsoft, Google, Anthropic and Scale AI also contributed to the development of the guidance. (Cointelegraph)