According to Cointelegraph, the United States National Institute of Standards and Technology (NIST) and the Department of Commerce are seeking members for the newly established Artificial Intelligence (AI) Safety Institute Consortium. The consortium aims to evaluate AI systems in order to improve the technology's safety and trustworthiness.

In a document published to the Federal Register on November 2, NIST announced the formation of the new AI consortium and requested applicants with relevant credentials. The collaboration's purpose is to create and implement specific policies and measurements to ensure US lawmakers take a human-centered approach to AI safety and governance. Collaborators will be required to contribute to various functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.

These efforts come in response to a recent executive order issued by US President Joseph Biden, which established six new standards for AI safety and security. While many European and Asian states have begun instituting policies governing the development of AI systems, the US has comparatively lagged in this arena. President Biden's executive order and the formation of the Safety Institute Consortium mark progress toward establishing specific policies to govern AI in the US. However, there is still no clear timeline for implementing laws governing AI development or deployment in the country beyond legacy policies governing businesses and technology, which many experts consider inadequate for the growing AI sector.