# Money & Finance

## AI Safety Takes Center Stage: Biden Administration Implements New Reporting Requirements

In a significant move toward ensuring the safety of artificial intelligence (AI) systems, the Biden administration has announced a new requirement for AI developers. Under the mandate, developers of major AI systems must disclose their safety test results to the U.S. government.

The requirement stems from an executive order President Joe Biden signed three months ago to manage the rapidly evolving technology. The order set a 90-day deadline for requiring AI companies to share vital information, including safety test results, with the Commerce Department.

Ben Buchanan, the White House special adviser on AI, emphasized the government’s intent to ensure AI systems are safe before they are released to the public. This move underscores the administration’s commitment to balancing technological advancement with public safety.

While the AI companies have agreed on a set of categories for the safety tests, there is not yet a common standard for conducting them. To address this gap, the government's National Institute of Standards and Technology (NIST) will develop a uniform framework for assessing safety.

AI has emerged as a leading economic and national security consideration for the federal government. The launch of new AI tools such as ChatGPT has spurred investment while also raising uncertainty. The Biden administration is also weighing congressional legislation and working with other countries and the European Union on rules for managing the technology.

The Commerce Department has developed a draft rule on U.S. cloud companies that provide servers to foreign AI developers. Furthermore, nine federal agencies, including the departments of Defense, Transportation, Treasury, and Health and Human Services, have completed risk assessments regarding AI’s use in critical national infrastructure.
