Every U.S. federal agency must hire a Chief AI Officer

All U.S. federal agencies will now be required to have a senior leader overseeing the AI systems they use, as the government seeks to ensure that AI in the public service remains safe.

Vice President Kamala Harris announced the new OMB guidelines in a briefing with reporters, saying agencies should also establish AI governance councils to coordinate how AI is used within the agency. Agencies will also be required to submit an annual report to the Office of Management and Budget (OMB) outlining all AI systems they use, the risks associated with them, and how they plan to mitigate those risks.

“We have directed all federal agencies to appoint a chief AI officer with the experience, expertise and authority to oversee all AI technologies used by that agency, and this is to ensure that AI is used responsibly, recognizing that we must have senior leaders within our government specifically charged with overseeing the adoption and use of AI,” Harris told reporters.

The chief AI officer does not necessarily have to be a political appointee, though that depends on how the federal agency is structured. The governance councils must be established by the summer.

These guidelines build on previously announced policies outlined in the Biden administration’s AI Executive Order, which required federal offices to create security standards and to grow the pool of AI talent working in government.

Some agencies began hiring chief AI officers even before today’s announcement. The Department of Justice announced Jonathan Mayer as its first chief AI officer in February; he will lead a team of cybersecurity experts in figuring out how AI can be used in law enforcement.

According to OMB Director Shalanda Young, the US government plans to hire 100 AI professionals by the summer.

Part of the responsibility of agency AI officers and governance councils is to regularly monitor the AI systems their agency uses. Young said each agency must submit an inventory of the AI products it uses; if a system is deemed too “sensitive” to appear on that list, the agency must publicly explain the exclusion. Agencies must also independently assess the security risk of each AI platform they use.

Federal agencies must also verify that any AI they deploy meets safeguards that “mitigate the risks of algorithmic discrimination and provide the public with transparency into how government uses AI.” The OMB fact sheet provides several examples, including:

At the airport, travelers can opt out of TSA facial recognition without delay and without losing their place in line.

When AI is used in the federal healthcare system to support critical diagnostic decisions, a human oversees the process to verify the tools’ results and to avoid disparities in healthcare access.

When AI is used to detect fraud in government departments, there is human oversight of impactful decisions, and affected individuals have the opportunity to seek redress for AI harms.

“If an agency cannot implement these safeguards, the agency must stop using the AI system unless the agency’s leadership justifies why doing so would increase risks to safety or rights generally or would pose an unacceptable barrier for critical activities of the agency,” the fact sheet said.

Under the new guidelines, all government-owned AI models, code, and data must be released to the public unless they pose a risk to government operations.

The United States still has no comprehensive law regulating AI; the AI Executive Order provides guidance on how agencies within the executive branch should approach the technology. While several bills regulating certain aspects of AI have been introduced, there has been little legislative movement on the technology so far.
