However, a new paper by Susan Aaronson, research professor of international affairs and director of the Digital Trade and Data Governance Hub at the George Washington University, found that many governments failed to evaluate or report on their AI initiatives.
After examining 814 initiatives from 62 nations, Aaronson found that policymakers are missing an opportunity to learn from their programs. Fewer than one percent of the programs listed on the OECD.AI website had been evaluated. In addition, Aaronson discovered discrepancies between what governments said they were doing on the OECD.AI website and what they reported on their own websites. In some cases, there was no evidence of government action; in others, links to government sites were broken.
“Evaluations of AI policies are important because they help governments demonstrate how they are building trust in both AI and AI governance and that policy makers are accountable to their fellow citizens,” Aaronson says.
The paper, “Building Trust in AI: A Landscape Analysis of Government AI Programs,” was published by the Centre for International Governance Innovation.