How new leaders should think about artificial intelligence

October 1, 2020

By Katie Malague and Dan Chenok

Building on the rapid growth of artificial intelligence over the past decade, federal agencies are using intelligent automation to further improve productivity. Intelligent automation incorporates AI, blockchain, cloud computing, robotics and other technologies; collectively, these tools are transforming how agencies work—from managing paperwork to using data for decision-making to providing services to customers.

Indeed, past presidential administrations recognized the potential of artificial intelligence and other technologies and paved the way toward more complex use of intelligent automation: first through the National Artificial Intelligence Research and Development Strategic Plan to maximize the benefits of federal AI funding, then through the American Artificial Intelligence Initiative to accelerate AI adoption.

Whatever the outcome of the presidential election, leaders can build on these initiatives to make government operations more effective and efficient and service delivery seamless.

The Partnership for Public Service and the IBM Center for The Business of Government hosted five events this year with agencies that moved from technology pilot projects to full-scale adoption: the U.S. Marine Corps, the General Services Administration, the Department of Defense’s Joint Artificial Intelligence Center, and the departments of Homeland Security and Health and Human Services.

These events highlighted lessons for the administration to consider for its AI plans and policies, and for effective adoption of intelligent automation:

  • Start with the problem, not the technology. AI and other technologies should not be spread around like peanut butter. Agencies should focus on users’ needs and process improvements to determine if technology tools would be useful to boost performance. The Marine Corps’ approach, for example, is to adopt AI tools that will help Marines solve problems and become more effective. 
  • Foster a culture of innovation. Supporting innovation would help reduce agencies’ risk aversion and fear of failure and encourage employees to pursue new approaches for working more efficiently. The Marine Corps fosters a culture of innovative thinking, both in business operations and in war zones—leading the Marines to adopt new technologies, including AI, to become more effective. Other agencies could seek advice from the GSA, which supports innovation government-wide by, for example, helping agencies adopt new technologies and organizing the Presidential Innovation Fellows program. 
  • Free employee time to focus on higher-level tasks. Tools that perform repetitive tasks free employees to tackle complex tasks only people can do. DHS, for example, is using AI in its Contractor Performance Assessment Reporting System to help acquisition professionals find data about contractors and make better procurement decisions. 
  • Minimize bias by encouraging diversity of thought. When making decisions about AI and the data it relies on, pulling in diverse stakeholders helps minimize bias and inaccuracies. The DOD Joint Artificial Intelligence Center and its data governance council engage a diverse group of stakeholders when making data-related decisions. Stakeholders could include engineers, security experts, data scientists, ethicists and other specialists from government, industry and academia. 

The event recordings are available on the Partnership’s website, and event summaries are on the IBM Center’s blog.

Katie Malague is the vice president for government effectiveness at the Partnership for Public Service.

Dan Chenok is executive director of the IBM Center for The Business of Government.

The Partnership and the IBM Center have collaborated on several other resources that a first- or second-term administration in 2021 might use to help government take full advantage of artificial intelligence. In our 2018 report, “The Future Has Begun,” we presented examples of the government’s successful use of AI. In the 2019 reports, “More Than Meets AI” and “More Than Meets AI Part II,” we explored impacts on the federal workforce, as well as issues of ethics, bias, security and privacy.