This report summarizes the findings and recommendations of the Bipartisan House Task Force on Artificial Intelligence regarding government use of AI. The task force examined how federal agencies are currently leveraging AI, the potential benefits and risks, and key considerations for responsible AI adoption in government.
Current State of Government AI Use
Federal agencies have already begun using AI in various applications to enhance existing missions and streamline programs. A 2020 Stanford University report found that nearly half of the federal agencies studied had experimented with AI and machine learning tools. As of December 2023, 20 of 23 agencies surveyed by the Government Accountability Office reported using AI, with roughly 1,200 current and planned use cases across the government. While AI use cases vary in application and maturity, the potential benefits of responsible government AI adoption are significant. However, irresponsible or improper use poses risks to individual privacy, security, and the fair treatment of citizens.
Key Principles for Responsible Government AI Use
The task force identified several key principles that should guide federal AI adoption:
- Accuracy
- Reliability
- Robustness
- Safety and Effectiveness
- Security
- Privacy
- Transparency
- Explainability and Interpretability
- Notice and Explanation
- Human Alternatives and Fallback
- Equity
- Mitigation of Harmful Bias
Operationalizing these principles across the AI lifecycle remains challenging. Agencies need comprehensive guidance and resources to implement them effectively.
Federal AI Governance and Transparency
The task force emphasized the need for governance and transparency requirements around federal AI use. As illustrated in the sketch after this list, such requirements could include documenting and disclosing information about:
- Data and metadata used to train and test AI systems
- Software components and origins
- Model development processes and metrics
- Model deployment and monitoring practices
- Specific use cases and decision-making applications
- Risk assessment and mitigation plans
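To make these documentation items concrete, here is a minimal sketch of how an agency might represent such a disclosure record in code. This is an illustration only: the class and field names are assumptions made for this example, not an official federal schema or inventory format.

```python
# Minimal sketch of an AI use-case disclosure record. All names and fields
# are illustrative assumptions, not an official federal schema.
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    use_case: str                          # the specific application and decisions it informs
    training_data: str                     # data and metadata used to train and test the system
    software_components: list[str]         # components and their origins
    development_metrics: dict[str, float]  # model development processes and metrics
    monitoring_plan: str                   # deployment and monitoring practices
    risk_mitigations: list[str] = field(default_factory=list)  # assessed risks and mitigation plans
    public_summary: str = ""               # what can be disclosed within privacy/security limits

record = AIUseCaseRecord(
    use_case="Prioritizing benefits-claim reviews",
    training_data="De-identified historical claims, 2015-2022",
    software_components=["scikit-learn 1.4", "internal feature pipeline"],
    development_metrics={"validation_auc": 0.87},
    monitoring_plan="Quarterly drift and disparity checks",
    risk_mitigations=["Human review of all adverse determinations"],
)
print(record.use_case)
```

Even a simple shared structure like this would let agencies publish the disclosable fields while withholding sensitive ones, consistent with the transparency-within-constraints approach described below.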
While full public disclosure may not always be possible due to privacy, security, or national security concerns, agencies should strive for maximum transparency within those constraints.
The National Institute of Standards and Technology (NIST) plays a key role in developing standards for federal IT systems, including AI. NIST released an AI Risk Management Framework in January 2023 to guide the safe and responsible development of AI. However, AI-related standards are still significantly underdeveloped overall.
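For context, AI RMF 1.0 organizes risk management into four functions: Govern, Map, Measure, and Manage. The snippet below is a hedged sketch of how an agency might track its activities against those functions; the register structure and example activities are assumptions made for illustration, not content from NIST or the task force.

```python
# Illustrative risk register keyed by the four NIST AI RMF 1.0 functions.
# The example activities are assumptions for demonstration only.
from enum import Enum

class RMFFunction(Enum):
    GOVERN = "Govern"    # policies, accountability, and culture
    MAP = "Map"          # establishing context and identifying risks
    MEASURE = "Measure"  # analyzing and tracking identified risks
    MANAGE = "Manage"    # prioritizing and responding to risks

risk_register: dict[RMFFunction, list[str]] = {
    RMFFunction.GOVERN: ["Designate an accountable official for each AI system"],
    RMFFunction.MAP: ["Document intended use and affected populations"],
    RMFFunction.MEASURE: ["Track error and disparity metrics in production"],
    RMFFunction.MANAGE: ["Define rollback criteria when monitored risk exceeds limits"],
}

for fn, activities in risk_register.items():
    print(f"{fn.value}: {'; '.join(activities)}")
```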
AI-Enabling Infrastructure
Effective AI governance requires sound policies across the entire AI system lifecycle, including:
- Modernization of legacy federal IT systems
- Improved cybersecurity practices
- Data privacy protections
- Workforce development
The report notes that legacy IT systems and unmanaged data are significant barriers to AI adoption. Investments in AI should be coupled with broader IT modernization efforts.
Artificial Intelligence and Data Privacy
AI systems often rely on large amounts of personal data, raising privacy concerns. The federal government led early data privacy efforts through the Privacy Act of 1974, but further updates may be needed to address AI-specific challenges. The task force recommends exploring mechanisms to promote access to government-controlled data, including sensitive data, through privacy-protective means. This could open significant data resources for AI development while safeguarding individual privacy.
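One widely studied privacy-protective mechanism is differential privacy, which releases aggregate statistics with calibrated noise so that no individual record can be confidently inferred. The sketch below applies the standard Laplace mechanism to a counting query. The data, epsilon value, and function name are illustrative assumptions; the report does not prescribe this or any other specific technique.

```python
# Sketch of the Laplace mechanism for an epsilon-differentially-private count.
# Data and epsilon are illustrative; this is not agency guidance.
import numpy as np

def private_count(records: list[bool], epsilon: float) -> float:
    """Noisy count of True records. A count has sensitivity 1, so Laplace
    noise with scale 1/epsilon yields epsilon-differential privacy."""
    return sum(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical records: whether each individual has some sensitive attribute.
records = [True, False, True, True, False]
print(private_count(records, epsilon=1.0))  # noisy value near the true count of 3
```

Smaller epsilon values add more noise and give stronger privacy, which is the tradeoff any such access mechanism would have to tune.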
Artificial Intelligence and the Federal Workforce
As the nation’s largest employer, the federal government will need both technical experts to develop and maintain AI systems and broader workforce upskilling so employees can use AI effectively. Key workforce recommendations include:
- Clarifying AI roles and associated knowledge/skills requirements
- Supporting skills-based hiring practices
- Expanding AI training and educational pathways for government work
- Improving hiring flexibilities for AI talent
Key Findings
- The federal government should adopt core principles for AI use that are consistent with existing law.
- Agencies should be wary of fully automated algorithmic decision-making and should instead pursue “algorithmic-informed” decision-making with appropriate human oversight (see the sketch after this list).
- The public should be notified when AI plays a significant role in governmental functions affecting them.
- Agencies must pay attention to cybersecurity, privacy, and data/IT infrastructure foundations when adopting AI.
- AI roles and required skills are currently unclear and highly varied across the federal workforce. A standard taxonomy or framework is needed.
- Skills-based hiring practices are critical for meeting federal AI talent needs.
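As a concrete reading of the algorithmic-informed finding above, the sketch below routes every model recommendation through a human reviewer instead of letting the system act on its own. The threshold, names, and review step are hypothetical illustrations, not anything the task force specifies.

```python
# Hedged sketch of algorithmic-informed decision-making: the model only
# recommends; a person makes the final call. All names and the 0.8
# threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    score: float      # model confidence in the proposed action
    rationale: str    # explanation surfaced to the reviewer

def decide(rec: Recommendation, human_review: Callable[[Recommendation, bool], str]) -> str:
    # Never auto-finalize: every case goes to a person, and low-confidence
    # cases are flagged for extra scrutiny.
    flagged = rec.score < 0.8
    return human_review(rec, flagged)

def example_reviewer(rec: Recommendation, flagged: bool) -> str:
    action = "escalate" if flagged else "approve"
    print(f"Reviewer sees: {rec.rationale} (flagged={flagged}) -> {action}")
    return action

print(decide(Recommendation(0.65, "Income documents inconsistent"), example_reviewer))
```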
Key Recommendations
- Take an information- and systems-level approach to AI use, considering existing laws on federal information policy, cybersecurity, and data management.
- Support flexible governance frameworks that can adapt to rapidly evolving AI capabilities.
- Leverage AI to reduce administrative burdens and bureaucracy where appropriate.
- Require agencies to provide public notification of AI’s role in governmental functions.
- Facilitate the development and adoption of AI standards for federal use.
- Improve cybersecurity of federal systems, including AI systems.
- Encourage data governance strategies that support responsible AI development.
- Develop a clear understanding of federal AI workforce needs and support diverse pathways into government AI roles.
In conclusion, the task force’s report provides a comprehensive overview of the opportunities and challenges surrounding federal AI adoption. By following these principles and recommendations, the government can work to harness the benefits of AI while mitigating potential risks and ensuring responsible use that protects citizens’ rights and promotes the public good.