The Artificial Intelligence R&D (AI R&D) Interagency Working Group (IWG) coordinates Federal AI R&D and supports activities tasked by both the NSTC Select Committee on AI and the Subcommittee on Machine Learning and Artificial Intelligence. This vital work promotes U.S. leadership and global competitiveness in AI R&D and its applications. Agency investments coordinated by the IWG are reported under the AI R&D Program Component Area (PCA).
Overview
The Artificial Intelligence R&D Interagency Working Group (AI R&D IWG) was formed in 2018 to coordinate Federal AI R&D across 32 participating agencies and to support activities tasked by both the NSTC Select Committee on AI and the Subcommittee on Machine Learning and Artificial Intelligence (MLAI). Through the NITRD Subcommittee, the AI R&D IWG coordinates AI activities to advance the mission of the National AI Initiative Office (NAIIO).
Guided by the nine strategic priorities of the National AI R&D Strategic Plan: 2023 Update, the IWG gathers information from AI experts to ensure that government investment in AI R&D results in innovative applications that address the Nation’s challenges, take advantage of its opportunities, and promote U.S. leadership and global competitiveness. Details of many recent and ongoing Federal AI R&D programs and applications are available in the 2020-2024 Progress Report: Advancing Trustworthy Artificial Intelligence R&D. The Video and Image Analytics (VIA) Team reports to the AI R&D IWG.
Strategic Priorities
The nine strategic priorities below are key focus areas for Federal coordination and collaboration:
- Strategy 1: Make long-term investments in fundamental and responsible AI research. Prioritize investments in the next generation of AI to drive responsible innovation that will serve the public good and enable the United States to remain a world leader in AI. This includes advancing foundational AI capabilities such as perception, representation, learning, and reasoning, as well as focused efforts to make AI easier to use and more reliable and to measure and manage risks associated with generative AI.
- Strategy 2: Develop effective methods for human-AI collaboration. Increase understanding of how to create AI systems that effectively complement and augment human capabilities. Open research areas include the attributes and requirements of successful human-AI teams; methods to measure the efficiency, effectiveness, and performance of AI-teaming applications; and mitigating the risk that human misuse of AI-enabled applications leads to harmful outcomes.
- Strategy 3: Understand and address the ethical, legal, and societal implications of AI. Develop approaches to understand and mitigate the ethical, legal, and social risks posed by AI to ensure that AI systems reflect our Nation’s values and promote equity. This includes interdisciplinary research to protect and support values through technical processes and design, as well as to advance areas such as AI explainability and privacy-preserving design and analysis. Efforts to develop metrics and frameworks for verifiable accountability, fairness, privacy, and bias are also essential.
- Strategy 4: Ensure the safety and security of AI systems. Advance knowledge of how to design AI systems that are trustworthy, reliable, dependable, and safe. This includes research to advance the ability to test, validate, and verify the functionality and accuracy of AI systems, and to secure AI systems against cybersecurity and data vulnerabilities.
- Strategy 5: Develop shared public datasets and environments for AI training and testing. Develop and enable access to high-quality datasets and environments, as well as to testing and training resources. A broader, more diverse community engaging with the best data and tools for conducting AI research increases the potential for more innovative and equitable results.
- Strategy 6: Measure and evaluate AI systems through standards and benchmarks. Develop a broad spectrum of evaluative techniques for AI, including technical standards and benchmarks, informed by the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework (RMF).
- Strategy 7: Better understand the national AI R&D workforce needs. Improve opportunities for R&D workforce development to strategically foster an AI-ready workforce in America. This includes R&D to improve understanding of the limits and possibilities of AI and AI-related work, and the education and fluency needed to effectively interact with AI systems.
- Strategy 8: Expand public-private partnerships to accelerate advances in AI. Promote opportunities for sustained investment in responsible AI R&D and for transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-federal entities.
- Strategy 9: Establish a principled and coordinated approach to international collaboration in AI research. Prioritize international collaborations in AI R&D to address global challenges, such as environmental sustainability, healthcare, and manufacturing. Strategic international partnerships will help support responsible progress in AI R&D and the development and implementation of international guidelines and standards for AI.
Co-Chairs
John Garofolo
Steven Lee
Michael Littman
Technical Coordinator
Faisal D’Souza