Event Summary | Assessing the AI Agenda: Policy Opportunities and Challenges in the 117th Congress

A Critical Non-Partisan Policy Concern

Artificial intelligence (AI) occupies a growing share of the legislative agenda. On December 3rd, the Wilson Center's Science and Technology Innovation Program (STIP) hosted its first all-congressional-staff panel, "Assessing the AI Agenda in the 117th Congress." This bipartisan, bicameral event brought together panelists with an array of perspectives on AI--including graduates of the Wilson Center's Technology Labs, a series of educational seminars for Congressional (and soon, Executive) staff. Together, panelists discussed the greatest legislative achievements of this Congress and the challenges that must be addressed next year. The result of this discussion is the start of an AI agenda for the 117th Congress.

Why Assess the AI Agenda Now?

One of the most significant legislative achievements to date is the inclusion of several critical AI bills in the 2021 National Defense Authorization Act (NDAA). This year's NDAA includes the Artificial Intelligence Initiative Act (AIIA), first introduced in May 2019. The AIIA creates a National Artificial Intelligence Research and Development Initiative and directs the Office of Science and Technology Policy (OSTP) to establish a National Artificial Intelligence Coordination Office, among other provisions. According to Dahlia Sokolov, Staff Director of the House Science Subcommittee on Research and Technology, the AIIA is "a really, really strong piece of legislation that, unless things go sideways, should be enacted as part of the NDAA." Also included in the NDAA is the National AI Research Resource Task Force Act, which would develop the first National Cloud, giving scientists and students access to computing resources and datasets. Senate AI Caucus co-chairs introduced the bill in June, along with Reps. Anna Eshoo, Anthony Gonzalez, and Mikie Sherrill in the House.

The 2021 NDAA builds on numerous successes. In 2019, Senators Rob Portman and Martin Heinrich founded the Senate AI Caucus, staffed by two graduates of the Wilson Center's Technology Labs. This initiative offered the Senate a venue for exchanging expertise and hosting major conversations about AI governance issues. The 116th Congress has already seen three AI bills signed into law, and at least six more proposals may be signed before it ends. "What this means is we actually stand to get every single AI Caucus initiative signed into law by the end of this Congress...so I think we're all ears on what we should do next," said Sam Mulopulos, Senior Advisor for technology and trade issues to Senator Portman.

What鈥檚 Next for AI Policy?

In addition to reflecting on previous achievements, panelists discussed opportunities for future work around (1) STEM education, broadening participation, and workforce development; (2) ethics, transparency, and accountability; and (3) competitiveness and trade.

Science, Technology, Engineering, and Math (STEM)

On STEM education, the panel argued that more federal involvement is necessary to develop a K-12 curriculum and to increase incentives. Mike Richards, Deputy Chief of Staff to Rep. Pete Olson, said, "Part of what we need to do is start really looking internally about how we can make sure our children are...educated enough to tackle these jobs in the future. How do we start incentivizing these kids wanting to be and who are interested in STEM education? I think that Congress should look at those ways."

Sokolov recognized that her position on the House Science Committee offers a unique opportunity. "We'll certainly be looking to those tools--the scholarship and fellowship kind of tools--and be thinking harder and more deeply about this diversity issue in terms of what really works...because we've kind of been running on this hamster wheel on the diversity issue for decades now."

She took a deeper dive into the diversity issue: "One of [Chairman Johnson's] priorities is looking at rural access in rural areas and systemic education. [She and Ranking Member Lucas] have really worked as partners across that whole universe of what it means to broaden participation in STEM--specifically AI. We'll continue to do that on the next science committee in the next Congress."

Just as we need more diversity, we also need a larger talent pool. However, there is growing concern that AI will reduce jobs rather than enhance or redefine them. One solution is targeted efforts to strengthen the existing workforce, including by raising skills and expertise among the general public and within government.

"I think one really important thing is really increasing the armed services usage and getting a talent pool in effect for the armed services for AI," said Sean Duggan, Military Legislative Assistant to Senator Martin Heinrich. One forthcoming policy proposal would add assessments of AI or advanced computing skills to armed forces recruitment tests. Another bill "would allow the secretary to take advantage of current authorities [to] directly hire professionals with AI and advanced computing specialties."

Ethical, Transparent, and Accountable AI

Panelists also focused on the factors required to support safe and ethical AI--beginning with standards, and encompassing transparency, accountability, and trust.

Many speakers, beginning with Mulopulos, identified the need for foundational work, including "the development of a framework for best practices, guidelines, and voluntary consensus standards." Such efforts might begin with "agreeing on definitions" (Senators Portman and Heinrich have already helped write a definition for "explainable AI" in law). Sokolov suggested that the National Institute of Standards and Technology (NIST), one of the most "underappreciated and underfunded agencies in our government for what they do," has a strong leadership role to play in this area.

In addition to developing standards, the panel recognized a need to develop processes for measuring adherence to those standards and for articulating minimum requirements or metrics around concepts like transparency and trustworthiness. Such work can help ensure the development of safe and ethical AI and is a key strategy for mitigating policy concerns such as bias in facial recognition and other civil rights issues.

While laws can be used on the "back end" to address misuse, work is also needed on the "front end" to "make sure you never get the unintended consequences of bias and other ways that [AI] can cause harm," said Sokolov. By making often-opaque processes more open, transparency can help avoid unintended consequences through increased scrutiny and can create accountability and trust.

Beyond Capitol Hill, panelists agreed that work on transparency can help meet the needs of a critical stakeholder: the public at large. "I think that AI will only be as useful or as accepted as people feel comfortable that it is, you know, a part of our everyday lives. If you can't explain that to somebody on the street, then I think that a lot of its usage and future could be limited," said Duggan. "So I think that's one big focus of the Senator and hopefully of the Senate AI Caucus next year: how do we increase familiarity, trustworthiness, and accountability with AI?"

Competitiveness, Multi-Sector Partnerships, and Trade

Finally, panelists discussed key issues around US competitiveness, multi-sector partnerships, and trade.

Historically, the United States has lacked a defense strategy that includes an overarching AI strategy. This may have contributed to a lack of cross-sector cooperation and undermined America's ability to lead, leaving it at risk of falling behind competitors like China. In addition to cohesive messaging, there must also be a willingness across sectors to participate in joint research, share knowledge, and work together to advance America's leadership.

When asked how to balance US public- and private-sector leadership, Sean Duggan provided insight.

"One thing I've been concerned about from an armed services perspective is industry reactions to working more closely with the Pentagon or working closely with the Department of Defense," said Duggan. "There's a gap between those in Silicon Valley and Ohio and other great places doing AI, and they may have a lack of trust about where their software or their work is going to end up...I know that the Joint Artificial Intelligence Center within the Pentagon is doing a lot of hard work to try to bridge that gap."

Panelists suggested additional interdependencies between government and other sectors. Government depends on the private sector for some access to computing power: "The [supercomputing pattern] is not where it needs to be for us to actually be able to compete," said Richards. "[The] federal government is having to use private computers to do a lot of our research, so we really need, at this time, to rely on the private industry." Steps are also required to ensure researchers from all sectors can access data. "[W]hat we hear from industry, the number one, best thing we could be doing would be making good government data available," said Mulopulos.

These discussions will be pushed to the forefront when the 2015 Trade Promotion Authority expires in July. "I think we have no reason to believe that the next administration would not want to renew the trade promotion authority," said Mulopulos. "So, we are setting ourselves up for a must-pass vehicle to debate: What should American competitiveness look like over the next five years? Over the next generation?"


Science and Technology Innovation Program

The Science and Technology Innovation Program (STIP) serves as the bridge between technologists, policymakers, industry, and global stakeholders.