U.S. Sen. John Thune (R-S.D.), ranking member of the Subcommittee on Communications, Media, and Broadband, today commended the Senate Commerce Committee’s approval of his bipartisan Artificial Intelligence (AI) Research, Innovation, and Accountability Act. The legislation aims to enhance innovation while ensuring transparency, accountability, and security in the development and operation of high-impact AI applications.
“As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention,” said Thune. “This legislation would bolster the United States’ leadership in AI, strengthen innovation, and establish common-sense safety and security guardrails for the highest-risk AI applications. I hope Leader Schumer will prioritize AI transparency legislation and bring this important bill to the Senate floor for a vote as soon as possible.”
The bill includes several key provisions:
- **Content Provenance and Emergence Detection Standards**: Directs the National Institute of Standards and Technology (NIST) to conduct research toward standards for distinguishing between human-generated and AI-generated content, similar to efforts by the Coalition for Content Provenance and Authenticity. NIST is also directed to support standardized methods for detecting emergent properties in AI systems so that risks arising from unanticipated behavior can be addressed.
- **AI Definitions**: Establishes new definitions for “generative,” “high-impact,” and “critical-impact” AI systems, and distinguishes between the “developer” and the “deployer” of an AI system for purposes of the bill’s requirements.
- **Generative AI Transparency**: Requires large internet platforms to notify users when generative AI is used to create content they are shown, with enforcement by the U.S. Department of Commerce.
- **NIST Recommendations to Agencies**: Requires NIST to develop recommendations for technical, risk-based guardrails on “high-impact” AI systems in consultation with other federal agencies and non-governmental stakeholders, with the Office of Management and Budget overseeing interagency implementation.
- **Risk Management Assessment and Reporting**: Requires companies deploying critical-impact AI systems to perform detailed risk assessments consistent with NIST’s AI Risk Management Framework, and requires deployers of “high-impact” AI systems to submit transparency reports to the Commerce Department.
- **Critical-Impact AI Certification**: Subjects critical-impact AI systems to a certification framework under which organizations self-certify compliance with standards issued by the Commerce Department.
Additional information on the specifics of these provisions is available through official channels.