The US AI Regulatory Landscape
The United States takes a fundamentally different approach to AI regulation than the EU. Rather than comprehensive horizontal legislation, the US relies on a patchwork of executive actions, agency guidance, existing statutory authorities, and emerging state laws.
💡 Regulatory Philosophy
The US approach emphasizes innovation, sector-specific regulation, and voluntary frameworks over prescriptive horizontal legislation. This creates a complex landscape where compliance requirements vary by sector, jurisdiction, and application.
Key Components of US AI Governance
- Executive Orders: Presidential directives setting policy priorities and agency mandates
- NIST Frameworks: Voluntary standards providing technical guidance
- Agency Actions: FTC, EEOC, HHS, DOJ enforcement under existing authorities
- State Laws: Emerging comprehensive and sector-specific AI legislation
- Industry Standards: Self-regulatory initiatives and codes of conduct
Executive Order on AI Safety (October 2023)
Executive Order 14110, signed October 30, 2023, represents the most comprehensive federal AI policy directive to date.
Key Requirements
Reporting Requirements for Foundation Models
- Developers of the most powerful AI systems must share safety test results with the federal government
- Applies to models trained using more than 10^26 integer or floating-point operations (FLOPs)
- Lower thresholds for models trained primarily on biological sequence data
- Information sharing through Defense Production Act authorities
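To make the 10^26 threshold concrete, training compute is often estimated with the common heuristic of roughly 6 FLOPs per parameter per training token. This heuristic, and the model sizes below, are illustrative assumptions and are not part of the Executive Order itself:

```python
def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token.
    A common industry heuristic, not a figure defined in EO 14110."""
    return 6 * params * tokens

# EO 14110 reporting threshold (integer or floating-point operations)
EO_THRESHOLD = 1e26

# Hypothetical model: 70B parameters trained on 2T tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e}")        # 8.40e+23
print(flops > EO_THRESHOLD)  # False: well below the reporting threshold
```

By this estimate, even very large present-day training runs sit one to two orders of magnitude below the threshold, which was set to capture only frontier-scale systems.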
Standards Development
- NIST directed to develop standards, tools, and tests for AI safety and security
- Guidelines for red-teaming and adversarial testing
- Standards for AI content authentication and watermarking
- Guidance on secure software development practices for AI
Agency-Specific Mandates
| Agency | AI-Related Mandate |
| --- | --- |
| Commerce (NIST) | Develop AI safety standards; establish AI Safety Institute |
| HHS | Establish safety program for AI in healthcare; develop AI assurance policy |
| DOL | Develop principles for AI in workplace; address displacement risks |
| DOJ/DHS | Address AI-enabled threats; guidance on civil rights implications |
| Treasury | Report on AI cybersecurity in financial sector |
⚠ Executive Order Limitations
Executive Orders bind federal agencies but do not directly regulate private sector entities (except through existing agency authorities). Requirements can be modified or revoked by subsequent administrations. They are not equivalent to binding legislation.
NIST AI Risk Management Framework
The NIST AI RMF 1.0, released January 2023, provides a voluntary framework for managing risks throughout the AI lifecycle. It has become a de facto standard referenced by regulators and industry.
Framework Structure
- 🎯 GOVERN: Establish governance structures, policies, and accountability mechanisms for AI risk management across the organization.
- 🔍 MAP: Identify and document the context, capabilities, and potential impacts of AI systems; understand the AI system and its environment.
- 📈 MEASURE: Assess, analyze, and track AI risks using appropriate metrics, methods, and benchmarks.
- 🛠 MANAGE: Allocate resources to address mapped and measured risks; implement risk treatment strategies.
AI RMF Characteristics of Trustworthy AI
- Valid and Reliable: Accurate, consistent, and generalizable performance
- Safe: Does not endanger human life, health, property, or environment
- Secure and Resilient: Protected against unauthorized access and maintains performance under adverse conditions
- Accountable and Transparent: Clear responsibility assignment and explainable operations
- Explainable and Interpretable: Understandable to relevant stakeholders
- Privacy-Enhanced: Protects privacy and enables human control over data
- Fair - with Harmful Bias Managed: Equitable treatment across groups and individuals
✓ Practical Application
While voluntary, implementing the NIST AI RMF demonstrates due diligence and can provide evidence of reasonable AI governance. Many federal contracts now reference the framework, and some state laws explicitly incorporate it.
FTC Enforcement Authority
The Federal Trade Commission has emerged as the primary federal enforcer for AI-related harms, relying on its existing authority under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices.
FTC AI Enforcement Priorities
- Deceptive Claims: False or unsubstantiated claims about AI capabilities
- Algorithmic Discrimination: AI systems that discriminate based on protected characteristics
- Data Security: Inadequate protection of data used in AI systems
- Dark Patterns: AI-enabled manipulative design
- Synthetic Media: Deceptive use of AI-generated content
Notable FTC AI Actions
| Action | Issue | Remedy |
| --- | --- | --- |
| Rite Aid (2023) | Facial recognition misidentification causing harm | 5-year ban on facial recognition; algorithm deletion ordered |
| Weight Watchers/Kurbo (2022) | Children's data used for AI training without consent | Data and algorithm deletion; $1.5M penalty |
| Amazon/Alexa (2023) | Children's voice data retention; dark patterns | $25M penalty; data deletion requirements |
⚠ Algorithmic Disgorgement
The FTC has ordered "algorithmic disgorgement" - deletion of AI models trained on improperly obtained data - as a remedy. This represents a significant enforcement tool that can eliminate the value of AI investments built on non-compliant data practices.
State AI Legislation
In the absence of comprehensive federal AI legislation, states have become active AI regulators. The landscape is rapidly evolving.
Colorado AI Act (SB 24-205)
The first comprehensive state AI law in the US, effective February 1, 2026.
Key Requirements:
- Scope: "High-risk AI systems" making or substantially contributing to consequential decisions about consumers
- Consequential Decisions: Education, employment, financial services, government services, healthcare, housing, insurance, legal services
- Developer Obligations: Documentation, known limitations, testing results, data governance, statements of intended uses
- Deployer Obligations: Risk management policy, impact assessment, consumer notice, human oversight capability
- Consumer Rights: Notice of AI use, explanation of decision factors, opportunity to correct data, human review appeal
- Enforcement: Attorney General exclusive enforcement; no private right of action; affirmative defense for NIST AI RMF compliance
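For illustration only, the deployer obligations listed above could be tracked as a simple compliance record. The field and class names below are my own shorthand, not statutory terms:

```python
from dataclasses import dataclass, field

@dataclass
class HighRiskDeployment:
    """Hypothetical compliance record for a deployer of a high-risk AI
    system under the Colorado AI Act. Names are illustrative, not statutory."""
    system_name: str
    decision_area: str                    # e.g. "employment", "housing"
    risk_management_policy: bool = False  # policy adopted and maintained
    impact_assessment_done: bool = False  # completed before deployment
    consumer_notice_given: bool = False   # consumers notified of AI use
    human_oversight: bool = False         # human review / appeal capability
    open_items: list = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """All four core deployer obligations satisfied?"""
        return all([self.risk_management_policy, self.impact_assessment_done,
                    self.consumer_notice_given, self.human_oversight])

record = HighRiskDeployment("resume-screener-v2", "employment")
print(record.ready_to_deploy())  # False until every obligation is met
```

A checklist like this is not a compliance program in itself, but it mirrors how the statute separates one-time setup (policy) from ongoing duties (assessments, notices, oversight).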
California
- CPRA/CCPA: Profiling opt-out rights; automated decision-making access rights
- AB 2013: AI training data transparency requirements for generative AI
- SB 1047 (vetoed 2024): Would have imposed safety requirements on large AI models
- Bot Disclosure Laws: Require bots to disclose non-human identity in certain contexts
- AB 302: State agencies must inventory high-risk automated decision systems
Other States
- Illinois: Artificial Intelligence Video Interview Act (requires consent for AI analysis of video interviews in hiring); further amendments have been proposed
- Maryland: Facial recognition restrictions in housing
- New York City: Local Law 144 - automated employment decision tools (bias audits)
- Texas: Proposed comprehensive AI legislation
- Connecticut: AI task force established; legislation expected