Part 4.5 of 7

United States Federal & State

📚 2-2.5 hours 🎯 Intermediate 📅 Updated January 2026

The US AI Regulatory Landscape

The United States takes a fundamentally different approach to AI regulation than the EU. Rather than comprehensive horizontal legislation, the US relies on a patchwork of executive actions, agency guidance, existing statutory authorities, and emerging state laws.

💡 Regulatory Philosophy

The US approach emphasizes innovation, sector-specific regulation, and voluntary frameworks over prescriptive horizontal legislation. This creates a complex landscape where compliance requirements vary by sector, jurisdiction, and application.

Key Components of US AI Governance

  • Executive Orders: Presidential directives setting policy priorities and agency mandates
  • NIST Frameworks: Voluntary standards providing technical guidance
  • Agency Actions: FTC, EEOC, HHS, DOJ enforcement under existing authorities
  • State Laws: Emerging comprehensive and sector-specific AI legislation
  • Industry Standards: Self-regulatory initiatives and codes of conduct

Executive Order on AI Safety (October 2023)

Executive Order 14110, signed October 30, 2023, was the most comprehensive federal AI policy directive issued to date. Although it was rescinded in January 2025, much of the standards development and agency activity it set in motion remains influential.

Key Requirements

Reporting Requirements for Foundation Models

  • Developers of the most powerful AI systems must share safety test results with the federal government
  • Applies to models trained using more than 10^26 computational operations (FLOP)
  • Lower thresholds for models trained primarily on biological sequence data
  • Information sharing through Defense Production Act authorities
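The 10^26 FLOP threshold can be sanity-checked against a planned training run. A minimal sketch, assuming the common ~6 × parameters × tokens approximation for dense transformer training compute (a heuristic from the scaling-law literature, not part of the order itself):

```python
# Rough check of whether a training run crosses the EO 14110 reporting
# threshold of 1e26 FLOP. Uses the ~6 * parameters * tokens heuristic for
# dense transformer training compute (an approximation, not the order's text).

THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOP."""
    return 6 * params * tokens

def crosses_threshold(params: float, tokens: float) -> bool:
    return training_flop(params, tokens) >= THRESHOLD_FLOP

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flop = training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP")              # ~6.3e24, below the 1e26 threshold
print(crosses_threshold(70e9, 15e12))  # False
```

Note that the biosequence carve-out in the order uses a lower threshold (10^23 FLOP), so the same check would apply with a smaller constant for those models.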

Standards Development

  • NIST directed to develop standards, tools, and tests for AI safety and security
  • Guidelines for red-teaming and adversarial testing
  • Standards for AI content authentication and watermarking
  • Guidance on secure software development practices for AI

Agency-Specific Mandates

  • Commerce (NIST): Develop AI safety standards; establish the AI Safety Institute
  • HHS: Establish a safety program for AI in healthcare; develop an AI assurance policy
  • DOL: Develop principles for AI in the workplace; address displacement risks
  • DOJ/DHS: Address AI-enabled threats; issue guidance on civil rights implications
  • Treasury: Report on AI cybersecurity in the financial sector

⚠ Executive Order Limitations

Executive Orders bind federal agencies but do not directly regulate private sector entities (except through existing agency authorities). Requirements can be modified or revoked by subsequent administrations. They are not equivalent to binding legislation.

NIST AI Risk Management Framework

The NIST AI RMF 1.0, released January 2023, provides a voluntary framework for managing risks throughout the AI lifecycle. It has become a de facto standard referenced by regulators and industry.

Framework Structure

🎯 GOVERN: Establish governance structures, policies, and accountability mechanisms for AI risk management across the organization.

🔍 MAP: Identify and document the context, capabilities, and potential impacts of AI systems. Understand the AI system and its environment.

📈 MEASURE: Assess, analyze, and track AI risks using appropriate metrics, methods, and benchmarks.

🛠 MANAGE: Allocate resources to address mapped and measured risks. Implement risk treatment strategies.
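In practice, the four functions can be treated as a completion checklist for each identified risk. A minimal sketch of such a risk register; the class, field, and example text are illustrative, and only the GOVERN/MAP/MEASURE/MANAGE labels come from the framework itself:

```python
from dataclasses import dataclass, field

# The four AI RMF function labels, in the order the framework presents them.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskEntry:
    """One identified AI risk, tracked through the four RMF functions."""
    description: str
    steps: dict = field(default_factory=dict)  # function name -> notes

    def record(self, function: str, notes: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"Unknown AI RMF function: {function}")
        self.steps[function] = notes

    def open_functions(self) -> list:
        """Functions not yet addressed for this risk."""
        return [f for f in RMF_FUNCTIONS if f not in self.steps]

# Hypothetical usage
risk = RiskEntry("Hiring model may under-select older applicants")
risk.record("GOVERN", "HR compliance owns model approvals")
risk.record("MAP", "Used for resume screening; affects employment decisions")
print(risk.open_functions())  # ['MEASURE', 'MANAGE']
```

A register like this also produces the kind of documentation trail that the Colorado AI Act's affirmative defense (discussed below) contemplates.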

AI RMF Characteristics of Trustworthy AI

  • Valid and Reliable: Accurate, consistent, and generalizable performance
  • Safe: Does not endanger human life, health, property, or environment
  • Secure and Resilient: Protected against unauthorized access and maintains performance under adverse conditions
  • Accountable and Transparent: Clear responsibility assignment and explainable operations
  • Explainable and Interpretable: Understandable to relevant stakeholders
  • Privacy-Enhanced: Protects privacy and enables human control over data
  • Fair - with Harmful Bias Managed: Equitable treatment across groups and individuals

✓ Practical Application

While voluntary, implementing the NIST AI RMF demonstrates due diligence and can provide evidence of reasonable AI governance. Many federal contracts now reference the framework, and some state laws explicitly incorporate it.

FTC Enforcement Authority

The Federal Trade Commission has emerged as the primary federal enforcer for AI-related harms, using its existing authority under Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices.

FTC AI Enforcement Priorities

  • Deceptive Claims: False or unsubstantiated claims about AI capabilities
  • Algorithmic Discrimination: AI systems that discriminate based on protected characteristics
  • Data Security: Inadequate protection of data used in AI systems
  • Dark Patterns: AI-enabled manipulative design
  • Synthetic Media: Deceptive use of AI-generated content

Notable FTC AI Actions

  • Rite Aid (2023): Facial recognition misidentification causing harm. Remedy: 5-year ban on facial recognition use; algorithm deletion ordered
  • Weight Watchers/Kurbo (2022): Children's data used for AI training without consent. Remedy: data and algorithm deletion; $1.5M penalty
  • Amazon/Alexa (2023): Children's voice data retention; dark patterns. Remedy: $25M penalty; data deletion requirements

⚠ Algorithmic Disgorgement

The FTC has ordered "algorithmic disgorgement" - deletion of AI models trained on improperly obtained data - as a remedy. This represents a significant enforcement tool that can eliminate the value of AI investments built on non-compliant data practices.

State AI Legislation

In the absence of comprehensive federal AI legislation, states have become active AI regulators. The landscape is rapidly evolving.

📍 Colorado AI Act (SB 24-205) (Enacted 2024)

The first comprehensive state AI law in the US, effective February 1, 2026.

Key Requirements:

  • Scope: "High-risk AI systems" making or substantially contributing to consequential decisions about consumers
  • Consequential Decisions: Education, employment, financial services, government services, healthcare, housing, insurance, legal services
  • Developer Obligations: Documentation, known limitations, testing results, data governance, statements of intended uses
  • Deployer Obligations: Risk management policy, impact assessment, consumer notice, human oversight capability
  • Consumer Rights: Notice of AI use, explanation of decision factors, opportunity to correct data, human review appeal
  • Enforcement: Attorney General exclusive enforcement; no private right of action; affirmative defense for NIST AI RMF compliance
📍 California AI Regulations (Multiple Enacted)

  • CPRA/CCPA: Profiling opt-out rights; automated decision-making access rights
  • AB 2013: AI training data transparency requirements for generative AI
  • SB 1047 (vetoed 2024): Would have imposed safety requirements on large AI models
  • Bot Disclosure Laws: Require bots to disclose non-human identity in certain contexts
  • AB 302: State agencies must inventory high-risk automated decision systems
📍 Other State Activity (Various)

  • Illinois: Artificial Intelligence Video Interview Act (consent required for AI analysis of video interviews in hiring), with proposed amendments
  • Maryland: Facial recognition restrictions in housing
  • New York City: Local Law 144 - automated employment decision tools (bias audits)
  • Texas: Proposed comprehensive AI legislation
  • Connecticut: AI task force established; legislation expected

Sector-Specific Federal Guidance

Healthcare (FDA/HHS)

  • FDA has cleared/authorized 500+ AI/ML-enabled medical devices
  • Software as Medical Device (SaMD) framework applies
  • Predetermined Change Control Plans for adaptive AI
  • Transparency requirements for clinical AI

Financial Services (Federal Regulators)

  • OCC, Fed, FDIC model risk management guidance (SR 11-7)
  • Fair lending laws apply to AI credit decisions (ECOA, Fair Housing Act)
  • SEC guidance on AI in trading and investment advice
  • CFPB focus on algorithmic discrimination in consumer finance

Employment (EEOC)

  • Technical Assistance Document on AI in employment (May 2023)
  • Title VII and ADA apply to AI hiring tools
  • Employers responsible for vendor AI tools
  • Adverse impact analysis required
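The standard screen for adverse impact is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a protected group's selection rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch with hypothetical numbers (the same impact-ratio concept underlies NYC Local Law 144 bias audits):

```python
# Selection-rate adverse impact check under the EEOC "four-fifths rule".
# All applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratios(groups: dict) -> dict:
    """groups: name -> (selected, applicants). Returns each group's
    selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

groups = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
for g, r in impact_ratios(groups).items():
    flag = "ADVERSE IMPACT" if r < 0.8 else "ok"
    print(f"{g}: ratio {r:.2f} ({flag})")
```

Here group_b's ratio is 0.30 / 0.48 ≈ 0.63, below the 0.8 threshold, so the tool would warrant further scrutiny. The four-fifths rule is a rule of thumb, not a safe harbor; statistical significance tests are also commonly used.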

📚 Key Takeaways

  • US AI governance combines federal executive action, agency enforcement, and state legislation - no single comprehensive federal law
  • Executive Order 14110 established reporting requirements for frontier models and directed agency action but did not directly regulate private entities
  • NIST AI RMF provides voluntary but influential standards; implementing it demonstrates due diligence
  • FTC enforces AI harms through Section 5 authority; algorithmic disgorgement is a significant remedy
  • Colorado AI Act (effective 2026) is the first comprehensive state AI law; compliance with NIST AI RMF provides affirmative defense
  • California has multiple AI-related laws including CCPA/CPRA profiling provisions and training data transparency requirements
  • Sector-specific requirements apply in healthcare (FDA), financial services (fair lending), and employment (EEOC)