The Future of AI Regulation in the United States (2026)
The United States is navigating a complex landscape of artificial intelligence (AI) regulation as of January 27, 2026, characterized by a patchwork of federal and state initiatives, evolving executive orders, and ongoing debate over the appropriate level of government oversight. While a comprehensive federal AI law has not yet passed, significant regulatory momentum is building, notably around issues of bias, safety, and national security.
Federal Regulatory Landscape
Currently, the U.S. does not have a single, overarching law governing AI. Rather, regulation occurs through existing agencies applying existing laws to AI systems, and through new executive actions and proposed legislation.
The AI Risk Management Framework (AI RMF)
The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework in January 2023. It provides voluntary, consensus-based guidance for organizations designing, developing, deploying, or using AI systems, and it has become a foundational document for many federal agencies.
Executive Order 14110
President Biden signed Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," on October 30, 2023, directing federal agencies to take specific actions related to AI safety and security. The order focuses on areas such as AI safety testing, cybersecurity, privacy protection, and responsible innovation.
Agency-Specific Regulations
Several federal agencies are actively regulating AI within their existing jurisdictions. The Federal Trade Commission (FTC) is focusing on AI-driven fraud and deceptive practices, using its authority under Section 5 of the FTC Act. The Equal Employment Opportunity Commission (EEOC) is addressing AI bias in hiring and employment practices.
State-Level AI Regulation
States are increasingly enacting their own AI laws, often moving faster than the federal government. This has created a complex regulatory environment for companies operating nationally.
California AI Law (AB 3271)
California passed Assembly Bill 3271 in October 2023, requiring developers of "high-risk" AI systems to disclose potential risks and take steps to mitigate them. The law focuses on systems that could significantly impact individuals' rights or safety, and it went into effect on February 15, 2024.
New York City Local Law 144
New York City enacted Local Law 144 in December 2021, requiring bias audits of automated employment decision tools. This law aims to prevent discriminatory hiring practices driven by AI algorithms. Enforcement began in July 2023.
Illinois Biometric Information Privacy Act (BIPA)
While not specifically an AI law, Illinois’ BIPA, enacted in 2008, has significant implications for AI systems that utilize biometric data, such as facial recognition. Numerous lawsuits have been filed under BIPA against companies using biometric AI.
Key Areas of Regulatory Focus
Several key areas are receiving the most attention from regulators and lawmakers.
AI Bias and Discrimination
Addressing bias in AI algorithms is a major concern. Regulators are focused on ensuring that AI systems do not perpetuate or exacerbate existing societal inequalities. The EEOC's AI initiative and state laws such as New York City's Local Law 144 directly address this issue.
AI Safety and Security
Concerns about the potential for AI systems to cause harm, either intentionally or unintentionally, are driving increased regulatory scrutiny. Executive Order 14110 emphasizes the need for AI safety testing and cybersecurity measures. The Cybersecurity and Infrastructure Security Agency (CISA) is playing a key role in this area.
