What the federal government did on AI this month, documented for K-12 leaders. Every entry cites a primary source. Every K-12 implication is grounded in the action that triggered it. This is not speculation. It is the record.
Version: Spring 2026 · For practitioners and policy students
The Department of Health and Human Services published its AI Strategy, a 21-page document outlining five pillars for integrating AI across the agency: governance and risk management, infrastructure and platform design, workforce development and burden reduction, health research and reproducibility, and care and public health delivery modernization. While HHS does not regulate K-12 directly, the strategy signals how federal agencies are framing AI governance internally, including data standards, risk management, and workforce training.
This Executive Order (EO 14365) declares it U.S. policy to establish a "minimally burdensome national policy framework for AI" and initiates a coordinated federal effort to challenge state AI laws deemed inconsistent with that framework. The order directs the creation of a DOJ AI Litigation Task Force, requires the Commerce Department to evaluate and identify "onerous" state AI laws, directs the FTC to issue a preemption policy statement and the FCC to consider adopting a federal AI disclosure standard, and conditions federal broadband funding (BEAD Program) on state compliance. For K-12: the order explicitly preserves state authority over child safety, state procurement and use of AI, and AI compute infrastructure, but the boundary between "child safety" regulation and general AI regulation affecting schools has not been defined.
The December 11 EO directs the Attorney General to establish an AI Litigation Task Force within 30 days. The Task Force's sole responsibility is to challenge state AI laws deemed inconsistent with federal policy, on grounds of unconstitutional regulation of interstate commerce, federal preemption, or other legal theories. The Task Force will consult with the White House Special Advisor for AI and Crypto (David Sacks) and senior policy officials. This is not a rulemaking body. It is a litigation unit. Its output will be federal lawsuits against state governments.
The Secretary of Commerce must publish an evaluation identifying state AI laws that conflict with the EO's federal policy, including laws that require AI systems to alter "truthful outputs" or compel disclosures that may raise First Amendment concerns. Laws identified as "onerous" may be referred to the DOJ AI Litigation Task Force for federal challenge. Commerce must also issue a BEAD Program Policy Notice conditioning broadband infrastructure funding on state compliance. For K-12: this evaluation will define the federal government's target list. Once published, districts will know which state laws are at risk.
The EO directs two independent agencies to act on different timelines. The FTC Chairman must issue a policy statement within 90 days of the EO (Section 7), explaining when state laws that "require alterations to the truthful outputs of AI models" are preempted by Section 5 of the FTC Act. The FCC Chairman must initiate a proceeding within 90 days of the Commerce Department's evaluation (Section 6), not 90 days from the EO itself. This means the FCC proceeding will begin later than the FTC statement. For K-12: the FTC statement is the one to watch. If the FTC declares that requiring AI bias mitigation is "deceptive" under federal law, state laws mandating algorithmic fairness in AI tools, including some that affect edtech vendors, could face preemption arguments.
The December 11 Executive Order (EO 14365) was officially published in the Federal Register on December 16, 2025. Federal Register publication formalizes the order, but the compliance clocks do not run from it: all 30-day and 90-day deadlines referenced in the order run from December 11 (the date of signing), not December 16. The DOJ Task Force deadline is January 10, 2026. The Commerce and FTC deadlines fall around March 11, 2026. The FCC deadline runs 90 days from the Commerce Department's evaluation, not from the EO itself.
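For readers tracking these dates, the clock arithmetic can be checked directly. The sketch below is illustrative only: it assumes, per the summary above, that every deadline runs from the December 11, 2025 signing date, and the June figure for the FCC is a worst-case bound (Commerce using its full 90 days), not a date set by the order.

```python
from datetime import date, timedelta

# Assumption from the digest: all clocks run from the signing date, not
# the December 16 Federal Register publication date.
EO_SIGNED = date(2025, 12, 11)

doj_task_force_deadline = EO_SIGNED + timedelta(days=30)   # 2026-01-10
commerce_eval_deadline = EO_SIGNED + timedelta(days=90)    # 2026-03-11
ftc_statement_deadline = EO_SIGNED + timedelta(days=90)    # 2026-03-11

# The FCC clock starts only when Commerce publishes its evaluation; if
# Commerce takes its full 90 days, the FCC proceeding could begin as
# late as roughly 2026-06-09.
fcc_latest_start = commerce_eval_deadline + timedelta(days=90)

print(doj_task_force_deadline, commerce_eval_deadline, fcc_latest_start)
```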
Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act), which requires large frontier AI model developers to create, publish, and comply with safety and security protocols; report safety incidents to the state within 72 hours; and submit to oversight by a new office within the Department of Financial Services. The negotiated version sets civil penalties at up to $1 million for a first violation and up to $3 million for subsequent violations. The RAISE Act was signed eight days after the federal EO directing challenges to state AI laws, making it a potential target of the new federal preemption posture. For K-12: New York districts relying on state-level AI safety requirements should monitor whether the RAISE Act is included in the Commerce Department's evaluation of "onerous" state laws.
As December 2025 closes, the TAKE IT DOWN Act (S. 146), signed on May 19, 2025, remains the only AI-specific federal statute enacted this year. It is included in this December digest because it establishes the federal AI statutory baseline as of year's end: one law, focused on nonconsensual intimate imagery, with no comprehensive AI governance framework in place. The law criminalizes nonconsensual distribution of intimate images (including AI-generated deepfakes) and imposes notice-and-removal obligations on covered platforms. Covered platforms have until May 19, 2026, to establish the required removal process. For K-12: this law establishes a new federal floor for AI-generated intimate imagery that most district governance documents were not written to account for. Districts should review whether their acceptable use policies and incident response protocols address this category of harm now that it carries federal criminal penalties and platform takedown obligations.
Multiple state AI laws take effect on January 1, 2026. They are included in this December digest because their effective dates frame the December 11 EO: these appear to be the laws the EO was timed to preempt. The laws taking effect include California's Transparency in Frontier AI Act (SB 53), which requires safety frameworks, incident reporting, and whistleblower protections for frontier model developers, and the Texas Responsible AI Governance Act (RAIGA). These laws take effect just three weeks after the federal EO directing their potential preemption. King & Spalding's analysis observes that the timing of the EO "suggests that the California TFAIA, the Texas RAIGA, and other state AI laws with proximate effective dates are targets of the Executive Order." For K-12: these laws remain fully enforceable. No court has enjoined any of them. Districts in affected states should continue compliance, but should also prepare for the possibility that some provisions face federal challenge in 2026.