What the federal government did on AI this month, documented for K-12 leaders. Every entry cites a primary source. Every K-12 implication is grounded in the action that triggered it. This is not speculation. It is the record.
Version: Spring 2026 · For practitioners and policy students
Attorney General Pam Bondi issued an internal memorandum to all DOJ employees formally establishing the AI Litigation Task Force directed by EO 14365. The Task Force's sole responsibility is to challenge state AI laws deemed inconsistent with federal policy, on grounds that they unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful. Bondi or her designee will serve as Chair; the Associate Attorney General will serve as Vice Chair. The Task Force includes representatives from the Office of the Deputy Attorney General, the Office of the Associate Attorney General, the Office of the Solicitor General, and the Civil Division. The memorandum directs the Task Force to consult with the Special Advisor for AI and Crypto (identified in news reporting as David Sacks) on which state laws to target. As of early February 2026, no reporting indicates that the Task Force has filed any lawsuits; analyses published in late January and early February describe it as operational but awaiting the Commerce Department evaluation (due March 11, 2026) before initiating litigation.
The White House Council of Economic Advisers published a report titled "Artificial Intelligence and the Great Divergence," comparing AI's potential economic impact to that of the Industrial Revolution. The report argues that countries leading in AI investment, infrastructure, and adoption are positioned to capture outsized productivity gains, while lagging economies risk falling further behind. It highlights that AI-related investment contributed roughly 1.3 percentage points to U.S. GDP growth on an annualized basis in early 2025, and it advocates for continued deregulation and infrastructure development. The report frames AI dominance as a matter of economic statecraft, not just technology policy. For K-12: this report creates no compliance obligations, but it establishes the economic framing the administration will use to justify its preemption strategy. When the Commerce Department evaluation arrives in March, it will be grounded in this argument: that state AI regulation threatens national economic competitiveness.
OMB issued Memorandum M-26-05, "Adopting a Risk-based Approach to Software and Hardware Security," rescinding the Biden-era requirements (M-22-18 and M-23-16) that federal agencies obtain standardized secure software development attestations from software vendors before using their products. The memorandum states that the prior requirements "imposed unproven and burdensome software accounting processes that prioritized compliance over genuine security investments." Agencies are no longer required to use CISA's Common Form attestation. Instead, each agency must develop its own risk-based approach to validating software and hardware security. Agencies retain discretion to require Software Bills of Materials (SBOMs) and attestations, but these are now optional, not mandatory. For K-12: districts that receive federal technology funding or operate under federal grant programs may see changes in how agencies assess the security posture of software vendors. The shift from a standardized attestation to agency-level discretion could create inconsistency in what documentation vendors must produce.
As January closes, three major federal AI deliverables are due within 41 days. First: the Commerce Department must publish its evaluation of state AI laws identifying those deemed "onerous" and appropriate for referral to the DOJ AI Litigation Task Force (EO 14365, Section 4). Second: the FTC Chairman must issue a policy statement explaining when state laws requiring alterations to AI model outputs are preempted by Section 5 of the FTC Act (EO 14365, Section 7). Third: federal agencies must update their procurement policies to require LLM vendor compliance with the Unbiased AI Principles of truth-seeking and ideological neutrality (OMB M-26-04). These three deliverables, if published on schedule, will collectively define the federal government's position on which state AI laws it considers incompatible with federal policy, how it intends to use existing federal law to preempt them, and what procurement documentation standards federal agencies will impose on AI vendors.