What the federal government did on AI this month, documented for K-12 leaders. Every entry cites a primary source. Every K-12 implication is grounded in the action that triggered it. This is not speculation. It is the record.
Version: Spring 2026 · For practitioners and policy students
President Trump signed Executive Order 14390 on March 6, 2026, directing federal agencies to develop coordinated strategies for combating transnational cyber-enabled fraud, including AI-powered impersonation scams and deepfake-driven schemes. The EO was published in the Federal Register on March 11, 2026. The order requires the Attorney General and the Secretaries of State, Treasury, Defense, and Homeland Security to submit an action plan to the President within 120 days. That plan must identify transnational criminal organizations responsible for scam operations, propose strategies for disrupting them, and establish a dedicated operational cell within the National Coordination Center. The EO also directs the Attorney General to prioritize prosecutions of cyber-enabled fraud and to recommend a Victims Restoration Program funded from seized and forfeited assets. The White House fact sheet noted that American consumers lost over $12.5 billion to cyber-enabled fraud in 2024, that seniors lost the most on average, and that one in seven young people who experienced sextortion as minors reported self-harm.
March 11, 2026, was the compliance deadline for OMB Memorandum M-26-04, "Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles," issued December 11, 2025. By this date, all executive agencies were required to update their procurement policies to ensure that contracts for large language models include requirements addressing compliance with the Unbiased AI Principles established by Executive Order 14319 (July 2025). The two Unbiased AI Principles are "truth-seeking" (LLM outputs must be truthful, prioritize historical and scientific accuracy, and acknowledge uncertainty) and "ideological neutrality" (LLMs must not integrate partisan or ideological judgments). Updated agency policies must include processes for users to report LLM outputs that violate these principles. Agencies must also, to the extent practicable, modify existing LLM contracts to include these requirements before exercising any contract options. The memo establishes two tiers of transparency: minimum transparency (acceptable use policies, model or system cards, end-user resources, feedback mechanisms) and enhanced transparency (pre-training and post-training activity disclosures, bias evaluation results, enterprise-level controls, third-party modification disclosures). The guidance expires December 11, 2027, unless the OMB Director extends it.
Executive Order 14365 (December 11, 2025) required the Secretary of Commerce to publish, by March 11, 2026, an evaluation identifying state AI laws that are "onerous" and that conflict with the federal policy of maintaining AI dominance through a minimally burdensome national framework. The evaluation was required, at minimum, to identify state laws that require AI models to alter their truthful outputs or that compel disclosures violating the First Amendment. The Commerce Department was also required, by the same date, to send a Policy Notice to any state identified as having onerous AI laws, declaring that state ineligible for non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) Program. The BEAD Program controls approximately $42 billion in broadband funding. As of late March 2026, the Commerce Department had not publicly released this evaluation. Holland & Knight noted the delay and observed that it introduces uncertainty about the administration's near-term posture on state preemption enforcement, even as the White House National Policy Framework (released March 20) reinforced a clear preference for federal uniformity.
Executive Order 14365 (December 11, 2025), Section 7, directed the FTC Chairman to issue a policy statement within 90 days explaining how the FTC Act's prohibition on unfair and deceptive practices applies to AI, and the circumstances under which state laws requiring alterations to the truthful outputs of AI models are preempted by federal law. That 90-day deadline fell on March 11, 2026. On March 10, TechFreedom released an open letter calling on the FTC to take public comments before finalizing the statement, noting that the Commission was operating with only two Commissioners (both Republican) and that public input would strengthen the statement's legal standing under Skidmore deference. As of the March 11 deadline, at least one source reported that the FTC had not yet released the statement. The Language Firm was unable to locate the full text of this policy statement on ftc.gov as of this digest's publication date, and no major law firm had published an analysis of the statement's content, an absence that would be unusual if a significant FTC policy statement had been publicly released. This entry will be updated when the statement is published or its status is clarified. The underlying legal question remains significant: the administration's theory is that state laws requiring AI developers to adjust model outputs to prevent algorithmic discrimination could compel the production of "deceptive" outputs under Section 5. Legal analysts have described this theory as untested. Colorado's AI Act (effective June 30, 2026) requires algorithmic discrimination prevention measures and could be directly affected if the FTC adopts this position in a finalized statement.
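Several entries in this digest turn on overlapping compliance clocks (the 90-day FTC deadline above, the 120-day action plan under EO 14390). For readers tracking these dates, a minimal Python sketch can verify the arithmetic, assuming plain calendar-day counting from the triggering date; the function and variable names here are illustrative, not from any official source:

```python
from datetime import date, timedelta

def compliance_deadline(trigger: date, days: int) -> date:
    """N calendar days after the triggering date.

    Assumption for illustration: a simple calendar-day count.
    Some directives instead specify business days or other rules.
    """
    return trigger + timedelta(days=days)

# EO 14365 issued December 11, 2025; Section 7's 90-day clock
ftc_deadline = compliance_deadline(date(2025, 12, 11), 90)
print(ftc_deadline)  # 2026-03-11, matching the deadline cited above

# EO 14390 signed March 6, 2026; 120-day interagency action plan
plan_deadline = compliance_deadline(date(2026, 3, 6), 120)
print(plan_deadline)  # 2026-07-04 under the same counting assumption
```

The 90-day computation reproduces the March 11, 2026 deadline cited in the entries above, which suggests the calendar-day assumption matches how these clocks are being counted in practice.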
On March 18, 2026, Senator Marsha Blackburn (R-TN) released an updated discussion draft of the TRUMP AMERICA AI Act (formally titled the Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act). The 291-page draft is organized into 17 titles and represents the most comprehensive federal AI legislation proposed in the United States to date. The bill is built around protecting children, creators, conservatives, and communities. It incorporates the Kids Online Safety Act (KOSA) and the NO FAKES Act. Key provisions include a duty of care requiring AI developers to take reasonable steps to mitigate foreseeable harms from their products; a sunset of Section 230 of the Communications Act; chatbot safety provisions under the GUARD Act with age verification for accounts belonging to minors under 18; mandatory third-party audits for bias and discrimination; registration requirements for foreign AI developers; and broad federal preemption of state AI laws that regulate inherently interstate AI development. On copyright, the draft establishes that training AI on copyrighted works is per se not fair use. Blackburn has been working with the White House on the draft and shared a copy with Senate Commerce Chair Ted Cruz before its release.
On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, a four-page document outlining legislative recommendations for Congress. The Framework was required by Executive Order 14365 (December 11, 2025). It organizes its recommendations around six areas: protecting children and empowering parents; safeguarding communities from AI-enabled harms; respecting intellectual property rights; preventing AI-driven censorship; enabling innovation; and developing an AI-ready workforce. On child safety, it calls for Congress to mandate age-assurance requirements, give parents tools to control privacy settings and content exposure, and clarify that existing child privacy laws apply to AI systems. On preemption, it supports broad federal override of state AI laws while preserving state authority to enforce generally applicable child protection, consumer protection, and fraud laws. It explicitly opposes creating any new federal regulatory body for AI. The Framework is not binding and does not itself create legal obligations. On copyright, it defers to the courts. On workforce, it calls for non-regulatory methods and further study of job displacement trends. Legal analysts at GovTech noted the Framework is silent on algorithmic accountability, adult data privacy, transparency and explainability requirements, and enforcement mechanisms.
On March 20, 2026, the same day the White House released its National Policy Framework, Reps. Don Beyer (D-VA), Doris Matsui (D-CA), Ted Lieu (D-CA), Sara Jacobs (D-CA), and April McClain Delaney (D-MD) introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act in the House. Senator Brian Schatz (D-HI) announced companion legislation in the Senate, formally introduced on March 26 as S.4216 and cosponsored by Senators Chris Coons (D-DE), Chris Murphy (D-CT), Tammy Duckworth (D-IL), Lisa Blunt Rochester (D-DE), and Andy Kim (D-NJ). The bill would nullify Executive Order 14365 and prohibit the order from taking effect. Its stated purpose is to ensure states can continue to enact AI safeguards without risking the loss of federal funds. The GUARDRAILS Act is not expected to advance under the current Republican congressional majority. Its significance is as a marker of the political fault line: the preemption debate is now formalized in competing legislative vehicles.
On March 31, 2026, OMB issued Memorandum M-26-10, "Reinforcing Transparency, Accountability, and Oversight of Federal Technology," signed by OMB Director Russell Vought. The memo directs federal agencies to centralize IT contract oversight under their Chief Information Officers and to treat information technology as a core component of government operations. Beginning in May 2026, CIOs at CFO Act agencies must submit monthly reports to OMB detailing approved IT contracts and agreements, including those tied to public-facing digital services. The policy requires all future solicitations and contracts to disclose utilization and pricing information to the government, with no limits on how that information may be shared across agencies. The reporting requirements run through October 2026, with possible extension. Federal CIO Greg Barbaccia described the goal as ending situations in which agencies pay different prices for the same tools without cross-agency visibility.