Federal AI Movement · K-12 Intelligence Digest
Executive Orders · Agency Directives · Preemption Signals
March 2026 · The Language Firm

What the federal government did on AI this month, documented for K-12 leaders. Every entry cites a primary source. Every K-12 implication is grounded in the action that triggered it. This is not speculation. It is the record.

Version: Spring 2026  ·  For practitioners and policy students

How this digest works. Each month documents verified federal actions (executive orders, agency directives, task force formations, rulemaking proceedings, and legislative signals) that affect K-12 AI governance. Entries are classified by type: Executive Action (signed orders with force of law), Agency Directive (required agency deliverables with deadlines), and K-12 Signal (downstream implications that have not yet produced a compliance obligation but will shape the environment districts operate in). No entry is invented. Every date is sourced. K-12 implications labeled "TLF analysis" represent The Language Firm's interpretation of what each federal action means for district governance. They are grounded in the source material but are not sourced claims. They reflect professional judgment, not regulatory mandate.
Executive Action
Executive Order 14390: Combating Cybercrime, Fraud, and Predatory Schemes Against American Citizens

President Trump signed Executive Order 14390 on March 6, 2026, directing federal agencies to develop coordinated strategies for combating transnational cyber-enabled fraud, including AI-powered impersonation scams and deepfake-driven schemes. The EO was published in the Federal Register on March 11, 2026. The order requires the Attorney General and the Secretaries of State, Treasury, Defense, and Homeland Security to submit an action plan to the President within 120 days. That plan must identify transnational criminal organizations responsible for scam operations, propose strategies for disrupting them, and establish a dedicated operational cell within the National Coordination Center. The EO also directs the Attorney General to prioritize prosecutions of cyber-enabled fraud and to recommend a Victims Restoration Program funded from seized and forfeited assets. The White House fact sheet noted that American consumers lost over $12.5 billion to cyber-enabled fraud in 2024, with seniors losing the most on average, and that one in seven young people who experienced sextortion as a minor reported self-harm.

K-12 action items (TLF analysis)
  • This EO does not create direct obligations for districts, but it establishes the federal enforcement posture districts should prepare for
  • AI-generated voice cloning and deepfake impersonation are now a named federal enforcement priority. Districts using AI tools that generate synthetic voice or visual content, or whose vendors offer such capabilities, should document what safeguards are in place
  • If a district's communications platform, phone system, or parent-facing tools rely on AI-generated voice or media, those tools should be reviewed for impersonation risk
  • The Weekly Incident Bulletin will track enforcement actions that follow from this EO as they develop
Source: Executive Order 14390, White House, March 6, 2026 · White House Fact Sheet, March 6, 2026
Agency Directive
OMB M-26-04 Procurement Deadline: Federal Agencies Must Update AI Contract Policies

March 11, 2026, was the compliance deadline for OMB Memorandum M-26-04, "Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles," issued December 11, 2025. By this date, all executive agencies were required to update their procurement policies to ensure that contracts for large language models include requirements addressing compliance with the Unbiased AI Principles established by Executive Order 14319 (July 2025). The two Unbiased AI Principles are "truth-seeking" (LLM outputs must be truthful, prioritize historical and scientific accuracy, and acknowledge uncertainty) and "ideological neutrality" (LLMs must not integrate partisan or ideological judgments). Updated agency policies must include processes for users to report LLM outputs that violate these principles. Agencies must also, to the extent practicable, modify existing LLM contracts to include these requirements before exercising any contract options. The memo establishes two tiers of transparency: minimum transparency (acceptable use policies, model or system cards, end-user resources, feedback mechanisms) and enhanced transparency (pre-training and post-training activity disclosures, bias evaluation results, enterprise-level controls, third-party modification disclosures). The guidance expires December 11, 2027, unless the OMB Director extends it.

Federal procurement standards ripple into education markets. When the federal government changes what it requires from AI vendors, those vendors adjust their products and documentation for all customers, not just federal ones. Districts using AI tools that also serve federal agencies may see changes to terms of service, model behavior, or data practices in the months following this deadline. The question is not whether the changes happen. It is whether someone in the building notices when they do.
K-12 monitoring actions (TLF analysis)
  • This directive applies to federal agencies, not directly to districts. But the downstream effect matters
  • If a district uses an AI tool that also serves federal agencies, the vendor is now subject to new contractual transparency requirements for that federal relationship
  • Districts should monitor whether any AI tools in their stack change their terms of service, privacy policies, or model behavior in the months following this deadline
  • The District Filing tracks these vendor-level shifts as they surface
Source: OMB M-26-04, December 11, 2025 · Crowell & Moring analysis, December 19, 2025
Agency Directive
Commerce Department Evaluation of State AI Laws: Deadline Passed Without Public Release

Executive Order 14365 (December 11, 2025) required the Secretary of Commerce to publish, by March 11, 2026, an evaluation identifying state AI laws that are "onerous" and that conflict with the federal policy of maintaining AI dominance through a minimally burdensome national framework. The evaluation was required, at minimum, to identify state laws that require AI models to alter their truthful outputs or that compel disclosures violating the First Amendment. The Commerce Department was also required, by the same date, to send a Policy Notice to any state identified as having onerous AI laws, declaring that state ineligible for non-deployment funds under the Broadband Equity Access and Deployment (BEAD) Program. The BEAD Program controls approximately $42 billion in broadband funding. As of late March 2026, the Commerce Department had not publicly released this evaluation. Holland & Knight noted the delay and observed that it introduces uncertainty about the administration's near-term posture on state preemption enforcement, even as the White House National Policy Framework (released March 20) reinforced a clear preference for federal uniformity.

K-12 action items (TLF analysis)
  • The delay matters for districts in states with active AI legislation, particularly Colorado, California, and Texas
  • Until this evaluation is published, no state has been formally identified as "onerous" and no BEAD funding has been conditioned on AI law changes. State AI laws remain in full force
  • Districts should confirm with their state broadband office whether BEAD-funded projects serving their buildings are at risk
  • The Weekly Incident Bulletin will flag the Commerce evaluation the moment it is published
Source: Executive Order 14365, Sections 4 and 5, December 11, 2025 · Holland & Knight analysis, March 2026
Agency Directive
FTC AI and Section 5 Policy Statement: March 11 Deadline Arrived, Full Text Not Yet Published on FTC Website

Executive Order 14365 (December 11, 2025), Section 7, directed the FTC Chairman to issue a policy statement within 90 days explaining how the FTC Act's prohibition on unfair and deceptive practices applies to AI, and the circumstances under which state laws requiring alterations to the truthful outputs of AI models are preempted by federal law. That 90-day deadline fell on March 11, 2026. On March 10, TechFreedom released an open letter calling on the FTC to take public comments before finalizing the statement, noting that the Commission was operating with only two Commissioners (both Republican) and that public input would strengthen the statement's legal standing under Skidmore deference. As of the March 11 deadline, at least one source reported the FTC had not yet released the statement. The Language Firm was unable to locate the full text of this policy statement on ftc.gov as of the publication date of this digest. No major law firm has published an analysis of the statement's actual content, which would be expected if a significant FTC policy statement had been publicly released. This entry will be updated when the statement is published or when its status is clarified. The underlying legal question remains significant: the administration's theory is that state laws requiring AI developers to adjust model outputs to prevent algorithmic discrimination could compel the production of "deceptive" outputs under Section 5. Legal analysts have described this theory as untested. Colorado's AI Act (effective June 30, 2026) requires algorithmic discrimination prevention measures, which could be directly affected if the FTC adopts this position in a finalized statement.

The legal theory matters even without a finalized statement. The administration's position that bias mitigation equals deception under Section 5 is now part of the policy record through EO 14365 itself. Vendors serving K-12 districts may begin adjusting their practices in anticipation, regardless of whether the FTC has finalized its statement. The question for districts is whether vendor agreements reference specific compliance commitments that could shift if this theory gains enforcement traction.
K-12 monitoring actions (TLF analysis)
  • No compliance action is required from this entry. The FTC statement, if and when finalized, would apply to vendors, not districts directly
  • Districts should monitor whether this statement is published. When it is, its preemption position will affect the legal environment around AI tools used in schools, particularly in states with active AI legislation
  • The administration's theory that bias mitigation equals deception is already influencing vendor behavior. Districts should document whether any AI vendors in their stack change their bias mitigation, output filtering, or content moderation practices
  • The Weekly Incident Bulletin will flag the FTC statement the moment it is published
Source: EO 14365, Section 7 (directing FTC statement), December 11, 2025 · TechFreedom open letter calling for public comments, March 10, 2026 · Baker Botts analysis of March 11 deadlines, March 3, 2026
K-12 Signal
TRUMP AMERICA AI Act: Sen. Blackburn Releases 291-Page Discussion Draft

On March 18, 2026, Senator Marsha Blackburn (R-TN) released an updated discussion draft of the TRUMP AMERICA AI Act (formally titled the Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act). The 291-page draft is organized into 17 titles and represents the most comprehensive federal AI legislation proposed in the United States to date. The bill is built around protecting children, creators, conservatives, and communities. It incorporates the Kids Online Safety Act (KOSA) and the NO FAKES Act. Key provisions include a duty of care requiring AI developers to take reasonable steps to mitigate foreseeable harms from their products; a sunset of Section 230 of the Communications Act; chatbot safety provisions under the GUARD Act with age verification for accounts belonging to minors under 18; mandatory third-party audits for bias and discrimination; registration requirements for foreign AI developers; and broad federal preemption of state AI laws that regulate inherently interstate AI development. On copyright, the draft establishes that training AI on copyrighted works is per se not fair use. Blackburn has been working with the White House on the draft and shared a copy with Senate Commerce Chair Ted Cruz before its release.

K-12 action items (TLF analysis)
  • This is a discussion draft, not enacted law. It creates no compliance obligations today
  • The KOSA provisions would require platforms to implement safeguards reducing risks of exploitation and self-harm for minors. The GUARD Act provisions would impose age verification requirements on AI chatbots interacting with minors under 18
  • If any version of these provisions becomes law, districts will need to document that every AI tool accessible to students either complies with federal chatbot safety requirements or has been restricted from student access
  • Districts should begin inventorying which AI tools in their buildings interact directly with students through conversational interfaces. That inventory becomes the compliance baseline if this legislation advances
Source: Sen. Blackburn press release, March 18, 2026 · Alston & Bird analysis, March 23, 2026 · Axios, March 18, 2026
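A minimal sketch of what that conversational-tool inventory could look like. This is an illustrative example only, not a required or TLF-endorsed format: the record schema and field names (tool, vendor, conversational, age_verified, parental_notice) and the example entries are hypothetical assumptions about what a compliance baseline might capture.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in a district's AI tool inventory (hypothetical schema)."""
    tool: str              # product name
    vendor: str            # company providing the tool
    conversational: bool   # interacts with students through a chat interface?
    age_verified: bool     # vendor performs age verification for minors?
    parental_notice: bool  # district has notified parents of the tool's use?

def needs_review(inventory):
    """Flag conversational tools missing age verification or parental notice."""
    return [r.tool for r in inventory
            if r.conversational and not (r.age_verified and r.parental_notice)]

# Hypothetical entries for illustration
inventory = [
    AIToolRecord("ChatTutor", "ExampleCo", True, False, True),
    AIToolRecord("GradeAssist", "ExampleCo", False, False, False),
]
print(needs_review(inventory))  # → ['ChatTutor']
```

A spreadsheet serves the same purpose; the point is that each student-facing conversational tool gets a record with explicit yes/no answers, so the gaps are visible before any federal requirement makes them mandatory.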
K-12 Signal
White House Releases National Policy Framework for Artificial Intelligence

On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence, a four-page document outlining legislative recommendations for Congress. The Framework was required by Executive Order 14365 (December 11, 2025). It organizes its recommendations around six areas: protecting children and empowering parents; safeguarding communities from AI-enabled harms; respecting intellectual property rights; preventing AI-driven censorship; enabling innovation; and developing an AI-ready workforce. On child safety, it calls for Congress to mandate age-assurance requirements, give parents tools to control privacy settings and content exposure, and clarify that existing child privacy laws apply to AI systems. On preemption, it supports broad federal override of state AI laws while preserving state authority to enforce generally applicable child protection, consumer protection, and fraud laws. It explicitly opposes creating any new federal regulatory body for AI. The Framework is not binding and does not itself create legal obligations. On copyright, it defers to the courts. On workforce, it calls for non-regulatory methods and further study of job displacement trends. Legal analysts at GovTech noted the Framework is silent on algorithmic accountability, adult data privacy, transparency and explainability requirements, and enforcement mechanisms.

K-12 action items (TLF analysis)
  • The child safety section signals that any federal AI legislation is likely to include requirements for platforms to implement parental controls, age assurance, and protections against exploitation when AI tools interact with minors
  • Districts that are already documenting which AI tools interact with students and what parental notification or consent processes are in place will be positioned ahead of whatever compliance requirements emerge
  • The Framework's silence on algorithmic accountability and adult data privacy means districts cannot rely on forthcoming federal law to address those gaps. State laws covering those areas remain the operative standard
  • The preemption language should be monitored but not acted on yet: until Congress passes legislation, no state law is preempted by this Framework alone
Source: National Policy Framework for AI, White House, March 20, 2026 · White House announcement, March 20, 2026 · GovTech analysis, April 2026
K-12 Signal
GUARDRAILS Act Introduced to Block Federal Preemption of State AI Laws

On March 20, 2026, the same day the White House released its National Policy Framework, Reps. Don Beyer (D-VA), Doris Matsui (D-CA), Ted Lieu (D-CA), Sara Jacobs (D-CA), and April McClain Delaney (D-MD) introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act in the House. Senator Brian Schatz (D-HI) announced companion legislation in the Senate, formally introduced on March 26 as S.4216, cosponsored by Senators Chris Coons (D-DE), Chris Murphy (D-CT), Tammy Duckworth (D-IL), Lisa Blunt Rochester (D-DE), and Andy Kim (D-NJ). The bill would repeal Executive Order 14365 and prohibit the order from taking effect. Its stated purpose is to ensure states can continue to enact AI safeguards without risk of losing federal funds. The GUARDRAILS Act is not expected to advance under the current Republican congressional majority. Its significance is as a marker of the political fault line: the preemption debate is now formalized in competing legislative vehicles.

K-12 action items (TLF analysis)
  • No compliance action required. The GUARDRAILS Act is unlikely to become law in its current form
  • It confirms that state AI laws will remain the primary compliance environment for districts for the foreseeable future. Federal preemption is politically contested, not settled
  • Districts should continue building governance infrastructure against current state requirements, not against the possibility that those requirements may be preempted
  • If a district is in a state with active AI legislation (Colorado, California, Texas, Illinois, among others), the compliance obligation has not changed. The District Filing incorporates state-level tracking for this reason
Source: Rep. Beyer press release, March 20, 2026 · Rep. Jacobs press release, March 20, 2026 · Benton Institute, March 2026
Agency Directive
OMB M-26-10: Reinforcing Transparency, Accountability, and Oversight of Federal Technology

On March 31, 2026, OMB issued Memorandum M-26-10, "Reinforcing Transparency, Accountability, and Oversight of Federal Technology," signed by OMB Director Russell Vought. The memo directs federal agencies to centralize IT contract oversight under their Chief Information Officers and to treat information technology as a core component of government operations. Beginning May 2026, CIOs at CFO Act agencies must submit monthly reports to OMB detailing approved IT contracts and agreements, including those tied to public-facing digital services. The policy requires all future solicitations and contracts to disclose utilization and pricing information to the government without limiting how that information can be shared across agencies. The reporting requirements run through October 2026 with potential extension. Federal CIO Greg Barbaccia described the goal as ending situations where agencies pay different prices for the same tools without cross-agency visibility.

K-12 relevance (TLF analysis)
  • This memo applies to federal agencies, not K-12 districts. But it has an indirect effect worth tracking
  • If AI vendors selling to the federal government are now required to disclose pricing and utilization data across agencies, that transparency pressure may ripple into the education market
  • Vendors that serve both federal and K-12 customers may face questions about pricing consistency. Districts negotiating AI tool contracts should be aware that the federal government is now requiring the kind of pricing transparency districts have historically lacked the leverage to demand
  • This is context for future procurement conversations, not an action item with a deadline
Source: OMB Memoranda page, listing M-26-10, March 31, 2026 · HSToday analysis, March 31, 2026 · GovCIO Media, April 1, 2026
March 2026 at a glance
  • 1 executive action: EO 14390 on cybercrime and AI-enabled fraud
  • 4 agency directives: OMB procurement, Commerce evaluation, FTC statement, OMB IT oversight
  • 3 K-12 signals: TRUMP AMERICA AI Act, White House Framework, GUARDRAILS Act