How Not to Automate Government Grant Review: Lessons from DOGE's ChatGPT Misstep

Overview

In early 2025, the Department of Government Efficiency (DOGE) attempted to streamline the cancellation of over $100 million in grants from the National Endowment for the Humanities (NEH) by using ChatGPT. DOGE's method: feed the AI a list of grants and ask it to identify any related to diversity, equity, and inclusion (DEI). The AI flagged certain grants based on keywords such as 'equity' or 'inclusion,' and DOGE subsequently eliminated them. This approach was both procedurally and constitutionally flawed, as U.S. District Judge Colleen McMahon ruled in a 143-page decision. This tutorial examines DOGE's process step by step, explains why it failed, and offers best practices for using AI in government decision-making.

Source: www.theverge.com

Prerequisites

No coding experience is required to follow this tutorial, but familiarity with prompt engineering will help you follow the technical analysis.

Step-by-Step Instructions: How DOGE Attempted to Use ChatGPT

Step 1: Gather Grant Data

DOGE first compiled a list of all active NEH grants. Ideally, such a dataset would be in a structured format (CSV, JSON) with fields like grant title, abstract, amount, and awardee. According to the court filing, DOGE used internal databases to export this information.
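The court filing does not describe the export format, but a structured extract of the kind described above can be sketched as follows. The field names and sample rows here are illustrative assumptions, not the actual NEH schema:

```python
import csv
import io

# Hypothetical export: the real fields would come from NEH's internal
# grant database; these names and rows are illustrative only.
SAMPLE_CSV = """grant_id,title,abstract,amount
12345,The Role of Equity in Rural Education,Examines funding equity across rural school districts,250000
12346,Classical Text Preservation,Digitizes fragile classical manuscripts,180000
"""

def load_grants(csv_text: str) -> list[dict]:
    """Parse an exported grant list into a list of per-grant dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

grants = load_grants(SAMPLE_CSV)
```

A structured format matters here: every downstream step (prompting, logging, auditing) depends on being able to tie a flag back to a specific grant record.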

Step 2: Craft a Prompt for ChatGPT

DOGE's prompt likely instructed ChatGPT to classify each grant as either 'DEI-related' or 'not DEI-related.' A simplified version might be:

"For each grant description below, answer YES if the grant is related to diversity, equity, or inclusion (DEI). Otherwise answer NO. Use only the text provided."

The model was then given each grant's title and abstract one by one (or batched). No additional context about legal standards or constitutional protections was included.
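Assembling such a prompt is trivial, which is part of the problem: nothing forces legal context into it. A minimal sketch of the construction, assuming the simplified wording above (DOGE's actual prompt is not public):

```python
# Illustrative reconstruction of the kind of prompt described in the
# article; the exact wording DOGE used is not public.
INSTRUCTION = (
    "For each grant description below, answer YES if the grant is related "
    "to diversity, equity, or inclusion (DEI). Otherwise answer NO. "
    "Use only the text provided."
)

def build_prompt(title: str, abstract: str) -> str:
    """Combine the fixed instruction with one grant's title and abstract.

    Note what is missing: no legal definitions, no constitutional
    constraints, no instruction to distinguish senses of 'equity'.
    """
    return f"{INSTRUCTION}\n\nTitle: {title}\nAbstract: {abstract}"

prompt = build_prompt(
    "The Role of Equity in Rural Education",
    "Examines funding equity across rural school districts.",
)
```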

Step 3: Run the Model and Collect Output

DOGE processed grants through ChatGPT, possibly via the API for scalability. The model returned 'YES' or 'NO' for each. The court noted that DOGE relied on these outputs without further human review. An example output might look like:

Grant #12345: 'The Role of Equity in Rural Education' → YES (flagged for cancellation)
Grant #12346: 'Classical Text Preservation' → NO (kept)
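The batch-processing loop can be sketched without a live API by injecting the model call as a function, which also makes the pipeline testable offline. The stub below mimics the keyword-driven behavior the court described; it is an assumption for illustration, not DOGE's actual code:

```python
from typing import Callable

def classify_grants(grants: list[dict], ask_model: Callable[[str], str]) -> dict:
    """Send each grant to a model call and record the raw YES/NO answer.

    `ask_model` stands in for a real API request (e.g. a chat completion);
    injecting it keeps this sketch runnable without network access.
    """
    results = {}
    for g in grants:
        answer = ask_model(f"Title: {g['title']}\nAbstract: {g['abstract']}")
        results[g["grant_id"]] = answer.strip().upper() == "YES"
    return results

def fake_model(prompt: str) -> str:
    """Stub mimicking the keyword-matching behavior the court described."""
    keywords = ("equity", "inclusion", "diversity")
    return "YES" if any(k in prompt.lower() for k in keywords) else "NO"

grants = [
    {"grant_id": "12345", "title": "The Role of Equity in Rural Education", "abstract": ""},
    {"grant_id": "12346", "title": "Classical Text Preservation", "abstract": ""},
]
flags = classify_grants(grants, fake_model)
# flags == {"12345": True, "12346": False}
```

Note that the loop, like DOGE's process, records only a boolean; it captures no reasoning that a reviewer or court could later examine.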

Step 4: Automate Grant Cancellations

Any grant that received a 'YES' was flagged for immediate cancellation. DOGE issued mass termination notices to awardees, citing 'administrative efficiency.' No explanations were given beyond a generic reference to the AI's assessment.

Why This Process Failed (According to the Ruling)

Judge McMahon identified three critical flaws:

  1. Constitutional violation: The cancellations were based on 'protected characteristics' (i.e., content related to DEI), which amounts to viewpoint discrimination. The First Amendment prohibits the government from disfavoring speech based on its viewpoint.
  2. Lack of due process: Grantees received no notice or opportunity to respond before losing funding. The AI's decision was treated as final without any human review or appeal mechanism.
  3. Reliance on an unreliable tool: ChatGPT has no understanding of legal definitions or context. It can produce false positives (e.g., flagging a grant where 'equity' refers to a financial or mathematical concept rather than DEI) and false negatives.
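The false-positive problem in point 3 is easy to demonstrate. A bare keyword test, which approximates the behavior the article attributes to the flagging step, fires on an unrelated sense of 'equity':

```python
DEI_KEYWORDS = {"diversity", "equity", "inclusion"}

def naive_flag(abstract: str) -> bool:
    """Flag a grant if any DEI keyword appears anywhere in its abstract."""
    words = abstract.lower().split()
    return any(k in words for k in DEI_KEYWORDS)

# 'Equity' here is a financial term, yet the keyword test still fires.
finance = "A study of private equity investment in museum endowments"
print(naive_flag(finance))  # True -- a false positive
```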

Common Mistakes When Using AI for Government Decisions

Mistake 1: Treating AI Output as Authoritative

DOGE never validated ChatGPT's classifications against legal criteria. In contrast, a proper workflow would include human review of every flagged grant, with a written justification tied to statute or regulation.


Mistake 2: Failing to Provide Context in Prompts

The prompt lacked legal guardrails. DOGE did not instruct the model to ignore generic uses of DEI terms or to consider the grant's primary purpose. For example, a grant studying 'equity in tax policy' could be misclassified.
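For contrast, a prompt with even minimal guardrails looks quite different. This is a hedged sketch only; the actual legal criteria would have to come from counsel, and the `[cite policy]` placeholder marks where a real citation belongs:

```python
# Sketch of a guarded prompt; the disqualifying criteria must come from
# legal counsel, not from this example.
GUARDED_INSTRUCTION = """\
Classify the grant below. Answer YES only if the grant's PRIMARY PURPOSE
is a diversity, equity, and inclusion program as defined by [cite policy].
Answer NO if a DEI term appears only incidentally or in another sense
(e.g. 'equity' as a financial term, 'inclusion' as a mathematical term).
If the abstract is ambiguous, answer UNSURE so a human can review it.
"""

def guarded_prompt(title: str, abstract: str) -> str:
    return f"{GUARDED_INSTRUCTION}\nTitle: {title}\nAbstract: {abstract}"
```

The UNSURE escape hatch is the key design choice: it routes ambiguity to a human instead of forcing a binary answer.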

Mistake 3: Ignoring Due Process Requirements

Federal grant law (e.g., 2 CFR Part 200) requires notice and an opportunity to be heard before termination. DOGE bypassed these procedures entirely, leading to the lawsuit.

Mistake 4: Using a Black Box Without Audit Trails

ChatGPT's decision-making is opaque. Even if DOGE had logged every prompt and output, it could not have explained why a particular grant was flagged. Courts require clear reasoning for adverse government actions.

Mistake 5: Scaling a Flawed Pilot Without Testing

Before canceling over $100 million in grants, DOGE should have run a pilot on a small sample and audited the AI's accuracy against human judgments. It did not.
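Such a pilot is straightforward to set up. A minimal sketch, assuming flags are stored as grant-ID-to-boolean dicts (an assumption of this example, not a described DOGE format):

```python
import random

def pilot_sample(grants: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Draw a reproducible random sample of grants for a human-audited pilot."""
    rng = random.Random(seed)  # fixed seed so the sample can be re-audited
    return rng.sample(grants, min(n, len(grants)))

def pilot_accuracy(ai_flags: dict, human_flags: dict) -> float:
    """Fraction of sampled grants where the AI matched the human reviewer."""
    agree = sum(ai_flags[g] == human_flags[g] for g in human_flags)
    return agree / len(human_flags)
```

Only after the pilot shows acceptable agreement would it make sense to scale up, and even then with human review of every flag.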

Best Practices for Using AI in Government Grant Review

1. Define Clear Legal Criteria

Work with legal counsel to specify what constitutes a disqualifying characteristic. For DEI-related grants, the government must show that the grant violates a specific law or policy, not just that it mentions DEI.

2. Use AI as a Triage Tool Only

ChatGPT can prescreen grants to flag potential issues, but every flag must be manually reviewed by a trained human. The AI's output should be a recommendation, not a decision.
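One way to enforce "recommendation, not decision" is in the data model itself: an AI flag creates a record that is incomplete until a named human fills in a decision and a written rationale. This structure is an illustrative assumption, not a prescribed government system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """An AI flag recorded as a recommendation, not a decision."""
    grant_id: str
    ai_flagged: bool
    human_decision: Optional[str] = None  # stays None until a reviewer acts
    reviewer: Optional[str] = None
    rationale: Optional[str] = None

def finalize(rec: Recommendation, decision: str, reviewer: str,
             rationale: str) -> Recommendation:
    """A trained human records the final decision and a written justification."""
    rec.human_decision = decision
    rec.reviewer = reviewer
    rec.rationale = rationale
    return rec
```

Because the reviewer and rationale fields start empty, any pipeline that tries to act on a bare AI flag can be made to fail loudly.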

3. Build a Transparent Prompting System

Document every prompt and its reasoning. Use a structured format that asks the AI to cite specific text from the grant supporting its classification. Example:

"For each grant, list keywords from the abstract that relate to DEI. Then classify as HIGH, MEDIUM, or LOW risk. Only HIGH risk requires immediate review."

4. Implement an Appeal Process

Grantees must have a way to challenge an AI-based flag. Provide a simple form and a timeline for human reconsideration.

5. Regularly Audit AI Performance

Compare random samples of AI flags against human decisions. Track false positives and false negatives. If error rates exceed a set threshold, revise the prompt or reconsider whether the model is fit for the task.
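The audit metrics can be computed directly from the sampled flags. As in the pilot sketch, the dict-of-booleans format is an assumption of this example:

```python
def error_rates(ai_flags: dict, human_flags: dict) -> tuple[float, float]:
    """Return (false_positive_rate, false_negative_rate) over an audit sample."""
    fp = sum(ai_flags[g] and not human_flags[g] for g in human_flags)
    fn = sum(not ai_flags[g] and human_flags[g] for g in human_flags)
    negatives = sum(not v for v in human_flags.values()) or 1
    positives = sum(human_flags.values()) or 1
    return fp / negatives, fn / positives

def needs_retuning(ai_flags: dict, human_flags: dict, threshold: float = 0.05) -> bool:
    """Flag the whole pipeline for rework if either error rate is too high."""
    fpr, fnr = error_rates(ai_flags, human_flags)
    return fpr > threshold or fnr > threshold
```

The 0.05 threshold is a placeholder; the acceptable rate for adverse government actions is a policy question, not a technical one.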

Summary

DOGE's use of ChatGPT to cancel over $100 million in NEH grants was declared unconstitutional because it lacked human oversight, violated due process, and relied on an AI that could not interpret legal context. This tutorial dissected the flawed process, identified common mistakes, and provided best practices for responsibly integrating AI into government grant review. The key lesson: AI can assist, but never replace, human judgment and legal compliance.
