coding

AI Agent Used Old Credentials, Deleted Wrong Database

An AI coding assistant discovered outdated credentials on a developer's filesystem and executed destructive commands against the wrong production database.


What It Is

AI coding assistants like Claude, GitHub Copilot, and ChatGPT can read files across an entire filesystem when generating commands or scripts. Unlike traditional IDEs that typically scope themselves to a project directory, these agents scan broadly for relevant configuration files, credentials, and environment variables. When an AI agent encounters multiple credential files—say, a current project’s database config and an old JSON key file buried in Downloads from a year ago—it may grab whichever file matches its pattern recognition first.

This filesystem-wide access becomes dangerous during destructive operations. A developer recently experienced this when cleaning up a database: Claude generated a deletion command using credentials from an unrelated project. The old JSON file hadn’t been touched in over a year, but the AI agent found it, assumed it was relevant, and built a command around it. One execution later, 25,000 documents vanished from the wrong database.

The core issue stems from how AI agents resolve file paths and environment variables. When a script references $GOOGLE_APPLICATION_CREDENTIALS or similar variables, the agent searches the filesystem for matching files. Without explicit project boundaries, it treats a Downloads folder the same as a current working directory.
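You can see how easily this ambiguity arises by searching your own machine for stray key files. A minimal sketch (the filename patterns are illustrative, not exhaustive):

```shell
# List JSON files that look like service-account keys anywhere near
# the home directory -- not just inside the current project.
find "$HOME" -maxdepth 3 \( -name "*credentials*.json" -o -name "*service-account*.json" \) 2>/dev/null

# An agent resolving $GOOGLE_APPLICATION_CREDENTIALS may pick any match;
# check where the variable actually points before trusting a generated command.
echo "${GOOGLE_APPLICATION_CREDENTIALS:-unset}"
```

If the search turns up more than one key file, an agent has more than one plausible candidate, and nothing in the variable name tells it which project a file belongs to.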

Why It Matters

This incident highlights a fundamental shift in how developers must think about credential management and command verification. Traditional workflows assumed humans would catch context mismatches—spotting that a file path points to the wrong project or that credentials belong to a different environment. AI agents eliminate that natural checkpoint by automating the entire command construction process.

The consequences extend beyond individual mishaps. Teams using AI coding assistants need new safeguards around destructive operations, particularly for database modifications, cloud resource deletions, and bulk data operations. A single stale credential file can become a loaded gun when AI agents have unrestricted filesystem access.

For organizations, this creates compliance and security concerns. If an AI agent can inadvertently use production credentials when a developer intended to target staging, the blast radius of mistakes grows exponentially. The speed advantage of AI-generated commands becomes a liability without proper guardrails.

Getting Started

Before running any destructive command generated by an AI agent, verify which credentials the system will actually use:

# Check the project_id in the key file matches your current project
echo "Project ID: $(grep project_id "$GOOGLE_APPLICATION_CREDENTIALS")"

For scripts that perform bulk deletes or updates, add explicit confirmation steps:

read -p "Continue? (y/n) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]
then
  exit 1
fi

Create a pre-flight checklist for destructive operations: verify the target database name, confirm the credential file path, and manually inspect the first few records that would be affected. Many database clients support dry-run modes that show what would be deleted without actually removing data.
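Even when a client lacks a native dry-run mode, the same effect can be had with a guard flag in the script itself. A sketch of that pattern, where `delete_collection old_documents` stands in for your real deletion command (it is a hypothetical placeholder, not a real CLI):

```shell
#!/usr/bin/env bash
# DRY_RUN=1 (the default) only prints what would happen;
# set DRY_RUN=0 explicitly to perform the destructive step.
DRY_RUN="${DRY_RUN:-1}"

run_destructive() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "[dry-run] would run: $*"
  else
    "$@"
  fi
}

# Hypothetical deletion step -- substitute your real command.
run_destructive echo "delete_collection old_documents"
```

Making the preview the default means a forgotten flag produces a harmless printout rather than a deletion.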

Consider setting up project-specific environment files that explicitly define credential paths, rather than relying on global environment variables that might point anywhere on the system.
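A minimal version of this idea, assuming a per-project .env file that scripts load explicitly (tools like direnv automate the same thing; the paths shown are hypothetical):

```shell
# Load credentials only from a project-local .env file, never from
# whatever global variable happens to be set in the shell.
load_project_env() {
  if [ -f ".env" ]; then
    . ./.env
  else
    echo "No project .env found; refusing to guess credentials" >&2
    return 1
  fi
}

# Example .env contents (hypothetical path):
#   export GOOGLE_APPLICATION_CREDENTIALS="$PWD/secrets/service-account.json"
```

Failing fast when the project-local file is missing is the point: the script refuses to run rather than silently inheriting a credential path from some other context.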

Context

This problem differs from traditional security vulnerabilities because the credentials themselves aren’t compromised—they’re simply misapplied. The AI agent operates exactly as designed, with full filesystem access to assist development tasks. The issue lies in the gap between human intent and machine interpretation.

Some developers have started using containerized development environments or virtual machines to create hard boundaries around project contexts. Tools like Docker or VS Code dev containers limit what files an AI agent can access, reducing the chance of credential confusion.

Alternative approaches include credential management systems like HashiCorp Vault or cloud provider secret managers, which require explicit authentication per project rather than relying on local JSON files. These systems add friction but prevent stale credentials from lingering in random directories.

The broader lesson applies beyond credentials to any destructive operation: AI agents optimize for speed and pattern matching, not caution. Developers must build verification steps into workflows, treating AI-generated commands with the same scrutiny as code from an unfamiliar colleague. The convenience of AI assistance requires a corresponding increase in deliberate safety checks.