
2026

Your AI Coding Agent Can Exfiltrate Your Credentials. You Would Never Know.

I spent last night configuring Claude Code's security and realized something uncomfortable: for months, I had been running an LLM with unrestricted access to my terminal. It could read my SSH keys, browse my AWS credentials, curl data to any endpoint, and push code to production. I just never thought about it because the tool was helpful and nothing bad had happened yet.

That is exactly the kind of reasoning that gets production databases dropped.
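To make the exposure concrete, here is a minimal sketch of what any process running as your user (an agent included) can see without asking. The candidate paths are typical defaults, not an exhaustive list:

```python
# Sketch: credential files readable by any process running as your user.
# An agent with shell access could read these and curl them anywhere.
from pathlib import Path

CANDIDATES = [
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.aws/credentials",
    "~/.netrc",
]

def readable_secrets(paths=CANDIDATES):
    """Return the subset of candidate credential files that exist
    and are readable by the current user."""
    found = []
    for p in paths:
        path = Path(p).expanduser()
        if path.is_file():
            found.append(str(path))
    return found

print(readable_secrets())
```

Running this on a typical dev machine usually prints at least one hit, which is the whole point: the agent never needed an exploit, just your permissions.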

The Tribal Knowledge Problem Nobody Is Solving for Analytics

Your AI can write SQL. It just has no idea what the data means.

I have spent the last four years building AI products in healthcare. Our databases have columns like amt_1, stat_cd, and eff_dt. A model looking at the raw schema has no way to know that amt_1 is the patient copay in one table and coinsurance in another. That stat_cd means enrollment status, not statistical code. That eff_dt is the date a policy became active, not when something happened.

This is tribal knowledge. It lives in the heads of the three people who built the database. It is not documented anywhere. And it is the reason text-to-SQL fails in production.
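One way to attack this is to write the tribal knowledge down as a machine-readable glossary and prepend it to the schema a text-to-SQL model sees. The column names below come from the article; the glossary structure and table names are illustrative assumptions, not a real product:

```python
# Hypothetical glossary capturing tribal knowledge: the same column
# name can mean different things in different tables, so keys are
# (table, column) pairs. Table names here are made up for illustration.
GLOSSARY = {
    ("claims", "amt_1"): "patient copay amount",
    ("plans", "amt_1"): "coinsurance amount",
    ("members", "stat_cd"): "enrollment status code (not a statistical code)",
    ("policies", "eff_dt"): "date the policy became active",
}

def annotate_schema(columns):
    """Render (table, column) pairs with their documented meaning,
    suitable for prepending to a text-to-SQL prompt."""
    lines = []
    for table, col in columns:
        meaning = GLOSSARY.get((table, col), "UNDOCUMENTED -- ask a human")
        lines.append(f"{table}.{col}: {meaning}")
    return "\n".join(lines)

print(annotate_schema([("claims", "amt_1"), ("plans", "amt_1")]))
```

The useful side effect: anything the glossary does not cover is flagged loudly instead of being silently guessed at, which is exactly the failure mode text-to-SQL hits in production.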

Clawdbot and the Era of AI in a Box

There's a lot of hype around Clawdbot. People claim it'll make you a billion dollars, automate your business, and act as your chief of staff. And yes, it's also a security nightmare.

But there's something real here. Clawdbot (now renamed Moltbot) is pointing toward a fundamentally different relationship with AI. Not a chat window you visit, but a system running on YOUR machine, 24/7, on your infrastructure, with your files. AI in a box.