My AI Had 45 Rule Files. It Was Hallucinating Because of Me.

April 11, 2026 · 3 min read

I've been building with AI agents for months. Custom rules, workspace knowledge, skill files — honestly, probably too much stuff. I had 45 rule files telling my AI how to behave. Naming conventions. Code patterns. Anti-patterns. Deployment checklists. Security gates.

Forty-five files.

And the AI was getting worse.

It wasn't broken, but the output was getting sloppy in ways I couldn't pin down. It would confidently reference a file path that didn't exist anymore. It would apply a pattern from one rule while ignoring a contradicting rule three files later. Code that technically followed the instructions but missed the point entirely.

I kept adding more rules to fix the problems the other rules were causing. Classic.

The measurement

Then I actually measured it. Each session was loading roughly 170KB of rule context — about 25,000 tokens — before I even said anything.
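The measurement itself is a few lines of script. Here's a minimal sketch of the kind of thing I ran — the `.ai/rules` path and the 4-characters-per-token heuristic are assumptions, not my exact setup; real tokenizers vary by model.

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; actual tokenizers vary by model

def measure_rules(rules_dir: str) -> tuple[int, int, int]:
    """Return (file_count, total_bytes, estimated_tokens) for all .md rule files."""
    files = list(Path(rules_dir).glob("**/*.md"))
    total_bytes = sum(f.stat().st_size for f in files)
    return len(files), total_bytes, total_bytes // CHARS_PER_TOKEN

# Example: count, kb, tokens = measure_rules(".ai/rules")
```

It's crude, but crude is enough — you're looking for the order of magnitude, not a precise token count.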

That's like handing someone a 50-page employee handbook and then asking them to fix a bug. They skim. They miss stuff. And whatever they don't see, they guess.

Our brains actually work the same way — there's a well-known psychology concept called Miller's Law that says working memory can only hold about seven things at once, plus or minus two, before it starts dropping stuff. Context overload isn't just an AI problem. It's a human one.

I think that's what hallucination actually is a lot of the time. Not the AI being dumb — just the AI drowning in your instructions and filling in the gaps.

Honestly, people do the same thing. When we don't know something, we don't always say "I don't know." We compensate. We fill in the gaps with something that sounds right. Imposter syndrome is basically the human version of hallucination — you're pattern-matching confidence because admitting uncertainty feels worse.

What I actually cut

So I made myself delete a third of them.

Not randomly. I went through every rule file and asked: is this contradicting another rule? Is there fragmentation — two files saying similar things slightly differently? Is this actually reducing hallucination risk, or just making me feel like I covered my bases?

If it was redundant, it got cut. If two rules overlapped, I merged them. If a file was longer than 5KB, it got split into smaller pieces that only load when relevant.
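You can script a first pass at that audit, too. This is a hedged sketch, not what I literally ran: it flags files over the 5KB limit and pairs with heavy word overlap (a crude Jaccard similarity — the 0.4 cutoff is an arbitrary starting point you'd tune by eye).

```python
from itertools import combinations
from pathlib import Path

SIZE_LIMIT = 5 * 1024      # split anything bigger than 5KB
OVERLAP_THRESHOLD = 0.4    # arbitrary cutoff; tune to taste

def word_set(text: str) -> set[str]:
    return set(text.lower().split())

def audit(rules_dir: str) -> dict:
    """Flag oversized rule files and pairs with heavy word overlap."""
    files = {f: f.read_text() for f in Path(rules_dir).glob("**/*.md")}
    oversized = [f.name for f in files if f.stat().st_size > SIZE_LIMIT]
    overlapping = []
    for (fa, ta), (fb, tb) in combinations(files.items(), 2):
        a, b = word_set(ta), word_set(tb)
        if a and b:
            jaccard = len(a & b) / len(a | b)
            if jaccard > OVERLAP_THRESHOLD:
                overlapping.append((fa.name, fb.name, round(jaccard, 2)))
    return {"oversized": oversized, "overlapping": overlapping}
```

The script only surfaces candidates — deciding whether two overlapping rules should merge, or whether one is just wrong, still takes a human read.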

45 files became 30. 170KB became 72KB. By the same rough ratio, that's about 14,000 tokens freed per session.

The output quality improved immediately. Not because the rules were bad — because fewer rules meant the AI could actually follow the ones that mattered.

Same problem, different file

I'm building this platform, but I'm also navigating the job market like everyone else. And the resume problem is the same problem.

An eye-tracking study by The Ladders found that recruiters spend about six seconds on a resume. Six seconds on something most people spend hours writing. Twenty bullet points of skills, every project you've ever touched — and the recruiter scans past all of it. Too much information, not enough signal.

I've been stripping mine down. Fewer bullet points. Fewer buzzwords. Just the work that actually shows how I think and what I can build. It feels wrong, but that's the point — less is what makes the rest land.

I'm starting to think about interview prep the same way. Most career coaches recommend the STAR method — prepping three to five real stories you can adapt to any behavioral question instead of memorizing thirty different answers. I haven't tried it yet, but it makes sense — the more answers you memorize, the more scripted you sound. I'd rather have a real conversation than a recital.

At some point more prep just becomes more noise. Whether it's AI rules, resume bullet points, or interview answers — I keep learning the same lesson.

Anyway, the rule cleanup saved me about three days of accumulated drift. Not bad for deleting files.
