
Jeff Liu
NeuroDivergent Builder + AI Explorer
I spend a lot of time building with AI and using it as an execution partner. It has been a force multiplier for learning and executing. But there are times when I am managing multiple agents for coding, research, brainstorming, or content creation, and the flood of feedback starts to hurt my brain.
I found a way to work around context window fatigue: prompting AI agents to proactively report a meter of how much of their context window has been depleted. The longer a session runs, the more the response quality degrades, so surfacing that depletion early tells me when to start a fresh session.
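The idea behind the meter can be sketched in a few lines. This is a minimal illustration, not Claude's or Antigravity's actual mechanism: the 4-characters-per-token heuristic and the window size are assumptions, and `context_meter` is a hypothetical helper name.

```python
# Sketch of a "context meter": estimate how much of an agent's context
# window a session has consumed and render it as a progress bar.
# Both the window size and the chars-per-token ratio are assumptions.

CONTEXT_WINDOW_TOKENS = 200_000  # hypothetical window size


def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English."""
    return max(1, len(text) // 4)


def context_meter(transcript: list[str], width: int = 20) -> str:
    """Return a text meter like [##########..........] 50% for the session."""
    used = sum(estimate_tokens(turn) for turn in transcript)
    frac = min(1.0, used / CONTEXT_WINDOW_TOKENS)
    filled = int(frac * width)
    return f"[{'#' * filled}{'.' * (width - filled)}] {frac:.0%} of context used"


session = ["Summarize this repo structure...", "Now refactor the auth module..."]
print(context_meter(session))
```

In practice I ask the agent itself to emit this meter in each response, but the arithmetic it needs to do is the same as above.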
As someone with ADHD, I configured my AI agents (Claude, Google Antigravity) to always structure their responses so the information is easier to ingest.
I did this early on while experimenting with Claude to execute coding across different systems with MCP connectors (Supabase, Cloudflare, Hostinger, WordPress, Airtable, Cloudinary, Apify, and many others). I found that the less I word-vomited into my prompts, the better the responses I got and the better the results that came with them.
The science around token economics, the cost relationship between input and output tokens, is real, and I want to document every measurable input and output of my generative AI usage internally. The deeper I drilled into this area, the more it let me reduce latency, cut costs, and discover open-source alternatives, scaling my spend as close to zero as possible.
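The "measure every input and output" habit can be as simple as logging per-call token counts and deriving a cost from them. A minimal sketch, assuming placeholder prices (the rates below are illustrative, not any provider's real pricing, and `CallRecord` / `session_cost` are hypothetical names):

```python
# Sketch of per-call token accounting: record input/output token counts
# and estimate session cost. Prices are placeholders, not real rates.
from dataclasses import dataclass

PRICE_PER_1K_INPUT = 0.003   # hypothetical $ per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # hypothetical $ per 1K output tokens


@dataclass
class CallRecord:
    input_tokens: int
    output_tokens: int

    @property
    def cost(self) -> float:
        return (self.input_tokens / 1000 * PRICE_PER_1K_INPUT
                + self.output_tokens / 1000 * PRICE_PER_1K_OUTPUT)


def session_cost(calls: list[CallRecord]) -> float:
    """Total cost across a session; trimming prompts shrinks both terms."""
    return sum(c.cost for c in calls)


calls = [CallRecord(1200, 400), CallRecord(6000, 2000)]
print(f"session cost: ${session_cost(calls):.4f}")  # prints session cost: $0.0576
```

Once every call is a row like this, latency and cost regressions show up as trends instead of surprises on the monthly bill.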
LLMs are a commodity these days. The AI companies that prioritize infrastructure for full AI observability will not only survive; they will compound on the data they ingest until their systems become self-healing.