dangroch.com
Writing about AI, metacognition, and genuinely intelligent systems.
-
Task Decomposition: planning without pretending the plan is the work
There are two common failures in agent planning.
-
Task Composition: when intelligence looks like recombination
One mark of intelligence is not merely breaking down a known problem. It is seeing a solution shape that wasn’t explicitly handed to you.
-
Self-Model: the agent should know what it can actually do
A surprising amount of agent failure begins with a simple lie: the system behaves as if having a language model means having a capability.
-
Resource Selection: the best tool is rarely the first one the model remembers
Many agent failures are not failures of reasoning in the abstract. They are failures of choosing how to act.
-
Knowing when to ask: the boundary between autonomy and good judgment
People often talk about autonomous agents as if asking the user a question is evidence that autonomy failed.
-
Failure Recovery: real agents need more than retries
Blind retries are not resilience. They are just faster repetition of the same mistake.
-
Epistemic Calibration: confidence should match reality
One of the most dangerous things a language model can do is sound right.
-
Env-Model: intelligence depends on knowing the world around you
A system can know exactly what tools it has and still behave stupidly.
-
Context Management: thinking is limited by what you can keep in mind
A lot of agent design still treats context as if it were just more memory. Give the system a bigger window, better retrieval, maybe a summary layer, and the problem...
-
Context Guard: preserving the mind across compaction
One of the hardest things about long-running agent systems is that memory loss is not binary.
-
Attention Filter: what the agent sees shapes what it can think
Before an agent reasons, something has already decided what is in front of it.
-
Attention Awareness: intelligence is also about ignoring the right things
There is a naïve theory of intelligence that says more attention is always better.
-
Building metacognition for AI agents
Most AI agents fail in a strangely familiar way. They do impressive work right up until they don’t. They use the wrong tool because it came to mind first. They...
-
Most AI systems don’t think. They perform.
That’s not a criticism — it’s a description. Today’s large language models, agents, and autonomous systems are extraordinarily capable task runners. They pattern-match, they plan, they execute...