Research Thoughts
Explorations in AI research, insights from experiments, and perspectives on the future of intelligent systems. Less formal than papers, more structured than tweets.
Featured
When AI Thinks Too Hard: Discovering Token Exhaustion in Claude Opus 4.6
While testing Claude Opus 4.6 on research-level mathematics, we discovered something unexpected: the model allocates 100% of its tokens to reasoning, leaving nothing for the actual answer. Here's what we learned.
Why AI Memory Matters: The Case for Persistent Context
Current AI systems have no persistent memory. Each conversation starts fresh. Each query recomputes attention over its entire context. This fundamental limitation is what we're solving with ARMS and HAT.
The Future of AI Research: Open, Collaborative, and Accessible
Reflections on how AI research is evolving toward more open and collaborative approaches, and why this matters for the future of the field.