Research

For such a fundamental component of our civilization, the design of text – the written expression of our thoughts – has advanced relatively little compared to other technologies since its invention. What would happen if we could make the written word one percent more ergonomic? One percent more expressive?

My research aims to dramatically improve human knowledge representations – the notations and mediums we use to understand the world – and the tools we use to interact with those representations. My goal is to combine breakthroughs in artificial intelligence, like neural representation learning, with ideas from interaction design to directly implement new notations that augment and replace existing ones, including natural language text.

The most fundamental knowledge representations today are natural language and written text, but in every domain and niche we also find specialized notations that are key to that field's progress, from mathematics to music to medicine. Most such notations are “folk notations”, invented and evolved over time through use rather than intentional design. If we were equipped with a general set of powerful tools and methods for improving the design of our notations, even incrementally, those improvements seem likely to compound across every field that depends on them.

Concretely, my current line of investigation explores how we could automatically extract learned concepts, and synthesize notations from them, using the representations of the world found within the latent spaces of large neural networks like language models. I’m also pursuing ways to control the output of generative models more easily and more precisely using the same kind of learned representations; a rough sketch of this idea follows.
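
As a minimal sketch of what this could look like, assuming a hypothetical encode()/decode() pair that maps text into and out of a model’s latent space (neither is a real library API; any sentence-embedding model paired with a trained decoder could stand in for them):

    import numpy as np

    # Assumed interface (hypothetical):
    #   encode(text) -> np.ndarray  maps a sentence to a latent vector
    #   decode(z) -> str            maps a latent vector back to text

    def concept_direction(encode, with_concept, without_concept):
        # Estimate a learned concept as the mean difference between the
        # latents of examples that express it and examples that don't.
        pos = np.mean([encode(t) for t in with_concept], axis=0)
        neg = np.mean([encode(t) for t in without_concept], axis=0)
        return pos - neg

    def steer(encode, decode, text, direction, strength=1.0):
        # Shift a sentence's latent along a concept direction and decode
        # the shifted point back into text. Varying `strength` gives
        # graded, continuous control over the output.
        return decode(encode(text) + strength * direction)

The appeal of this framing is that control becomes geometric: a concept is a direction in latent space, and how much of it a sentence expresses is a scalar you can dial up or down.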

More than simply tools, knowledge representations define the abstractions by which we observe and understand the world. By studying how to improve the way we represent our thoughts, we may also expand the domain of thoughts we can think and qualia we can feel.

Publications

Work in progress.

Essays

Notational Intelligence dives into notations as a fundamental component of human intelligence, and argues that notation design is just as critical as tool building for advancing our ability to solve hard problems and understand the world.

Imagining better interfaces to language models and Thoughts at the boundary between machine and mind are a pair of essays that explore a motivating question for my current work: as machine understanding of language rapidly approaches human level, what interface changes do our software systems need? What new interfaces become possible?

A GPS for the mind argues that a good tool for thinking should make the combined human + tool system more effective at hunting for novel explanations within our idea mazes, and that the space of possibilities in this domain far exceeds anything explored in personal computing so far.

AI as a creative collaborator discusses two ways to think about incorporating AI into creative workflows: as tools that offer precise control and interpretability, and as agentic collaborators that are as flexible as human partners.

Towards a research community for better thinking tools is my overview of the community that has loosely self-organized around the future of personal computing and knowledge tools, and what it might look like to develop that scene further.

Talks & Demos

Sentence Gradients, presented at Betaworks, was the first time I publicly demoed an early prototype of my work on controllable text generation from a language model’s latent space; a minimal sketch of the idea appears below.
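
For the curious, here is a minimal sketch of the sentence-gradient idea, under the same hypothetical encode()/decode() assumption as the sketch above: interpolate between two sentences in latent space and decode each intermediate point.

    import numpy as np

    def sentence_gradient(encode, decode, start, end, steps=7):
        # Linearly interpolate between the latents of two sentences and
        # decode each intermediate point, yielding a sequence of
        # sentences that morphs from one meaning into the other.
        a, b = encode(start), encode(end)
        return [decode((1 - t) * a + t * b)
                for t in np.linspace(0.0, 1.0, steps)]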

At the Boundary of Machine and Mind was a conversation with The Gradient about notational intelligence, language models, and interface design for generative models and creative tools.