Designing Claude
When Kyle Turman joined Anthropic as the first full-time product designer, Claude was still early: functional, but far from transformative. There were no formal design systems, no design reviews, and very little process. But that lack of structure turned out to be a feature, not a bug. It gave Kyle the chance to define what design means when you're building for something as slippery and unpredictable as an AI assistant.
From Figma to code: Speed as a design principle
In the early days, Kyle did everything, from UI and frontend code to social assets and naming strategy. "Design something? Ship it." That was the ethos. Most of his designs weren't even finished in Figma; they were rough enough to get into code quickly. This approach wasn't about cutting corners; it was about prototyping through real interaction, especially in a product whose behavior wasn't fully predictable.
“You're designing for a system that doesn’t behave the same way twice,” he explained. So static mockups didn’t cut it. Claude’s experience came to life through iteration, not perfection. Shipping quickly let the team see how users responded in the wild and course-correct just as fast.
Design beyond the screen: Prompts as UX
Claude isn’t just a visual interface. It's a conversation. And that means a huge part of the design work is invisible. Kyle and the team often start by asking: Can the model even do this? Before designing UI, they run tests with prompts to see what’s possible.
Sometimes, a UX breakthrough is as simple as changing the system prompt. One tweak, having Claude ask follow-up questions, completely changed how helpful it felt. It wasn’t a new feature in the traditional sense, but it made Claude feel more thoughtful, more human, and ultimately more trustworthy.
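A tweak like the one described above might look something like this. This is a hypothetical fragment written for illustration, not Anthropic's actual system prompt:

```
You are a helpful AI assistant.

# The "follow-up questions" tweak: one added instruction
When a request is ambiguous or missing key details, ask one or two
clarifying questions before answering, rather than guessing at intent.
```

A single instruction like this changes the shape of the whole conversation: instead of producing a confident but possibly wrong answer, the model pauses and involves the user, which is exactly the "more thoughtful" quality Kyle describes.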
These kinds of changes don't show up in Dribbble shots, but they're core to designing for AI. The line between product design and prompt engineering is already blurred, and Kyle sees that as a good thing.
Emotion as infrastructure
Kyle and the brand team made bold, subtle choices to make Claude feel different. A warm beige background instead of sterile white. Serif typefaces that evoke trust and history. These decisions weren't random; they were rooted in behavioral science and emotion.
“There’s a piece of our DNA fossilized in the early product,” he said. The team aimed to make Claude feel less like a machine and more like a companion. Not a robot butler, but a “nano-suit” that helps you become more capable. This emotional framing influenced everything from the logo to the tone of Claude’s responses.
Why chat still wins
Despite hundreds of prototypes exploring alternative AI interfaces, chat remains the default, and for good reason. It's flexible, fast, and familiar. "Anyone can type in a box," Kyle noted. That accessibility is key when your product can technically do anything, but users don't know where to start.
What chat lacks in novelty, it makes up for in iterative flow. It gives users space to adjust, clarify, and co-create with the model in a natural way. That back-and-forth is itself a design surface.
Designing in the dark
Designing an AI product means making decisions without always knowing what’s coming. You don’t control the output. You don’t always know how users will interact with it. And yet, you have to make it feel consistent, safe, and intuitive.
This is why Kyle emphasizes design as a tool for emotional storytelling and trust-building, especially when AI is involved. In an age where outputs are non-deterministic, trust is the design. And that starts with every tiny decision: a line of prompt, a pixel of color, or a moment of follow-up.
Listen to the full episode here.
Source: Dive Club podcast