
The Post-Chat Era: Why Conversational UIs Are a Local Maximum

Every AI product has a chat box. Most of them shouldn't. Chat is the worst interface for most LLM use cases. We just defaulted there because OpenAI shipped it first.

Your users keep asking the chatbot the same three questions. Their inputs are awkward. The bot's responses are verbose. The interaction averages 47 seconds for something a button would have solved in two. The product is "AI-powered" and the conversion metrics are worse than before. The problem isn't the model. The problem is the chat box.

I will die on this hill.

Why chat became the default

When ChatGPT shipped, the cheapest way to expose an LLM was a text input and a scrolling list of replies. Every product copied it. Three years later, "open a chat" has become the universal AI affordance. Even for problems where chat is the wrong tool. We didn't choose chat. We defaulted to it.

A more honest framing: chat is unstructured input bolted to unstructured output. That's powerful for open-ended tasks. It's terrible for most tasks, which actually have structure.

What replaces chat

The next interaction model is structured affordances driven by language. Instead of "type what you want," the UI exposes domain-specific controls that the model populates, edits, and reasons over.

// Bad: chat box for booking a meeting.
<ChatBox onSubmit={prompt => llm.handle(prompt)} />

// Better: structured form whose fields the model can fill,
// edit, and explain in natural language alongside.
<MeetingDraft>
  <FieldGroup llmFill="meeting.attendees">
    <AttendeeList suggested={ai.attendees} />
  </FieldGroup>
  <FieldGroup llmFill="meeting.time">
    <TimeSlotPicker proposals={ai.timeSlots} />
  </FieldGroup>
  <Notes llmFill="meeting.notes" editable />
  <Action llmFill="meeting.next_step" />
</MeetingDraft>

Why this matters

Structured affordances give the user a UI they can scan, edit, and trust. The model still does the hard work: extraction, ranking, drafting. But it lands its output in fields, not a paragraph. The user keeps the steering wheel; the model becomes the navigator. Chat is the opposite: the model takes the wheel and the user shouts directions from the back seat.
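One way to make "fields, not a paragraph" concrete is to have the model return JSON against a known schema and validate it before anything touches the UI. A minimal sketch in TypeScript; the MeetingFields shape and parseMeetingFields helper are illustrative, not a real API:

```typescript
// Hypothetical shape for the meeting form; field names are illustrative.
interface MeetingFields {
  attendees: string[];
  time: string; // e.g. an ISO 8601 start time proposed by the model
  notes: string;
  next_step: string;
}

// Validate the model's raw JSON before it reaches the UI. Anything
// malformed becomes empty fields the user fills by hand, instead of
// a paragraph the user has to parse.
function parseMeetingFields(raw: string): MeetingFields | null {
  try {
    const data = JSON.parse(raw);
    if (
      Array.isArray(data.attendees) &&
      data.attendees.every((a: unknown) => typeof a === "string") &&
      typeof data.time === "string" &&
      typeof data.notes === "string" &&
      typeof data.next_step === "string"
    ) {
      return data as MeetingFields;
    }
  } catch {
    // fall through: malformed JSON
  }
  return null;
}

// Example: a model response that lands cleanly in fields.
const fields = parseMeetingFields(
  '{"attendees":["ana","raj"],"time":"2025-06-02T10:00:00Z",' +
    '"notes":"Quarterly sync","next_step":"send invite"}'
);
console.log(fields?.attendees.length); // 2
```

The point of the validation step is that the UI never renders model output directly; it renders fields the model happened to fill.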

When chat is still right

Open-ended exploration: research, brainstorming, debugging conversations where the next step depends on the last reply. Also power users who genuinely want a free-text interface, and low-frequency tasks where building a custom UI doesn't earn back the cost. ChatGPT itself is the prime example: the product is the open-ended interface.

When to drop chat for structure

Any high-frequency task with a stable shape. Filling forms, booking meetings, summarising documents into known fields, classifying items, extracting data. If a junior designer could sketch the wireframe before talking to anyone, the right UI isn't a chat box. It's that wireframe.
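The "stable shape" test can even run at the front door: try to match the user's free text to a known task first, and only fall back to chat when nothing matches. A sketch; the task list and keyword matcher are made up for illustration, standing in for a real intent classifier:

```typescript
// The handful of high-frequency, stable-shape tasks this product sees.
type TaskId = "book_meeting" | "summarise_doc" | "classify_item";

// Crude keyword matcher standing in for a real intent classifier.
const TASK_KEYWORDS: Record<TaskId, string[]> = {
  book_meeting: ["meeting", "schedule", "calendar"],
  summarise_doc: ["summarise", "summary", "tl;dr"],
  classify_item: ["classify", "label", "tag"],
};

// Route to a structured form when the shape is known; chat is the
// fallback, not the front door.
function routeIntent(text: string): TaskId | "chat" {
  const lower = text.toLowerCase();
  for (const [task, words] of Object.entries(TASK_KEYWORDS)) {
    if (words.some((w) => lower.includes(w))) return task as TaskId;
  }
  return "chat";
}

console.log(routeIntent("schedule a meeting with ana")); // book_meeting
console.log(routeIntent("what do you think about rust?")); // chat
```

The inversion is the point: the default path lands in a form, and chat is reserved for the genuinely open-ended remainder.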

The shift in roles

flowchart LR
    subgraph Chat ["Chat era"]
        U1[User types intent] --> M1[Model interprets]
        M1 --> R1[Model writes prose]
        R1 --> U1
    end
    subgraph Post ["Post-chat era"]
        U2[User intent] --> A[Affordances<br/>structured fields]
        A --> M2["Model fills / edits"]
        M2 --> A
        A --> U2
    end

Conclusion

Open the most-used chat feature in your product and write down the three things users actually do with it. If those three things would be a form on a normal SaaS, that's your real UI. The chat box is just hiding it.