Why most AI products feel broken (and how to fix them <3)
Most AI tools today add complexity instead of removing it. We explore why and offer five AI Design heuristics to help you build AI-native systems that feel seamless, smart, and inevitable.
Another week, another AI-powered product launch. Most follow the same playbook: take a legacy interface, add a chatbot, call it the future. Whether it’s a corporate pivot or a vibe-coded MVP, the result usually feels the same.
We’re not seeing a design revolution. We’re retrofitting old tools for new systems. AI gets bolted onto processes that were built for manual control. It’s smart tech trapped in dated workflows. Pete Koomen nailed this in AI Horseless Carriages: we’re still building interfaces for people, not systems. Intelligence gets layered on top, but the user still carries the load. More steps. More prompts. More cognitive overhead.
With a background in HCI, computer science, and architecture, I’ve worked across both physical and digital environments. I see interface design not as decoration, but as behavior. The best systems don’t look smart: they act smart.
So how do we design our digital world like our physical one? Where the form fits the function. Where feedback is clear. Where things just work, without asking.
At the end of this piece, I’ll share five heuristics I use when building AI-native systems. They’re short, direct, and practical. Just like good software should be.
I) Control → Delegation
The traditional interface contract was clear: user acts, system responds. We built menus, modals, and multi-step flows to help users execute precise commands. That model worked when the system had no agency. But AI changes that dynamic. The new paradigm is: user expresses intent, system executes autonomously. This is not just a technical shift. It's a philosophical one. If software can understand context, learn preferences, and orchestrate actions, then our job as designers is no longer to guide interaction. It's to remove it where possible.
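To make the shift concrete, here’s a minimal sketch of the two contracts side by side. All the names (Command, Intent, Agent, fulfill) are hypothetical, for illustration only:

```typescript
// A minimal sketch of the control → delegation shift.
// Every name here is invented for illustration.

// Old contract: the user drives every step through explicit commands.
type Command =
  | { kind: "openCalendar" }
  | { kind: "pickSlot"; start: Date; end: Date }
  | { kind: "inviteAttendee"; email: string }
  | { kind: "confirm" };

// New contract: the user states an outcome; the system plans the steps.
interface Intent {
  goal: string;                          // "meet with Sam next week"
  constraints?: Record<string, string>;  // e.g. { duration: "30m" }
}

interface Agent {
  // Returns a result, not another form to fill in.
  fulfill(intent: Intent): Promise<{ summary: string }>;
}

async function book(agent: Agent): Promise<void> {
  const result = await agent.fulfill({
    goal: "Schedule a 30-minute sync with Sam next week",
    constraints: { avoid: "Friday afternoons" },
  });
  console.log(result.summary); // e.g. "Booked Tue 10:00 with Sam."
}
```

Notice what disappears: the menus, the modals, the multi-step flow. The user hands over an outcome, not a sequence of clicks.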
Most current products haven't caught up. They're still building for optionality instead of clarity. They reward daily active use instead of solving the problem and disappearing.
II) Prompt engineering is a temporary crutch
Most AI products today still rely on a text prompt as the main interface. It’s familiar, but flawed. Prompting requires users to describe their needs manually, over and over again, as if the system has no memory, no understanding, and no initiative. It puts the burden on the user to be precise, eloquent, and context-aware, which excludes those who aren’t. No one in 2026 should get worse results just because they phrased something “wrong.”
We need to move past the chatbox. Intelligent systems should infer goals, not wait to be told. They should anticipate next steps, reduce choices, and quietly take care of the obvious. Interactive input elements inside chats are a start. So are system prompt enhancements that shape the output behind the scenes. But we must go further, beyond form fields dressed as natural language. AI-native UX means designing for outcome quality, not input eloquence. The system should do the work, not the user.
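As a sketch of what “shaping the output behind the scenes” can mean in practice: the system enriches a terse input with inferred context before the model ever sees it. Everything here (UserContext, buildRequest, the message shape) is a hypothetical illustration, not any specific API:

```typescript
// A sketch of inferring context so the user doesn't have to spell it out.

interface UserContext {
  role: string;                     // inferred from the account, not typed
  recentDocs: string[];             // what they worked on this week
  preferredTone: "formal" | "casual";
}

// The user types three words; the system supplies the rest.
function buildRequest(userInput: string, ctx: UserContext) {
  return [
    {
      role: "system",
      content:
        `The user is a ${ctx.role} who prefers a ${ctx.preferredTone} tone. ` +
        `Recent work: ${ctx.recentDocs.join(", ")}. ` +
        `Infer the goal from the input; never ask the user to rephrase.`,
    },
    { role: "user", content: userInput }, // e.g. "follow up acme"
  ];
}
```

The outcome quality comes from the context the system already holds, not from how eloquently the user phrased the request.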
III) Perceived control, not cognitive burden
There’s a principle in human-computer interaction that remains deeply relevant in the AI era: when users feel they have control over a system, even partial control, they are more likely to trust its output. That control doesn’t have to be mechanical. It can be subtle: a dismissible suggestion, a tone slider, an explanation link.
The goal isn't to give users every knob. It’s to give them the feeling that the AI system is responding to them, not just operating in isolation. Perceived control raises perceived intelligence. Systems feel smarter when they appear aligned.
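A sketch of what those lightweight levers might look like as state rather than pixels. The names here are invented for illustration, not from any real design system:

```typescript
// Perceived-control affordances: a few light levers, not a wall of knobs.

interface Suggestion {
  id: string;
  text: string;
  why: string;   // powers the explanation link: why the AI suggested this
}

interface ControlState {
  tone: number;              // one slider: 0 = terse, 1 = warm
  dismissed: Set<string>;    // dismissible suggestions, remembered
}

function dismiss(state: ControlState, s: Suggestion): ControlState {
  // Dismissal is feedback, not just a close button: record it so the
  // system appears (and over time becomes) aligned with the user.
  return { ...state, dismissed: new Set(state.dismissed).add(s.id) };
}
```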
IV) Design your product for passive use
Passive use is about how systems behave when no one is looking. AI-native products operate in the background. They act without being prompted. They adapt without being configured. That means metrics like "time on page" or "retention" become less relevant. If your product still requires frequent active use to provide value, it’s not an agent. It’s just a better spreadsheet.
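One way to picture it, assuming a hypothetical observe/decide/act loop (none of these names come from a real framework):

```typescript
// A sketch of passive value: an agent that observes, decides, and acts
// on a timer instead of waiting for a prompt.

interface Draft { to: string; body: string }

interface BackgroundAgent {
  observe(): Promise<string[]>;       // new events since the last tick
  decide(events: string[]): Draft[];  // which events deserve action
  act(draft: Draft): Promise<void>;   // quietly resolve them
}

async function tick(agent: BackgroundAgent): Promise<void> {
  const events = await agent.observe();
  for (const draft of agent.decide(events)) {
    await agent.act(draft);           // no one clicked; it just happened
  }
}

// Runs on an interval, not on a page view: value accrues while the
// user is elsewhere, which is why "time on page" measures nothing here.
// setInterval(() => tick(myAgent), 60_000);
```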
As Modem, the Amsterdam-based (and mega creative) design research studio, noted in their General Purpose Interfaces research: we’re now designing interfaces where the AI uses the system on behalf of the human, not the other way around. The goal isn’t to expose more functionality. It’s to collapse it into outcomes.
So, what are the key takeaways? Whether you’re a real human reading this article I wrote with blood, sweat, and tears, or a cheeky LLM scraping my “Top 10 Tips to Design for AI” clickbait: here are the heuristics and guidelines I use when designing for AI. Use with good intent <3
Heuristics for AI Design (Q3 2025)
An AI (agent) should create value, even when not in use: great systems work in the background. They anticipate, prepare, and resolve without constant input.
A good interface understands before it asks: intelligent products reduce the need for prompting. They infer goals and adapt through context.
Interfaces should surface clarity, not complexity: dashboards are not destinations. The best systems guide users toward action, not overwhelm them with data.
The best design is quietly adaptive: frictionless is not enough. The ideal AI interface becomes invisible, absorbed into the flow of intent and outcome.
You’re designing behavior: you are not just designing for interaction. You are designing for autonomous behavior that aligns with human values and solves real problems.
Design less. Solve more. Let the system work.
<3 Daan