This article was written by Ricardo Trindade, Design Lead at SPREAD AI. Ricardo leads the Design team, driving collaboration and excellence across SPREAD’s platform ecosystem. With a background spanning graphic, industrial, and digital product design, he ensures a consistent, user-centric experience that makes complex engineering solutions intuitive, coherent, and pleasant to use.
A grandmother from Tennessee spent nearly six months in jail because an AI facial recognition system told police she was their suspect, and nobody stopped to check.
Angela Lipps had never been to North Dakota. She'd never been on an airplane. But when Fargo police ran surveillance footage through AI-powered facial recognition, the system returned a match, and that match was treated as fact. U.S. Marshals arrested her at gunpoint while she was babysitting. She was held without bail for months as an alleged fugitive. The first time anyone from Fargo PD actually interviewed her was after she'd already been incarcerated for over five months. Her bank records, which could have cleared her name from the start, immediately proved she was more than 1,200 miles away at the time of the alleged crimes.
The case was dismissed on Christmas Eve. By then, she had lost her home, her car, and her dog.
This story is extreme, but the mechanism behind it is not. An AI system produced an output. A human process accepted it without verification. And the consequences cascaded.
When we strip away the context of law enforcement, the underlying failure is a UX problem: there was no friction between the AI's output and the real-world action taken on it. No mandatory review step. No confidence indicator. No checkpoint that forced a human to pause, look at the evidence, and confirm. The system moved from prediction to consequence with nothing in between.
At SPREAD, we build AI-powered tools for engineering intelligence — requirements matching, ticket analysis, product exploration — in domains like automotive, aerospace, and industrial manufacturing. The stakes are different from criminal justice, but the principle is identical. When AI generates a diagnosis, a classification, or a recommendation, someone downstream will act on it. The question is: did they have the opportunity to critically evaluate it first?
Traditional UX wisdom says: reduce friction. Fewer clicks, faster flows, less cognitive load. And for most interactions, that's correct. Nobody wants unnecessary confirmation dialogs when renaming a file.
But AI changes the equation. When the content is machine-generated — probabilistic, potentially wrong, and confidently presented — friction becomes a feature, not a bug. The right friction at the right moment is what keeps the human in the loop.
Microsoft's copilot UX guidance frames this through three foundational principles: keep the human as the pilot, avoid anthropomorphizing the AI, and consider both direct and indirect stakeholders. Its framework positions the user in the driver's seat, with editable prompts, explicit controls, and preview-before-apply as non-negotiable patterns. The same principles show up in IBM's Carbon for AI guidelines on labeling, transparency, and explainability, and in John Maeda's three laws for the AI era: emotion, trust, and designing for failure.
In our internal AI UX playbook, we call this constructive friction: deliberate design choices that slow users down at high-impact moments — save, share, copy, apply — so they take ownership of the AI output before it moves forward. In practice, that means patterns like preview-before-apply, per-item accept instead of bulk accept, and an explicit confirmation step before AI-generated content is committed or shared.
The goal isn't to make the experience tedious. It's to make the moments that matter feel different from the moments that don't.
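To make the idea concrete, here is a minimal sketch of constructive friction expressed as application state rather than as dialogs. The types and function names are hypothetical, not SPREAD's actual code: the point is where the gate sits, between the AI's output and any action with consequences.

```typescript
// A minimal sketch of constructive friction as state.
// Types and function names are hypothetical, for illustration only.

type SuggestionState = "proposed" | "reviewed" | "committed";

interface AiSuggestion {
  id: string;
  text: string;
  state: SuggestionState;
}

// Marking a suggestion as reviewed is its own explicit user action:
// the moment of ownership, separate from generation and from commit.
function markReviewed(suggestion: AiSuggestion): AiSuggestion {
  return { ...suggestion, state: "reviewed" };
}

// High-impact actions (save, share, copy, apply) refuse to run until the
// review step has happened. Low-impact actions (regenerate, dismiss) never
// pass through this gate at all.
function commit(
  suggestion: AiSuggestion,
  onCommit: (s: AiSuggestion) => void
): void {
  if (suggestion.state !== "reviewed") {
    throw new Error(
      `Suggestion ${suggestion.id} must be reviewed before it is saved or shared.`
    );
  }
  onCommit({ ...suggestion, state: "committed" });
}
```

The specific implementation matters far less than the separation it enforces: generating a suggestion, reviewing it, and acting on it are three distinct moments, and only the middle one belongs to the human.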
Here's a small example from our own work that made this tangible for me.
I was recently prototyping an interface where users receive several AI-generated outputs at once — think multiple field suggestions, classifications, or recommended actions presented together. During one iteration, the tool I was using to build the prototype generated an "Accept all" button.
In almost any other context, "Accept all" is a sensible convenience pattern. Cookie banners, file merge dialogs, notification settings — when the user has already decided and just wants to move on, batching the action is good UX. It removes tedious, repetitive clicking.
But when the content is AI-generated, "Accept all" does something different. It skips the very moment where the human is supposed to evaluate each output. It collapses the review step into a single click and treats all AI suggestions as equally trustworthy. The one-by-one review is the checkpoint. Remove it, and you've removed the human from the loop.
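One way to express this in interface terms, as a hedged sketch with hypothetical types rather than the actual prototype code: replace the single bulk action with per-item decisions, and only allow the batch to be applied once every suggestion has been explicitly accepted or rejected.

```typescript
// A sketch of per-item review for a batch of AI outputs.
// There is deliberately no code path that accepts everything unseen.

type Decision = "pending" | "accepted" | "rejected";

interface ReviewItem {
  id: string;
  suggestion: string;
  decision: Decision;
}

// Each suggestion is decided individually.
function decide(
  items: ReviewItem[],
  id: string,
  decision: Decision
): ReviewItem[] {
  return items.map((item) =>
    item.id === id ? { ...item, decision } : item
  );
}

// Applying the batch only works once nothing is left pending, and only the
// accepted suggestions move forward. "Accept all" can then only mean
// "I looked at all of them and accepted each one."
function applyBatch(items: ReviewItem[]): string[] {
  const pending = items.filter((i) => i.decision === "pending").length;
  if (pending > 0) {
    throw new Error(`${pending} suggestion(s) still need a decision.`);
  }
  return items
    .filter((i) => i.decision === "accepted")
    .map((i) => i.suggestion);
}
```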
As Nielsen Norman Group has noted, AI's aptitude for producing well-written, confident text can fool users into believing its expertise generalizes to everything else. Bad advice will appear to be the product of carefully considered analysis. This makes the curation step even more critical: when ideation becomes virtually free, the need for human judgment in selecting and validating outputs only increases.
It was a useful reminder that patterns which feel natural in traditional UX can become anti-patterns in AI UX. The instinct to reduce clicks can inadvertently reduce oversight.
At SPREAD, our approach to this is built around a few key principles that guide how AI shows up in the product:
Human = Pilot. AI = Copilot. The user is always in control. AI assists, suggests, and surfaces — but the human decides.
Preview before apply. AI-generated content is never committed automatically. Users see what will change, can edit, can selectively accept, and can undo (sketched after these principles).
Transparency at every layer. Every AI surface is labeled. Sources are cited. Confidence levels are visible. Users can always ask "why this?" and get an answer.
Friction where it counts, speed everywhere else. We remove gratuitous friction aggressively — fast typing, good defaults, inline editing, easy undo. But at the moments where content transitions from "AI suggestion" to "user-owned decision," we slow things down deliberately.
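As a rough illustration of "preview before apply" and "transparency at every layer", here is a minimal sketch using hypothetical field names (source, confidence) rather than any real SPREAD schema: nothing is committed automatically, every proposed change is labeled with where it came from and how confident the model is, and the previous state is kept so undo is always a single step away.

```typescript
// A sketch of preview-before-apply with labeled, selectively applied changes.
// Types and field names are hypothetical, for illustration only.

interface FieldChange {
  field: string;
  currentValue: string;
  proposedValue: string;
  source: string; // where the suggestion came from, e.g. a cited document or ticket
  confidence: "high" | "medium" | "low";
}

type ItemFields = { [field: string]: string };

// The preview is a plain, human-readable diff: what is there now, what the AI
// proposes, where it came from, and how confident it is.
function preview(changes: FieldChange[]): string[] {
  return changes.map(
    (c) =>
      `${c.field}: "${c.currentValue}" -> "${c.proposedValue}" ` +
      `(source: ${c.source}, confidence: ${c.confidence})`
  );
}

// Only the changes the user selected are applied, and the previous state is
// returned alongside the new one so undo is always available.
function applySelected(
  fields: ItemFields,
  changes: FieldChange[],
  selectedFields: Set<string>
): { next: ItemFields; undo: ItemFields } {
  const next: ItemFields = { ...fields };
  for (const change of changes) {
    if (selectedFields.has(change.field)) {
      next[change.field] = change.proposedValue;
    }
  }
  return { next, undo: fields };
}
```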
These principles echo what our colleague Paula Polli observed at Future Product Days 2025: user tolerance for AI failures is low, and once trust is lost, it's difficult to recover. Keeping humans in control, designing for collaboration rather than command, and framing the AI as something that learns rather than something infallible: these are the trust-preserving patterns that make AI products viable long-term.
The Lipps case is a reminder that the consequences of unchecked AI output are not abstract. They're measured in months of someone's life, in homes and relationships lost. Most AI products won't have stakes this severe. But the design question is the same at every scale: is there a moment in this flow where a human can pause, evaluate, and decide, or does the system move from output to outcome without interruption?
As AI becomes embedded in more workflows — in engineering, in enterprise software, in tools that inform real decisions — the teams building these products carry a responsibility. Not just to make the AI accurate, but to design the experience around it so that when the AI is wrong, someone notices before it matters.
That's the case for friction. Not friction as an obstacle, but friction as a safeguard. A deliberate pause that says: this is yours now — take a look before it moves forward.