
Integrating AI into product workflows: Insights from Future Product Days 2025

by Paula Polli

This article was written by Paula Polli, Product Designer at SPREAD AI.

Future Product Days 2025 brought together more than 5,000 designers, engineers, researchers, product managers, and industry leaders for three days of talks, workshops, and networking. With tracks dedicated to Product Design & User Experience, Management & Strategy, and Artificial Intelligence & Engineering, the event offered a rich environment to explore emerging user needs, new design patterns, and the evolving role of technology in digital products.

The program covered a broad range of topics, from user research and product management to enhancing workflows with AI. My personal focus, however, was on how AI is being integrated into digital products, as I’m especially interested in shaping this new language of collaboration so that the experience remains clear and genuinely helpful for the user. In this post, I share the key learnings that emerged across the talks and sessions I followed.

1. Start with user problems, not with “How can we add AI?”

With the growing demand for AI, many teams start in the same place: How can we implement AI in our product? It’s a tempting question, especially when competitors are shipping AI features fast. But when “adding AI” becomes the goal, it often leads to solutions in search of a problem: nice capabilities that don’t necessarily support real user needs. As product designers, we already know that the starting point should live in the problem space, not in the solution space.

So when we think about integrating AI into digital products, the first step is to ground the focus in the user’s reality. A more useful framing would be: Where does AI bring value to the enterprise user? From there, research becomes an important tool to identify the right opportunities. We should look for moments that are tedious, overly complex, highly repetitive, or consistently frustrating. In other words, the parts of a workflow where people lose time or confidence. These signals don’t just reveal pain points; they highlight opportunities where AI might genuinely reduce effort or increase clarity.

This leads to a core principle of AI product making: think simply, deliver value. If an AI feature is worth building, users should quickly understand what to expect, what they get in return, and what to do next. Without that clarity, even a powerful system feels unpredictable, and unpredictable tools rarely earn trust or adoption.

2. Same humans, new interface

AI introduces a new interface layer: new interaction patterns, but the same humans using the product. It doesn’t only expand what a product can do; it changes how users relate to it. We’re moving from clicking through predefined paths to prompting, conversing, and collaborating, shifting the mental model from “I control the UI” to “I work with the system.”

But for AI features to succeed, usability fundamentals still apply. In fact, they become even more critical because AI introduces ambiguity: outcomes can be probabilistic, language can be interpreted in multiple ways, and the system may behave differently across contexts. So when we design AI capabilities with a user-first mindset, there are three common failure points we should address:

  • Users need a reason to use it: This returns to the core question of value. If it doesn’t reduce effort, increase clarity, or improve outcomes, it will feel like extra work, especially in complex enterprise workflows.

  • Users need to see it: Many AI features fail quietly because users don’t notice them, or don’t immediately understand what they’re for. Discoverability and findability depend on where the entry point lives, how it’s named, and how clearly we communicate the benefit upfront. On top of that, it’s important to consider that users are typically task-focused: they open an app to complete a task, not to explore new possibilities.

  • Users need to understand how to succeed: When we drop users into an open canvas without guidance, we’re asking them to do the hardest part. Clear usage comes from small design decisions: examples, suggested prompts, constrained actions, progressive disclosure, and tight feedback loops that show what’s happening and what to do next (see the sketch after this list).
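To make that last point concrete, here is a minimal sketch of what a guided AI entry point could look like, written in TypeScript/React purely as an illustration. The /api/assistant endpoint, the runAssistant helper, and the suggested prompts are all hypothetical; the point is the pattern, suggestions before a blank input and visible status instead of silent failure, not the specific names.

```tsx
import { useState } from "react";

// Hypothetical backend call; swap in your real AI endpoint.
async function runAssistant(prompt: string): Promise<string> {
  const res = await fetch("/api/assistant", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Assistant request failed: ${res.status}`);
  const data: { answer: string } = await res.json();
  return data.answer;
}

// Suggested prompts lower the blank-canvas barrier and signal
// what the feature is good at.
const SUGGESTED_PROMPTS = [
  "Summarize the open issues on this part",
  "List the components affected by this change",
];

export function AssistantPanel() {
  const [prompt, setPrompt] = useState("");
  const [status, setStatus] = useState<"idle" | "working" | "error">("idle");
  const [answer, setAnswer] = useState<string | null>(null);

  async function submit(text: string) {
    setStatus("working"); // tight feedback loop: show what is happening
    try {
      setAnswer(await runAssistant(text));
      setStatus("idle");
    } catch {
      setStatus("error"); // fail visibly, not silently
    }
  }

  return (
    <section aria-label="AI assistant">
      {/* Constrained actions first, free-form input second */}
      {SUGGESTED_PROMPTS.map((p) => (
        <button key={p} onClick={() => submit(p)}>
          {p}
        </button>
      ))}
      <input
        value={prompt}
        onChange={(e) => setPrompt(e.target.value)}
        placeholder="Or ask your own question about this view…"
      />
      <button disabled={status === "working"} onClick={() => submit(prompt)}>
        {status === "working" ? "Thinking…" : "Ask"}
      </button>
      {status === "error" && (
        <p role="alert">That didn’t work. Try again or pick a suggestion.</p>
      )}
      {answer && <p>{answer}</p>}
    </section>
  );
}
```

The same structure works in any framework: offer safe starting points, constrain the first interaction, and always show the system’s state.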

3. Lost trust in AI is hard to win back

User tolerance for AI failures is low, and once trust is lost, it’s difficult to recover. If a first experience feels unclear, unreliable, or disruptive, users don’t just stop using the AI feature; they may start doubting the product overall. Trust breaks especially fast when the feature doesn’t deliver the promised value, is hard to understand, produces inconsistent results, or interrupts an established workflow.

That’s why expectation-setting and transparency are essential. The way we describe AI tools, through labels, descriptions, and feedback, shapes what users expect, and those expectations heavily influence how they rate the experience. Confident positioning can attract attention, but if the product can’t match the promise, disappointment follows quickly. A more resilient move might be to slightly undersell so the product can overdeliver, for example, using “assistant” instead of “expert”.

From a design standpoint, there are a few strategies that work as trust-preserving patterns (a short sketch follows the list):

  • Keep humans in control: provide editable prompts, explicit controls, preview-before-apply options, and an easy undo.

  • Design for collaboration, not command: let users iterate with the copilot toward an outcome instead of relying on one-shot outputs.

  • Frame AI as “learning,” not infallible: set healthier expectations so users are more likely to stick with the feature when something goes wrong.
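As a rough illustration of the first two patterns, here is a small TypeScript/React sketch of preview-before-apply combined with an easy undo. The Suggestion shape and the component are invented for the example; what matters is that the AI only proposes a change, and the user decides when, and whether, it lands.

```tsx
import { useState } from "react";

// Hypothetical shape of an AI-proposed edit; in a real product this
// would come from your model or backend.
interface Suggestion {
  label: string;  // what the AI wants to change, in plain language
  before: string; // current value
  after: string;  // proposed value
}

export function PreviewBeforeApply({ suggestion }: { suggestion: Suggestion }) {
  const [content, setContent] = useState(suggestion.before);
  const [history, setHistory] = useState<string[]>([]);

  // The user stays in control: nothing changes until they apply it.
  function apply() {
    setHistory([...history, content]); // remember the prior state for undo
    setContent(suggestion.after);
  }

  // An easy undo restores the last state, so a bad suggestion costs one click.
  function undo() {
    if (history.length === 0) return;
    setContent(history[history.length - 1]);
    setHistory(history.slice(0, -1));
  }

  return (
    <div>
      <p>Current: {content}</p>
      {/* Preview-before-apply: show the proposal next to the current state */}
      <p>
        AI suggests ({suggestion.label}): {suggestion.after}
      </p>
      <button onClick={apply}>Apply suggestion</button>
      <button onClick={undo} disabled={history.length === 0}>
        Undo
      </button>
    </div>
  );
}
```

Keeping the undo control in plain sight is part of the pattern: users take more chances on AI suggestions when they can see the way back.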

Takeaways

Across these learnings, one point stays consistent: AI only works in products when it strengthens the user’s path to getting real work done. That means:

  • Starting from user problems (not from the technology)

  • Designing AI as a usable interface that fits existing behaviors (not as a separate “AI mode”)

  • Protecting trust through clear expectations and user control

At SPREAD, this is how we’re approaching AI integration: we aim to reduce effort in complex workflows, helping users reach their goals faster without adding friction, uncertainty, or extra cognitive load. We focus on embedding AI into the workflow, keeping the core task in the foreground, and offering context-aware support so users can find and act on relevant information without needing to stop, prompt, or change context. In the end, our goal is the impact on the user journey: value first, regardless of whether the user thinks of it as “AI.”


Note: These learnings were shaped by sessions from Dave Crawford (Microsoft), Kate Moran (Nielsen Norman Group), and Sarah Thompson (Live Neuron Labs), whose talks provided some of the principles and examples reflected in this post.