<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>UX on Smashing Magazine — For Web Designers And Developers</title><link>https://www.smashingmagazine.com/category/ux/index.xml</link><description>Recent content in UX on Smashing Magazine — For Web Designers And Developers</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Mon, 09 Feb 2026 03:03:08 +0000</lastBuildDate><item><author>Victor Yocco</author><title>Beyond Generative: The Rise Of Agentic AI And User-Centric Design</title><link>https://www.smashingmagazine.com/2026/01/beyond-generative-rise-agentic-ai-user-centric-design/</link><pubDate>Thu, 22 Jan 2026 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2026/01/beyond-generative-rise-agentic-ai-user-centric-design/</guid><description>Developing effective agentic AI requires a new research playbook. When systems plan, decide, and act on our behalf, UX moves beyond usability testing into the realm of trust, consent, and accountability. Victor Yocco outlines the research methods needed to design agentic AI systems responsibly.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2026/01/beyond-generative-rise-agentic-ai-user-centric-design/" />
              <title>Beyond Generative: The Rise Of Agentic AI And User-Centric Design</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Beyond Generative: The Rise Of Agentic AI And User-Centric Design</h1>
                  
                    
                    <address>Victor Yocco</address>
                  
                  <time datetime="2026-01-22T13:00:00&#43;00:00" class="op-published">2026-01-22T13:00:00+00:00</time>
                  <time datetime="2026-02-09T03:03:08&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>Agentic AI stands ready to transform customer experience and operational efficiency, necessitating a new strategic approach from leadership. This evolution in artificial intelligence empowers systems to <strong>plan</strong>, <strong>execute</strong>, and <strong>persist</strong> in tasks, moving beyond simple recommendations to proactive action. For UX teams, product managers, and executives, understanding this shift is crucial for unlocking opportunities in innovation, streamlining workflows, and redefining how technology serves people.</p>

<p>It’s easy to confuse <strong>Agentic AI</strong> with Robotic Process Automation (RPA), a technology that automates rules-based tasks on computers. The distinction lies in rigidity versus reasoning. RPA is excellent at following a strict script: if X happens, do Y. It mimics human hands. Agentic AI mimics human reasoning. It does not follow a linear script; it <strong>creates</strong> one.</p>

<p>Consider a recruiting workflow. An RPA bot can scan a resume and upload it to a database. It performs a repetitive task perfectly. An Agentic system looks at the resume, notices the candidate lists a specific certification, cross-references that with a new client requirement, and decides to draft a personalized outreach email highlighting that match. RPA executes a predefined plan; Agentic AI formulates the plan based on a goal. This autonomy separates agents from the predictive tools we have used for the last decade.</p>

<p>Another example is managing meeting conflicts. A predictive model integrated into your calendar might analyze your meeting schedule and the schedules of your colleagues. It could then suggest potential conflicts, such as two important meetings scheduled at the same time, or a meeting scheduled when a key participant is on vacation. It provides you with information and flags potential issues, but you are responsible for taking action.</p>

<p>An agentic AI, in the same scenario, would go beyond just suggesting conflicts to avoid. Upon identifying a conflict with a key participant, the agent could act by:</p>

<ul>
<li>Checking the availability of all necessary participants.</li>
<li>Identifying alternative time slots that work for everyone.</li>
<li>Sending out proposed new meeting invitations to all attendees.</li>
<li>Drafting and sending an email to any external participants explaining the need to reschedule and offering alternative times.</li>
<li>Updating your calendar and the calendars of your colleagues with the new meeting details once confirmed.</li>
</ul>

<p>This agentic AI understands the goal (resolving the meeting conflict), plans the steps (checking availability, finding alternatives, sending invites), executes those steps, and persists until the conflict is resolved, all with minimal direct user intervention. This demonstrates the “agentic” difference: the system takes <strong>proactive steps</strong> for the user, rather than just providing information to the user.</p>

<p>Agentic AI systems understand a goal, plan a series of steps to achieve it, execute those steps, and even adapt if things go wrong. Think of it like a <strong>proactive digital assistant</strong>. The underlying technology often combines large language models (LLMs) for understanding and reasoning, with planning algorithms that break down complex tasks into manageable actions. These agents can interact with various tools, APIs, and even other AI models to accomplish their objectives, and critically, they can maintain a persistent state, meaning they remember previous actions and continue working towards a goal over time. This makes them fundamentally different from typical generative AI, which usually completes a single request and then resets.</p>
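<p>The loop described above &mdash; understand a goal, plan steps, execute them, adapt on failure, and persist state &mdash; can be sketched in a few lines of Python. This is a minimal illustration, not a real framework; the class, the <code>plan</code>/<code>execute</code> helpers, and their outputs are all assumptions for the example:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A minimal agentic loop: plan, execute, adapt, persist state."""
    goal: str
    memory: list = field(default_factory=list)  # persistent state across steps

    def plan(self) -> list:
        # In a real system, an LLM plus a planner would decompose the goal
        # into concrete sub-tasks; here we return a single placeholder step.
        return [f"step for: {self.goal}"]

    def execute(self, step: str):
        # A real agent would call a tool or API here; we always succeed.
        return True, f"done: {step}"

    def run(self) -> list:
        steps = self.plan()
        while steps:  # persist until every step of the goal is handled
            step = steps.pop(0)
            ok, result = self.execute(step)
            self.memory.append(result)  # remember previous actions
            if not ok:
                steps = self.plan()  # adapt: re-plan when a step fails
        return self.memory

agent = Agent(goal="resolve meeting conflict")
print(agent.run())  # ['done: step for: resolve meeting conflict']
```

<p>The key contrast with typical generative AI is the <code>memory</code> list and the <code>while</code> loop: the agent keeps working and remembering across steps instead of completing one request and resetting.</p>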

<h2 id="a-simple-taxonomy-of-agentic-behaviors">A Simple Taxonomy Of Agentic Behaviors</h2>

<p>We can categorize agent behavior into four distinct modes of autonomy. While these often look like a progression, they function as independent operating modes. A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.</p>

<p>We derived these levels by adapting industry standards for autonomous vehicles (<a href="https://www.sae.org/news/blog/sae-levels-driving-automation-clarity-refinements">SAE levels</a>) to digital user experience contexts.</p>
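<p>Because the modes are independent operating settings rather than a single global dial, they can be modeled as a per-domain preference. The sketch below is illustrative; the enum names mirror the four modes, but the domain keys and helper are invented for the example:</p>

```python
from enum import Enum

class AutonomyMode(Enum):
    OBSERVE_AND_SUGGEST = 1
    PLAN_AND_PROPOSE = 2
    ACT_WITH_CONFIRMATION = 3
    ACT_AUTONOMOUSLY = 4

# Modes are set per task domain, not globally: a user may trust the agent
# with scheduling but keep it in suggestion mode for financial transactions.
user_settings = {
    "scheduling": AutonomyMode.ACT_AUTONOMOUSLY,
    "finance": AutonomyMode.OBSERVE_AND_SUGGEST,
}

def requires_human(domain: str) -> bool:
    """Anything below full autonomy keeps a human in the loop."""
    return user_settings[domain] is not AutonomyMode.ACT_AUTONOMOUSLY

print(requires_human("finance"))     # True
print(requires_human("scheduling"))  # False
```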

<h3 id="observe-and-suggest">Observe-and-Suggest</h3>

<p>The agent functions as a monitor. It analyzes data streams and flags anomalies or opportunities, but takes zero action.</p>

<p><strong>Differentiation</strong><br />
Unlike the next level, the agent generates no complex plan. It points to a problem.</p>

<p><strong>Example</strong><br />
A DevOps agent notices a server CPU spike and alerts the on-call engineer. It does not know how to fix the problem, nor does it attempt to, but it knows something is wrong.</p>

<p><strong>Implications for design and oversight</strong><br />
At this level, design and oversight should prioritize clear, non-intrusive notifications and a well-defined process for users to act on suggestions. The focus is on empowering the user with timely and relevant information without taking control. UX practitioners should focus on making suggestions clear and easy to understand, while product managers need to ensure the system provides value without overwhelming the user.</p>

<h3 id="plan-and-propose">Plan-and-Propose</h3>

<p>The agent identifies a goal and generates a multi-step strategy to achieve it. It presents the full plan for human review.</p>

<p><strong>Differentiation</strong><br />
The agent acts as a strategist. It does not execute; it waits for approval on the entire approach.</p>

<p><strong>Example</strong><br />
The same DevOps agent notices the CPU spike, analyzes the logs, and proposes a remediation plan:</p>

<ol>
<li>Spin up two extra instances.</li>
<li>Restart the load balancer.</li>
<li>Archive old logs.</li>
</ol>

<p>The human reviews the logic and clicks “Approve Plan”.</p>

<p><strong>Implications for design and oversight</strong><br />
For agents that plan and propose, design must ensure the proposed plans are easily understandable and that users have intuitive ways to modify or reject them. Oversight is crucial in monitoring the quality of proposals and the agent’s planning logic. UX practitioners should design clear visualizations of the proposed plans, and product managers must establish clear review and approval workflows.</p>

<h3 id="act-with-confirmation">Act-with-Confirmation</h3>

<p>The agent completes all preparation work and places the final action in a staged state. It effectively holds the door open, waiting for a nod.</p>

<p><strong>Differentiation</strong><br />
This differs from “Plan-and-Propose” because the work is already done and staged. It reduces friction. The user confirms the outcome, not the strategy.</p>

<p><strong>Example</strong><br />
A recruiting agent drafts five interview invitations, finds open times on calendars, and creates the calendar events. It presents a “Send All” button. The user provides the final authorization to trigger the external action.</p>

<p><strong>Implications for design and oversight</strong><br />
When agents act with confirmation, the design should provide transparent and concise summaries of the intended action, clearly outlining potential consequences. Oversight needs to verify that the confirmation process is robust and that users are not being asked to blindly approve actions. UX practitioners should design confirmation prompts that are clear and provide all necessary information, and product managers should prioritize a robust audit trail for all confirmed actions.</p>

<h3 id="act-autonomously">Act-Autonomously</h3>

<p>The agent executes tasks independently within defined boundaries.</p>

<p><strong>Differentiation</strong><br />
The user reviews the history of actions, not the actions themselves.</p>

<p><strong>Example</strong><br />
The recruiting agent sees a conflict, moves the interview to a backup slot, updates the candidate, and notifies the hiring manager. The human only sees a notification: Interview rescheduled to Tuesday.</p>

<p><strong>Implications for design and oversight</strong><br />
For autonomous agents, the design needs to establish clear pre-approved boundaries and provide robust monitoring tools. Oversight requires continuous evaluation of the agent’s performance within these boundaries, a critical need for robust logging, clear override mechanisms, and user-defined kill switches to maintain user control and trust. UX practitioners should focus on designing effective dashboards for monitoring autonomous agent behavior, and product managers must ensure clear governance and ethical guidelines are in place.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="640"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png"
			
			sizes="100vw"
			alt="The Agentic Autonomy Matrix"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <strong>Figure 1</strong>: The Agentic Autonomy Matrix. This framework maps four distinct operating modes by correlating the level of agent initiative against the required amount of human intervention. (<a href='https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/1-agentic-autonomy-matrix.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Let’s look at a real-world application in HR technology to see these modes in action. Consider an “Interview Coordination Agent” designed to handle the logistics of hiring.</p>

<ul>
<li><strong>In Suggest Mode</strong><br />
The agent notices an interviewer is double-booked. It highlights the conflict on the recruiter’s dashboard: <em>“Warning: Sarah is double-booked for the 2 PM interview.”</em></li>
<li><strong>In Plan Mode</strong><br />
The agent analyzes Sarah’s calendar and the candidate’s availability. It presents a solution: <em>“I recommend moving the interview to Thursday at 10 AM. This requires moving Sarah’s 1:1 with her manager.”</em> The recruiter reviews this logic.</li>
<li><strong>In Confirmation Mode</strong><br />
The agent drafts the emails to the candidate and the manager. It populates the calendar invites. The recruiter sees a summary: <em>“Ready to reschedule to Thursday. Send updates?”</em> The recruiter clicks <em>“Confirm.”</em></li>
<li><strong>In Autonomous Mode</strong><br />
The agent handles the conflict instantly. It respects a pre-set rule: <em>“Always prioritize candidate interviews over internal 1:1s.”</em> It moves the meeting and sends the notifications. The recruiter sees a log entry: <em>“Resolved schedule conflict for Candidate B.”</em></li>
</ul>
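<p>The four behaviors of the Interview Coordination Agent could be dispatched along these lines. This is a hedged sketch: the function, mode strings, and messages are invented for illustration and echo the scenario above, not any real product:</p>

```python
def handle_conflict(mode: str, interviewer: str) -> str:
    """Map a detected scheduling conflict to the agent's operating mode."""
    if mode == "suggest":
        # Observe-and-Suggest: flag the problem, take no action.
        return f"Warning: {interviewer} is double-booked."
    if mode == "plan":
        # Plan-and-Propose: present the full strategy for review.
        return "Proposed: move the interview to Thursday 10 AM. Approve plan?"
    if mode == "confirm":
        # Act-with-Confirmation: work is staged; user approves the outcome.
        return "Ready to reschedule to Thursday. Send updates?"
    if mode == "autonomous":
        # Act-Autonomously: resolve within pre-set rules, then report.
        return "Resolved schedule conflict for Candidate B."
    raise ValueError(f"unknown mode: {mode}")
```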

<h2 id="research-primer-what-to-research-and-how">Research Primer: What To Research And How</h2>

<p>Developing effective agentic AI demands a distinct research approach compared to traditional software or even generative AI. The autonomous nature of AI agents, their ability to make decisions, and their potential for proactive action necessitate specialized methodologies for understanding user expectations, mapping complex agent behaviors, and anticipating potential failures. The following research primer outlines key methods to measure and evaluate these unique aspects of agentic AI.</p>

<h3 id="mental-model-interviews">Mental-Model Interviews</h3>

<p>These interviews uncover users’ preconceived notions about how an AI agent should behave. Instead of simply asking what users <em>want</em>, the focus is on understanding their internal models of the agent’s capabilities and limitations. We should avoid using the word “agent” with participants. It carries sci-fi baggage and is too easily confused with a human agent offering support or services. Instead, frame the discussion around “assistants” or “the system.”</p>

<p>We need to uncover where users draw the line between helpful automation and intrusive control.</p>

<ul>
<li><strong>Method:</strong> Ask users to describe, draw, or narrate their expected interactions with the agent in various hypothetical scenarios.</li>
<li><strong>Key Probes (reflecting a variety of industries):</strong>

<ul>
<li>To understand the boundaries of desired automation and potential anxieties around over-automation, ask:

<ul>
<li>If your flight is canceled, what would you want the system to do automatically? What would worry you if it did that without your explicit instruction?</li>
</ul></li>
<li>To explore the user’s understanding of the agent’s internal processes and necessary communication, ask:

<ul>
<li>Imagine a digital assistant is managing your smart home. If a package is delivered, what steps do you imagine it takes, and what information would you expect to receive?</li>
</ul></li>
<li>To uncover expectations around control and consent within a multi-step process, ask:

<ul>
<li>If you ask your digital assistant to schedule a meeting, what steps do you envision it taking? At what points would you want to be consulted or given choices?</li>
</ul></li>
</ul></li>
<li><strong>Benefits of the method:</strong> Reveals implicit assumptions, highlights areas where the agent’s planned behavior might diverge from user expectations, and informs the design of appropriate controls and feedback mechanisms.</li>
</ul>

<h3 id="agent-journey-mapping">Agent Journey Mapping</h3>

<p>Similar to traditional user journey mapping, agent journey mapping specifically focuses on the anticipated actions and decision points of the AI agent itself, alongside the user’s interaction. This helps to proactively identify potential pitfalls.</p>

<ul>
<li><strong>Method:</strong> Create a visual map that outlines the various stages of an agent’s operation, from initiation to completion, including all potential actions, decisions, and interactions with external systems or users.</li>
<li><strong>Key Elements to Map:</strong>

<ul>
<li><strong>Agent Actions:</strong> What specific tasks or decisions does the agent perform?</li>
<li><strong>Information Inputs/Outputs:</strong> What data does the agent need, and what information does it generate or communicate?</li>
<li><strong>Decision Points:</strong> Where does the agent make choices, and what are the criteria for those choices?</li>
<li><strong>User Interaction Points:</strong> Where does the user provide input, review, or approve actions?</li>
<li><strong>Points of Failure:</strong> Crucially, identify specific instances where the agent could misinterpret instructions, make an incorrect decision, or interact with the wrong entity.

<ul>
<li><strong>Examples:</strong> Incorrect recipient (e.g., sending sensitive information to the wrong person), overdraft (e.g., an automated payment exceeding available funds), misinterpretation of intent (e.g., booking a flight for the wrong date due to ambiguous language).</li>
</ul></li>
<li><strong>Recovery Paths:</strong> How can the agent or user recover from these failures? What mechanisms are in place for correction or intervention?</li>
</ul></li>
<li><strong>Benefits of the method:</strong> Provides a holistic view of the agent’s operational flow, uncovers hidden dependencies, and allows for the proactive design of safeguards, error handling, and user intervention points to prevent or mitigate negative outcomes.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="437"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg"
			
			sizes="100vw"
			alt="Agent Journey Map"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <strong>Figure 2</strong>: Agent Journey Map. Mapping the Agent Logic distinct from the System helps identify where the reasoning, not just the code, might fail. (<a href='https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/2-agent-journey-map.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="simulated-misbehavior-testing">Simulated Misbehavior Testing</h3>

<p>This approach is designed to stress-test the system and observe user reactions when the AI agent <em>fails</em> or deviates from expectations. It’s about understanding trust repair and emotional responses in adverse situations.</p>

<ul>
<li><strong>Method:</strong> In controlled lab studies, deliberately introduce scenarios where the agent makes a mistake, misinterprets a command, or behaves unexpectedly.</li>
<li><strong>Types of “Misbehavior” to Simulate:</strong>

<ul>
<li><strong>Command Misinterpretation:</strong> The agent performs an action slightly different from what the user intended (e.g., ordering two items instead of one).</li>
<li><strong>Information Overload/Underload:</strong> The agent provides too much irrelevant information or not enough critical details.</li>
<li><strong>Unsolicited Action:</strong> The agent takes an action the user explicitly did not want or expect (e.g., buying stock without approval).</li>
<li><strong>System Failure:</strong> The agent crashes, becomes unresponsive, or provides an error message.</li>
<li><strong>Ethical Dilemmas:</strong> The agent makes a decision with ethical implications (e.g., prioritizing one task over another based on an unforeseen metric).</li>
</ul></li>
<li><strong>Observation Focus:</strong>

<ul>
<li><strong>User Reactions:</strong> How do users react emotionally (frustration, anger, confusion, loss of trust)?</li>
<li><strong>Recovery Attempts:</strong> What steps do users take to correct the agent’s behavior or undo its actions?</li>
<li><strong>Trust Repair Mechanisms:</strong> Do the system’s built-in recovery or feedback mechanisms help restore trust? How do users want to be informed about errors?</li>
<li><strong>Mental Model Shift:</strong> Does the misbehavior alter the user’s understanding of the agent’s capabilities or limitations?</li>
</ul></li>
<li><strong>Benefits of the method:</strong> Crucial for identifying design gaps related to error recovery, feedback, and user control. It provides insights into how resilient users are to agent failures and what is needed to maintain or rebuild trust, leading to more robust and forgiving agentic systems.</li>
</ul>

<p>By integrating these research methodologies, UX practitioners can move beyond simply making agentic systems <em>usable</em> to making them <em>trusted</em>, <em>controllable</em>, and <em>accountable</em>, fostering a positive and productive relationship between users and their AI agents. Note that these aren’t the only methods relevant to exploring agentic AI effectively. Many other methods exist, but these are most accessible to practitioners in the near term. I’ve previously covered the Wizard of Oz method, a slightly more advanced method of concept testing, which is also a valuable tool for exploring agentic AI concepts.</p>

<h2 id="ethical-considerations-in-research-methodology">Ethical Considerations In Research Methodology</h2>

<p>When researching agentic AI, particularly when simulating misbehavior or errors, ethical considerations are paramount. There are many publications focusing on ethical UX research, including an <a href="https://www.smashingmagazine.com/2020/12/ethical-considerations-ux-research/">article I wrote for Smashing Magazine</a>, <a href="https://www.uxdesigninstitute.com/blog/what-are-user-research-ethics/">these guidelines</a> from the UX Design Institute, and this page from the <a href="https://www.inclusivedesigntoolkit.com/ethics/">Inclusive Design Toolkit</a>.</p>

<h2 id="key-metrics-for-agentic-ai">Key Metrics For Agentic AI</h2>

<p>You’ll need a comprehensive set of key metrics to effectively assess the performance and reliability of agentic AI systems. These metrics provide insights into user trust, system accuracy, and the overall user experience. By tracking these indicators, developers and designers can identify areas for improvement and ensure that AI agents operate safely and efficiently.</p>

<p><strong>1. Intervention Rate</strong><br />
For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window (e.g., 24 hours), we count that as acceptance. We track the Intervention Rate: how often does a human jump in to stop or correct the agent? A high intervention rate signals a misalignment in trust or logic.</p>

<p><strong>2. Frequency of Unintended Actions per 1,000 Tasks</strong><br />
This critical metric quantifies the number of actions performed by the AI agent that were not desired or expected by the user, normalized per 1,000 completed tasks. A low frequency of unintended actions signifies a well-aligned AI that accurately interprets user intent and operates within defined boundaries. This metric is closely tied to the AI’s understanding of context, its ability to disambiguate commands, and the robustness of its safety protocols.</p>
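<p>The normalization itself is simple arithmetic; a small sketch makes the calculation unambiguous (the function name and zero-task fallback are choices made for this example):</p>

```python
def unintended_per_1000(unintended: int, total_tasks: int) -> float:
    """Unintended agent actions normalized per 1,000 completed tasks."""
    if total_tasks == 0:
        return 0.0  # no tasks completed yet, nothing to normalize
    return 1000 * unintended / total_tasks

# 3 unintended actions across 12,000 completed tasks:
print(unintended_per_1000(3, 12000))  # 0.25
```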

<p><strong>3. Rollback or Undo Rates</strong><br />
This metric tracks how often users need to reverse or undo an action performed by the AI. High rollback rates suggest that the AI is making frequent errors, misinterpreting instructions, or acting in ways that are not aligned with user expectations. Analyzing the reasons behind these rollbacks can provide valuable feedback for improving the AI’s algorithms, understanding of user preferences, and its ability to predict desirable outcomes.</p>

<p>To understand why, implement a microsurvey on the undo action. For example, when a user reverses a scheduling change, a simple prompt can ask: <em>“Wrong time? Wrong person? Or did you just want to do it yourself?”</em> The user clicks the option that best matches their reasoning.</p>

<p><strong>4. Time to Resolution After an Error</strong><br />
This metric measures the duration it takes for a user to correct an error made by the AI or for the AI system itself to recover from an erroneous state. A short time to resolution indicates an efficient and user-friendly error recovery process, which can mitigate user frustration and maintain productivity. This includes the ease of identifying the error, the accessibility of undo or correction mechanisms, and the clarity of error messages provided by the AI.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="437"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg"
			
			sizes="100vw"
			alt="A Trust &amp; Accountability Dashboard"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <strong>Figure 3</strong>: A Trust & Accountability Dashboard. Note the focus on “Rollback Reasons”. This qualitative data is vital for tuning the agent’s logic. (<a href='https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/3-trust-accountability-dashboard.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Collecting these metrics requires instrumenting your system to track Agent Action IDs. Every distinct action the agent takes, such as proposing a schedule or booking a flight, must generate a unique ID that persists in the logs. To measure the Intervention Rate, we do not look for an immediate user reaction. We look for the absence of a counter-action within a defined window. If an Action ID is generated at 9:00 AM and no human user modifies or reverts that specific ID by 9:00 AM the next day, the system logically tags it as Accepted. This allows us to quantify success based on user silence rather than active confirmation.</p>
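<p>The silence-as-acceptance logic might be instrumented as follows. This is a sketch under assumptions: the tuple shape of a record, the function names, and the status labels are invented; only the 24-hour window and the accepted/intervened distinction come from the text:</p>

```python
from datetime import datetime, timedelta

ACCEPTANCE_WINDOW = timedelta(hours=24)

def classify_action(created_at, reverted_at, now):
    """Tag one agent action. `reverted_at` is None if no human touched it."""
    if reverted_at is not None and reverted_at - created_at <= ACCEPTANCE_WINDOW:
        return "intervened"  # a human countered the action within the window
    if now - created_at >= ACCEPTANCE_WINDOW:
        return "accepted"    # silence for the full window counts as acceptance
    return "pending"         # window still open, no verdict yet

def intervention_rate(records, now):
    """records: iterable of (created_at, reverted_at) pairs per Action ID."""
    settled = [classify_action(c, r, now) for c, r in records]
    settled = [s for s in settled if s != "pending"]
    if not settled:
        return 0.0
    return sum(s == "intervened" for s in settled) / len(settled)
```

<p>Note that a reversal arriving <em>after</em> the window still counts the action as accepted, which matches the definition above; whether late reversals deserve their own bucket is a product decision.</p>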

<p>For Rollback Rates, raw counts are insufficient because they lack context. To capture the underlying reason, you must implement intercept logic on your application’s Undo or Revert functions. When a user reverses an agent-initiated action, trigger a lightweight microsurvey. This can be a simple three-option modal asking the user to categorize the error as factually incorrect, lacking context, or a simple preference to handle the task manually. This combines quantitative telemetry with qualitative insight. It enables engineering teams to distinguish between a broken algorithm and a user preference mismatch.</p>
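<p>The undo intercept could be wired up roughly like this. The reason labels mirror the three options described above, but the callback shape, log structure, and names are assumptions for illustration:</p>

```python
# The three-option modal described in the text, plus a fallback
# for users who dismiss the survey without answering.
ROLLBACK_REASONS = ("factually_incorrect", "lacking_context", "prefer_manual")

rollback_log = []  # (action_id, reason) pairs for the engineering team

def on_undo(action_id: str, ask_user) -> str:
    """Intercept an undo of an agent-initiated action and capture the why.
    `ask_user` is a callback that shows the microsurvey and returns a choice."""
    reason = ask_user(ROLLBACK_REASONS)
    if reason not in ROLLBACK_REASONS:
        reason = "unspecified"  # survey dismissed; keep the telemetry anyway
    rollback_log.append((action_id, reason))
    return reason
```

<p>Keeping the survey optional but logging every undo preserves the quantitative signal even when the qualitative one is missing.</p>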

<p>These metrics, when tracked consistently and analyzed holistically, provide a robust framework for evaluating the performance of agentic AI systems, allowing for continuous improvement in control, consent, and accountability.</p>

<h2 id="designing-against-deception">Designing Against Deception</h2>

<p>As agents become increasingly capable, we face a new risk: <strong>Agentic Sludge</strong>. Traditional sludge creates friction that makes it hard to cancel a subscription or delete an account. Agentic sludge acts in reverse. It removes friction to a fault, making it too easy for a user to agree to an action that benefits the business rather than their own interests.</p>

<p>Consider an agent assisting with travel booking. Without clear guardrails, the system might prioritize a partner airline or a higher-margin hotel. It presents this choice as the optimal path. The user, trusting the system’s authority, accepts the recommendation without scrutiny. This creates a deceptive pattern where the system optimizes for revenue under the guise of convenience.</p>

<h3 id="the-risk-of-falsely-imagined-competence">The Risk Of Falsely Imagined Competence</h3>

<p>Deception may not stem from malicious intent. It often manifests in AI as <strong>Imagined Competence</strong>. Large Language Models frequently sound authoritative even when incorrect. They present a false booking confirmation or an inaccurate summary with the same confidence as a verified fact. Users may naturally trust this confident tone. This mismatch creates a dangerous gap between system capability and user expectations.</p>

<p>We must design specifically to bridge this gap. If an agent fails to complete a task, the interface must signal that failure clearly. If the system is unsure, it must express uncertainty rather than masking it with polished prose.</p>

<h3 id="transparency-via-primitives">Transparency Via Primitives</h3>

<p>The antidote to both sludge and hallucination is <strong>provenance</strong>. Every autonomous action requires a specific metadata tag explaining the origin of the decision. Users need the ability to inspect the logic chain behind the result.</p>

<p>To achieve this, we must <strong>translate primitives into practical answers</strong>. In software engineering, primitives refer to the core units of information or actions an agent performs. To the engineer, this looks like an API call or a logic gate. To the user, it must appear as a <strong>clear explanation</strong>.</p>

<p>The design challenge lies in mapping these technical steps to human-readable rationales. If an agent recommends a specific flight, the user needs to know why. The interface cannot hide behind a generic suggestion. It must expose the underlying primitive: <em>Logic: Cheapest_Direct_Flight</em> or <em>Logic: Partner_Airline_Priority</em>.</p>
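<p>One way to make this translation concrete is a simple lookup from primitive identifiers to human-readable rationales. The TypeScript sketch below is hypothetical: the primitive names echo the examples above, while the <code>ProvenanceTag</code> shape and the rationale strings are assumptions for illustration.</p>

```typescript
// Hypothetical sketch: mapping system primitives (the agent's internal
// decision logic) to user-facing explanations.
type Primitive =
  | "Cheapest_Direct_Flight"
  | "Partner_Airline_Priority"
  | "Calendar_Slot_Check";

// Each primitive gets a plain-language rationale the interface can show.
const rationales: Record<Primitive, string> = {
  Cheapest_Direct_Flight:
    "I chose this flight because it is the cheapest direct option.",
  Partner_Airline_Priority:
    "This airline is a commercial partner; cheaper alternatives may exist.",
  Calendar_Slot_Check:
    "I proposed 4 PM because your calendar shows it as your next free slot.",
};

// Every autonomous action carries a provenance tag so the user can
// inspect the logic behind the result.
interface ProvenanceTag {
  primitive: Primitive;
  explanation: string;
}

function explain(primitive: Primitive): ProvenanceTag {
  return { primitive, explanation: rationales[primitive] };
}
```

<p>The point of the lookup is that the honest rationale ships alongside the action itself, so a revenue-driven primitive like <code>Partner_Airline_Priority</code> cannot quietly masquerade as the cheapest option.</p>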

<p>Figure 4 illustrates this translation flow. We take the raw system primitive &mdash; the actual code logic &mdash; and map it to a user-facing string. For instance, a primitive that checks a calendar to schedule a meeting becomes a clear statement: <em>I’ve proposed a 4 PM meeting.</em></p>

<p>This level of transparency ensures the agent’s actions appear logical and beneficial. It allows the user to verify that the agent acted in their best interest. By exposing the primitives, we transform a black box into a glass box, ensuring users remain the final authority on their own digital lives.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="437"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg"
			
			sizes="100vw"
			alt="Translation flow"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
<strong>Figure 4</strong>: Translating a primitive into an end-user explanation is key to explaining the behavior of Agentic AI. (<a href='https://files.smashing.media/articles/beyond-generative-rise-agentic-ai-user-centric-design/4-translation-flow.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="setting-the-stage-for-design">Setting The Stage For Design</h2>

<p>Building an agentic system requires a new level of psychological and behavioral understanding. It forces us to move beyond conventional usability testing and into the realm of <strong>trust</strong>, <strong>consent</strong>, and <strong>accountability</strong>. The research methods we’ve discussed, from probing mental models to simulating misbehavior and establishing new metrics, provide a necessary foundation. These practices are the essential tools for proactively identifying where an autonomous system might fail and, more importantly, how to repair the user-agent relationship when it does.</p>

<p>The shift to agentic AI is a <strong>redefinition of the user-system relationship</strong>. We are no longer designing for tools that simply respond to commands; we are designing for partners that act on our behalf. This changes the design imperative from efficiency and ease of use to <strong>transparency</strong>, <strong>predictability</strong>, and <strong>control</strong>.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aWhen%20an%20AI%20can%20book%20a%20flight%20or%20trade%20a%20stock%20without%20a%20final%20click,%20the%20design%20of%20its%20%e2%80%9con-ramps%e2%80%9d%20and%20%e2%80%9coff-ramps%e2%80%9d%20becomes%20paramount.%20It%20is%20our%20responsibility%20to%20ensure%20that%20users%20feel%20they%20are%20in%20the%20driver%e2%80%99s%20seat,%20even%20when%20they%e2%80%99ve%20handed%20over%20the%20wheel.%0a&url=https://smashingmagazine.com%2f2026%2f01%2fbeyond-generative-rise-agentic-ai-user-centric-design%2f">
      
When an AI can book a flight or trade a stock without a final click, the design of its “on-ramps” and “off-ramps” becomes paramount. It is our responsibility to ensure that users feel they are in the driver’s seat, even when they’ve handed over the wheel.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This new reality also elevates the role of the UX researcher. We become the custodians of user trust, working collaboratively with engineers and product managers to define and test the guardrails of an agent’s autonomy. Beyond being researchers, we become advocates for user control, transparency, and the ethical safeguards within the development process. By translating primitives into practical questions and simulating worst-case scenarios, we can build robust systems that are both powerful and safe.</p>

<p>This article has outlined the “what” and “why” of researching agentic AI. It has shown that our traditional toolkits are insufficient and that we must adopt new, forward-looking methodologies. The next article will build upon this foundation, providing the specific design patterns and organizational practices that make an agent’s utility transparent to users, ensuring they can harness the power of agentic AI with confidence and control. The future of UX is about making systems trustworthy.</p>

<p>For additional understanding of agentic AI, you can explore the following resources:</p>

<ul>
<li><a href="https://ai.googleblog.com/blog/topic/agentic-ai/">Google AI Blog on Agentic AI</a></li>
<li><a href="https://www.microsoft.com/en-us/research/project/agent-ai/">Microsoft’s research on AI Agents</a></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>UX And Product Designer’s Career Paths In 2026</title><link>https://www.smashingmagazine.com/2026/01/ux-product-designer-career-paths/</link><pubDate>Mon, 12 Jan 2026 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2026/01/ux-product-designer-career-paths/</guid><description>How to shape your career path for 2026, with decision trees for designers and a UX skills self-assessment matrix. The only limits for tomorrow are the doubts we have today. Brought to you by &lt;a href="https://smart-interface-design-patterns.com/">Smart Interface Design Patterns&lt;/a>, a &lt;strong>friendly video course on UX&lt;/strong> and design patterns by Vitaly.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2026/01/ux-product-designer-career-paths/" />
              <title>UX And Product Designer’s Career Paths In 2026</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>UX And Product Designer’s Career Paths In 2026</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2026-01-12T10:00:00&#43;00:00" class="op-published">2026-01-12T10:00:00+00:00</time>
                  <time datetime="2026-01-12T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>As the new year begins, I often find myself in a strange place &mdash; reflecting on the previous year and looking forward to the year ahead. And when I speak with colleagues and friends at this time of year, it typically doesn’t take long for a conversation about <strong>career trajectory</strong> to emerge.</p>

<p>So I thought I’d share a few thoughts on <strong>how to shape your career path</strong> as we are looking ahead to 2026. Hopefully you’ll find it useful.</p>

<h2 id="run-a-retrospective-for-last-year">Run A Retrospective For Last Year</h2>

<p>To be honest, for many years, I was mostly reacting. Life was happening <em>to</em> me, rather than me shaping the life that I was living. I was <strong>making progress reactively</strong>, and I was looking out for all kinds of opportunities. It was easy and quite straightforward &mdash; I was floating and jumping between projects and calls, making things work as I went along.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.linkedin.com/posts/lilyyue_uxdesign-careergrowth-productdesign-activity-7343261653901144066-8nLf?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAABGhVGkBT-YMeCMZd9fKgSaE_H8BrQil438">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="996"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg"
			
			sizes="100vw"
			alt="An overview of diverse career paths, from UX research to design lead, to senior designer and design consultant."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
<a href='https://www.linkedin.com/posts/lilyyue_uxdesign-careergrowth-productdesign-activity-7343261653901144066-8nLf/'>Diverse career paths for UX Designers</a>, a helpful overview by Lily Yue. You might find yourself doing a little bit of everything in this overview &mdash; but you need to know where you want to go next. (<a href='https://files.smashing.media/articles/product-designer-career-paths/4-career-paths-ux-designers.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Years ago, my wonderful wife introduced <strong>one little annual ritual</strong> which changed that dynamic entirely. By the end of each year, we sit with nothing but paper and pencil and run a thorough <strong>retrospective of the past year</strong> &mdash; successes, mistakes, good moments, bad moments, things we loved, and things we wanted to change.</p>

<p>We look back at our memories, projects, and events that stood out that year. And then we take notes for where we stand in terms of personal growth, professional work, and social connections &mdash; and <strong>how we want to grow</strong>.</p>

<p>These are <strong>the questions</strong> I’m trying to answer there:</p>

<ul>
<li>What did I find <strong>most rewarding</strong> and fulfilling last year?</li>
<li>What <strong>fears and concerns slowed me down</strong> the most?</li>
<li>What could I <strong>leave behind</strong>, give away or simplify?</li>
<li>What tasks would be <strong>good to delegate</strong> or automate?</li>
<li>What are my <strong>3 priorities to grow</strong> this upcoming year?</li>
<li>What <strong>times do I block</strong> in my calendar for my priorities?</li>
</ul>

<p>It probably sounds quite clichéd, but these 4&ndash;5 hours of our time every year set a <strong>foundation for changes</strong> to introduce in the year ahead. This little exercise shapes the trajectory that I’ll be designing and prioritizing around next year. I can’t recommend it enough.</p>

<h2 id="ux-skills-self-assessment-matrix">UX Skills Self-Assessment Matrix</h2>

<p>Another little tool that I found helpful for professional growth is the <a href="https://www.figma.com/community/file/1142203484282738794/design-skills-matrix">UX Skills Self-Assessment Matrix</a> (a Figma template) by Maigen Thomas. It’s a neat little tool designed to help you understand what you’d like to do more of, what you’d prefer to do less of, and where your <strong>current learning curve</strong> lies vs. where you feel <strong>confident in your expertise</strong>.</p>














<figure class="
  
  
  ">
  
    <a href="https://www.figma.com/community/file/1142203484282738794/design-skills-matrix">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="1223"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg"
			
			sizes="100vw"
			alt="A ‘Design Skills Self-Assessment Matrix’ with a colorful header and a grid below plotting skills across ‘Still Learning,’ ‘Want to Do More,’ ‘Expert at This,’ and ‘Want to Do Less’ quadrants."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://www.figma.com/community/file/1142203484282738794/design-skills-matrix'>A neat little tool</a> to identify where you stand, what you want to do less of, more of, and what you’d like to learn. (<a href='https://files.smashing.media/articles/product-designer-career-paths/2-design-skills-self-assessment-matrix.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The exercise typically takes around 20&ndash;30 minutes, and it helps identify your <strong>UX skills sweet spot</strong> &mdash; typically the upper half of the canvas. You’ll also pinpoint areas where you’re improving, and those you’re already pretty good at. It’s a neat reality check &mdash; and a great reminder as you review it year after year. Highly recommended!</p>

<h2 id="ux-career-levels-for-design-systems-teams">UX Career Levels For Design Systems Teams</h2>

<p>A while back, <a href="https://www.linkedin.com/in/javiercuello/?lipi=urn%3Ali%3Apage%3Ad_flagship3_pulse_read%3BarGUwB3ET%2FyNMblHCHHbCg%3D%3D">Javier Cuello</a> put together Career Levels For Design System Teams (Figma Kit), a neat little helper for product designers looking to transition into design systems teams, or for managers building a career matrix for them. The model maps progression levels (Junior, Semi-Senior, Senior, and Staff) to key development areas, with the skills and responsibilities required at each stage.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.figma.com/community/file/1444342030560583543/design-systems-product-design-career-levels">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="877"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg"
			
			sizes="100vw"
			alt="UX Career Levels for design system teams"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Career Levels For Design System Teams (Figma Kit). Kindly put together by Javier Cuello. (<a href='https://files.smashing.media/articles/product-designer-career-paths/5-ux-career-levels.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What I find quite valuable in Javier’s model is the mapping of strategy and impact, along with systematic thinking and governance. While as designers we often excel at tactical design &mdash; from elegant UI components to file organization in Figma &mdash; we often lag a little bit behind in strategic decisions.</p>

<p>To a large extent, the difference between levels of seniority is moving from tactical initiatives to strategic decisions. It’s proactively looking for organizational challenges that a system can help with. It’s finding and inviting key people early. It’s also about embedding yourself in other teams when needed.</p>

<p>But it’s also keeping an eye out for situations when design systems fail, and paving the way to make it more difficult to fail. And: adapting the workflow around the design system to ship on a tough deadline when needed, but with a viable plan of action on how and when to pay back accumulating UX debt.</p>

<h2 id="find-your-product-design-career-path">Find Your Product Design Career Path</h2>

<p>When we speak about career trajectory, it’s almost always assumed that the career progression inevitably leads to <strong>management</strong>. However, this hasn’t been a path I preferred, and it isn’t always the ideal path for everyone.</p>

<p>Personally, I prefer to work on intricate fine details of UX flows and deep dive into <strong>complex UX challenges</strong>. However, eventually it might feel like you’ve stopped growing &mdash; perhaps you’ve hit a ceiling in your organization, or you have little room for exploration and learning. So where do you go from there?</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://uxdesign.cc/fixing-product-design-career-paths-with-the-mirror-model-76152b7e547?sk=v2%2F0a6cb162-4def-4f1c-ac5e-b145597646c7">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="562"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg"
			
			sizes="100vw"
			alt="A complex flowchart titled ‘Product Design Career Paths: The Mirror Model’ in blue, detailing two parallel career progression tracks: individual contributor and management."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://uxdesign.cc/fixing-product-design-career-paths-with-the-mirror-model-76152b7e547?sk=v2%2F0a6cb162-4def-4f1c-ac5e-b145597646c7'>The Mirror Model</a> (<a href='https://drive.google.com/file/d/1BePJyrd8q0D1mVgIV2h8ghds8IbbyzBR/view'>PDF</a>) is a helpful way to visualize creative and managerial paths with equivalent influence and compensation. (<a href='https://files.smashing.media/articles/product-designer-career-paths/3-producst-design-career-paths-mirror-model.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A helpful model to think about your next steps is to consider Ryan Ford’s <a href="https://uxdesign.cc/fixing-product-design-career-paths-with-the-mirror-model-76152b7e547?sk=v2%2F0a6cb162-4def-4f1c-ac5e-b145597646c7">Mirror Model</a>. It explores <strong>career paths and expectations</strong> that you might want to consider to advocate for a position or influence that you wish to achieve next.</p>

<p>That’s typically something you might want to study and <strong>decide on your own first</strong>, and then bring up for discussion. Usually, there are internal opportunities out there. So before changing companies, you could switch teams, or you could shape a more fulfilling role <strong>internally</strong>.</p>

<p>You just need to find it first. Which brings us to the next point.</p>

<h2 id="proactively-shaping-your-role">Proactively Shaping Your Role</h2>

<p>I keep reminding myself of <a href="https://www.linkedin.com/in/jasonmesut?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAAawX0BwaORuqGb58dyVh03pJIPpuU6s68&amp;lipi=urn%3Ali%3Apage%3Ad_flagship3_pulse_read%3BarGUwB3ET%2FyNMblHCHHbCg%3D%3D">Jason Mesut</a>’s observation that when we speak about career ladders, it assumes that we can either go up, down, or fall off. But in reality, you can <strong>move up, move down, and move sideways</strong>. As Jason says, “promoting just the vertical progression doesn’t feel healthy, especially in such a diverse world of work, and diverse careers ahead of us all.”</p>

<p>So, in the attempt to climb up, perhaps consider also moving sideways. <strong>Zoom out and explore</strong> where your interests are. Focus on the much-needed intersection between business needs and user needs. Between problem space and solution space. Between strategic decisions and operations. Then zoom in. In the end, you might not need to climb anything &mdash; but rather just find that right spot that brings your expertise to light and makes the biggest impact.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.linkedin.com/posts/lilyyue_uxdesign-careerpath-careergrowth-activity-7345798373368578050-6c77?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAABGhVGkBT-YMeCMZd9fKgSaE_H8BrQil438">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="996"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg"
			
			sizes="100vw"
			alt="A flowchart titled Career Decision Map for UX Designers, put together by Lily Yue"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://www.linkedin.com/feed/update/urn:li:activity:7345798373368578050/'>A career decision map for UX Designers</a>. Kindly put together by Lily Yue. (<a href='https://files.smashing.media/articles/product-designer-career-paths/1-career-decision-map-ux-designers.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Sometimes these roles might involve acting as a <strong>“translator”</strong> between design and engineering, specializing in UX and accessibility. They could also involve <strong>automating design processes</strong> with AI, improving workflow efficiency, or focusing on internal search UX or legacy systems.</p>

<p>These roles are never advertised, but they have a <strong>tremendous impact</strong> on a business. If you spot such a gap and proactively bring it to senior management, you might be able to shape a role that brings your strengths into the spotlight, rather than trying to fit into a predefined position.</p>

<h2 id="what-about-ai">What About AI?</h2>

<p>One noticeable skill that is worth sharpening is, of course, around <strong>designing AI experiences</strong>. The point isn’t about finding ways to replace design work with AI automation. Today, it seems like people crave nothing more than actual human experience &mdash; created by humans, with attention to humans’ needs and intentions, designed and built and tested with humans, embedding human values and working well for humans.</p>














<figure class="
  
  
  ">
  
    <a href="https://www.linkedin.com/posts/vitalyfriedman_ux-ai-design-activity-7340989532290306051-fLtc/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACDcgQBa_vsk5breYKwZAgyIhsHtJaFbL8">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="1017"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg"
			
			sizes="100vw"
			alt="Design Patterns For AI Interfaces, including chatbot widget, inline overlays, infinite canvas, center stage, left panel, right panel."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Design Patterns For AI Interfaces, a quick overview by Sharang Sharma. (<a href='https://files.smashing.media/articles/product-designer-career-paths/6-design-patterns-ai-interfaces.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If anything, we should be more <strong>obsessed with humans</strong>, not with AI. If anything, AI amplifies the need for authenticity, curation, critical thinking, and strategy. And that’s a skill that will be very much needed in 2026. We need designers who can design beautiful AI experiences (and frankly, I do have a <a href="https://ai-design-patterns.com/">whole course</a> on that) &mdash; experiences people understand, value, use, and <strong>trust</strong>.</p>

<p>No technology can create <strong>clarity, structure, trust, and care</strong> out of poor content, poor metadata, and poor value for end users. If we understand the fundamentals of good design, and then design with humans in mind, and consider humans’ needs and wants and struggles, we can help users and businesses bridge that gap in a way AI never could. And that’s what you and perhaps your renewed role could bring to the table.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>The most important thing about all these little tools and activities is that they help you <strong>get more clarity</strong>. Clarity on where you currently stand and where you actually want to grow towards.</p>

<p>These are <strong>wonderful conversation starters</strong> to help you find a path you’d love to explore, on your own or with your manager. However, just one thing I’d love to emphasize:</p>

<blockquote>Absolutely feel free to refine the role to amplify your strengths, rather than trying to match a particular role perfectly.</blockquote>

<p>Don’t forget: you bring <strong>incredible value</strong> to your team and to your company. Sometimes it just needs to be guided to the right spot and brought into the spotlight.</p>

<p>You’ve got this &mdash; and happy 2026! ✊🏼✊🏽✊🏾</p>

<h2 id="meet-design-patterns-for-ai-interfaces">Meet “Design Patterns For AI Interfaces”</h2>

<p>Meet <strong>design patterns that work</strong> for AI products in <a href="https://ai-design-patterns.com/"><strong>Design Patterns For AI Interfaces</strong></a>, Vitaly’s shiny new <strong>video course</strong> with practical examples from real-life products &mdash; with a <a href="https://smashingconf.com/online-workshops/workshops/ai-interfaces-vitaly-friedman/">live UX training</a> happening soon. <a href="https://www.youtube.com/watch?v=jhZ3el3n-u0">Jump to a free preview</a>. Use code <strong>SNOWFLAKE</strong> to <strong>save 20%</strong> off!</p>

<p><figure class="break-out article__image" style="margin-bottom: 0"><a href="https://ai-design-patterns.com/"><img style="border-radius:11px" loading="lazy" decoding="async" fetchpriority="low" width="800" height="414" srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png 400w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png 800w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png 1200w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png 1600w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png 2000w" src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/product-designer-career-paths/design-patterns-ai-interfaces.png" sizes="100vw" alt="Design Patterns For AI Interfaces promo picture"></a><figcaption class="op-vertical-bottom">Meet <a href="https://ai-design-patterns.com/">Design Patterns For AI Interfaces</a>, Vitaly’s video course on interface design &amp; UX.</figcaption></figure>
<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>450<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>799<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3476562?price_id=4401578" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">30 video lessons (10h) + <a href="https://smashingconf.com/online-workshops/workshops/ai-interfaces-vitaly-friedman/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>275<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3476562?price_id=4397456" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">30 video lessons (10h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3570306?price_id=4503439">UX Bundle with 3 video courses.</a></p></div><span></span></div></div></p>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li><a href="https://www.figma.com/community/file/1142203484282738794/design-skills-matrix">UX Skills Self-Assessment Matrix (Figma template)</a>, by Maigen Thomas</li>
<li>“<a href="https://uxdesign.cc/fixing-product-design-career-paths-with-the-mirror-model-76152b7e547?sk=v2%2F0a6cb162-4def-4f1c-ac5e-b145597646c7">Product Designer’s Career Levels Paths</a>” + <a href="https://drive.google.com/file/d/1BePJyrd8q0D1mVgIV2h8ghds8IbbyzBR/view">PNG</a>, by Ryan Ford</li>
<li><a href="https://www.linkedin.com/feed/update/urn:li:activity:7345798373368578050/">Career Decision Map For UX Designers (PNG)</a>, by Lily Yue</li>
<li><a href="https://www.linkedin.com/posts/lilyyue_uxdesign-careergrowth-productdesign-activity-7343261653901144066-8nLf/">Diverse Career Paths For UX Designers (PNG)</a>, by Lily Yue</li>
<li><a href="https://medium.com/shapingdesign">Shaping Designers and Design Teams</a>, by Jason Mesut</li>
<li><a href="https://miro.com/templates/skills-map-design/">UX Skills Self-Assessment Map template (Miro)</a>, by Paóla Quintero</li>
<li><a href="https://www.nngroup.com/articles/skill-mapping/">UX Skill Mapping Template (Google Sheets)</a>, by Rachel Krause, NN/g</li>
<li>“<a href="https://shannonethomas.com/2023/08/08/growth-framework">Design Team’s Growth Matrix</a>”, by Shannon E. Thomas</li>
<li><a href="https://www.figma.com/community/file/1220482745322443565/figma-product-design-writing-career-levels">Figma Product Design &amp; Writing Career Levels</a>, by Figma</li>
<li><a href="https://miro.com/templates/content-design-role-frameworks/">Content Design Role Frameworks</a>, by Tempo</li>
<li>“<a href="https://dscout.com/people-nerds/uxr-career-framework">UX Research Career Framework</a>”, by Nikki Anderson</li>
<li><a href="https://uxchrisnguyen.gumroad.com/l/uxcareerladder"><em>UX Career Ladders</em> (free eBook)</a>, by Christopher Nguyen</li>
<li><a href="https://docs.google.com/spreadsheets/d/1cNkL4nY3Z8vTyIpIsvqpaFortYZfF-VIoUE0mkbkRMo/edit?gid=0#gid=0">Product Design Level Expectations</a>, by Aaron James</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>How To Design For (And With) Deaf People</title><link>https://www.smashingmagazine.com/2025/12/how-design-for-with-deaf-people/</link><pubDate>Tue, 30 Dec 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/how-design-for-with-deaf-people/</guid><description>Practical UX guidelines to keep in mind for 466 million people who experience hearing loss. More design patterns in &lt;a href="https://smart-interface-design-patterns.com/">Smart Interface Design Patterns&lt;/a>, a &lt;strong>friendly video course on UX&lt;/strong> and design patterns by Vitaly.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/how-design-for-with-deaf-people/" />
              <title>How To Design For (And With) Deaf People</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Design For (And With) Deaf People</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2025-12-30T10:00:00&#43;00:00" class="op-published">2025-12-30T10:00:00+00:00</time>
                  <time datetime="2025-12-30T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>When we think about people who are deaf, we often assume stereotypes, such as “disabled” older adults with <strong>hearing aids</strong>. However, this perception is far from the truth and often leads to poor decisions and broken products.</p>

<p>Let’s look at when and how deafness emerges, and how to design better experiences <strong>for people with hearing loss</strong>.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="783"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg"
			
			sizes="100vw"
			alt="A diagram illustrates sign language with a torso, hands, and blue lines indicating &#39;SPACE&#39; and &#39;TIME,&#39; beside blue text stating &#39;Sign language is four-dimensional."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Sign language is 4-dimensional, including 3D space and time, and often includes facial expressions, too. From a wonderful talk by <a href='https://www.youtube.com/watch?v=M0cR_HTeWUo'>Marie van Driessche</a>. (<a href='https://files.smashing.media/articles/how-design-for-with-deaf-people/1-sign-language.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="deafness-is-a-spectrum">Deafness Is A Spectrum</h2>

<p>Deafness spans a <strong>broad continuum</strong>, from minor to profound hearing loss. Around 90&ndash;95% of deaf people <a href="https://scholarworks.wmich.edu/jssw/vol51/iss1/11/">come from hearing families</a>, and deafness often isn’t merely a condition that people are born with. It frequently occurs due to <strong>exposure to loud noises</strong>, and it also emerges with age, disease, and accidents.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.didyousaydeaf.com/hearing-loss-and-how-hearing-loss-works">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="814"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg"
			
			sizes="100vw"
			alt="A chart showing sound frequencies and decibel levels, illustrating types of hearing loss and common everyday sounds."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A <a href='https://www.didyousaydeaf.com/hearing-loss-and-how-hearing-loss-works'>chart</a> showing sound frequencies and decibel levels, illustrating types of hearing loss and common everyday sounds. (<a href='https://files.smashing.media/articles/how-design-for-with-deaf-people/2-chart-sound-frequencies-decibel-levels.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The loudness of sound is measured in units called <strong>decibels (dB)</strong>. Everybody is on the <a href="https://www.didyousaydeaf.com/hearing-loss-and-how-hearing-loss-works">spectrum of deafness</a>, from normal hearing (up to 15 dB) to profound hearing loss (91+ dB):</p>

<ul>
<li><strong>Slight hearing loss</strong>, 16&ndash;25 dB<br />
At a 16 dB hearing loss, a person can miss up to 10% of speech when the speaker is more than 3 feet away.</li>
<li><strong>Mild hearing loss</strong>, 26&ndash;40 dB<br />
Soft sounds are hard to hear, including whispering, which is around 40 dB in volume. Soft speech sounds spoken at a normal volume are also more difficult to hear. At a 40 dB hearing loss, a person may miss 50% of meeting discussions.</li>
<li><strong>Moderate hearing loss</strong>, 41&ndash;55 dB<br />
A person may hear almost no speech when another person is talking at a normal volume. At a 50 dB hearing loss, a person may miss up to 80% of speech.</li>
<li><strong>Moderately severe hearing loss</strong>, 56&ndash;70 dB<br />
A person may have problems hearing the sound of a dishwasher (60 dB). At 70 dB, they might miss almost all speech.</li>
<li><strong>Severe hearing loss</strong>, 71&ndash;90 dB<br />
A person will hear no speech when someone is talking at a normal level, and may hear only some very loud noises: a vacuum (70 dB), a blender (78 dB), a hair dryer (90 dB).</li>
<li><strong>Profound hearing loss</strong>, 91+ dB<br />
A person will hear no speech and at most very loud sounds, such as a music player at full volume (100 dB), which would be damaging for people with normal hearing, or a car horn (110 dB).</li>
</ul>
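The thresholds above lend themselves to a tiny lookup table. Below is a minimal sketch (the function name and data structure are my own, for illustration only) that maps a measured hearing level in decibels to the category it falls into:

```javascript
// Categories and thresholds follow the list above; "max" is the upper
// bound of each band in dB of hearing loss.
const HEARING_LEVELS = [
  { max: 15, label: "Normal hearing" },
  { max: 25, label: "Slight hearing loss" },
  { max: 40, label: "Mild hearing loss" },
  { max: 55, label: "Moderate hearing loss" },
  { max: 70, label: "Moderately severe hearing loss" },
  { max: 90, label: "Severe hearing loss" },
  { max: Infinity, label: "Profound hearing loss" },
];

// Return the first band whose upper bound the given level does not exceed.
function classifyHearingLevel(dB) {
  return HEARING_LEVELS.find((level) => dB <= level.max).label;
}
```

The same table could drive, say, a hypothetical audiogram visualization or a settings screen that adapts defaults to a user’s stated hearing level.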

<p>It’s worth mentioning that loss of hearing can also be situational and temporary, as people with “normal” hearing (0 to 25 dB hearing loss) will always encounter situations where they can’t hear, e.g., due to <strong>noisy environments</strong>.</p>

<h2 id="useful-things-to-know-about-deafness">Useful Things To Know About Deafness</h2>

<p>Assumptions are always dangerous, and in the case of deafness, there are quite a few that aren’t accurate. For example, most deaf people actually do not know a sign language &mdash; only around <a href="https://www.accessibility.com/blog/do-all-deaf-people-use-sign-language">1% in the US</a> do.</p>

<p>Also, despite our expectations, there is actually <strong>no universal sign language</strong> that everybody uses. For example, British signers often cannot understand American signers. Globally, around <a href="https://education.nationalgeographic.org/resource/sign-language/">300 different sign languages</a> are in active use.</p>

<blockquote>“We never question making content available in different written or spoken languages, and the same should apply to signed languages.”<br /><br />&mdash; <a href="https://www.linkedin.com/feed/update/urn:li:activity:7178702360649547778?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7178702360649547778%2C7178776416979718144%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287178776416979718144%2Curn%3Ali%3Aactivity%3A7178702360649547778%29">Johanna Steiner</a></blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://uxplanet.org/podcasts-for-the-deaf-d4d9b5f3ce99">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="517"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg"
			
			sizes="100vw"
			alt="Three smartphone screens displaying parts of a podcast app, including a browsing page, a now-playing screen with an avatar, and a live transcription feature."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://uxplanet.org/podcasts-for-the-deaf-d4d9b5f3ce99'>Heardio</a> concept: making podcasts accessible for deaf people &mdash; with live transcription and sign language avatars. (<a href='https://files.smashing.media/articles/how-design-for-with-deaf-people/3-heardio.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Sign languages are <a href="https://www.youtube.com/watch?v=M0cR_HTeWUo&amp;t=188">not just gestures or pantomime</a>. They are <strong>4D spatial languages</strong> with their own grammar and syntax, separate from spoken languages, and they don’t have a written form. They rely heavily on facial expression to convey meaning and emphasis. And they are also not universal &mdash; every country has its own sign language and dialects.</p>

<ul>
<li>You can only understand about <strong>30% of words</strong> via lip-reading.</li>
<li>Most deaf people do not know any <strong>sign language</strong>.</li>
<li>Many sign languages have local dialects that can be hard to interpret.</li>
<li>Not all deaf people are fluent signers and often rely on <strong>visual clues</strong>.</li>
<li>For many deaf people, a spoken language is their <strong>second language</strong>.</li>
<li><a href="https://www.youtube.com/watch?v=M0cR_HTeWUo"><strong>Sign language is 4-dimensional</strong></a>, incorporating 3D space, time and also facial expressions.</li>
</ul>

<h2 id="how-to-communicate-respectfully">How To Communicate Respectfully</h2>

<p>Keep in mind that many deaf people use the spoken language of their country as <strong>their second language</strong>. So, to communicate with a deaf person, it’s best to ask in writing. Don’t ask how much a person can understand, or whether they can lip-read you.</p>

<p>However, as Rachel Edwards <a href="https://www.linkedin.com/posts/rachel-edwards-scotland_excellent-overview-on-designing-for-ddeaf-activity-7409172866983804928-489h?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACDcgQBa_vsk5breYKwZAgyIhsHtJaFbL8">noted</a>, don’t assume someone is comfortable with written language because they are deaf. Sometimes their literacy may be low, and so providing information as text and assuming that covers your deaf users might not be the answer.</p>

<p>Also, don’t assume that every deaf person can lip-read. You can see only about 30% of words on someone’s mouth. That’s why many deaf people need <strong>additional visual cues</strong>, like text or cued speech.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.healthyhearing.com/report/52264-Universal-signs-for-hearing-loss">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="675"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png"
			
			sizes="100vw"
			alt="Seven accessibility symbols for people with hearing loss are displayed: International Symbol of Access, assistive listening devices, telephone typewriter, volume control telephone, sign language interpretation, closed captioning, and open captioning."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      7 accessibility symbols for people with hearing loss. <a href='https://www.healthyhearing.com/report/52264-Universal-signs-for-hearing-loss'>Universal signs for hearing loss</a>. (<a href='https://files.smashing.media/articles/how-design-for-with-deaf-people/4-accessibility-symbols.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It’s also crucial to use <strong>respectful language</strong>. Deaf people do not always see themselves as <em>disabled</em>, but rather as a cultural and linguistic minority with a unique identity. Others, as Meryl Evans has <a href="https://www.linkedin.com/feed/update/urn:li:activity:7178702360649547778?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7178702360649547778%2C7178721132345270272%29&amp;dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287178721132345270272%2Curn%3Ali%3Aactivity%3A7178702360649547778%29">noted</a>, don’t identify as <em>deaf</em> or <em>hard of hearing</em>, but rather as “hearing impaired”. So, it’s mostly up to an individual how they want to identify.</p>

<ul>
<li><strong>Deaf</strong> (Capital ‘D’)<br />
Culturally Deaf people who have been deaf since birth or before learning to speak. Sign language is often the first language, and written language is the second.</li>
<li><strong>deaf</strong> (Lowercase ‘d’)<br />
People who developed hearing loss later in life. Used by people who feel closer to the hearing/hard-of-hearing world and prefer to communicate in writing and/or orally.</li>
<li><strong>Hard of Hearing</strong><br />
People with mild to moderate hearing loss who typically communicate orally and use hearing aids.</li>
</ul>

<p>In general, <strong>avoid “hearing impairment”</strong> if you can, and use <em>Deaf</em> (for those who have been deaf for most of their lives), <em>deaf</em> (for those who became deaf later), or <em>hard of hearing</em> (HoH) for partial hearing loss. But either way, ask politely first and then respect the person’s preferences.</p>

<h2 id="practical-ux-guidelines">Practical UX Guidelines</h2>

<p>When designing UIs and content, consider these key accessibility guidelines for deaf and hard-of-hearing users:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://prospect.org.uk/article/designing-content-for-users-who-are-deaf-or-hard-of-hearing/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg"
			
			sizes="100vw"
			alt="An infographic on a teal background titled &#39;Designing for users who are deaf or hard of hearing,&#39; listing &#39;Do&#39;s and Don&#39;ts&#39; with icons."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      How to design for users who are deaf or hard of hearing, a Gov.uk-inspired poster by <a href='https://prospect.org.uk/article/designing-content-for-users-who-are-deaf-or-hard-of-hearing/'>Prospect.org.uk</a>. <a href='https://d28j9ucj9uj44t.cloudfront.net/uploads/2021/09/Designing_for_accessibility6.pdf'>Download a printable PDF</a>. (<a href='https://files.smashing.media/articles/how-design-for-with-deaf-people/5-designing-deaf-users.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<ol>
<li><strong>Don’t make the phone required</strong> or the only method of contact.</li>
<li><strong>Provide text alternatives</strong> for all audible alerts or notices.</li>
<li><strong>Add haptic feedback</strong> on mobile (e.g., vibration patterns).</li>
<li><strong>Ensure good lighting</strong> to help people see facial expressions.</li>
<li><strong>Prefer circular seating</strong>, which usually works better, so everyone can see each other’s faces.</li>
<li><strong>Always include descriptions of non-spoken sounds</strong> (e.g., rain, laughter) in your content.</li>
<li><strong>Add a transcript and closed captions</strong> for audio and video.</li>
<li><strong>Clearly identify each speaker</strong> in all audio and video content.</li>
<li><strong>Design multiple ways to communicate</strong> in every instance (online + in-person).</li>
<li><strong>Invite video participants to keep the camera on</strong> to facilitate lip-reading and the viewing of facial expressions, which convey tone.</li>
<li><strong>Always test products with the actual community</strong>, instead of making assumptions for them.</li>
</ol>
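Guidelines 2 and 3 can be sketched in a few lines of browser JavaScript: never rely on sound alone, and pair every audible alert with a visible message plus vibration where the device supports it. This is a minimal illustration rather than a production pattern; the <code>notify</code> helper and its injected <code>env</code> parameter are hypothetical names, while <code>navigator.vibrate</code> and <code>role="alert"</code> are real web platform APIs.

```javascript
// A minimal sketch of "provide text alternatives" + "add haptic feedback".
// `notify` and `env` are illustrative names, not from any library.
function notify(message, env = globalThis) {
  const channels = ["visual"]; // a text alternative is always produced

  // The Vibration API is only available in some (mostly mobile) browsers,
  // so it is feature-detected rather than assumed.
  if (env.navigator && typeof env.navigator.vibrate === "function") {
    env.navigator.vibrate([200, 100, 200]); // short buzz, pause, buzz
    channels.push("haptic");
  }

  // Render the alert as text; role="alert" also makes most screen readers
  // announce the message as soon as it is inserted.
  if (env.document) {
    const note = env.document.createElement("div");
    note.setAttribute("role", "alert");
    note.textContent = message;
    env.document.body.append(note);
  }
  return channels;
}
```

Passing the environment in explicitly keeps the sound-independent logic testable outside a browser; in a page you would simply call <code>notify("Upload complete")</code>.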

<h2 id="wrapping-up">Wrapping Up</h2>

<p>I keep repeating myself like a broken record, but better accessibility <strong>always benefits everyone</strong>. When we improve experiences for some groups of people, it often improves experiences for entirely different groups as well.</p>

<p>As Marie Van Driessche rightfully noted, to design a great experience for accessibility, we must design <strong>with</strong> people, rather than <em>for</em> them. And that means always including people with <strong>lived experience of exclusion</strong> in the design process &mdash; as they are the true experts.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAccessibility%20never%20happens%20by%20accident%20%e2%80%94%20it%e2%80%99s%20a%20deliberate%20decision%20and%20a%20commitment.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fhow-design-for-with-deaf-people%2f">
      
Accessibility never happens by accident — it’s a deliberate decision and a commitment.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>No digital product is neutral. There must be a deliberate effort to make products and services more accessible. Not only does it benefit everyone, but it also shows what a company stands for and values.</p>

<p>And once you do have a commitment, it will be so much easier to <strong>sustain accessibility</strong> than to add it at the last minute as a crutch &mdash; when it’s already too late to do it right and way too expensive to do it well.</p>

<h2 id="meet-smart-interface-design-patterns">Meet “Smart Interface Design Patterns”</h2>

<p>You can find more details on <strong>design patterns and UX</strong> in <a href="https://smart-interface-design-patterns.com/"><strong>Smart Interface Design Patterns</strong></a>, our <strong>15h-video course</strong> with 100s of practical examples from real-life projects &mdash; with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables &mdash; with 5 new segments added every year. <a href="https://www.youtube.com/watch?v=jhZ3el3n-u0">Jump to a free preview</a>. Use code <a href="https://smart-interface-design-patterns.com"><strong>BIRDIE</strong></a> to <strong>save 15%</strong> off.</p>

<figure style="margin-bottom: 0"><a href="https://smart-interface-design-patterns.com/"><img style="border-radius: 11px" decoding="async" fetchpriority="low" width="950" height="492" srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg 400w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg 800w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg 1200w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg 1600w,
https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg 2000w" src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/7cc4e1de-6921-474e-a3fb-db4789fc13dd/b4024b60-e627-177d-8bff-28441f810462.jpeg" sizes="100vw" alt="Smart Interface Design Patterns"></a><figcaption class="op-vertical-bottom">Meet <a href="https://smart-interface-design-patterns.com/">Smart Interface Design Patterns</a>, our video course on interface design &amp; UX.</figcaption></figure>

<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>495<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>699<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3951439" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">25 video lessons (15h) + <a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>300<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3950630" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">40 video lessons (15h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3082557?price_id=3951421">UX Bundle with 2 video courses.</a></p></div><span></span></div></div>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li><a href="https://www.youtube.com/watch?v=M0cR_HTeWUo">Designing For Deaf People Helps Everyone</a>, by Marie Van Driessche</li>
<li>“<a href="https://medium.com/@paulrobwest/ux-ui-design-considerations-for-the-deaf-deaf-and-hard-of-hearing-dbfe28850fbe">Design considerations for the Deaf, deaf, and hard of hearing</a>”, by Paul Roberts</li>
<li><a href="https://www.youtube.com/watch?v=AEXKOASrTVM">Beyond Video Captions and Sign Language</a>, by Svetlana Kouznetsova</li>
<li>“<a href="https://www.smashingmagazine.com/2023/01/closed-captions-subtitles-ux/">Best Practices For CC and Subtitles UX</a>”, by Vitaly Friedman</li>
<li><a href="https://www.accessi.org/blog/web-accessibility-for-deaf-users/">Web Accessibility for Deaf Users</a></li>
<li><a href="https://www.inclusivedesigntoolkit.com/UChearing/hearing.html">Inclusive Design Toolkit: Hearing</a></li>
<li>“<a href="https://funkybrownchick.substack.com/p/i-hear-you-really-i-do-">What It&rsquo;s Like To Be Born Hard of Hearing</a>”, by Twanna A. Hines, M.S.</li>
<li>“<a href="https://uxplanet.org/podcasts-for-the-deaf-d4d9b5f3ce99">Accessibility: Podcasts for the deaf</a>”, by Mubarak Alabidun</li>
</ul>

<h3 id="useful-books">Useful Books</h3>

<ul>
<li><a href="https://audio-accessibility.com/book/"><em>Sound Is Not Enough</em></a>, by Svetlana Kouznetsova</li>
<li><em>Mismatch: How Inclusion Shapes Design</em>, by Kat Holmes</li>
<li><em>Building for Everyone: Extend Your Product&rsquo;s Reach Through Inclusive Design</em> (+ <a href="https://design.google/library/building-for-everyone">free excerpt</a>), by Annie Jean-Baptiste</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>Giving Users A Voice Through Virtual Personas</title><link>https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/</link><pubDate>Tue, 23 Dec 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/</guid><description>Turn scattered user research into AI-powered personas that give anyone consolidated multi-perspective feedback from a single question.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/" />
              <title>Giving Users A Voice Through Virtual Personas</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Giving Users A Voice Through Virtual Personas</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-12-23T10:00:00&#43;00:00" class="op-published">2025-12-23T10:00:00+00:00</time>
                  <time datetime="2025-12-23T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>In my <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">previous article</a>, I explored how AI can help us create functional personas more efficiently. We looked at building personas that focus on what users are trying to accomplish rather than demographic profiles that look good on posters but rarely change design decisions.</p>

<p>But creating personas is only half the battle. The bigger challenge is getting those insights into the hands of people who need them, at the moment they need them.</p>

<p>Every day, people across your organization make decisions that affect user experience. Product teams decide which features to prioritize. Marketing teams craft campaigns. Finance teams design invoicing processes. Customer support teams write response templates. All of these decisions shape how users experience your product or service.</p>

<p>And most of them happen without any input from actual users.</p>

<h2 id="the-problem-with-how-we-share-user-research">The Problem With How We Share User Research</h2>

<p>You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens?</p>

<p>The research sits in a shared drive somewhere, slowly gathering digital dust. The personas get referenced in kickoff meetings and then forgotten. The reports get skimmed once and never opened again.</p>

<p>When a product manager is deciding whether to add a new feature, they probably do not dig through last year’s research repository. When the finance team is redesigning the invoice email, they almost certainly do not consult the user personas. They make their best guess and move on.</p>

<p>This is not a criticism of those teams. They are busy. They have deadlines. And honestly, even if they wanted to consult the research, they probably would not know where to find it or how to interpret it for their specific question.</p>

<p>The knowledge stays locked inside the heads of the UX team, who cannot possibly be present for every decision being made across the organization.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="what-if-users-could-actually-speak">What If Users Could Actually Speak?</h2>

<blockquote>What if, instead of creating static documents that people need to find and interpret, we could give stakeholders a way to consult all of your user personas at once?</blockquote>

<p>Imagine a marketing manager working on a new campaign. Instead of trying to remember what the personas said about messaging preferences, they could simply ask: <em>“I’m thinking about leading with a discount offer in this email. What would our users think?”</em></p>

<p>And the AI, drawing on all your research data and personas, could respond with a consolidated view: how each persona would likely react, where they agree, where they differ, and a set of recommendations based on their collective perspectives. One question, synthesized insight across your entire user base.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="496"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png"
			
			sizes="100vw"
			alt="Personas"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      You can question how personas will react to different scenarios based on the research available. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/1-user-research-personas.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This is not science fiction. With AI, we can build exactly this kind of system. We can take all of that scattered research (the surveys, the interviews, the support tickets, the analytics, the personas themselves) and turn it into an <strong>interactive resource</strong> that anyone can query for multi-perspective feedback.</p>

<h2 id="building-the-user-research-repository">Building the User Research Repository</h2>

<p>The foundation of this approach is a centralized repository of everything you know about your users. Think of it as a single source of truth that AI can access and draw from.</p>

<p>If you have been doing user research for any length of time, you probably have more data than you realize. It is just scattered across different tools and formats:</p>

<ul>
<li>Survey results sitting in your survey platform,</li>
<li>Interview transcripts in Google Docs,</li>
<li>Customer support tickets in your helpdesk system,</li>
<li>Analytics data in various dashboards,</li>
<li>Social media mentions and reviews,</li>
<li>Old personas from previous projects,</li>
<li>Usability test recordings and notes.</li>
</ul>

<p>The first step is gathering all of this into one place. It does not need to be perfectly organized. AI is remarkably good at making sense of messy inputs.</p>
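<p>As a rough illustration of that gathering step, here is a minimal Python sketch that merges plain-text exports from different tools into one labelled corpus you could upload to an AI project. The folder layout and file names are hypothetical, stand-ins for wherever your own exports live; this is not a prescribed implementation.</p>

```python
import pathlib

# Hypothetical layout: one subfolder per source (surveys/, interviews/,
# support-tickets/, ...), each holding plain-text or Markdown exports
# from the original tools.
REPO = pathlib.Path("research-repository")

def build_corpus(root: pathlib.Path) -> str:
    """Concatenate every text export into one labelled corpus string."""
    sections = []
    for path in sorted(root.rglob("*")):
        if path.suffix.lower() in {".txt", ".md"}:
            sections.append(
                f"## Source: {path.relative_to(root)}\n\n"
                + path.read_text(encoding="utf-8")
            )
    return "\n\n".join(sections)

# The resulting string (or a file written from it) is what you upload
# to your AI project or workspace:
# corpus = build_corpus(REPO)
```

<p>The point is not the code itself but the principle: every source ends up in one place, clearly labelled, so the AI can cite where an insight came from.</p>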

<p>If you are starting from scratch and do not have much existing research, you can use AI deep research tools to establish a baseline.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="599"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png"
			
			sizes="100vw"
			alt="Research with perplexity"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Online deep research with a tool like Perplexity can be invaluable as a starting point for user research. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/2-user-research-perplexity.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These tools can scan the web for discussions about your product category, competitor reviews, and common questions people ask. This gives you something to work with while you build out your primary research.</p>

<h2 id="creating-interactive-personas">Creating Interactive Personas</h2>

<p>Once you have your repository, the next step is creating personas that the AI can consult on behalf of stakeholders. This builds directly on <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">the functional persona approach I outlined in my previous article</a>, with one key difference: these personas become <strong>lenses</strong> through which the AI analyzes questions, not just reference documents.</p>

<p>The process works like this:</p>

<ol>
<li>Feed your research repository to an AI tool.</li>
<li>Ask it to identify distinct user segments based on goals, tasks, and friction points.</li>
<li>Have it generate detailed personas for each segment.</li>
<li>Configure the AI to consult all personas when stakeholders ask questions, providing consolidated feedback.</li>
</ol>
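<p>To make step 4 concrete, here is a minimal sketch of how the persona documents could be wired into a single system prompt, so that every stakeholder question is answered from all persona perspectives at once. The persona texts, the instruction wording, and the commented-out OpenAI client call are illustrative assumptions, not the article&rsquo;s own implementation.</p>

```python
from typing import Dict

# Standing instructions, paraphrased from the consultation pattern
# described in the article.
CONSULT_INSTRUCTIONS = (
    "You are helping stakeholders understand our users. For every "
    "question: (1) summarise how each persona would likely respond, "
    "(2) note where the personas agree and where they differ, and "
    "(3) give recommendations based on their collective perspectives. "
    "If the research does not cover the topic, say so honestly."
)

def build_system_prompt(personas: Dict[str, str]) -> str:
    """Combine the standing instructions with every persona document."""
    blocks = [CONSULT_INSTRUCTIONS]
    for name, doc in personas.items():
        blocks.append(f"--- Persona: {name} ---\n{doc}")
    return "\n\n".join(blocks)

# Sending a stakeholder question (requires an API key and the
# `openai` package; shown here only as a sketch):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[
#         {"role": "system", "content": build_system_prompt(personas)},
#         {"role": "user", "content": "Would a discount-led email land well?"},
#     ],
# )
```

<p>In practice you would rarely write this yourself: uploading the same persona documents and instructions into a ChatGPT Project or Claude workspace achieves the same effect without code.</p>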

<p>Here is where this approach diverges significantly from traditional personas. Because the AI is the primary consumer of these persona documents, they do not need to be scannable or fit on a single page. Traditional personas are constrained by human readability: you have to distill everything down to bullet points and key quotes that someone can absorb at a glance. But AI has no such limitation.</p>

<p>This means your personas can be considerably <strong>more detailed</strong>. You can include lengthy behavioral observations, contradictory data points, and nuanced context that would never survive the editing process for a traditional persona poster. The AI can hold all of this complexity and draw on it when answering questions.</p>

<p>You can also create <strong>different lenses or perspectives within each persona</strong>, tailored to specific business functions. Your “Weekend Warrior” persona might have a marketing lens (messaging preferences, channel habits, campaign responses), a product lens (feature priorities, usability patterns, upgrade triggers), and a support lens (common questions, frustration points, resolution preferences). When a marketing manager asks a question, the AI draws on the marketing-relevant information. When a product manager asks, it pulls from the product lens. Same persona, different depth depending on who is asking.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="568"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png"
			
			sizes="100vw"
			alt="Persona Lenses"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Personas can have different lenses relevant to different functions within the business. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/3-persona-lenses.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The personas should still include all the functional elements we discussed before: goals and tasks, questions and objections, pain points, touchpoints, and service gaps. But now these elements become the basis for how the AI evaluates questions from each persona’s perspective, synthesizing their views into actionable recommendations.</p>

<div class="partners__lead-place"></div>

<h2 id="implementation-options">Implementation Options</h2>

<p>You can set this up with varying levels of sophistication depending on your resources and needs.</p>

<h3 id="the-simple-approach">The Simple Approach</h3>

<p>Most AI platforms now offer project or workspace features that let you upload reference documents. In ChatGPT, these are called Projects. Claude has a similar feature. Copilot and Gemini call them Spaces or Gems.</p>

<p>To get started, create a dedicated project and upload your key research documents and personas. Then write clear instructions telling the AI to consult all personas when responding to questions. Something like:</p>

<blockquote>You are helping stakeholders understand our users. When asked questions, consult all of the user personas in this project and provide: (1) a brief summary of how each persona would likely respond, (2) an overview highlighting where they agree and where they differ, and (3) recommendations based on their collective perspectives. Draw on all the research documents to inform your analysis. If the research does not fully cover a topic, search social platforms like Reddit, Twitter, and relevant forums to see how people matching these personas discuss similar issues. If you are still unsure about something, say so honestly and suggest what additional research might help.</blockquote>

<p>This approach has some limitations. There are caps on how many files you can upload, so you might need to prioritize your most important research or consolidate your personas into a single comprehensive document.</p>

<h3 id="the-more-sophisticated-approach">The More Sophisticated Approach</h3>

<p>For larger organizations or more ongoing use, a tool like <a href="https://www.notion.com/">Notion</a> offers advantages because it can hold your entire <strong>research repository</strong> and has AI capabilities built in. You can create databases for different types of research, link them together, and then use the AI to query across everything.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="599"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png"
			
			sizes="100vw"
			alt="Notion homepage"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Notion is a powerful tool for user research with built-in AI functionality that can refer to all your personas as well as your entire research repository. (<a href='https://files.smashing.media/articles/giving-users-voice-virtual-personas/4-notion-user-research.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The benefit here is that the AI has access to much <strong>more context</strong>. When a stakeholder asks a question, it can draw on surveys, support tickets, interview transcripts, and analytics data all at once. This makes for richer, more nuanced responses.</p>

<h2 id="what-this-does-not-replace">What This Does Not Replace</h2>

<p>I should be clear about the limitations.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aVirtual%20personas%20are%20not%20a%20substitute%20for%20talking%20to%20real%20users.%20They%20are%20a%20way%20to%20make%20existing%20research%20more%20accessible%20and%20actionable.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fgiving-users-voice-virtual-personas%2f">
      
Virtual personas are not a substitute for talking to real users. They are a way to make existing research more accessible and actionable.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>There are several scenarios where you still need primary research:</p>

<ul>
<li>When launching something genuinely new that your existing research does not cover;</li>
<li>When you need to validate specific designs or prototypes;</li>
<li>When your repository data is getting stale;</li>
<li>When stakeholders need to hear directly from real humans to build empathy.</li>
</ul>

<p>In fact, you can configure the AI to recognize these situations. When someone asks a question that goes beyond what the research can answer, the AI can respond with something like: <em>“I do not have enough information to answer that confidently. This might be a good question for a quick user interview or survey.”</em></p>

<p>And when you do conduct new research, that data feeds back into the repository. The personas evolve over time as your understanding deepens. This is much better than the traditional approach, where personas get created once and then slowly drift out of date.</p>

<div class="partners__lead-place"></div>

<h2 id="the-organizational-shift">The Organizational Shift</h2>

<p>If this approach catches on in your organization, something interesting happens.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20UX%20team%e2%80%99s%20role%20shifts%20from%20being%20the%20gatekeepers%20of%20user%20knowledge%20to%20being%20the%20curators%20and%20maintainers%20of%20the%20repository.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fgiving-users-voice-virtual-personas%2f">
      
The UX team’s role shifts from being the gatekeepers of user knowledge to being the curators and maintainers of the repository.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Instead of spending time creating reports that may or may not get read, you spend time ensuring the repository stays current and that the AI is configured to give helpful responses.</p>

<p>Research communication changes from push (presentations, reports, emails) to pull (stakeholders asking questions when they need answers). <strong>User-centered thinking</strong> becomes distributed across the organization rather than concentrated in one team.</p>

<p>This does not make UX researchers less valuable. If anything, it makes them more valuable because their work now has a wider reach and greater impact. But it does change the nature of the work.</p>

<h2 id="getting-started">Getting Started</h2>

<p>If you want to try this approach, start small. If you need a primer on functional personas before diving in, I have written a <a href="https://boagworld.com/usability/personas/">detailed guide to creating them</a>. Pick one project or team and set up a simple implementation using ChatGPT Projects or a similar tool. Gather whatever research you have (even if it feels incomplete), create one or two personas, and see how stakeholders respond.</p>

<p>Pay attention to what questions they ask. These will tell you where your research has gaps and what additional data would be most valuable.</p>

<p>As you refine the approach, you can expand to more teams and more sophisticated tooling. But the core principle stays the same: <strong>take all that scattered user knowledge and give it a voice that anyone in your organization can hear.</strong></p>

<p>In my previous article, I argued that we should move from demographic personas to functional personas that focus on what users are trying to do. Now I am suggesting we take the next step: from static personas to interactive ones that can actually participate in the conversations where decisions get made.</p>

<p>Because every day, across your organization, people are making decisions that affect your users. And your users deserve a seat at the table, even if it is a virtual one.</p>

<h3 id="further-reading-on-smashingmag">Further Reading On SmashingMag</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2014/08/a-closer-look-at-personas-part-1/">A Closer Look At Personas: What They Are And How They Work | 1</a>”, Shlomo Goltz</li>
<li>“<a href="https://www.smashingmagazine.com/2018/04/design-process-data-based-personas/">How To Improve Your Design Process With Data-Based Personas</a>”, Tim Noetzel</li>
<li>“<a href="https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/">How To Make Your UX Research Hard To Ignore</a>”, Vitaly Friedman</li>
<li>“<a href="https://www.smashingmagazine.com/2023/01/build-strong-customer-relationships-user-research/">How To Build Strong Customer Relationships For User Research</a>”, Renaissance Rachel</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>How To Measure The Impact Of Features</title><link>https://www.smashingmagazine.com/2025/12/how-measure-impact-features-tars/</link><pubDate>Fri, 19 Dec 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/how-measure-impact-features-tars/</guid><description>Meet TARS — a simple, repeatable, and meaningful UX metric designed specifically to track the performance of product features. Upcoming part of the &lt;a href="https://measure-ux.com/">Measure UX &amp;amp; Design Impact&lt;/a> (use the code 🎟 &lt;code>IMPACT&lt;/code> to save 20% off today).</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/how-measure-impact-features-tars/" />
              <title>How To Measure The Impact Of Features</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Measure The Impact Of Features</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2025-12-19T10:00:00&#43;00:00" class="op-published">2025-12-19T10:00:00+00:00</time>
                  <time datetime="2025-12-19T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>So we design and ship a <strong>shiny new feature</strong>. How do we know if it’s working? How do we measure and track its impact? There is <a href="https://measuringu.com/an-overview-of-70-ux-metrics/">no shortage of UX metrics</a>, but what if we wanted to establish a <strong>simple, repeatable</strong>, meaningful UX metric &mdash; specifically for our features? Well, let’s see how to do just that.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="975"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="Adrian Raudaschl&#39;s framework for measuring feature impact."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      With <a href='https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a?sk=v2%2F2a9d7d1e-bae9-4875-9063-4b6a10ae110c'>TARS</a>, we can assess how effective features are and how well they are performing. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/1-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I first heard about the <strong>TARS framework</strong> from Adrian H. Raudaschl’s wonderful article on “<a href="https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a?sk=v2%2F2a9d7d1e-bae9-4875-9063-4b6a10ae110c">How To Measure Impact of Features</a>”. In it, Adrian explains how his team tracks features, decides which ones to focus on &mdash; and then maps them against each other in a <strong>2×2 quadrant matrix</strong>.</p>

<p>It turned out to be a very useful framework to <strong>visualize</strong> the impact of UX work through the lens of business metrics.</p>

<p>Let’s see how it works.</p>

<h2 id="1-target-audience">1. T = Target Audience (%)</h2>

<p>We start by quantifying the <strong>target audience</strong>: what percentage of a product’s users have the specific problem that the feature aims to solve? To estimate it, we can study existing features that address similar problems and see how many users engage with them.</p>

<p>Target audience <strong>isn’t the same</strong> as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. <strong>More users</strong> might have the problem that the export feature is trying to solve, but they can’t find it.</p>

<blockquote>Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”</blockquote>

<h2 id="2-a-adoption">2. A = Adoption (%)</h2>

<p>Next, we measure how well we are <strong>“acquiring”</strong> our target audience. For that, we track how many users actually engage <em>successfully</em> with the feature over a specific period of time.</p>

<p>We <strong>don’t focus on CTRs or session duration</strong> here, but rather on whether users <em>meaningfully</em> engage with the feature &mdash; anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="395"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="The TARS Framework Step"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Adoption rates: from low adoption (&lt;20%) to high adoption (&gt;60%). Illustration by <a href='https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a?sk=v2%2F2a9d7d1e-bae9-4875-9063-4b6a10ae110c'>Adrian Raudaschl</a>. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/2-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>High <strong>feature adoption</strong> (&gt;60%) suggests that the problem was impactful. Low adoption (&lt;20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.</p>

<p>Sometimes, low feature adoption has nothing to do with the feature itself, but rather <strong>where it sits in the UI</strong>. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.</p>

<p>Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a <strong>success</strong>.</p>

<blockquote>Question we ask: “What percentage of active target users actually use the feature to solve that problem?”</blockquote>
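<p>The adoption step described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original article: the user IDs, event counts, and the definition of a “meaningful” interaction are all hypothetical.</p>

```python
# A minimal sketch of computing feature adoption, assuming we already have
# per-user counts of *meaningful* interactions (e.g. files exported),
# not raw clicks. All names and numbers are illustrative.

def adoption_rate(target_users, meaningful_events):
    """Percentage of target users who meaningfully engaged with the feature.

    target_users: set of user IDs estimated to have the problem.
    meaningful_events: dict mapping user ID -> count of meaningful interactions.
    """
    if not target_users:
        return 0.0
    adopters = {uid for uid, count in meaningful_events.items()
                if uid in target_users and count > 0}
    return 100 * len(adopters) / len(target_users)

target = {"u1", "u2", "u3", "u4", "u5"}
events = {"u1": 3, "u2": 0, "u3": 1, "u6": 7}  # u6 is outside the target group
print(adoption_rate(target, events))  # u1 and u3 adopted -> 40.0
```

Note that filtering to the target audience first matters: counting u6 would inflate the metric with users who never had the problem.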

<h2 id="3-retention">3. R = Retention (%)</h2>

<p>Next, we study whether the feature is actually used repeatedly. We measure the frequency of use &mdash; specifically, how many users who engaged with the feature keep using it over time. Repeated use is typically a strong signal of <strong>meaningful impact</strong>.</p>

<p>If a feature has a &gt;50% retention rate (avg.), we can be quite confident that it has <strong>high strategic importance</strong>. A 25–35% retention rate signals medium strategic importance, and a 10–20% retention rate signals low strategic importance.</p>

<blockquote>Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”</blockquote>
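<p>Retention, as framed above, can be sketched as a period-over-period comparison. Again, this is an illustrative snippet, assuming usage is logged per user per period (e.g. per week); the period keys and user IDs are made up.</p>

```python
# A sketch of period-over-period feature retention: of the users who
# adopted the feature in one period, what share used it again later?
# Period labels and user IDs are illustrative.

def retention_rate(usage_by_period, adoption_period, later_period):
    """Percentage of adopters from one period who used the feature again
    in a later period."""
    adopters = usage_by_period.get(adoption_period, set())
    if not adopters:
        return 0.0
    returning = adopters & usage_by_period.get(later_period, set())
    return 100 * len(returning) / len(adopters)

usage = {
    "2025-W01": {"u1", "u2", "u3", "u4"},
    "2025-W04": {"u1", "u3", "u9"},  # u9 is a new adopter, not a returner
}
print(retention_rate(usage, "2025-W01", "2025-W04"))  # 2 of 4 returned -> 50.0
```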

<h2 id="4-satisfaction-score-ces">4. S = Satisfaction Score (CES)</h2>

<p>Finally, we measure the <strong>level of satisfaction</strong> that users have with the feature we’ve shipped. We don’t ask everyone &mdash; we ask only “retained” users. This helps us spot hidden problems that might not be reflected in the retention score.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="395"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="Customer Satisfaction Score, measured with a survey"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      We ask users how easy it was to solve a problem after they used a feature. Illustration by <a href='https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a?sk=v2%2F2a9d7d1e-bae9-4875-9063-4b6a10ae110c'>Adrian Raudaschl</a>. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/3-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once users have actually used a feature multiple times, we ask them <strong>how easy it was to solve</strong> their problem with it &mdash; on a scale from “much more difficult” to “much easier than expected”. That rating becomes the feature’s satisfaction score.</p>
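<p>Turning those survey answers into a single number could look like the sketch below. The article doesn’t prescribe a scale, so the 7-point scale (1 = “much more difficult”, 7 = “much easier than expected”) and the cutoff for “satisfied” are assumptions for illustration.</p>

```python
# A sketch of aggregating CES-style answers from retained users into a
# satisfaction percentage. The 7-point scale and the cutoff of 5 are
# assumptions, not prescribed by the TARS framework.

def satisfaction_rate(responses, satisfied_cutoff=5):
    """Percentage of surveyed (retained) users who rated the feature
    at or above the cutoff."""
    if not responses:
        return 0.0
    satisfied = [r for r in responses if r >= satisfied_cutoff]
    return 100 * len(satisfied) / len(responses)

print(satisfaction_rate([7, 6, 5, 3, 2]))  # 3 of 5 satisfied -> 60.0
```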

<h2 id="using-tars-for-feature-strategy">Using TARS For Feature Strategy</h2>

<p>Once we start measuring with TARS, we can calculate an <strong>S÷T score</strong> &mdash; the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a <strong>2×2 matrix</strong>.</p>
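<p>The S÷T score described above is simple arithmetic, sketched here with made-up counts: satisfied users divided by target users, as a percentage.</p>

```python
# A sketch of the S÷T score: satisfied users as a share of the target
# audience. The counts below are placeholders.

def s_over_t_score(satisfied_count, target_count):
    """Satisfied Users ÷ Target Users, expressed as a percentage."""
    if target_count == 0:
        return 0.0
    return 100 * satisfied_count / target_count

# Say 1,000 users have the problem, and 180 retained users report
# being satisfied with the feature that solves it.
print(s_over_t_score(180, 1000))  # -> 18.0
```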














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="400"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="Feature retention curves"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Evaluating features on a 2×2 matrix based on S/T score. Illustration by <a href='https://uxdesign.cc/tars-a-product-metric-game-changer-c523f260306a?sk=v2%2F2a9d7d1e-bae9-4875-9063-4b6a10ae110c'>Adrian Raudaschl</a>. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/4-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Overperforming features</strong> are worth paying attention to: they have low retention but high satisfaction. These might simply be features that users don’t need frequently &mdash; but when they do, they’re extremely effective.</p>

<p><strong>Liability features</strong> have high retention but low satisfaction, so they likely need improvement. We can also identify <strong>core features</strong> and project features &mdash; and then have a conversation with designers, PMs, and engineers about what to work on next.</p>
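<p>The quadrant mapping can be sketched as a simple classification on the two axes, retention and satisfaction. The quadrant names follow the descriptions above, but the 50% thresholds are assumptions &mdash; each team would pick cutoffs that fit its own data.</p>

```python
# A sketch of placing a feature in the 2x2 matrix by retention and
# satisfaction. Quadrant names follow the article; the 50% thresholds
# are illustrative assumptions.

def quadrant(retention_pct, satisfaction_pct, threshold=50):
    high_retention = retention_pct >= threshold
    high_satisfaction = satisfaction_pct >= threshold
    if high_retention and high_satisfaction:
        return "core"            # used often and well-liked
    if high_retention:
        return "liability"       # used often, but frustrating
    if high_satisfaction:
        return "overperforming"  # used rarely, but very effective
    return "project"             # candidate to rethink or retire

print(quadrant(20, 80))  # -> overperforming
print(quadrant(80, 20))  # -> liability
```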

<div class="partners__lead-place"></div>

<h2 id="conversion-rate-is-not-a-ux-metric">Conversion Rate Is Not a UX Metric</h2>

<p>TARS doesn’t cover conversion rate, and for a good reason. As <a href="https://www.linkedin.com/posts/fabian-lenz-digital-experience-leadership_conversion-rate-is-not-a-ux-metric-yes-activity-7394261839506739200-78G9">Fabian Lenz noted</a>, conversion is often considered the <strong>ultimate indicator of success</strong> &mdash; yet in practice it’s very difficult to draw a clear connection between smaller design initiatives and big conversion goals.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="274"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png"
			
			sizes="100vw"
			alt="Chart comparing Leading vs Lagging Measures for UX metrics"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Leading vs. Lagging Measures by <a href='https://measuringu.com/leading-vs-lagging/'>Jeff Sauro and James R. Lewis</a>. (But please do avoid NPS at all costs). (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/5-impact-features-tars.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to <strong>many different initiatives</strong> &mdash; from sales and marketing to web performance boosts, seasonal effects, and UX work.</p>

<p>UX can, of course, improve conversion, but conversion is not really a UX metric. Often, people simply <strong>can’t choose the product</strong> they are using &mdash; and a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.</p>

<h3 id="high-conversion-despite-bad-ux">High Conversion Despite Bad UX</h3>

<p>As Fabian <a href="https://www.linkedin.com/posts/fabian-lenz-digital-experience-leadership_conversion-rate-is-not-a-ux-metric-yes-activity-7394261839506739200-78G9/">writes</a>, <strong>high conversion rate</strong> can happen despite poor UX, because:</p>

<ul>
<li><strong>Strong brand power</strong> pulls people in,</li>
<li>Aggressive but effective <strong>urgency tactics</strong> push them through,</li>
<li>Prices are extremely attractive,</li>
<li>Marketing performs brilliantly,</li>
<li>Historical customer loyalty keeps them coming back,</li>
<li>Users simply have no alternative.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="509"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="UX Scorecard and design metrics overview"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A practical overview of design metrics and UX scorecards: <a href='https://uxplanet.org/measuring-ux-your-first-step-towards-objective-evaluation-a408b312777b'>Measuring UX: Your First Step Towards Objective Evaluation</a> by Roman Videnov. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/6-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="low-conversion-despite-great-ux">Low Conversion Despite Great UX</h3>

<p>At the same time, a low conversion rate can occur despite great UX, because:</p>

<ul>
<li><strong>Offers aren’t relevant</strong> to the audience,</li>
<li><strong>Users don’t trust the brand</strong>,</li>
<li>The business model is poor, or the risk of failure is high,</li>
<li>Marketing doesn’t reach the right audience,</li>
<li>External factors interfere (price, timing, competition).</li>
</ul>

<p>An improved conversion rate is a <strong>positive outcome of UX initiatives</strong>. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of <a href="https://www.linkedin.com/posts/vitalyfriedman_how-to-measure-ux-httpslnkdine5uedtzy-activity-7332664809382952960-HERA">actionable design metrics we could use</a> to track UX and drive sustainable success.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p><strong>Product metrics</strong> alone don’t always provide an accurate view of how well a product performs. Sales might be strong while users are extremely inefficient and frustrated &mdash; yet churn stays low because users can’t choose the tool they are using.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg"
			
			sizes="100vw"
			alt="Chart comparing Leading vs Lagging Measures for UX metrics"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://www.linkedin.com/posts/vitalyfriedman_ux-design-activity-7140641630507687936-YTI7'>Design KPIs and UX Metrics</a>, a quick overview by yours truly. Numbers are, of course, placeholders. (<a href='https://files.smashing.media/articles/how-measure-impact-features-tars/7-impact-features-tars.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and <strong>customers’ experience with relevant product metrics</strong>. Personally, I would extend TARS with <a href="https://www.linkedin.com/posts/vitalyfriedman_ux-design-activity-7140641630507687936-YTI7">UX-focused metrics and KPIs</a> as well &mdash; depending on the needs of the project.</p>

<p>Huge thanks to <a href="https://www.linkedin.com/in/adrian-raudaschl/">Adrian H. Raudaschl</a> for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!</p>

<h2 id="meet-how-to-measure-ux-and-design-impact">Meet “How To Measure UX And Design Impact”</h2>

<p>You can find more details on <strong>UX Strategy</strong> in 🪴&nbsp;<a href="https://measure-ux.com/"><strong>Measure UX &amp; Design Impact</strong></a> (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 <code>IMPACT</code> to save 20% off today. <a href="https://measure-ux.com/">Jump to the details</a>.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="article__image">
    <a href="https://measure-ux.com/" title="How To Measure UX and Design Impact, with Vitaly Friedman">
    <img width="900" height="466" style="border-radius: 11px" src="https://files.smashing.media/articles/ux-metrics-video-course-release/measure-ux-and-design-impact-course.png" alt="How to Measure UX and Design Impact, with Vitaly Friedman.">
    </a>
</figure>

<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>495<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>799<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3951439" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">25 video lessons (8h) + <a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>250<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3950630" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">25 video lessons (8h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3570306?price_id=4503439">UX Bundle with 3 video courses.</a></p></div><span></span></div></div>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li>“<a href="https://measure-ux.com">How To Measure UX and Design Impact</a>”, by yours truly</li>
<li>“<a href="https://thecdo.school/books">Business Thinking For Designers</a>”, by Ryan Rumsey</li>
<li>“<a href="https://www.linkedin.com/feed/update/urn:li:activity:7338462034763661312/">ROI of Design Project</a>”</li>
<li>“<a href="https://articles.centercentre.com/how-the-right-ux-metrics-show-game-changing-value/">How the Right UX Metrics Show Game-Changing Value</a>”, by Jared Spool</li>
<li>“<a href="https://www.linkedin.com/posts/vitalyfriedman_ux-design-research-activity-7164173642887606274-rEqq">Research Sample Size Calculators</a>”</li>
</ul>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2025/11/designing-for-stress-emergency/">Designing For Stress And Emergency</a>”, Vitaly Friedman</li>
<li>“<a href="https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/">AI In UX: Achieve More With Less</a>”, Paul Boag</li>
<li>“<a href="https://www.smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/">The Accessibility Problem With Authentication Methods Like CAPTCHA</a>”, Eleanor Hecks</li>
<li>“<a href="https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/">From Prompt To Partner: Designing Your Custom AI Assistant</a>”, Lyndon Cerejo</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Ari Stiles</author><title>Accessible UX Research, eBook Now Available For Download</title><link>https://www.smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/</link><pubDate>Tue, 09 Dec 2025 16:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/</guid><description>We’ve got exciting news! eBook versions of “Accessible UX Research,” a new Smashing Book by Michele A. Williams, are now available for download! Which means soon the book will go to the printer. Order the eBook for instant download now or &lt;a href="/printed-books/accessible-ux-research/" data-product-sku="accessible-ux-research" data-component="AddToCart" data-product-path="/printed-books/accessible-ux-research" data-silent="true">reserve your print copy at the presale price.&lt;/a></description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/" />
              <title>Accessible UX Research, eBook Now Available For Download</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Accessible UX Research, eBook Now Available For Download</h1>
                  
                    
                    <address>Ari Stiles</address>
                  
                  <time datetime="2025-12-09T16:00:00&#43;00:00" class="op-published">2025-12-09T16:00:00+00:00</time>
                  <time datetime="2025-12-09T16:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                <p>This article is sponsored by <b>Accessible UX Research</b></p>
<p>Smashing Library expands again! We’re so happy to announce our newest book, <strong><em>Accessible UX Research</em></strong>, is now <strong>available for download</strong> in eBook formats. Michele A. Williams takes us on a deep dive into the real world of UX testing and provides a road map for including users with different abilities and needs in every phase of testing.</p>

<p>But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the <strong>assistive technology we should all be familiar with</strong>, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://files.smashing.media/articles/accessible-ux-research-pre-release/accessible-ux-research-coming-soon-light-opt.png" title="Tap for a large preview of the book.">
    <img width="900" height="506" style="border-radius: 11px" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/accessible-ux-research-coming-soon-light-varnish-opt.png" alt="Illustration showing Smashing Magazine’s mascot Topple, a red, cartoon-style cat wearing a black sweater. It is smiling and holding a post-it note in its right hand that reads “New” as it is peeking from behind Michele A. William’s new book “Accessible UX Research”. The book has a teal cover that shows a three times three grid of windows in different architectural styles. Inside the windows, there are icons related to UX research, such as speech bubbles, a looking glass, a keyboard, and UI components. The text on the illustration reads: “Coming soon to the Smashing Library! Pre-order your copy now.”">
    </a>
</figure>


<div class="book-cta__inverted">
	


	
	
	




















<div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)">

  
 
<nav class="content-tabs content-tabs--books">
  <ul>
    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          Print + eBook
        </button>
      </a>
    </li>

    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          eBook
        </button>
      </a>
    </li>
  </ul>
</nav>


	<div class="book-cta__col book-cta__hardcover content-tab--content">
		<h3 class="book-cta__title">
			<span>Print + eBook</span>
		</h3>

		
			



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="accessible-ux-research" data-type="Book">
	{
		"sku": "accessible-ux-research",
		"type": "Book",
		"price": "44.00",
		
		"prices": [{
			"amount": "44.00",
			"currency": "USD",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}, {
			"amount": "44.00",
			"currency": "EUR",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="accessible-ux-research" data-type="Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
44<span class="sup">.00</span>


    

    
  </span>
</span>

		
		<button class="btn btn--full btn--medium btn--text-shadow"
						
		        data-product-path="/printed-books/accessible-ux-research/"
						data-product-sku="accessible-ux-research"
            data-author="Michele Williams"
            data-authors=""
						data-link=""
						
            data-component="AddToCart">
			 Get Print + eBook
		</button>
		<p class="book-cta__desc">
			Quality hardcover. <a href="https://www.smashingmagazine.com/delivery-times/">Free worldwide shipping early 2026</a>.<br/> 100-day money-back guarantee.
		</p>
	</div>
	<div class="book-cta__col book-cta__ebook content-tab--content">
		<h3 class="book-cta__title">
			<span>eBook</span>
		</h3>

		
			<div data-audience="anonymous free supporter" data-remove="true">
				



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="accessible-ux-research-ebook" data-type="E-Book">
	{
		"sku": "accessible-ux-research-ebook",
		"type": "E-Book",
		"price": "19.00",
		
		"prices": [{
			"amount": "19.00",
			"currency": "USD"
		}, {
			"amount": "19.00",
			"currency": "EUR"
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="accessible-ux-research-ebook" data-type="E-Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
19<span class="sup">.00</span>


    

    
  </span>
</span>

			</div>
		

    
      <span class="book-cta__price hidden" data-audience="smashing member" data-remove="true">
        <span class="green">Free!</span>
      </span>
    

		<button class="btn btn--full btn--medium btn--text-shadow"
		        data-product-path="/printed-books/accessible-ux-research/"
						data-product-sku="accessible-ux-research-ebook"
            data-author="Michele Williams"
            data-authors=""
						data-link=""
            data-component="AddToCart"
						
            
              data-audience="anonymous free supporter"
              data-remove="true"
            
            >
			  Get the eBook
		</button>
		<p
      class="book-cta__desc"
      
        data-audience="anonymous free supporter"
        data-remove="true"
      
    >
			DRM-free, of course. ePUB, Kindle, PDF.<br/>Included with your <a href="https://www.smashingmagazine.com/membership/">Smashing Membership.</a>
		</p>

    
  <div data-audience="smashing member" class="hidden" data-remove="true">
    <a href="accessibleresearchpdf" class="btn btn--medium btn--green btn--full js-add-to-cart">
      Get the eBook
    </a>
    <p class="book-cta__desc book-cta__desc--light">
      <a href="accessibleresearchpdf">Download PDF</a>, <a href="accessibleresearchepub">ePUB</a>, <a href="accessibleresearchmobi">Kindle</a>.<br/>Thanks for being smashing!&nbsp;❤️
    </p>
  </div>


	</div>
</div>

</div>

<p>This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start <strong>shipping in early 2026</strong>. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out <em>Accessible UX Research</em> too:</p>

<blockquote style="font-style: normal"><img loading="lazy" decoding="async" style="clear:both;float:right;margin-top:0em;margin-left:0.9em;margin-bottom:1em;border-radius:50%; max-width:calc(50% - 5vh);height:auto;" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/eric-bailey-opt.png" width="150" height="150" alt="Eric Bailey" />“<em>Accessible UX Research</em> stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a <strong>solid framework</strong> for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.<br /><br />This is <strong>the book I wish I had</strong> when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”<br /><br />Eric Bailey, Accessibility Advocate</blockquote>

<blockquote style="font-style: normal"><img loading="lazy" decoding="async" style="clear:both;float:right;margin-top:0em;margin-left:0.9em;margin-bottom:1em;border-radius:50%; max-width:calc(50% - 5vh);height:auto;" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/devon-pershing-opt.png" width="150" height="150" alt="Devon Pershing" />“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a <strong>critical piece in the puzzle</strong> of actually doing and integrating that research into accessibility work day to day.”<br /><br />Devon Pershing, Author of <em>The Accessibility Operations Guidebook</em></blockquote>

<blockquote style="font-style: normal"><img loading="lazy" decoding="async" style="clear:both;float:right;margin-top:0em;margin-left:0.9em;margin-bottom:1em;border-radius:50%; max-width:calc(50% - 5vh);height:auto;" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/manuel-matuzovic-opt.png" width="150" height="150" alt="Manuel Matuzović" />“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the <strong>depth of human experience</strong>. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”<br /><br />Manuel Matuzović, Author of the <em>Web Accessibility Cookbook</em></blockquote>

<blockquote style="font-style: normal"><img loading="lazy" decoding="async" style="clear:both;float:right;margin-top:0em;margin-left:0.9em;margin-bottom:1em;border-radius:50%; max-width:calc(50% - 5vh);height:auto;" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/anna-e-cook-opt.png" width="150" height="150" alt="Anna E. Cook" />“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A <strong>strong focus on real-world application</strong> equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”<br /><br />Anna E. Cook, Accessibility and Inclusive Design Specialist</blockquote>

<h2>About The Book</h2>

<p>The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a <strong>practical guide to inclusive UX research</strong>, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.</p>

<p>Inside, you’ll learn how to:</p>

<ul>
<li><strong>Plan research</strong> that includes disabled participants from the start,</li>
<li><strong>Recruit participants</strong> with disabilities,</li>
<li><strong>Facilitate sessions</strong> that work for a range of access needs,</li>
<li><strong>Ask better questions</strong> and avoid unintentionally biased research methods,</li>
<li><strong>Build trust and confidence</strong> in your team around accessibility and inclusion.</li>
</ul>

<p>The book also challenges common assumptions about disability and urges readers to <strong>rethink what inclusion really means</strong> in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.</p>

<p>High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. <strong>Print edition shipping early 2026.</strong> eBook now available for download. <a href="accessibleresearchsample">Download a free sample</a> (PDF, 2.3MB) and <a href="/printed-books/accessible-ux-research/" data-product-sku="accessible-ux-research" data-component="AddToCart" data-product-path="/printed-books/accessible-ux-research" data-silent="true">reserve your print copy at the presale price.</a></p>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-1-opt.png" title="Tap for a large preview.">
    <img width="900" height="458" style="border-radius: 11px" src="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-1-opt.png" alt="A look inside the book.">
    </a><figcaption>“Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing. (<a href="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-1-opt.png">Large preview</a>)</figcaption>
</figure>

<h2>Contents</h2>

<ol>
<li><strong>Disability mindset</strong>: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.</li>
<li><strong>Diversity of disability</strong>: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.</li>
<li><strong>Disability in the stages of UX research</strong>: Disabled participants can and should be part of every research phase — formative, prototype, and summative.</li>
<li><strong>Recruiting disabled participants</strong>: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.</li>
<li><strong>Designing your research</strong>: While our goal is to influence accessible products, our research execution must also be accessible.</li>
<li><strong>Facilitating an accessible study</strong>: Preparation and communication with your participants can ensure your study logistics run smoothly.</li>
<li><strong>Analyzing and reporting with accuracy and impact</strong>: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.</li>
<li><strong>Disability in the UX research field</strong>: Inclusion isn’t just for research <em>participants</em>, it’s important for our <em>colleagues</em> as well, as explained by blind UX Researcher Dr. Cynthia Bennett.</li>
</ol>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-2-opt.png" title="Tap for a large preview.">
    <img width="900" height="458" style="border-radius: 11px" src="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-2-opt.png" alt="A look inside the book.">
    </a><figcaption>The book will challenge your disability mindset and what it means to be truly inclusive in your work. (<a href="https://files.smashing.media/articles/accessible-ux-research-ebook-release/accessible-ux-research-inside-the-book-2-opt.png">Large preview</a>)</figcaption>
</figure>

<h2>Who This Book Is For</h2>

<p>Whether you’re a UX professional conducting research in industry or academia, or part of a broader engineering, product, or design function, you’ll want to read this book if…</p>

<ol>
<li>You have been tasked with <strong>improving the accessibility of your product</strong>, but need to know where to start to do so successfully.</li>
<li>You want to establish a <strong>culture for accessibility</strong> in your company, but aren’t sure how to make it work.</li>
<li>You want to <strong>move from WCAG/EAA compliance</strong> to embedding accessibility and inclusion in your research practices and beyond.</li>
<li>You want to <strong>improve your overall accessibility knowledge</strong> and be viewed as an Accessibility Specialist for your organization.</li>
</ol>

<figure style="margin-bottom:0;padding-bottom:0" class="break-out article__image">
    <a href="https://files.smashing.media/articles/accessible-ux-research-pre-release/accessible-ux-research-kind-support-light-opt.png" title="Tap for a large preview of the book.">
    <img style="border-radius: 11px" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/accessible-ux-research-kind-support-light-varnish-opt.png" alt="Illustration showing Smashing Magazine’s mascot Topple, a red, cartoon-style cat wearing a black sweater. It is smiling and holding a post-it note in its right hand that reads “New” as it is peeking from behind Michele A. William’s new book “Accessible UX Research”. The book has a teal cover that shows a three times three grid of windows in different architectural styles. Inside the windows, there are icons related to UX research, such as speech bubbles, a looking glass, a keyboard, and UI components. The text on the illustration reads: “Thanks for your kind support.”">
    </a>
</figure>


<div class="book-cta__inverted">
	


	
	
	




















<div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)">

  
 
<nav class="content-tabs content-tabs--books">
  <ul>
    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          Print + eBook
        </button>
      </a>
    </li>

    <li class="content-tab">
      <a href="#">
        <button class="btn btn--small btn--white btn--white--bordered">
          eBook
        </button>
      </a>
    </li>
  </ul>
</nav>


	<div class="book-cta__col book-cta__hardcover content-tab--content">
		<h3 class="book-cta__title">
			<span>Print + eBook</span>
		</h3>

		
			



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="accessible-ux-research" data-type="Book">
	{
		"sku": "accessible-ux-research",
		"type": "Book",
		"price": "44.00",
		
		"prices": [{
			"amount": "44.00",
			"currency": "USD",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}, {
			"amount": "44.00",
			"currency": "EUR",
			"items": [
				{"amount": "34.00", "type": "Book"},
				{"amount": "10.00", "type": "E-Book"}
			]
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="accessible-ux-research" data-type="Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
44<span class="sup">.00</span>


    

    
  </span>
</span>

		
		<button class="btn btn--full btn--medium btn--text-shadow"
						
		        data-product-path="/printed-books/accessible-ux-research/"
						data-product-sku="accessible-ux-research"
            data-author="Michele Williams"
            data-authors=""
						data-link=""
						
            data-component="AddToCart">
			 Get Print + eBook
		</button>
		<p class="book-cta__desc">
			Quality hardcover. <a href="https://www.smashingmagazine.com/delivery-times/">Free worldwide shipping early 2026</a>.<br/> 100-day money-back guarantee.
		</p>
	</div>
	<div class="book-cta__col book-cta__ebook content-tab--content">
		<h3 class="book-cta__title">
			<span>eBook</span>
		</h3>

		
			<div data-audience="anonymous free supporter" data-remove="true">
				



	
	
	
	
	
	
	<script class="gocommerce-product" type="application/json" data-sku="accessible-ux-research-ebook" data-type="E-Book">
	{
		"sku": "accessible-ux-research-ebook",
		"type": "E-Book",
		"price": "19.00",
		
		"prices": [{
			"amount": "19.00",
			"currency": "USD"
		}, {
			"amount": "19.00",
			"currency": "EUR"
		}
		]
	}
	</script>


<span class="book-cta__price" data-handler="PriceTag" data-sku="accessible-ux-research-ebook" data-type="E-Book" data-insert="true">
  <span class="placeholder">
    
      
<span class="currency-sign">$</span>
19<span class="sup">.00</span>


    

    
  </span>
</span>

			</div>
		

    
      <span class="book-cta__price hidden" data-audience="smashing member" data-remove="true">
        <span class="green">Free!</span>
      </span>
    

		<button class="btn btn--full btn--medium btn--text-shadow"
		        data-product-path="/printed-books/accessible-ux-research/"
						data-product-sku="accessible-ux-research-ebook"
            data-author="Michele Williams"
            data-authors=""
						data-link=""
            data-component="AddToCart"
						
            
              data-audience="anonymous free supporter"
              data-remove="true"
            
            >
			  Get the eBook
		</button>
		<p
      class="book-cta__desc"
      
        data-audience="anonymous free supporter"
        data-remove="true"
      
    >
			DRM-free, of course. ePUB, Kindle, PDF.<br/>Included with your <a href="https://www.smashingmagazine.com/membership/">Smashing Membership.</a>
		</p>

    
  <div data-audience="smashing member" class="hidden" data-remove="true">
    <a href="accessibleresearchpdf" class="btn btn--medium btn--green btn--full js-add-to-cart">
      Get the eBook
    </a>
    <p class="book-cta__desc book-cta__desc--light">
      <a href="accessibleresearchpdf">Download PDF</a>, <a href="accessibleresearchepub">ePUB</a>, <a href="accessibleresearchmobi">Kindle</a>.<br/>Thanks for being smashing!&nbsp;❤️
    </p>
  </div>


	</div>
</div>

</div>

<h2>About the Author</h2>

<p><a href="https://mawconsultingllc.com/"><img loading="lazy" decoding="async" style="float:right;margin-top:1em;margin-left:0.9em;margin-bottom:1em;border-radius:50%;    max-width:calc(50% - 5vh);height:auto;" src="https://files.smashing.media/articles/accessible-ux-research-pre-release/michele-williams-opt.png" width="150" height="150" alt="Michele A. Williams" /></a>Dr. Michele A. Williams is owner of <a href="https://mawconsultingllc.com/">M.A.W. Consulting, LLC - Making Accessibility Work</a>. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, <a href="https://scholar.google.com/citations?user=1IfsBJEAAAAJ&hl=en">published academic author</a>, and <a href="https://patents.justia.com/patent/10854103">patented inventor</a>, she is passionate about educating and advising on technology that does not exclude disabled users.</p>

<h2>Technical Details</h2>

<ul>
<li>ISBN: <span class="small-caps">978-3-910835-03-0</span> (print)</li>
<li><strong>Quality hardcover</strong>, stitched binding, ribbon page marker.</li>
<li>Free worldwide airmail <strong>shipping from Germany early 2026</strong>. We are currently <strong>unable to ship printed books to the United States</strong> due to customs clearance issues. If you have any questions, please <a href="mailto:help@smashingmagazine.com">contact us any time</a>.</li>
<li>eBook now available for download as <strong>PDF, ePUB, and Amazon Kindle</strong>.</li>
<li><strong><a href="/ebooks/accessible-ux-research-ebook/" data-product-sku="accessible-ux-research-ebook" data-component="AddToCart" data-product-path="/ebooks/accessible-ux-research-ebook/" data-silent="true">Order the eBook for instant download now.</a></strong></li>
<li><strong><a href="/printed-books/accessible-ux-research/" data-product-sku="accessible-ux-research" data-component="AddToCart" data-product-path="/printed-books/accessible-ux-research" data-silent="true">Reserve your print copy at the presale price.</a></strong></li>
</ul>

<h2>Community Matters ❤️</h2>

<p>Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful <strong>community</strong>. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be <a href="https://www.smashingmagazine.com/membership">free for <em>Smashing Members</em></a>. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)</p>

<h2>More Smashing Books &amp; Goodies</h2>

<p>Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the <strong>core of everything we do</strong> at Smashing.</p>

<p>In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as <a href="/printed-books/">printed books that stand the test of time</a>. Trine, Heather, and Steven are three of these people. Have you checked out their books already?</p>

<div class="book-grid break-out book-grid__in-post">

<figure class="book--featured"><div class="book--featured__image"><a href="/printed-books/ethical-design-handbook/"><img loading="lazy" decoding="async" src="https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/1f1cc2de-e6ed-4262-a1cb-cc0b2d4e3243/ethical-design-cover-shop-opt.png" alt="The Ethical Design Handbook" width="160" height="232" /></a></div><figcaption><h4 class="book--featured__title"><a style="padding: 7px 0;text-decoration-skip-ink: auto;text-decoration-thickness: 1px;text-underline-offset: 1px;text-decoration-line: underline;text-decoration-color: #006fc6;" href="/printed-books/ethical-design-handbook/">The Ethical Design Handbook</a></h4><p class="book--featured__desc">A practical guide on ethical design for digital products.</p><p><a style="font-style: normal !important; color: #fff !important;" class="btn btn--medium btn--green" href="/printed-books/ethical-design-handbook/" data-product-path="/printed-books/ethical-design-handbook/" data-product-sku="ethical-design-handbook" data-component="AddToCart">Add to cart <span style="color:#fff;font-size:1em;">$44</span></a></p></figcaption></figure>

<figure class="book--featured"><div class="book--featured__image"><a href="/printed-books/understanding-privacy/"><img loading="lazy" decoding="async" src="https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/d2da7a90-acdb-43c7-82b2-5225c33ca4d7/understanding-privacy-cover-new-opt.png" alt="Understanding Privacy" width="160" height="232" /></a></div><figcaption><h4 class="book--featured__title"><a href="/printed-books/understanding-privacy/" style="padding: 7px 0;text-decoration-skip-ink: auto;text-decoration-thickness: 1px;text-underline-offset: 1px;text-decoration-line: underline;text-decoration-color: #006fc6;">Understanding Privacy</a></h4><p class="book--featured__desc">Everything you need to know to put your users first and make a better web.</p><p><a style="font-style: normal !important; color: #fff !important;" class="btn btn--medium btn--green" href="/printed-books/understanding-privacy/" data-product-path="/printed-books/understanding-privacy/" data-product-sku="understanding-privacy" data-component="AddToCart">Add to cart <span style="color:#fff;font-size:1em;">$44</span></a></p></figcaption></figure>

<figure class="book--featured"><div class="book--featured__image"><a href="/printed-books/touch-design-for-mobile-interfaces/"><img loading="lazy" decoding="async" src="https://archive.smashing.media/assets/344dbf88-fdf9-42bb-adb4-46f01eedd629/b14658fc-bb2d-41a6-8d1a-70eaaf1b8ec8/touch-design-book-shop-opt.png" alt="Touch Design for Mobile Interfaces" width="160" height="232" /></a></div><figcaption><h4 class="book--featured__title"><a style="padding: 7px 0;text-decoration-skip-ink: auto;text-decoration-thickness: 1px;text-underline-offset: 1px;text-decoration-line: underline;text-decoration-color: #006fc6;" href="/printed-books/touch-design-for-mobile-interfaces/">Touch Design for Mobile Interfaces</a></h4><p class="book--featured__desc">Learn how touchscreen devices really work &mdash; and how people really use them.</p><p><a style="font-style: normal !important; color: #fff !important;" class="btn btn--medium btn--green" href="/printed-books/touch-design-for-mobile-interfaces/" data-product-path="/printed-books/touch-design-for-mobile-interfaces/" data-product-sku="touch-design-for-mobile-interfaces" data-component="AddToCart">Add to cart <span style="color:#fff;font-size:1em;">$44</span></a></p></figcaption></figure>

</div>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(cm, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>How UX Professionals Can Lead AI Strategy</title><link>https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/</link><pubDate>Mon, 08 Dec 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/</guid><description>Lead your organization’s AI strategy before someone else defines it for you. A practical framework for UX professionals to shape AI implementation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/" />
              <title>How UX Professionals Can Lead AI Strategy</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How UX Professionals Can Lead AI Strategy</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-12-08T08:00:00&#43;00:00" class="op-published">2025-12-08T08:00:00+00:00</time>
                  <time datetime="2025-12-08T08:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.</p>

<p>Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.</p>

<p>The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, <strong>someone else will decide how it affects your work</strong>. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.</p>

<p>You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.</p>

<h2 id="why-ux-professionals-must-own-the-ai-conversation">Why UX Professionals Must Own the AI Conversation</h2>

<p>Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.</p>

<p><strong>But without UX input, AI implementations often fail users in predictable ways:</strong></p>

<ul>
<li>They automate tasks without understanding the judgment calls those tasks require.</li>
<li>They optimize for speed while destroying the quality that made your work valuable.</li>
</ul>

<p>Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h3 id="use-ai-momentum-to-advance-your-priorities">Use AI Momentum to Advance Your Priorities</h3>

<p>Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.</p>

<p>How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.</p>

<h2 id="your-role-isn-t-disappearing-it-s-evolving">Your Role Isn’t Disappearing (It’s Evolving)</h2>

<p>Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.</p>

<p>That someone should be <em>you</em>.</p>

<p>Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.</p>

<p>When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.</p>

<p><strong>Your future value comes from the work you’re already doing:</strong></p>

<ul>
<li><strong>Seeing the full picture.</strong><br />
You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.</li>
<li><strong>Making judgment calls.</strong><br />
You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.</li>
<li><strong>Connecting the dots.</strong><br />
You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.</li>
</ul>

<p>AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.</p>

<h2 id="step-1-understand-management-s-ai-motivations">Step 1: Understand Management’s AI Motivations</h2>

<p>Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.</p>

<p><strong>Speak their language.</strong><br />
When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. <em>“This approach will protect our quality standards”</em> is less compelling than <em>“This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”</em></p>

<p><strong>Separate hype from reality.</strong><br />
Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.</p>

<p><strong>Identify real pain points.</strong><br />
Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.</p>


<h2 id="step-2-audit-your-current-state-and-opportunities">Step 2: Audit Your Current State and Opportunities</h2>

<p>Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.</p>

<p><strong>Identify high-volume, repeatable tasks versus high-judgment work.</strong><br />
Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.</p>

<p><strong>Also, identify what you’ve wanted to do but couldn’t get approved.</strong><br />
This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.</p>

<p>Spot opportunities where AI could genuinely help:</p>

<ul>
<li><strong>Research synthesis:</strong><br />
AI can help organize and categorize findings.</li>
<li><strong>Analyzing user behavior data:</strong><br />
AI can process analytics and session recordings to surface patterns you might miss.</li>
<li><strong>Rapid prototyping:</strong><br />
AI can quickly generate testable prototypes, speeding up your test cycles.</li>
</ul>

<h2 id="step-3-define-ai-principles-for-your-ux-practice">Step 3: Define AI Principles for Your UX Practice</h2>

<p>Before you start forming your strategy, establish principles that will guide every decision.</p>

<p><strong>Set non-negotiables.</strong><br />
These include user privacy, accessibility, and human oversight of significant decisions. Write them down and get agreement from leadership before you pilot anything.</p>

<p><strong>Define criteria for AI use.</strong><br />
AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.</p>

<p><strong>Define success metrics beyond efficiency.</strong><br />
Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.</p>

<p><strong>Create guardrails.</strong><br />
Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.</p>

<h2 id="step-4-build-your-ai-in-ux-strategy">Step 4: Build Your AI-in-UX Strategy</h2>

<p>Now you’re ready to build the actual strategy you’ll pitch to leadership. <strong>Start small</strong> with pilot projects that have a clear scope and evaluation criteria.</p>

<p><strong>Connect to business outcomes management cares about.</strong><br />
Don’t pitch <em>“using AI for research synthesis.”</em> Pitch <em>“reducing time from research to insights by 40%, enabling faster product decisions.”</em></p>

<p><strong>Piggyback your existing priorities on AI momentum.</strong><br />
Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.</p>

<p><strong>Define roles clearly.</strong><br />
Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.</p>

<p><strong>Plan for capability building.</strong><br />
Your team will need training and new skills. Budget time and resources for this.</p>

<p><strong>Address risks honestly.</strong><br />
AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.</p>

<h2 id="step-5-pitch-the-strategy-to-leadership">Step 5: Pitch the Strategy to Leadership</h2>

<p>Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.</p>

<p><strong>Lead with outcomes and ROI they care about.</strong><br />
Put the business case up front.</p>

<p><strong>Bundle your wish list into the AI strategy.</strong><br />
When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. <em>“To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly”</em> sounds much more reasonable than <em>“Can we please do more testing?”</em> You’re explaining what’s required for their AI investment to succeed.</p>

<p><strong>Show quick wins alongside a longer-term vision.</strong><br />
Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.</p>

<p><strong>Ask for what you need.</strong><br />
Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.</p>


<h2 id="step-6-implement-and-demonstrate-value">Step 6: Implement and Demonstrate Value</h2>

<p>Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.</p>

<p><strong>Document wins and learning.</strong><br />
Failures are useful too. If a pilot doesn’t work out, document why and what you learned.</p>

<p><strong>Share progress in management’s language.</strong><br />
Monthly updates should focus on business outcomes, not technical details. <em>“We’ve reduced research synthesis time by 35% while maintaining quality scores”</em> is the right level of detail.</p>

<p><strong>Build internal advocates by solving real problems.</strong><br />
When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.</p>

<p><strong>Iterate based on what works in your specific context.</strong><br />
Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.</p>

<h2 id="taking-initiative-beats-waiting">Taking Initiative Beats Waiting</h2>

<p>AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.</p>

<p>Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.</p>

<p><strong>Take one practical first step this week.</strong><br />
Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.</p>

<p>Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYou%20know%20how%20to%20understand%20user%20needs,%20test%20solutions,%20measure%20outcomes,%20and%20iterate%20based%20on%20evidence.%20Those%20skills%20don%e2%80%99t%20change%20just%20because%20AI%20is%20involved.%20You%e2%80%99re%20applying%20your%20existing%20expertise%20to%20a%20new%20tool.%0a&url=https://smashingmagazine.com%2f2025%2f12%2fhow-ux-professionals-can-lead-ai-strategy%2f">
      
You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.</p>

<h3 id="further-reading-on-smashingmag">Further Reading On SmashingMag</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/">Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</a>”, Ilia Kanazin &amp; Marina Chernyshova</li>
<li>“<a href="https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/">Beyond The Hype: What AI Can Really Do For Product Design</a>”, Nikita Samutin</li>
<li>“<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, Lyndon Cerejo</li>
<li>“<a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">Functional Personas With AI: A Lean, Practical Workflow</a>”, Paul Boag</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk, il)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Victor Yocco</author><title>Beyond The Black Box: Practical XAI For UX Practitioners</title><link>https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/</link><pubDate>Fri, 05 Dec 2025 15:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/</guid><description>Explainable AI isn’t just a challenge for data scientists. It’s also a design challenge and a core pillar of trustworthy, effective AI products. Victor Yocco offers practical guidance and design patterns for building explainability into real products.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/" />
              <title>Beyond The Black Box: Practical XAI For UX Practitioners</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Beyond The Black Box: Practical XAI For UX Practitioners</h1>
                  
                    
                    <address>Victor Yocco</address>
                  
                  <time datetime="2025-12-05T15:00:00&#43;00:00" class="op-published">2025-12-05T15:00:00+00:00</time>
                  <time datetime="2025-12-05T15:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>In my <a href="https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/">last piece</a>, we established a foundational truth: for users to adopt and rely on AI, they must <strong>trust</strong> it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s <strong>Ability</strong>, <strong>Benevolence</strong>, <strong>Integrity</strong>, and <strong>Predictability</strong>. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.</p>

<p>Our conversation now  must evolve from the <em>why</em> of trust to the <em>how</em> of transparency. The field of <strong>Explainable AI (XAI)</strong>, which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.</p>

<p>This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.</p>

<h2 id="de-mystifying-xai-core-concepts-for-ux-practitioners">De-mystifying XAI: Core Concepts For UX Practitioners</h2>

<p>XAI is about answering the user’s question: “<strong>Why?</strong>” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. By showing the steps, you build comprehension and trust. You also allow your work to be double-checked and verified by the very humans it impacts.</p>

<h3 id="feature-importance-and-counterfactuals">Feature Importance And Counterfactuals</h3>

<p>There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are <strong>feature importance</strong> (Figure 1) and <strong>counterfactuals</strong>. These are often the most straightforward for users to understand and the most actionable for designers to implement.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="478"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png"
			
			sizes="100vw"
			alt="A fictional example of feature importance"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1: A fictional example of feature importance where a bank system shows the importance of various features that lead to a model’s decision. Image generated using Google Gemini. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/1-example-feature-importance.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h4 id="feature-importance">Feature Importance</h4>

<p>This explainability method answers, “<strong>What were the most important factors the AI considered?</strong>” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.</p>

<blockquote><strong>Example</strong>: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.</blockquote>
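<p>To make this concrete, here is a minimal Python sketch of how a product might surface the top factors behind a churn prediction. The feature names and importance scores are hypothetical; in a real system they would come from your model (e.g., tree-based importances or permutation tests).</p>

```python
# A minimal sketch of surfacing feature importance for a churn model.
# Feature names and weights below are hypothetical placeholders.

def top_factors(importances, n=2):
    """Return the n features with the largest absolute importance."""
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [name for name, _ in ranked[:n]]

churn_importances = {
    "support_calls_last_month": 0.41,
    "recent_price_increase": 0.33,
    "account_age_years": -0.08,
    "logins_last_week": -0.05,
}

print(top_factors(churn_importances))
# → ['support_calls_last_month', 'recent_price_increase']
```

<p>The UI would then show only those top two factors as the headline explanation, rather than the full list.</p>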

<h4 id="counterfactuals">Counterfactuals</h4>

<p>This powerful method answers, “<strong>What would I need to change to get a different outcome?</strong>” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”</p>

<blockquote><strong>Example</strong>: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps they can take to potentially get a loan in the future.</blockquote>
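<p>As a rough illustration, a counterfactual message like the one above can be assembled from simple threshold rules. The fields, thresholds, and phrasing below are invented for the sketch; a real underwriting model would also handle factors that need to <em>decrease</em>, such as debt-to-income ratio.</p>

```python
# A hedged sketch of generating a counterfactual message for a loan
# denial. Rules and field names are illustrative, not a real model.

def counterfactual_message(applicant, rules):
    """rules: list of (field, required_minimum, human_phrase) tuples."""
    changes = []
    for field, required, phrase in rules:
        if applicant[field] < required:
            changes.append(phrase.format(gap=required - applicant[field]))
    if not changes:
        return "Your application meets all criteria."
    return "Your loan would have been approved if " + ", or if ".join(changes) + "."

applicant = {"credit_score": 650, "income": 48000}
rules = [
    ("credit_score", 700, "your credit score were {gap} points higher"),
    ("income", 45000, "your annual income were ${gap} higher"),
]
print(counterfactual_message(applicant, rules))
# → Your loan would have been approved if your credit score were 50 points higher.
```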

<h3 id="using-model-data-to-enhance-the-explanation">Using Model Data To Enhance The Explanation</h3>

<p>Although technical specifics are often handled by data scientists, it&rsquo;s helpful for UX practitioners to know that tools like <a href="https://www.geeksforgeeks.org/artificial-intelligence/introduction-to-explainable-aixai-using-lime/">LIME</a> (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and <a href="https://shap.readthedocs.io/en/latest/example_notebooks/overviews/An%20introduction%20to%20explainable%20AI%20with%20Shapley%20values.html">SHAP</a> (SHapley Additive exPlanations), which uses a game-theoretic approach to explain the output of any machine learning model, are commonly used to extract these “why” insights from complex models. These libraries essentially help break down an AI’s decision to show which inputs were most influential for a given outcome.</p>

<p>When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.</p>

<p>First, let’s enhance feature importance with <strong>Local Explanations (e.g., LIME)</strong> data: this approach answers, “<strong>Why did the AI make <em>this specific</em> recommendation for me, right now?</strong>” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.</p>

<blockquote><strong>Example</strong>: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend <strong>this specific</strong> song by Adele to <strong>you</strong> right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”</blockquote>

<p>Finally, let’s cover the inclusion of <strong>Value-based Explanations (e.g., SHapley Additive exPlanations (SHAP))</strong> data in an explanation of a decision: this is a more nuanced version of feature importance that answers, “<strong>How did each factor push the decision one way or the other?</strong>” It helps visualize <em>what</em> mattered, and whether its influence was positive or negative.</p>

<blockquote><strong>Example</strong>: Imagine a bank uses an AI model to decide whether to approve a loan application.</blockquote>

<p><strong>Feature Importance</strong>: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers <em>what</em> mattered.</p>

<p><strong>Feature Importance with Value-Based Explanations (SHAP)</strong>: SHAP values would take feature importance further based on elements of the model.</p>

<ul>
<li>For an approved loan, SHAP might show that a high credit score significantly <em>pushed</em> the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio <em>pulled</em> it slightly away (negative influence), but not enough to deny the loan.</li>
<li>For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries <em>strongly pushed</em> the decision towards denial, even if the credit score was decent.</li>
</ul>

<p>This helps the loan officer explain to the applicant beyond <em>what</em> was considered, to <em>how each factor contributed</em> to the final “yes” or “no” decision.</p>
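<p>For readers curious how these signed contributions arise, linear models offer a convenient special case: each feature’s Shapley contribution is simply its weight times how far the applicant’s value deviates from the baseline (average) value. The weights and numbers below are hypothetical loan-model figures, not real SHAP library output.</p>

```python
# For linear models, Shapley contributions have a closed form:
# contribution = weight * (value - baseline value).
# All weights and values below are hypothetical.

def shap_linear(weights, baseline, applicant):
    """Signed contribution of each feature to the score vs. the baseline."""
    return {f: weights[f] * (applicant[f] - baseline[f]) for f in weights}

weights   = {"credit_score": 0.004, "debt_to_income": -1.5, "recent_inquiries": -0.2}
baseline  = {"credit_score": 690,   "debt_to_income": 0.30, "recent_inquiries": 2}
applicant = {"credit_score": 760,   "debt_to_income": 0.38, "recent_inquiries": 1}

for feature, contrib in shap_linear(weights, baseline, applicant).items():
    direction = "pushed toward approval" if contrib > 0 else "pulled toward denial"
    print(f"{feature}: {contrib:+.3f} ({direction})")
```

<p>Here a high credit score pushes the decision toward approval, while an above-average debt-to-income ratio pulls it away, mirroring the push-and-pull description above.</p>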

<p>It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.</p>

<p>Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.</p>

<h2 id="xai-and-ethical-ai-unpacking-bias-and-responsibility">XAI And Ethical AI: Unpacking Bias And Responsibility</h2>

<p>Beyond building trust, XAI plays a critical role in addressing the profound <strong>ethical implications of AI</strong>, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.</p>

<p>For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.</p>

<p>The power of XAI also comes with the potential for “<strong>explainability washing</strong>.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.</p>

<p>UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the <em>why</em> of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.</p>

<h2 id="from-methods-to-mockups-practical-xai-design-patterns">From Methods To Mockups: Practical XAI Design Patterns</h2>

<p>Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.</p>

<h3 id="pattern-1-the-because-statement-for-feature-importance">Pattern 1: The &ldquo;Because&rdquo; Statement (for Feature Importance)</h3>

<p>This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.</p>

<ul>
<li><strong>Heuristic</strong>: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.</li>
</ul>

<blockquote><strong>Example</strong>: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.<br /><br /><strong>Song Recommendation</strong>: “Velvet Morning”<br />Because you listen to “The Fuzz” and other psychedelic rock.</blockquote>
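<p>In code, a “Because” statement can be as simple as a small catalog of reason templates filled in with the user’s listening data. The reason keys and copy below are illustrative assumptions, not a real recommendation API.</p>

```python
# A sketch of generating "Because" microcopy from the top local factor.
# Reason keys, templates, and track data are made up for illustration.

REASONS = {
    "similar_artist": 'Because you listen to "{artist}" and other {genre}.',
    "recent_genre": "Because you've been playing a lot of {genre} lately.",
}

def because_statement(reason_key, **details):
    """Fill the template for the model's top local reason."""
    return REASONS[reason_key].format(**details)

print(because_statement("similar_artist", artist="The Fuzz", genre="psychedelic rock"))
# → Because you listen to "The Fuzz" and other psychedelic rock.
```

<p>The key design choice is that the model supplies only a reason key and its details; the human-written templates keep the copy jargon-free.</p>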

<h3 id="pattern-2-the-what-if-interactive-for-counterfactuals">Pattern 2: The &ldquo;What-If&rdquo; Interactive (for Counterfactuals)</h3>

<p>Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.</p>

<ul>
<li><strong>Heuristic</strong>: Make explanations interactive and empowering. Let users see the cause and effect of their choices.</li>
</ul>

<blockquote><strong>Example</strong>: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 2).</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="582"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png"
			
			sizes="100vw"
			alt="An example of Counterfactuals"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2: An example of Counterfactuals using a what-if scenario, letting the user see how changing different values of the model’s features can impact outcomes. Image generated using Google Gemini. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/2-example-counterfactuals.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="pattern-3-the-highlight-reel-for-local-explanations">Pattern 3: The Highlight Reel (For Local Explanations)</h3>

<p>When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.</p>

<ul>
<li><strong>Heuristic</strong>: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.</li>
</ul>

<blockquote><strong>Example</strong>: An AI tool that summarizes long articles.<br /><br /><strong>AI-Generated Summary Point</strong>:<br />Initial research showed a market gap for sustainable products.<br /><br /><strong>Source in Document</strong>:<br />“...Our Q2 analysis of market trends conclusively demonstrated that <strong>no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products</strong>...”</blockquote>
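<p>A minimal sketch of this pattern: given the supporting passage identified by the summarizer, the front end wraps it in a <code>&lt;mark&gt;</code> element so the explanation is visually anchored to its source. The document text and span below are placeholders; in a real product the span offsets would come from the summarization model.</p>

```python
# A sketch of the "highlight reel": linking a summary point back to
# its source passage by wrapping the supporting span in <mark>.

import html

def highlight_source(document, supporting_span):
    """Escape the text for HTML, then mark the first occurrence of the span."""
    safe_doc = html.escape(document)
    safe_span = html.escape(supporting_span)
    return safe_doc.replace(safe_span, f"<mark>{safe_span}</mark>", 1)

doc = "Our Q2 analysis revealed a significant market gap for sustainable products."
span = "a significant market gap for sustainable products"
print(highlight_source(doc, span))
# → Our Q2 analysis revealed <mark>a significant market gap for sustainable products</mark>.
```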

<h3 id="pattern-4-the-push-and-pull-visual-for-value-based-explanations">Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)</h3>

<p>For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.</p>

<ul>
<li><strong>Heuristic</strong>: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.</li>
</ul>

<blockquote><strong>Example</strong>: An AI screening a candidate’s profile for a job.<br /><br />Why this candidate is a 75% match:<br /><br /><strong>Factors pushing the score up</strong>:<br /><ul><li>5+ Years UX Research Experience</li><li>Proficient in Python</li></ul><br /><strong>Factors pushing the score down</strong>:<br /><ul><li>No experience with B2B SaaS</li></ul></blockquote>
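<p>A lightweight way to prototype this pattern is a text “bar chart” of signed contributions, sorted so positive factors come first. The candidate factors and scores below are made up for the sketch; a production version would render real model contributions as color-coded bars.</p>

```python
# A sketch of the push-and-pull pattern: rendering signed factor
# contributions as a simple text bar chart. Factors are hypothetical.

def push_pull_chart(factors, scale=20):
    """factors: dict of label -> contribution in [-1, 1]."""
    lines = []
    for label, value in sorted(factors.items(), key=lambda kv: -kv[1]):
        bar = ("+" if value > 0 else "-") * round(abs(value) * scale)
        sign = "up  " if value > 0 else "down"
        lines.append(f"{sign} {label:<32} {bar}")
    return "\n".join(lines)

factors = {
    "5+ years UX research experience": 0.45,
    "Proficient in Python": 0.30,
    "No B2B SaaS experience": -0.25,
}
print(push_pull_chart(factors))
```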

<p>Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here, including the following:</p>

<ul>
<li><strong>Natural language explanations</strong>: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.</li>
<li><strong>Contextual explanations</strong>: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.</li>
<li><strong>Relevant visualizations</strong>: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.</li>
</ul>

<p><strong>A Note For the Front End</strong>: <em>Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with API design to efficiently retrieve explanation data, and performance implications (like the real-time generation of explanations for every user interaction) need careful planning to avoid latency.</em></p>

<h2 id="some-real-world-examples">Some Real-world Examples</h2>

<p><strong>UPS Capital’s DeliveryDefense</strong></p>

<p>UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their <a href="https://about.ups.com/us/en/our-stories/innovation-driven/ups-s-deliverydefense-pits-ai-against-criminals.html">DeliveryDefense</a> software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.</p>

<p><strong>Autonomous Vehicles</strong></p>

<p>These vehicles of the future will need to use <a href="https://online.hbs.edu/blog/post/ai-in-business">XAI to make safe, explainable decisions</a>. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but also a regulatory requirement to prove the safety and accountability of the AI system.</p>

<p><strong>IBM Watson Health (and its challenges)</strong></p>

<p>While often cited as a general example of AI in healthcare, it’s also a valuable case study for the <em>importance</em> of XAI. The <a href="https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html">failure of its Watson for Oncology project</a> highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.</p>

<h2 id="the-ux-researcher-s-role-pinpointing-and-validating-explanations">The UX Researcher’s Role: Pinpointing And Validating Explanations</h2>

<p>Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matter to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.</p>

<h3 id="informing-the-xai-strategy-what-to-explain">Informing the XAI Strategy (What to Explain)</h3>

<p>Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.</p>

<h4 id="mental-model-interviews-unpacking-user-perceptions-of-ai-systems">Mental Model Interviews: Unpacking User Perceptions Of AI Systems</h4>

<p>Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.</p>

<p>These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.</p>

<p>Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.</p>

<h4 id="ai-journey-mapping-a-deep-dive-into-user-trust-and-explainability">AI Journey Mapping: A Deep Dive Into User Trust And Explainability</h4>

<p>By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.</p>

<p>Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.</p>

<p>These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.</p>

<p>The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding <em>what</em> the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the <em>process</em> at critical moments. This means addressing:</p>

<ul>
<li><strong>Why a particular output was generated</strong>: Was it due to specific input data? A particular model architecture?</li>
<li><strong>What factors influenced the AI’s decision</strong>: Were certain features weighted more heavily?</li>
<li><strong>How the AI arrived at its conclusion</strong>: Can we offer a simplified, analogous explanation of its internal workings?</li>
<li><strong>What assumptions the AI made</strong>: Were there implicit understandings of the user’s intent or data that need to be surfaced?</li>
<li><strong>What the limitations of the AI are</strong>: Clearly communicating what the AI <em>cannot</em> do, or where its accuracy might waver, builds realistic expectations.</li>
</ul>

<p>AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.</p>

<p>Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.</p>

<h2 id="collaborating-on-the-design-how-to-explain-your-ai">Collaborating On The Design (How to Explain Your AI)</h2>

<p>Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.</p>

<p><strong>Targeted Usability &amp; Comprehension Testing</strong>: We can design research studies that specifically test the XAI components. We don’t just ask, “<em>Is this easy to use?</em>” We ask, “<em>After seeing this, can you tell me in your own words why the system recommended this product?</em>” or “<em>Show me what you would do to see if you could get a different result.</em>” The goal here is to measure comprehension and actionability, alongside usability.</p>

<p><strong>Measuring Trust Itself</strong>: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “<em>How much do you trust this recommendation?</em>” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.</p>
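That before/after comparison boils down to simple arithmetic on the ratings. A toy illustration in TypeScript, with invented 5-point Likert responses rather than real study data:

```typescript
// Mean of a set of 1–5 Likert ratings.
function mean(ratings: number[]): number {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// Positive result: the explanation moved trust up; negative: it hurt.
function trustShift(before: number[], after: number[]): number {
  return mean(after) - mean(before);
}

// Hypothetical session: the same five participants rated their trust
// before and after seeing the "Because" statement.
const shift = trustShift([2, 3, 3, 2, 4], [4, 4, 3, 4, 5]);
// mean rises from 2.8 to 4.0, so shift ≈ 1.2
```

With more participants, the same pre/post pairs feed naturally into a significance test, but even the raw shift is enough to flag explanations that are not earning their screen space.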

<p>This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.</p>

<h2 id="the-goldilocks-zone-of-explanation">The Goldilocks Zone Of Explanation</h2>

<p>A critical word of caution: it is possible to <em>over-explain</em>. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually <em>decrease</em> trust. The goal is not to make the user a data scientist.</p>

<p>One solution is <strong>progressive disclosure</strong>.</p>

<ol>
<li><strong>Start with the simple.</strong> Lead with a concise “Because” statement. For most users, this will be enough.</li>
<li><strong>Offer a path to detail.</strong> Provide a clear, low-friction link like “Learn More” or “See how this was determined.”</li>
<li><strong>Reveal the complexity.</strong> Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.</li>
</ol>

<p>This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.</p>

<p><strong>Start with the simple</strong>: “<em>Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.</em>”</p>

<p><strong>Offer a path to detail</strong>: Below that, a small link or button: “<em>Why is 72 degrees optimal?</em>”</p>

<p><strong>Reveal the complexity</strong>: Clicking that link could open a new screen showing:</p>

<ul>
<li>Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.</li>
<li>A visualization of energy consumption at different temperatures.</li>
<li>A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”</li>
</ul>
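The thermostat example above amounts to a tiny piece of UI state. A minimal sketch in TypeScript (the level names and display strings are my own, not from any real product):

```typescript
// The three layers of progressive disclosure, least to most detail.
const LEVELS = ["simple", "detail", "complex"] as const;
type Level = (typeof LEVELS)[number];

// Each "Learn more" click reveals one more layer; the deepest layer
// is sticky, so extra clicks never wrap back to the summary.
function nextLevel(current: Level): Level {
  const i = LEVELS.indexOf(current);
  return LEVELS[Math.min(i + 1, LEVELS.length - 1)];
}

// What the user sees at each layer, using the smart-thermostat example.
const RENDER: Record<Level, string> = {
  simple: "Heated to 72 degrees for energy savings and comfort.",
  detail: "Why is 72 degrees optimal?",
  complex: "Sliders, energy chart, and contributing factors.",
};

function render(level: Level): string {
  return RENDER[level];
}
```

Keeping the layers in one ordered list makes the pattern easy to test and hard to break: a new layer is one more entry, and the navigation logic never changes.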














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="449"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png"
			
			sizes="100vw"
			alt="An example of progressive disclosure in three stages"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 3: An example of progressive disclosure in three stages: the simple details with an option to click for more details, more details with the option to understand what will happen if the user changes the settings. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/3-example-progressive-disclosure.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It’s effective to combine multiple XAI methods, and the Goldilocks Zone of Explanation pattern, with its emphasis on progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.</p>

<p>For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.</p>
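To make the loan example concrete, here is a toy linear scoring model in TypeScript. The weights, threshold, and feature names are entirely invented for illustration; the point is that one model can feed all three layers: per-feature contributions for a push-and-pull chart, a decision, and a counterfactual check.

```typescript
// Toy applicant features and hand-picked weights; positive weights push
// toward approval, negative ones pull toward denial. Purely illustrative.
type Features = { income: number; debt: number; historyYears: number };
const WEIGHTS: Features = { income: 0.5, debt: -0.8, historyYears: 0.3 };
const THRESHOLD = 10;

// Per-feature contributions: raw material for a push-and-pull chart.
function contributions(f: Features): Features {
  return {
    income: f.income * WEIGHTS.income,
    debt: f.debt * WEIGHTS.debt,
    historyYears: f.historyYears * WEIGHTS.historyYears,
  };
}

function approved(f: Features): boolean {
  const c = contributions(f);
  return c.income + c.debt + c.historyYears >= THRESHOLD;
}

// Counterfactual: would a different income flip the decision?
function whatIfIncome(f: Features, newIncome: number): boolean {
  return approved({ ...f, income: newIncome });
}

const applicant: Features = { income: 14, debt: 5, historyYears: 4 };
// score: 14*0.5 + 5*(-0.8) + 4*0.3 = 7 - 4 + 1.2 = 4.2, below 10 → denied
// whatIfIncome(applicant, 26): 13 - 4 + 1.2 = 10.2 → approved
```

Real credit models are far more complex, but the explanation surfaces are the same: the `contributions` breakdown drives the chart, and `whatIfIncome` drives the “What-If” slider.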

<p>Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png"
			
			sizes="100vw"
			alt="An example of a fictitious AI business startup assistant"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: An example of a fictitious AI business startup assistant. Here, the AI presents the key factor in how the risk level was determined. When the user asks what would change if they manipulate that factor, the counterfactual statement is shown, confirming the impact of that specific factor in the model. (<a href='https://files.smashing.media/articles/beyond-black-box-practical-xai-ux-practitioners/4-ai-business-startup-assistant.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Ultimately, the <em>best</em> way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.</p>

<h2 id="xai-for-deep-reasoning-agents">XAI For Deep Reasoning Agents</h2>

<p>Some of the newest AI systems, known as <a href="https://learn.microsoft.com/en-us/microsoft-copilot-studio/faqs-reasoning">deep reasoning agents</a>, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.</p>

<p>The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.</p>
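That curation can be sketched in a few lines of TypeScript. It assumes each reasoning step arrives with some salience score; how that score is produced (by the agent itself or a heuristic) is left open here.

```typescript
interface Step {
  text: string;
  salience: number; // assumed 0–1 score of how much this step matters
}

// Collapsed view: the conclusion plus only the most salient step.
// The full chain stays available for users who opt in to expand it.
function collapseChain(conclusion: string, steps: Step[]): string[] {
  if (steps.length === 0) return [conclusion];
  const top = steps.reduce((a, b) => (b.salience > a.salience ? b : a));
  return [conclusion, `Key step: ${top.text}`];
}

// Hypothetical chain from an analysis task.
const chain: Step[] = [
  { text: "Parsed the quarterly figures", salience: 0.4 },
  { text: "Excluded one outlier store", salience: 0.9 },
  { text: "Averaged the remaining revenue", salience: 0.6 },
];
const collapsed = collapseChain(
  "Revenue grew 4% quarter over quarter.",
  chain
);
// collapsed surfaces the outlier exclusion, the step most worth checking
```

Surfacing the highest-salience step first gives skeptical users the fastest route to the part of the reasoning most likely to change their mind, without forcing everyone through the full chain.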

<h2 id="next-steps-empowering-your-xai-journey">Next Steps: Empowering Your XAI Journey</h2>

<p>Explainability is a fundamental pillar for building <strong>trustworthy and effective AI products</strong>. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.</p>

<p>To deepen your understanding and practical application, consider exploring resources like the <a href="https://research.ibm.com/blog/ai-explainability-360">AI Explainability 360 (AIX360) toolkit</a> from IBM Research or Google’s <a href="https://pair-code.github.io/what-if-tool/">What-If Tool</a>, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the <a href="https://responsibleaiforum.com">Responsible AI Forum</a> or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.</p>

<p>Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:</p>

<blockquote>“By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”</blockquote>

<p>Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Eleanor Hecks</author><title>The Accessibility Problem With Authentication Methods Like CAPTCHA</title><link>https://www.smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/</link><pubDate>Thu, 27 Nov 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/</guid><description>CAPTCHAs were meant to keep bots out, but too often, they lock people with disabilities out, too. From image classification to click-based tests, many “human checks” are anything but inclusive. There’s no universal solution, but understanding real user needs is where accessibility truly starts.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/" />
              <title>The Accessibility Problem With Authentication Methods Like CAPTCHA</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Accessibility Problem With Authentication Methods Like CAPTCHA</h1>
                  
                    
                    <address>Eleanor Hecks</address>
                  
                  <time datetime="2025-11-27T10:00:00&#43;00:00" class="op-published">2025-11-27T10:00:00+00:00</time>
                  <time datetime="2025-11-27T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) has become ingrained in internet browsing since personal computers gained momentum in the consumer electronics market. For nearly as long as people have been going online, web developers have sought ways to block spam bots.</p>

<p>The CAPTCHA service distinguishes between human and bot activity to keep bots out. Unfortunately, its methods are less than precise. In trying to protect humans, developers have made much of the web <strong>inaccessible</strong> to people with disabilities.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="533"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg"
			
			sizes="100vw"
			alt="‘Authentication failed’ error message"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://unsplash.com/photos/black-flat-screen-computer-monitor-bMvuh0YQQ68'>unsplash.com</a>. (<a href='https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/1-authentication-failed.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Authentication methods, such as CAPTCHA, typically use image classification, puzzles, audio samples, or click-based tests to determine whether the user is human. While the types of challenges are well-documented, their <strong>logic is not public knowledge</strong>. People can only guess what it takes to “prove” they are human.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="547"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png"
			
			sizes="100vw"
			alt="reCAPTCHA"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://support.google.com/recaptcha/?hl=en'>Google</a>. (<a href='https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/2-recaptcha.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="what-is-captcha">What Is CAPTCHA?</h2>

<p>A CAPTCHA <a href="https://medium.com/@leo.weiss33/a-reverse-turing-test-story-cf677b0ff282">is a reverse Turing test</a> that takes the form of a challenge-response test. For example, if it instructs users to “select all images with stairs,” they must pick the stairs out from railings, driveways, and crosswalks. Alternatively, they may be asked to enter the text they see, add the sum of dice faces, or complete a sliding puzzle.</p>

<p>Image-based CAPTCHAs are responsible for the most frustrating shared experiences internet users have &mdash; deciding whether to select a square when only a small sliver of the object in question is in it.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="549"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png"
			
			sizes="100vw"
			alt="Image-based CAPTCHA showing traffic lights"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://onezero.medium.com/why-captcha-pictures-are-so-unbearably-depressing-20679b8cf84a'>Medium</a>. (<a href='https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/3-captcha-picture.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Regardless of the method, a computer or algorithm ultimately determines whether the test-taker is human or machine. This authentication service has spawned many offshoots, including reCAPTCHA and hCaptcha. It has even led to the creation of entire companies, such as GeeTest and Arkose Labs. The Google-owned automated system reCAPTCHA requires users to click a checkbox labeled “I’m not a robot” for authentication. It runs an adaptive analysis in the background to assign a risk score. hCaptcha is an image-classification-based alternative.</p>

<p>Other authentication methods include multi-factor authentication (MFA), QR codes, temporary personal identification numbers (PINs), and biometrics. They do not follow the challenge-response formula, but serve fundamentally similar purposes.</p>

<p>These offshoots are intended to be better than the original, but they often fail to meet modern accessibility standards. Take hCaptcha, for instance, which uses a cookie to let you bypass the challenge-response test entirely. It’s a great idea in theory, but it doesn’t work in practice.</p>

<p>You’re supposed to receive a one-time code via email that you send to a specific number over SMS. Users <a href="https://fireborn.mataroa.blog/blog/hellcaptcha-accessibility-theater-at-its-worst/">report receiving endless error messages</a>, forcing them to complete the standard text CAPTCHA. This is only available if the site explicitly enabled it during configuration. If it is not set up, you must complete an image challenge that does not recognize screen readers.</p>

<p>Even when the initial process works, subsequent authentication relies on a third-party cross-site cookie, which most browsers block automatically. Also, the code expires after a short period, so you have to redo the entire process if it takes you too long to move on to the next step.</p>

<h2 id="why-do-teams-use-captcha-and-similar-authentication-methods">Why Do Teams Use CAPTCHA And Similar Authentication Methods?</h2>

<p>CAPTCHA is common because it is <strong>easy to set up</strong>. Developers can program it to appear, and it conducts the test automatically. This way, they can focus on more important matters while still preventing spam, fraud, and abuse. These tools are supposed to make it easier for humans to use the internet safely, but they often keep real people from logging in.</p>

<p>These tests result in a <strong>poor user experience</strong> overall. One study found users <a href="https://arxiv.org/abs/2311.10911">wasted over 819 million hours</a> on over 512 billion reCAPTCHA v2 sessions as of 2023. Despite it all, bots prevail. Machine learning models can solve text-based CAPTCHA within fractions of a second with over 97% accuracy.</p>

<p>A 2024 study on Google’s reCAPTCHA v2 &mdash; which is still widely used despite the rollout of reCAPTCHA v3 &mdash; found bots can solve image classification CAPTCHA <a href="https://arstechnica.com/ai/2024/09/ai-defeats-traffic-image-captcha-in-another-triumph-of-machine-over-man/">with up to 100% accuracy</a>, depending on the object they are tasked with identifying. The researchers used a free, open-source model, which means that bad actors could easily replicate their work.</p>

<h2 id="why-should-web-developers-stop-using-captcha">Why Should Web Developers Stop Using CAPTCHA?</h2>

<p>Authentication methods like CAPTCHA have an accessibility problem. Advances in machine learning have forced these services to grow increasingly complex. Even so, they are not foolproof. Bots get it right more often than people do. Research shows they can <a href="https://arxiv.org/abs/2307.12108">complete reCAPTCHA within 17.5 seconds</a>, achieving 85% accuracy. Humans take longer and are less accurate.</p>

<p>Many people fail CAPTCHA tests and have no idea what they did wrong. For example, a prompt instructing users to “select all squares with traffic lights” seems simple enough, but it gets complicated if a sliver of the pole is in another square. Should they select that box, or is that what an algorithm would do?</p>

<p>Although bot capabilities have grown by orders of magnitude, human abilities have remained the same. As tests get progressively more difficult, people feel less inclined to attempt them. One survey shows <a href="https://www.regpacks.com/blog/user-onboarding-mistakes/">nearly 59% of people</a> will stop using a product after several bad experiences. If authentication is too cumbersome or complex, they might stop using the website entirely.</p>

<p>People can fail these tests for various reasons, including technical ones. If they block third-party cookies, have a local proxy running, or have not updated their browser in a while, they may keep failing, regardless of how many times they try.</p>

<h2 id="authentication-issues-with-captcha">Authentication Issues With CAPTCHA</h2>

<p>Due to the reasons mentioned above, most types of CAPTCHA are inherently inaccessible. This is especially true for people with disabilities, as these challenge-response tests were not designed with their needs in mind. Some of the common issues include the following:</p>

<h3 id="issues-related-to-visuals-and-screen-reader-use">Issues Related To Visuals And Screen Reader Use</h3>

<p>Screen readers cannot read standard visual CAPTCHAs, such as the distorted text test, since the jumbled, contorted words are not machine-readable. The image classification and sliding puzzle methods are similarly inaccessible.</p>

<p>In one WebAIM survey conducted from 2023 to 2024, screen reader users <a href="https://webaim.org/projects/screenreadersurvey10/#problematic">agreed CAPTCHA was the most problematic</a> item, ranking it above ambiguous links, unexpected screen changes, missing alt text, inaccessible search, and lack of keyboard accessibility. Its spot at the top has remained largely unchanged for over a decade, illustrating its history of inaccessibility.</p>

<h3 id="issues-related-to-hearing-and-audio-processing">Issues Related To Hearing And Audio Processing</h3>

<p>Audio CAPTCHAs are relatively uncommon because web development best practices advise against autoplay audio and emphasize the importance of user controls. However, audio CAPTCHAs still exist. People who are hard of hearing or deaf may encounter a barrier when attempting these tests. Even with assistive technology, the intentional audio distortion and background noise make these samples challenging for individuals with auditory processing disorders to comprehend.</p>

<h3 id="issues-related-to-motor-and-dexterity">Issues Related To Motor And Dexterity</h3>

<p>Tests requiring motor and dexterity skills can be challenging for those with motor impairments or physical disabilities. For example, someone with a hand tremor may find the sliding puzzles difficult. Image classification tests that keep loading new images until none matching the criteria remain can pose a similar challenge.</p>

<h3 id="issues-related-to-cognition-and-language">Issues Related To Cognition And Language</h3>

<p>As CAPTCHAs become increasingly complex, some developers are turning to tests that require a combination of creative and critical thinking. Those that require users to solve a math problem or complete a puzzle can be challenging for people with dyslexia, dyscalculia, visual processing disorders, or cognitive impairments.</p>

<h2 id="why-assistive-technology-won-t-bridge-the-gap">Why Assistive Technology Won’t Bridge The Gap</h2>

<p>CAPTCHAs are intentionally designed for humans to interpret and solve, so assistive technology like screen readers and hands-free controls may be of little help. ReCAPTCHA in particular poses an issue because it analyzes background activity. If it flags the accessibility devices as bots, it will serve a potentially inaccessible CAPTCHA.</p>

<p>Even if this technology could bridge the gap, web developers shouldn’t expect it to. Industry standards dictate that they should follow universal design principles to make their websites as accessible and functional as possible.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aCAPTCHA%e2%80%99s%20accessibility%20issues%20could%20be%20forgiven%20if%20it%20were%20an%20effective%20security%20tool,%20but%20it%20is%20far%20from%20foolproof%20since%20bots%20get%20it%20right%20more%20than%20humans%20do.%20Why%20keep%20using%20a%20method%20that%20is%20ineffective%20and%20creates%20barriers%20for%20people%20with%20disabilities?%0a&url=https://smashingmagazine.com%2f2025%2f11%2faccessibility-problem-authentication-methods-captcha%2f">
      
CAPTCHA’s accessibility issues could be forgiven if it were an effective security tool, but it is far from foolproof since bots get it right more than humans do. Why keep using a method that is ineffective and creates barriers for people with disabilities?

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>There are better alternatives.</p>

<h2 id="principles-for-accessible-authentication">Principles For Accessible Authentication</h2>

<p>The idea that humans should consistently outperform algorithms is outdated. Better authentication methods exist, such as <strong>multifactor authentication (MFA)</strong>. The two-factor authentication market will be <a href="https://designerly.com/hacked-wordpress-site/">worth an estimated $26.7 billion</a> by 2027, underscoring its popularity. MFA is more effective than a CAPTCHA because it <strong>prevents unauthorized access, even when an attacker holds valid credentials</strong>.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg"
			
			sizes="100vw"
			alt="Multifactor authentication"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://unsplash.com/photos/a-screenshot-of-a-phone-RMIsZlv8qv4'>unsplash.com</a>. (<a href='https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/4-multifactor-authentication.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Ensure your MFA technique is accessible. Instead of having website visitors transcribe complex codes, you should send push notifications or SMS messages. Rely on the verification code autofill to automatically capture and enter the code. Alternatively, you can introduce a “remember this device” feature to skip authentication on trusted devices.</p>

<p>Apple’s two-factor authentication approach is designed this way. A trusted device automatically displays a six-digit verification code, so users do not have to search for it. When prompted, iPhone users can tap the suggestion that appears above their mobile keyboard to autofill the code.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="546"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png"
			
			sizes="100vw"
			alt=""
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://support.apple.com/en-us/102660'>Apple</a>. (<a href='https://files.smashing.media/articles/accessibility-problem-authentication-methods-captcha/5-apple-two-factor-authentication.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Single sign-on</strong> is another option. This session and user authentication service allows people to log in to multiple websites or applications with a single set of login credentials, minimizing the need for repeated identity verification.</p>

<p><strong>One-time-use “magic links”</strong> are an excellent alternative to reCAPTCHA and temporary PINs. Rather than remembering a code or solving a puzzle, the user clicks on a button. Avoid imposing deadlines because, according to WCAG Success Criterion 2.2.3, users <a href="https://www.w3.org/WAI/WCAG22/Understanding/no-timing.html">should not face time limits</a> since those with disabilities may need more time to complete specific actions.</p>

<p>Alternatively, you could use Cloudflare Turnstile. It authenticates <a href="https://developers.cloudflare.com/turnstile/">without showing a CAPTCHA</a>, and most people never even have to check a box or hit a button. The software works by issuing a small JavaScript challenge behind the scenes to automatically differentiate between bots and humans. Cloudflare Turnstile can be embedded into any website, making it an excellent alternative to standard classification tasks.</p>

<h2 id="testing-and-evaluation-of-accessible-authentication-designs">Testing And Evaluation Of Accessible Authentication Designs</h2>

<p>Testing and evaluating your accessible alternative authentication methods is essential. Many designs look good on paper but do not work in practice. If possible, gather feedback from actual users. An open beta may be an effective way to maximize visibility.</p>

<blockquote>Remember, general accessibility considerations do not only apply to people with disabilities. They also include those who are neurodivergent, lack access to a mobile device, or use assistive technology. Ensure your alternative designs consider these individuals.</blockquote> 

<p>Realistically, you cannot create a perfect system since everyone is unique. Many people struggle to follow multistep processes, solve equations, process complex instructions, or remember passcodes. While universal web design principles can improve flexibility, no single solution can meet everyone’s needs.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aRegardless%20of%20the%20authentication%20technique%20you%20use,%20you%20should%20present%20users%20with%20multiple%20authentication%20options%20upfront.%20They%20know%20their%20capabilities%20best,%20so%20let%20them%20decide%20what%20to%20use%20instead%20of%20trying%20to%20over-engineer%20a%20solution%20that%20works%20for%20every%20edge%20case.%0a&url=https://smashingmagazine.com%2f2025%2f11%2faccessibility-problem-authentication-methods-captcha%2f">
      
Regardless of the authentication technique you use, you should present users with multiple authentication options upfront. They know their capabilities best, so let them decide what to use instead of trying to over-engineer a solution that works for every edge case.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="address-the-accessibility-problem-with-design-changes">Address The Accessibility Problem With Design Changes</h2>

<p>A person with hand tremors may be unable to complete a sliding puzzle, while someone with an audio processing disorder may have trouble with distorted audio samples. However, you cannot simply replace CAPTCHAs with alternatives because they are often equally inaccessible.</p>

<p>QR codes, for example, may be difficult to scan for those with reduced fine motor control. People who are visually impaired may struggle to find the code on the screen. Similarly, biometrics can pose an issue for people with facial deformities or a limited range of motion. Addressing the accessibility problem requires <strong>creative thinking</strong>.</p>

<p>You can start by visiting the <a href="https://www.w3.org/WAI/tutorials/">Web Accessibility Initiative’s accessibility tutorials</a> for developers to better understand universal design. Although these tutorials focus more on content than authentication, you can still use them to your advantage. The W3C Group Draft Note <a href="https://www.w3.org/TR/turingtest/">on the Inaccessibility of CAPTCHA</a> provides more relevant guidance.</p>

<p>Getting started is as easy as researching <strong>best practices</strong>. Understanding the basics is essential because there is no universal solution for accessible web design. If you want to optimize accessibility, consider sourcing feedback from the people who actually visit your website.</p>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://link.springer.com/book/10.1007/978-3-030-29345-1">The CAPTCHA: Perspectives and Challenges</a>,” Darko Brodić and Alessia Amelio</li>
<li>“<a href="https://www.smashingmagazine.com/2023/08/designing-accessible-text-over-images-part1/">Designing Accessible Text Over Images: Best Practices, Techniques, And Resources</a>,” Hannah Milan</li>
<li>“<a href="https://www.smashingmagazine.com/2011/03/in-search-of-the-perfect-captcha/">In Search Of The Best CAPTCHA</a>,” David Bushell</li>
<li>“<a href="https://www.smashingmagazine.com/2025/05/wcag-3-proposed-scoring-model-shift-accessibility-evaluation/">WCAG 3.0’s Proposed Scoring Model: A Shift in Accessibility Evaluation</a>,” Mikhail Prosmitskiy</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>Designing For Stress And Emergency</title><link>https://www.smashingmagazine.com/2025/11/designing-for-stress-emergency/</link><pubDate>Mon, 24 Nov 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/designing-for-stress-emergency/</guid><description>Practical guidelines on designing time-critical products that prevent errors and improve accuracy. Part of the &lt;a href="https://measure-ux.com/">Measure UX &amp;amp; Design Impact&lt;/a> (use the code 🎟 &lt;code>IMPACT&lt;/code> to save 20% off today). With a &lt;a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">live UX training&lt;/a> starting next week.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/designing-for-stress-emergency/" />
              <title>Designing For Stress And Emergency</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Designing For Stress And Emergency</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2025-11-24T13:00:00&#43;00:00" class="op-published">2025-11-24T13:00:00+00:00</time>
                  <time datetime="2025-11-24T13:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>No design exists in isolation. As designers, we often imagine specific situations in which people will use our product. Those situations might indeed be quite common, but there will also be others: <strong>urgent, frustrating, stressful situations</strong>. And they are the ones that we rarely account for.</p>

<p>So how do we account for such situations? How can we help people <strong>use our products while coping with stress</strong> &mdash; without adding to their cognitive load? Let’s take a closer look.</p>

<h2 id="study-where-your-product-fits-into-people-s-lives">Study Where Your Product Fits Into People’s Lives</h2>

<p>When designing digital products, sometimes we get a bit too attached to our <strong>shiny new features and flows</strong> &mdash; often forgetting the messy reality in which these features and flows have to neatly fit. And often that reality means dozens of other products, hundreds of other tabs, and thousands of other emails.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="600"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg"
			
			sizes="100vw"
			alt="An example of a split screen with two power consumption dashboards on a 22-inch screen."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Design never exists in isolation. It must fit the user’s context and their expectations to do its job. (Image source: <a href='https://seabits.com/engine-and-power-dashboards/'>Engine And Power Dashboard</a>) (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/1-designing-for-stress-and-emergency.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If your customers have to use a <strong>slightly older machine</strong>, with a <em>smallish</em> 22&rdquo; screen and a lot of background noise, they might use your product differently than you might have imagined, e.g., splitting the screen into halves to see both views at the same time (as displayed above).</p>

<p>Chances are high that our customers will use our product <strong>while doing something else</strong>, often with very little motivation, very little patience, plenty of urgent (and way more important) problems, and an unhealthy dose of stress. And that’s where our product must do its job well.</p>

<h2 id="what-is-stress">What Is Stress?</h2>

<p>What exactly do we mean when we talk about “stress”? As H Locke noted, stress is the <strong>body’s response to a situation it cannot handle</strong>. There is a mismatch between what people can control, their own skills, and the challenge in front of them.</p>

<p>If the situation seems unmanageable and the goal they want to achieve moves further away, it creates an enormous sense of <strong>failure</strong>. It can be extremely frustrating and demotivating.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://alypain.com/5-apps-to-reduce-stress-in-teens/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="804"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg"
			
			sizes="100vw"
			alt="SOS Emergency System"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Stress has many levels. The key is not to let it spiral into dangerous zones. (Image source: <a href='https://alypain.com/5-apps-to-reduce-stress-in-teens/'>Alypain</a>) (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/2-designing-for-stress-and-emergency.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Some failures have a local scope, but many have a <strong>far-reaching impact</strong>. Many people can’t choose the products they have to use for work, so when a tool fails repeatedly, causes frustration, or is unreliable, it affects the worker, the work, the colleagues, and processes within the organization. <strong>Fragility has a high cost</strong> &mdash; and so does frustration.</p>

<h2 id="how-stress-influences-user-interactions">How Stress Influences User Interactions</h2>

<p>It’s not a big surprise: stress disrupts attention, memory, cognition, and decision-making. It makes it difficult to prioritize and draw logical conclusions. In times of stress, we <strong>rely on fast, intuitive judgments</strong>, not reasoning. Typically, it leads to instinctive responses based on established habits.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="535"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png"
			
			sizes="100vw"
			alt="Designing For Stress And Emergency"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Overwhelming products can add to the cognitive load and lead to mistakes. However, people also get used to any products once they’ve used them long enough. (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/3-designing-for-stress-and-emergency.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>When users are in an emergency, they experience <em>cognitive tunneling</em>: a state in which their peripheral vision narrows, reading comprehension drops, fine motor skills deteriorate, and patience drops sharply. Under pressure, some people make decisions hastily, while others get entirely paralyzed. Either way is a likely <strong>path to mistakes</strong> &mdash; often irreversible ones, and often made without time for extensive deliberation.</p>

<p>Ideally, these decisions would be made way ahead of time &mdash; and then suggested when needed. But in practice, it’s not always possible. As it turns out, a good way to help people deal with stress is by <strong>providing order</strong> around how they manage it.</p>

<h2 id="single-tasking-instead-of-multi-tasking">Single-Tasking Instead Of Multi-Tasking</h2>

<p><a href="https://consensus.app/search/how-effective-are-people-at-multi-tasking-for-work/9GEx-KC0S8-OhSEgXClnrA/">People can’t <em>really</em> multi-task</a>, especially in very stressful situations or emergencies. With a big chunk of work in front of them, people need some order to make progress reliably. That’s why simpler pages usually work better than one big, complex page.</p>

<p>Order means giving users a <strong>clear plan of action</strong> to complete a task. No distractions, no unnecessary navigation. We ask simple questions and <strong>prompt simple actions</strong>, one after another, one thing at a time.</p>














<figure class="
  
  
  ">
  
    <a href="https://designnotes.blog.gov.uk/2017/04/04/weve-published-the-task-list-pattern/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="607"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png"
			
			sizes="100vw"
			alt="Task list pattern by Gov UK"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Gov.uk’s task list pattern gives users a clear plan of action, one step at a time. (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/4-designing-for-stress-and-emergency.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>An example of the plan is the <a href="https://designnotes.blog.gov.uk/2017/04/04/weve-published-the-task-list-pattern/">Task List Pattern</a>, invented by fine folks at Gov.uk. We break a task into a <strong>sequence of sub-tasks</strong>, describe them with actionable labels, assign statuses, and track progress.</p>
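<p>In data terms, the pattern is little more than a list of labelled sub-tasks with statuses. A rough JavaScript sketch (the labels and status strings are illustrative, loosely modelled on Gov.uk’s wording):</p>

```javascript
// Each sub-task gets an actionable label and a status, as in the task list pattern.
const tasks = [
  { label: "Check before you start", status: "completed" },
  { label: "Prepare application", status: "in progress" },
  { label: "Apply", status: "cannot start yet" },
];

// Progress is simply the share of completed sub-tasks.
function progress(list) {
  const done = list.filter((t) => t.status === "completed").length;
  return `${done} of ${list.length} tasks completed`;
}

// The next actionable item is the first one that is neither completed nor blocked.
function nextTask(list) {
  return list.find(
    (t) => t.status !== "completed" && t.status !== "cannot start yet"
  );
}
```

<p>Rendering this structure as one sub-task per page, with the tracked progress always visible, is what gives stressed users a sense of order.</p>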

<p>To support accuracy, we revise <strong>default settings</strong>, values, presets, and actions. Also, the <strong>order of actions</strong> and buttons matters, so we put high-priority things first to make them easier to find. Then we add built-in safeguards (e.g., Undo feature) to prevent irreversible errors.</p>
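<p>An Undo safeguard does not need much machinery. Here is a minimal sketch of an undoable value store (the API and names are hypothetical):</p>

```javascript
// Minimal undo safeguard: record the previous value before each change,
// so a destructive action can be reverted instead of confirmed up front.
function createUndoableStore(initial) {
  let state = initial;
  const history = [];
  return {
    get: () => state,
    set(next) {
      history.push(state); // remember the old value before overwriting
      state = next;
    },
    undo() {
      if (history.length > 0) state = history.pop();
      return state;
    },
  };
}
```

<p>Instead of blocking users with an “Are you sure?” dialog, the action proceeds immediately and stays reversible, which suits stressed users better than one more decision point.</p>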

<div class="partners__lead-place"></div>

<h2 id="supporting-in-emergencies">Supporting In Emergencies</h2>

<p>The most effective support during emergencies is helping people deal with the situation in a well-defined and effective way. That means preparing ahead of time and designing an <strong>emergency mode</strong>, e.g., one that activates instant alerts to emergency contacts, distributes pre-assigned tasks, and establishes a line of communication.</p>














<figure class="
  
  
  ">
  
    <a href="https://www.redcross.org.au/emergencies/prepare/get-prepared-app/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="851"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg"
			
			sizes="100vw"
			alt="Emergency plan by Rediplan App"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://www.redcross.org.au/emergencies/prepare/get-prepared-app/'>Rediplan App</a> to prepare and act in case of emergencies. (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/5-designing-for-stress-and-emergency.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><a href="https://www.redcross.org.au/emergencies/prepare/get-prepared-app/">Rediplan App</a> by Australian Red Cross is an emergency plan companion that encourages citizens to <strong>prepare their documents and belongings</strong> with a few checklists and actions &mdash; including key contracts, meeting places, and medical information, all in one place.</p>

<h2 id="just-enough-friction">Just Enough Friction</h2>

<p>Not all stress is equally harmful, though. As <a href="https://www.kryshiggins.com/optimal-onboarding-zone/">Krystal Higgins points out</a>, if there is not enough friction during onboarding and the experience is <strong>too passive</strong>, with users hand-held even through the most basic tasks, they may never realize the <strong>personal value</strong> they gain from the experience and, ultimately, lose interest.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.kryshiggins.com/optimal-onboarding-zone/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="459"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png"
			
			sizes="100vw"
			alt="Bell Curve For Optimal User Onboarding"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      We need to find the sweet spot between value realization and friction to create experiences that keep users engaged. (Image source: <a href='https://www.kryshiggins.com/optimal-onboarding-zone/'>Krystal Higgins</a>) (<a href='https://files.smashing.media/articles/designing-for-stress-and-emergency/6-designing-for-stress-and-emergency.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="design-and-test-for-stress-cases">Design And Test For Stress Cases</h2>

<p><strong>Stress cases aren’t edge cases</strong>. We can’t predict the emotional state in which a user comes to our site or uses our product. A person looking for specific information on a hospital website or visiting a debt management website, for example, is most likely already stressed. Now, if the interface is overwhelming, it will only add to their cognitive load.</p>

<p>Stress-testing your product is critical to prevent this from happening. It’s useful to set up an annual day to <strong>stress test your product</strong> and refine emergency responses. It could be as simple as running <a href="https://contentdesign.intuit.com/foundations/content-testing/">content testing</a>, or conducting tests in a real, noisy, busy environment where users actually work, at peak times.</p>

<p>And in case of emergencies, we need to check if fallbacks work as expected and if the current UX of the product helps people manage failures and exceptional situations well enough.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>Emergencies <em>will</em> happen eventually &mdash; it’s just a matter of time. With good design, we can help <strong>mitigate risk and control damage</strong>, and make it hard to make irreversible mistakes. At its heart, that’s what good UX is exceptionally good at.</p>

<h2 id="key-takeaways">Key Takeaways</h2>

<p>People can’t multitask, especially in very stressful situations.</p>

<ul>
<li>Stress <strong>disrupts attention</strong>, memory, cognition, decision-making.</li>
<li>Also, it’s <strong>difficult to prioritize</strong> and draw logical conclusions.</li>
<li>Under stress, we rely on fast, intuitive judgments &mdash; not reasoning.</li>
<li>It leads to instinctive responses based on <strong>established habits</strong>.</li>
</ul>

<p>Goal: Design flows that support focus and high accuracy.</p>

<ul>
<li>Start with better default settings, values, presets, and actions.</li>
<li><strong>High-priority first</strong>: order of actions and buttons matters.</li>
<li>Break complex tasks down into a series of simple steps (10s–30s each).</li>
<li>Add built-in <strong>safeguards</strong> to prevent irreversible errors (Undo).</li>
</ul>

<p>Shift users to single-tasking: ask for one thing at a time.</p>

<ul>
<li><strong>Simpler pages</strong> might work better than one complex page.</li>
<li>Suggest a <strong>step-by-step plan of action</strong> to follow along.</li>
<li>Consider, design, and test flows for emergency responses ahead of time.</li>
<li>Add emergency mode for <strong>instant alerts</strong> and task assignments.</li>
</ul>

<h2 id="meet-how-to-measure-ux-and-design-impact">Meet “How To Measure UX And Design Impact”</h2>

<p>You can find more details on <strong>UX Strategy</strong> in 🪴&nbsp;<a href="https://measure-ux.com/"><strong>Measure UX &amp; Design Impact</strong></a> (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 <code>IMPACT</code> to save 20% off today. <a href="https://measure-ux.com/">Jump to the details</a>.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="article__image">
    <a href="https://measure-ux.com/" title="How To Measure UX and Design Impact, with Vitaly Friedman">
    <img width="900" height="466" style="border-radius: 11px" src="https://files.smashing.media/articles/ux-metrics-video-course-release/measure-ux-and-design-impact-course.png" alt="How to Measure UX and Design Impact, with Vitaly Friedman.">
    </a>
</figure>

<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>495<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>799<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3951439" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">25 video lessons (8h) + <a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>250<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3950630" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">25 video lessons (8h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3082557?price_id=3951421">UX Bundle with 2 video courses.</a></p></div><span></span></div></div>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li>“<a href="https://medium.com/design-bootcamp/ux-case-study-standby-17000867133c">Designing The SOS Emergency System</a>”, by Ritik Jayy</li>
<li>“<a href="https://medium.com/net-magazine/designing-for-crisis-9cab10b4c519">Designing For Crisis</a>”, by Eric Meyer</li>
<li>“<a href="https://medium.com/designing-services/designing-for-stressed-out-users-part-1-4489793dbe41">Designing For Stressed Out Users</a>” (Series), by H Locke</li>
<li><a href="https://uxpodcast.com/293-life-death-design-katie-swindler/">Designing For Stress</a> (Podcast), by Katie Swindler</li>
<li><a href="https://www.linkedin.com/posts/vitalyfriedman_ux-design-activity-7167433494200066048-trWE">Designing For Edge Cases and Exceptions</a>, by yours truly</li>
<li><a href="https://dfrlbook.com/"><em>Design For Real Life</em></a>, by Sara Wachter-Boettcher, Eric Mayer</li>
<li>“<a href="https://www.kryshiggins.com/optimal-onboarding-zone/">Optimal Stress Levels For Onboarding</a>”, by Krystal Higgins</li>
</ul>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2025/09/how-minimize-environmental-impact-website/">How To Minimize The Environmental Impact Of Your Website</a>”, James Chudley</li>
<li>“<a href="https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/">AI In UX: Achieve More With Less</a>”, Paul Boag</li>
<li>“<a href="https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/">How To Make Your UX Research Hard To Ignore</a>”, Vitaly Friedman</li>
<li>“<a href="https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/">From Prompt To Partner: Designing Your Custom AI Assistant</a>,” Lyndon Cerejo</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>Six Key Components of UX Strategy</title><link>https://www.smashingmagazine.com/2025/11/practical-guide-ux-strategy/</link><pubDate>Wed, 05 Nov 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/11/practical-guide-ux-strategy/</guid><description>Let’s dive into the building blocks of UX strategy and see how it speaks the language of product and business strategy to create user value while achieving company goals. Part of the &lt;a href="https://measure-ux.com/">Measure UX &amp;amp; Design Impact&lt;/a> (use the code 🎟 &lt;code>IMPACT&lt;/code> to save 20% off today).</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/11/practical-guide-ux-strategy/" />
              <title>Six Key Components of UX Strategy</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Six Key Components of UX Strategy</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2025-11-05T13:00:00&#43;00:00" class="op-published">2025-11-05T13:00:00+00:00</time>
                  <time datetime="2025-11-05T13:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>For years, “UX strategy” felt like a confusing, ambiguous, and overloaded term. To me, it was some sort of roadmap or a “grand vision”, with a few business decisions attached to it. And looking back now, I realize that I was wrong all along.</p>

<p>UX Strategy isn’t a goal; it’s a <strong>journey towards that goal</strong>. A journey connecting where UX is today with a desired future state of UX. And as such, it guides our actions and decisions, things we do and don’t do. And its goal is very simple: to <strong>maximize our chances of success</strong> while considering risks, bottlenecks, and anything that might endanger the project.</p>

<p>Let’s explore the <strong>components of UX strategy</strong>, and how it works with product strategy and business strategy to deliver user value and meet business goals.</p>

<h2 id="strategy-vs-goals-vs-plans">Strategy vs. Goals vs. Plans</h2>

<p>When we speak about strategy, we often speak about planning and goals &mdash; but they are actually quite different. While <em>strategy</em> answers <strong>“what” we’re doing and “why”</strong>, <em>planning</em> is about “how” and “when” we’ll get it done. And the <em>goal</em> is merely a desired outcome of that entire journey.</p>

<ul>
<li><strong>Goals</strong> establish a desired future outcome,</li>
<li>That outcome typically represents a problem to solve,</li>
<li><strong>Strategy</strong> shows a high-level solution for that problem,</li>
<li>A <strong>plan</strong> is a detailed set of low-level steps for getting the solution done.</li>
</ul>














<figure class="
  
  
  ">
  
    <a href="https://www.linkedin.com/posts/alex-m-h-smith_please-tell-me-you-arent-making-this-mistake-activity-7364616097272143872-SKgz">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="800"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg"
			
			sizes="100vw"
			alt="A diagram showing that a goal is a destination, while a strategy is the path to get there."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Good strategy isn't a goal or a big objective; it's a solution to a problem posed by a goal. Via <a href='https://www.linkedin.com/posts/alex-m-h-smith_please-tell-me-you-arent-making-this-mistake-activity-7364616097272143872-SKgz'>Alex H Smith</a>. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/1-strategy-vs-goal.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A strong strategy requires making conscious, and oftentimes tough, decisions about what we will do &mdash; and just as importantly, <strong>what we will not do</strong>, and why.</p>

<h2 id="business-strategy">Business Strategy</h2>

<p>UX strategy doesn’t live in isolation. It must inform and support product strategy and be aligned with business strategy. All of these terms are often slightly confusing and overloaded, so let&rsquo;s clear them up.</p>

<p>At the highest level, <strong>business strategy</strong> is about the distinct choices executives make to set the company apart from its competitors. They shape the company’s positioning, objectives, and (most importantly!) <strong>competitive advantage</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.strategyzer.com/library/the-business-model-canvas">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="579"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg"
			
			sizes="100vw"
			alt="The Business Model Canvas representing key business considerations for a sustainable business."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      We shouldn’t underestimate our impact. UX affects many segments of the <a href='https://www.strategyzer.com/library/the-business-model-canvas'>Business Model Canvas</a>: user segments, relationships, channels, activities, revenue streams. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/2-business-model-canvas.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Typically, this advantage is achieved in <strong>two ways</strong>: through lower prices (cost leadership) or through <strong>differentiation</strong>. The latter isn&rsquo;t about being <em>different</em>, but rather about <strong>being perceived differently</strong> by the target audience. And that’s exactly where UX impact steps in.</p>

<p>In short, business strategy is:</p>

<ul>
<li><strong>A top-line vision</strong>, basis for core offers,</li>
<li><strong>Shapes positioning</strong>, goals, competitive advantage,</li>
<li><strong>Must always adapt</strong> to the market to keep a competitive advantage.</li>
</ul>

<h2 id="product-strategy">Product Strategy</h2>

<p>Product strategy is how a high-level business direction is translated into a unique positioning of a product. It defines <strong>what the product is, who its users are</strong>, and how it will contribute to the business’s goals. It’s also how we bring a product to market, drive growth, and achieve product-market fit.</p>

<p>In short, product strategy is:</p>

<ul>
<li><strong>Unique positioning</strong> and value of a product,</li>
<li><strong>How to establish</strong> and keep a product in the marketplace,</li>
<li><strong>How to keep competitive advantage</strong> of the product.</li>
</ul>

<h2 id="ux-strategy">UX Strategy</h2>

<p>UX strategy is about <strong>shaping and delivering</strong> product value through UX. Good UX strategy always stems from UX research and responds to business needs. It establishes what to focus on, what our high-value actions are, how we’ll measure success, and &mdash; quite importantly &mdash; what <strong>risks</strong> we need to mitigate.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="451"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png"
			
			sizes="100vw"
			alt="Frequent risks"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Every project has plenty of risks that endanger it. Unknown dependencies are one of them. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/8-risks-ux-strategy.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Most importantly, it’s <strong>not a fixed plan</strong> or a set of deliverables; it’s a guide that informs our actions, but also must be prepared to change when things change.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://www.nngroup.com/articles/ux-strategy/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="564"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png"
			
			sizes="100vw"
			alt="A diagram illustrating the components of a UX Strategy: Vision, Goals, and a Plan."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Components of UX Strategy are Vision, Goals, and a Plan. Tactical steps are part of the execution. (Image source: <a href='https://www.nngroup.com/articles/ux-strategy/'>nngroup.com</a>) (<a href='https://files.smashing.media/articles/key-components-ux-strategy/3-components-ux-strategy.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In short, UX strategy is:</p>

<ul>
<li>How we shape and deliver <strong>product value</strong> through UX,</li>
<li><strong>Priorities</strong>, focus + why, actions, metrics, risks,</li>
<li><strong>Isn’t a roadmap</strong>, an intention, or a set of deliverables.</li>
</ul>

<h2 id="six-key-components-of-ux-strategy">Six Key Components of UX Strategy</h2>

<p>The impact of good UX typically lives in <strong>differentiation</strong> mentioned above. Again, it’s not about how “different” our experience is, but the unique perceived value that users associate with it. And that value is a matter of a clear, frictionless, accessible, fast, and reliable experience wrapped into the product.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg"
			
			sizes="100vw"
			alt="UX strategy covers a plan of action, priorities, when to start working on it, and what it looks like."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UX strategy works best in discovery, and is useful when risk and uncertainty are high. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/4-what-ux-strategy.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I always try to include <strong>6 key components</strong> in any strategic UX work so we don’t end up following wrong assumptions that won’t bring any impact:</p>

<ol>
<li><strong>Target goal</strong><br />
The desired, improved future state of UX.</li>
<li><strong>User segments</strong><br />
Primary users that we are considering.</li>
<li><strong>Priorities</strong><br />
What we will and, crucially, what we will not do, and why.</li>
<li><strong>High-value actions</strong><br />
How we drive value and meet user and business needs.</li>
<li><strong>Feasibility</strong><br />
Realistic assessment of people, processes, and resources.</li>
<li><strong>Risks</strong><br />
Bottlenecks, blockers, legacy constraints, big unknowns.</li>
</ol>

<p>It’s worth noting that it’s always dangerous to design a product with <strong>everybody in mind</strong>. As Jaime Levy noted, by being very broad too early, we often reduce the impact of our design and messaging. It’s typically better to start with a specific, <strong>well-defined user segment</strong> and then expand, rather than the other way around.</p>

<h2 id="practical-example-by-alin-buda">Practical Example (by Alin Buda)</h2>

<p>UX strategy doesn’t have to be a big <strong>40-page long PDF report</strong> or a Keynote presentation. A while back, Alin Buda kindly <a href="https://www.linkedin.com/posts/vitalyfriedman_ux-design-activity-7313819542655299585-ya-L?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAACDcgQBa_vsk5breYKwZAgyIhsHtJaFbL8">left a comment</a> on one of my LinkedIn posts, giving a great example of what a <strong>concise UX strategy</strong> could look like:</p>

<blockquote><strong>UX Strategy (for Q4)</strong><br /><br />Our UX strategy is to focus on <strong>high-friction workflows</strong> for expert users, not casual usability improvements. Why? Because retention in this space is driven by power-user efficiency, and that aligns with our growth model.<br /><br />To succeed, we’ll design <strong>workflow accelerators</strong> and decision-support tools that will reduce time-on-task. As a part of it, we’ll need to redesign legacy flows in the Crux system. We <strong>won’t prioritize</strong> UI refinements or onboarding tours, because it doesn’t move the needle in this context.</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="449"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg"
			
			sizes="100vw"
			alt="UX Strategy example, highlighting individual key points to cover."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      An example of UX strategy. It doesn't have to be a long PDF report. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/5-ux-strategy-example.jpg'>Large preview</a>)
    </figcaption>
  
</figure>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="448"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg"
			
			sizes="100vw"
			alt="UX Strategy example, highlighting individual key points to cover."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UX strategy works best in discovery, and is useful when risk and uncertainty are high. (<a href='https://files.smashing.media/articles/key-components-ux-strategy/6-ux-strategy-example.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What I like most about this example is just how concise and clear it is. Getting to this level of clarity takes quite a bit of time, but it creates a very precise overview of what we do, what we don&rsquo;t do, what we focus on, and how we <strong>drive value</strong>.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>The best path to make a strong case with senior leadership is to frame your UX work as a direct <strong>contributor to differentiation</strong>. This isn’t just about making things look different; it’s about enhancing the perceived value.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://jamiemill.com/blog/elements-of-product-design/">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png"
			
			sizes="100vw"
			alt="A diagram showing the elements of product design, from abstract reality to the concrete surface."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Elements of Product Design, starting from mapping reality into the problem space. That's the critical part, and a cornerstone of UX Strategy. (Image source: <a href='https://jamiemill.com/blog/elements-of-product-design/'>Jamie Mill</a>) (<a href='https://files.smashing.media/articles/key-components-ux-strategy/7-elements-product-design.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A good strategy ties UX improvements to <strong>measurable business outcomes</strong>. It doesn’t speak about design patterns, consistency, or neatly organized components. Instead, it speaks the language of product and business strategy: OKRs, costs, revenue, business metrics, and objectives.</p>

<p>Design <strong>can’t succeed without a strategy</strong>. In the wise words of Sun Tzu, strategy without tactics is the slowest route to victory. And tactics without strategy are the noise before defeat.</p>

<h2 id="meet-how-to-measure-ux-and-design-impact">Meet “How To Measure UX And Design Impact”</h2>

<p>You can find more details on <strong>UX Strategy</strong> in 🪴&nbsp;<a href="https://measure-ux.com/"><strong>Measure UX &amp; Design Impact</strong></a> (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 <code>IMPACT</code> to save 20% off today. <a href="https://measure-ux.com/">Jump to the details</a>.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="article__image">
    <a href="https://measure-ux.com/" title="How To Measure UX and Design Impact, with Vitaly Friedman">
    <img width="900" height="466" style="border-radius: 11px" src="https://files.smashing.media/articles/ux-metrics-video-course-release/measure-ux-and-design-impact-course.png" alt="How to Measure UX and Design Impact, with Vitaly Friedman.">
    </a>
</figure>

<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>495<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>799<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3951439" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">25 video lessons (8h) + <a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>250<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3950630" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">25 video lessons (8h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3082557?price_id=3951421">UX Bundle with 2 video courses.</a></p></div><span></span></div></div>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li>“<a href="https://www.nngroup.com/articles/ux-strategy/">UX Strategy: Definition and Components</a>”, Sarah Gibbons, Anna Kaley</li>
<li>“<a href="https://www.nngroup.com/articles/strategy-study-guide/">UX Strategy: Study Guide</a>”, Sarah Gibbons, Anna Kaley</li>
<li><a href="https://www.youtube.com/watch?v=-6rFBXVMBTs">What Goes Into a Proactive UX Strategy</a> (video), Jared Spool</li>
<li>“<a href="https://dovetail.com/ux/ux-strategy/">How To Develop An Effective UX Strategy</a>”, Chloé Garnham</li>
<li><a href="https://thewavingcat.com/publications/the-little-book-of-strategy/"><em>The Little Book Of Strategy</em></a> (free PDF), Peter Bihr</li>
<li>“<a href="https://www.uxmatters.com/mt/archives/2016/12/how-to-create-an-enterprise-ux-strategy.php">Enterprise UX Strategy</a>”, Cassandra Naji</li>
<li>“<a href="https://web.archive.org/web/20181128062801/https://www.invisionapp.com/inside-design/ux-strategy-guide/">UX Strategy Guide</a>” + <a href="https://web.archive.org/web/20220506065907/https://s3.amazonaws.com/blog.invisionapp.com/uploads/2018/01/UX-strategy-template.pdf">Blueprint (Template)</a>, Alex Souza</li>
<li><a href="https://learningloop.io/playbooks/">Product Strategy Playbooks</a></li>
<li><a href="https://jaimelevy.com/ux-strategy-book/"><em>UX Strategy</em></a>, Jaime Levy</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Paul Boag</author><title>AI In UX: Achieve More With Less</title><link>https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/</link><pubDate>Fri, 17 Oct 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/</guid><description>A simple but powerful mental model for working with AI: treat it like an enthusiastic intern with no real-world experience. Paul Boag shares lessons learned from real client projects across user research, design, development, and content creation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/" />
              <title>AI In UX: Achieve More With Less</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>AI In UX: Achieve More With Less</h1>
                  
                    
                    <address>Paul Boag</address>
                  
                  <time datetime="2025-10-17T08:00:00&#43;00:00" class="op-published">2025-10-17T08:00:00+00:00</time>
                  <time datetime="2025-10-17T08:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.</p>

<p>But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.</p>

<p>Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.</p>

<h2 id="how-to-work-with-ai">How To Work With AI</h2>

<p>Here is the mental model that has been most helpful for me. Treat AI like an <strong>intern with zero experience</strong>.</p>

<p>An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.</p>

<p>This is exactly how you should work with AI.</p>

<h3 id="the-basics-of-prompting">The Basics Of Prompting</h3>

<p>I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.</p>

<ul>
<li><strong>Define the role.</strong><br />
Start with something like <em>“Act as a user researcher”</em>  or <em>“Act as a copywriter.”</em>  This gives the AI context for how to respond.</li>
<li><strong>Break it into steps.</strong><br />
Do not just say <em>“Analyze these interview transcripts.”</em> Instead, say <em>“I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each.”</em></li>
<li><strong>Define success.</strong><br />
Tell it what good looks like. <em>“I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it.”</em></li>
<li><strong>Make it think.</strong><br />
Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.</li>
</ul>

<p>Here is a real prompt I use for online research:</p>

<blockquote>Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.<br /><br />Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.</blockquote>

<p>That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.</p>
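<p>The four-step recipe above lends itself to a reusable template. The following is a minimal sketch (the function name and wording are illustrative, not from any AI library) that assembles a prompt from a role, numbered steps, and a definition of success, then appends the universal rubric paragraph:</p>

```python
# Minimal sketch of a reusable prompt template following the four steps
# described above: define the role, break the task into steps, define
# success, and append the "think deeply / rubric" paragraph.
# All names and phrasing here are illustrative assumptions.

RUBRIC_SUFFIX = (
    "Think deeply about your approach before responding. "
    "Create a rubric for the output to ensure it is as useful as possible. "
    "Keep iterating until the output scores extremely high on the rubric. "
    "Only then, output the result."
)

def build_prompt(role: str, steps: list[str], success: str) -> str:
    """Assemble a structured prompt from a role, numbered steps,
    and a definition of success, plus the rubric paragraph."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Act as a {role}. I want you to complete the following steps:\n"
        f"{numbered}\n\n"
        f"Success looks like this: {success}\n\n"
        f"{RUBRIC_SUFFIX}"
    )

prompt = build_prompt(
    role="user researcher",
    steps=[
        "Identify recurring themes.",
        "Look for questions users are trying to answer.",
        "Note any objections that come up.",
        "Output a summary of each.",
    ],
    success="a report I can send to stakeholders, free of research jargon",
)
print(prompt)
```

<p>The point is not the code itself but the habit it encodes: every prompt gets a role, explicit steps, a success definition, and the rubric paragraph pasted on the end.</p>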

<h3 id="learn-when-to-trust-it">Learn When To Trust It</h3>

<p>You should never fully trust AI. Just like you would never fully trust an intern you have only just met.</p>

<p>To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.</p>

<p>But even after months of working with it daily, I still check its work. I still challenge it. I still make it <strong>cite sources</strong> and <strong>explain its reasoning</strong>.</p>

<p>The key is that even with all that checking, it is still faster than doing it yourself. Much faster.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="using-ai-for-user-research">Using AI For User Research</h2>

<p>This is where AI has genuinely transformed my work. I use it constantly for five main things.</p>

<h3 id="online-research">Online Research</h3>

<p>I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.</p>

<p>This would have taken me days of trawling through social media and review sites. Now it takes minutes.</p>

<p>I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.</p>

<h3 id="analyzing-interviews-and-surveys">Analyzing Interviews And Surveys</h3>

<p>I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.</p>

<p>For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.</p>

<p>The quality is good. Really good. As long as you give it <strong>clear instructions</strong> about what you want.</p>

<h3 id="making-sense-of-data">Making Sense Of Data</h3>

<p>I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.</p>

<p>AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. <em>“What patterns do you see?”</em> <em>“Can you reformat this?”</em> <em>“Show me this data in a different way.”</em></p>

<p><a href="https://clarity.microsoft.com/">Microsoft Clarity</a> now has Copilot built in, so you can ask it questions about your analytics data. <a href="https://www.triplewhale.com/">Triple Whale</a> does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png"
			
			sizes="100vw"
			alt="Screenshot of the Microsoft Clarity with the built-in Copilot"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Microsoft Clarity has Copilot built in, making it so much easier to uncover insights. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/1-microsoft-clarity.png'>Large preview</a>)
    </figcaption>
  
</figure>

<div class="partners__lead-place"></div>

<h3 id="research-projects">Research Projects</h3>

<p>This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.</p>

<p>When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.</p>

<p>Then I give it custom instructions. Here is one I use for my own business:</p>

<blockquote>Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my <a href="https://boagworld.com/l/ux-consultant/">UX consultant business</a> and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.</blockquote>

<p>I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.</p>

<p>Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. <strong>Challenge my thinking.</strong> It is like having a co-worker who never gets tired and has a perfect memory.</p>

<p>I do this for every client project now. It is invaluable.</p>

<h3 id="creating-personas">Creating Personas</h3>

<p>AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.</p>

<p>Now I can create what I call <a href="https://www.smashingmagazine.com/2025/09/functional-personas-ai-lean-practical-workflow/">functional personas</a>. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.</p>

<p>I upload all my research to a project and say:</p>

<blockquote>Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.</blockquote>

<p>The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.</p>
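<p>If you ask the AI to return the persona as structured JSON, a few lines of code can verify that all six criteria were actually addressed in the output. This is a minimal sketch under assumed field names mirroring the prompt; it is not part of the workflow described in the linked article:</p>

```python
# A minimal sketch for sanity-checking an AI-generated persona against
# the six criteria in the prompt above. Field names are assumptions.

REQUIRED_FIELDS = [
    "questions", "tasks", "goals",
    "states_of_mind", "influences", "success_metrics",
]

def missing_criteria(persona: dict) -> list[str]:
    """Return the criteria that are absent or empty in the persona."""
    return [f for f in REQUIRED_FIELDS if not persona.get(f)]

draft = {
    "questions": ["Is this service worth the subscription?"],
    "tasks": ["Compare pricing plans"],
    "goals": ["Choose a tool the whole team will adopt"],
    "states_of_mind": ["Time-pressured, slightly sceptical"],
    "influences": [],  # empty: the AI skipped this criterion
    # "success_metrics" is missing entirely
}

print(missing_criteria(draft))  # → ['influences', 'success_metrics']
```

<p>Feeding the missing criteria back to the AI as a follow-up prompt is usually enough to get a complete persona on the second pass.</p>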














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="2480"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png"
			
			sizes="100vw"
			alt="AI-generated functional persona"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI makes creating detailed personas so much faster. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/2-ai-creating-personas.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a <strong>hypothetical user</strong>. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aMy%20only%20concern%20is%20that%20relying%20too%20heavily%20on%20AI%20could%20disconnect%20us%20from%20real%20users.%20We%20still%20need%20to%20talk%20to%20people.%20We%20still%20need%20that%20empathy.%20But%20as%20a%20tool%20to%20synthesize%20research%20and%20create%20reference%20points?%20It%20is%20excellent.%0a&url=https://smashingmagazine.com%2f2025%2f10%2fai-ux-achieve-more-with-less%2f">
      
My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="using-ai-for-design-and-development">Using AI For Design And Development</h2>

<p>Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.</p>

<p>Three reasons why:</p>

<ol>
<li>It is slow if you want something specific or complicated.</li>
<li>It can be frustrating because it gets close but not quite there.</li>
<li>And the quality is often subpar. Unpolished code, questionable design choices, that kind of thing.</li>
</ol>

<p>But that does not mean it is not useful. It absolutely is. Just not for final production work.</p>

<h3 id="functional-prototypes">Functional Prototypes</h3>

<p>If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma. Because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links &mdash; and yet you cannot test it.</p>

<p>Tools like <a href="https://www.relume.io/">Relume</a> and <a href="https://bolt.new/">Bolt</a> can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.</p>

<p>But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.</p>

<h3 id="small-coding-tasks">Small Coding Tasks</h3>

<p>I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.</p>

<p>AI lets me create the little tools I need. <a href="https://boagworld.com/boagworks/convince-the-boss/">A calculator that calculates the ROI of my UX work</a>. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.</p>
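<p>As an illustration of the kind of small, low-risk tool described above, here is a minimal sketch of a UX ROI calculation. The formula is the standard one (net gain as a percentage of cost); the numbers are hypothetical, and this is not the actual calculator linked in the article:</p>

```python
# A hypothetical sketch of the "little tools" described above: a UX ROI
# calculator. The inputs and figures are illustrative assumptions.

def ux_roi(annual_benefit: float, project_cost: float) -> float:
    """Classic ROI: net gain as a percentage of cost."""
    return (annual_benefit - project_cost) / project_cost * 100

# Example: a checkout redesign costing $20,000 that lifts annual
# revenue by $50,000.
roi = ux_roi(annual_benefit=50_000, project_cost=20_000)
print(f"ROI: {roi:.0f}%")  # → ROI: 150%
```

<p>Wrapped in a simple web form, a script like this is exactly the sort of thing AI can generate in one prompt: ugly, unpolished, and perfectly adequate for the job.</p>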














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png"
			
			sizes="100vw"
			alt="Screenshot of the Bolt tool"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      I find Bolt an incredibly intuitive tool for building quick prototypes for low-risk apps. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/3-bolt-tool.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.</p>

<p>If you are a developer, you should absolutely be using tools like <a href="https://cursor.com/">Cursor</a> by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.</p>

<h3 id="reviewing-existing-services">Reviewing Existing Services</h3>

<p>There are some great tools for getting quick feedback on existing websites when budget and time are tight.</p>

<p>If you need to conduct a <a href="https://boagworld.com/l/ux-audit/">UX audit</a>, <a href="https://wevo.ai/takeapulse/">Wevo Pulse</a> is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.</p>

<p>Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.</p>

<p>For e-commerce sites, <a href="https://baymard.com/product/ux-ray">Baymard has UX Ray</a>, which analyzes flaws based on their massive database of user research.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="465"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png"
			
			sizes="100vw"
			alt="Screenshot of the Baymard UX-ray"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Baymard UX-ray is an incredibly handy tool for improving the quality of your UX audits. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/4-baymard-ux-ray.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="checking-your-designs">Checking Your Designs</h3>

<p><a href="https://attentioninsight.com/">Attention Insight</a> has taken thousands of hours of eye-tracking studies and trained AI on them to predict where people will look on a page. It has about 90 to 96 percent accuracy.</p>

<p>You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.</p>

<p>It is great for dealing with stakeholders who say, <em>“People won’t see that.”</em> You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.</p>

<p>I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="421"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png"
			
			sizes="100vw"
			alt="An example from a pet insurance company tested by Attention Insight"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/5-attention-insight.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="394"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png"
			
			sizes="100vw"
			alt="Redesigned version of the previous example with AI-generated images."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      AI can be used to create images that are consistent with a brand identity and are designed to draw attention to specific elements. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/6-redesigned-ai-version.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="creating-the-perfect-image">Creating The Perfect Image</h3>

<p>I use AI all the time for creating images that do a specific job. My preferred tools are <a href="https://www.midjourney.com/">Midjourney</a> and Gemini.</p>

<p>I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.</p>

<p>So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. <em>“Make the guy reach here”</em> or <em>“Add glasses to this person.”</em> I can get pretty much exactly what I want.</p>

<p>The other thing I love about Midjourney is that you can upload a photograph and say, <em>“Replicate this style.”</em> This keeps <strong>consistency</strong> across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.</p>

<h2 id="using-ai-for-content">Using AI For Content</h2>

<p>Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.</p>

<p>I have completely stopped asking clients for copy since AI came along. Here is my process.</p>

<h3 id="build-everything-around-questions">Build Everything Around Questions</h3>

<p>Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a <a href="https://www.smashingmagazine.com/2022/05/top-tasks-focus-what-matters-must-defocus-what-doesnt/">top task analysis</a> where people vote on which questions matter most.</p>

<p>I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.</p>
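<p>The voting step can be sketched as a simple tally. In this hypothetical Python sketch (the question IDs and ballots are made up for illustration), each participant submits the questions that matter most to them, and the highest-scoring questions are the first to be assigned to pages:</p>

```python
from collections import Counter

def rank_questions(votes):
    """Tally top-task votes: each ballot is the list of question IDs
    a participant considers most important."""
    tally = Counter()
    for ballot in votes:
        tally.update(ballot)
    # Highest vote count first
    return tally.most_common()

# Hypothetical ballots from a top task analysis
ballots = [
    ["pricing", "refunds", "contact"],
    ["pricing", "delivery"],
    ["refunds", "pricing"],
]

ranked = rank_questions(ballots)
# "pricing" comes out on top with 3 votes, so it gets
# assigned to a prominent page first
```

<p>In practice, the voting itself would happen in a survey tool; this only illustrates how the results boil down to a ranked list.</p>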

<h3 id="get-bullet-point-answers-from-stakeholders">Get Bullet Point Answers From Stakeholders</h3>

<p>I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions.</p>

<p>Then I go to my clients and say: <em>“I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”</em></p>

<p>That is their job done. Pretty much.</p>


<h3 id="let-ai-draft-the-copy">Let AI Draft The Copy</h3>

<p>Now I take control. I feed ChatGPT the questions and bullet points and say:</p>

<blockquote>Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.</blockquote>

<p>I often upload a full style guide as well, with details about how I want it to be written.</p>

<p>The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.</p>
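<p>If you draft a lot of pages, it can be worth scripting this step. Here is a minimal sketch assuming the official OpenAI Python SDK; the model name is illustrative, and the prompt assembly mirrors the template above:</p>

```python
def build_copy_prompt(question, bullet_points, goal):
    """Assemble the drafting prompt from a page's question and
    the stakeholders' bullet-point answers."""
    bullets = "\n".join(f"- {b}" for b in bullet_points)
    return (
        "Act as an online copywriter. Write copy for a webpage that "
        f"answers the question: {question}\n"
        f"Use the following bullet points to answer that question:\n{bullets}\n"
        "Guidelines: aim for a ninth-grade reading level or below. "
        "Sentences should be short. Use plain language. Avoid jargon. "
        "Refer to the reader as you. Refer to the writer as us. "
        "Ensure the tone is friendly, approachable, and reassuring. "
        f"The goal is to {goal}. Think deeply about your approach. "
        "Create a rubric and iterate until the copy is excellent. "
        "Only then, output it."
    )

def draft_copy(question, bullet_points, goal, model="gpt-4o"):
    # Lazy import so the prompt builder works without the SDK installed
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": build_copy_prompt(question, bullet_points, goal)}],
    )
    return response.choices[0].message.content
```

<p>You could then loop over every page's question list and paste the drafts into the CMS for stakeholder review.</p>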

<h3 id="stakeholders-review-and-provide-feedback">Stakeholders Review And Provide Feedback</h3>

<p>That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, <em>“Rewrite using these comments.”</em></p>

<p>Job done.</p>

<p>The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.</p>

<p>It also makes the process go more smoothly because you are not criticizing their content, which tends to make people defensive. They are criticizing AI content.</p>

<h3 id="tools-that-help">Tools That Help</h3>

<p>If your stakeholders are still giving you content, <a href="https://hemingwayapp.com/">Hemingway Editor</a> is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="497"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png"
			
			sizes="100vw"
			alt="Screenshot of the Hemingway Editor"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Hemingway Editor is superb at rewriting copy to be more web-friendly. (<a href='https://files.smashing.media/articles/ai-ux-achieve-more-with-less/7-hemingway-editor.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.</p>
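<p>The grade-level score behind tools like this can be approximated yourself. Here is a rough, home-grown sketch of the Flesch-Kincaid grade formula in Python; the syllable counter is deliberately naive, so treat the numbers as directional, not exact:</p>

```python
import re

def count_syllables(word):
    """Very rough heuristic: count vowel groups, with a silent-e tweak."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

simple = "We pay you fast. You get help. It is easy."
dense = ("Our organization facilitates expeditious remuneration processes, "
         "guaranteeing comprehensive assistance infrastructure availability.")
# The dense sentence scores a far higher grade level than the simple one.
```

<p>Scoring both drafts of a page this way gives you a quick, defensible number to show a client alongside the Hemingway highlights.</p>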

<h2 id="what-this-means-for-you">What This Means For You</h2>

<p>Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.</p>

<p>But here is what I know from two years of using this stuff daily. It has made me <strong>faster</strong>. It has made me <strong>better</strong>. It has freed me up to do <strong>more strategic thinking</strong> and <strong>less grunt work</strong>.</p>

<p>A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.</p>

<p>Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYour%20value%20as%20a%20UX%20professional%20lies%20in%20your%20ideas,%20your%20questions,%20and%20your%20thinking.%20Not%20your%20ability%20to%20use%20Figma.%20Not%20your%20ability%20to%20manually%20review%20transcripts.%20Not%20your%20ability%20to%20write%20reports%20from%20scratch.%0a&url=https://smashingmagazine.com%2f2025%2f10%2fai-ux-achieve-more-with-less%2f">
      
Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.</p>

<p>That is where you come in. That is where you will always come in.</p>

<p>Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.</p>

<p>Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.</p>

<p>And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Vitaly Friedman</author><title>How To Make Your UX Research Hard To Ignore</title><link>https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/</link><pubDate>Thu, 16 Oct 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/</guid><description>Research isn’t everything. Facts alone don’t win arguments, but powerful stories do. Here’s how to turn your research into narratives that inspire trust and influence decisions.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/" />
              <title>How To Make Your UX Research Hard To Ignore</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>How To Make Your UX Research Hard To Ignore</h1>
                  
                    
                    <address>Vitaly Friedman</address>
                  
                  <time datetime="2025-10-16T13:00:00&#43;00:00" class="op-published">2025-10-16T13:00:00+00:00</time>
                  <time datetime="2025-10-16T13:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>In the early days of my career, I believed that nothing <strong>wins an argument</strong> more effectively than strong and unbiased research. Surely facts speak for themselves, I thought.</p>

<p>If I just get enough data, just enough evidence, just enough clarity on where users struggle &mdash; well, once I have it all and I present it all, it alone will surely change people’s minds, hearts, and beliefs. And, most importantly, it will help everyone see, understand, and perhaps even appreciate and commit to <strong>what needs to be done</strong>.</p>

<p>Well, it’s not quite like that. In fact, the stronger and louder the data, the more likely it is to be <strong>questioned</strong>. And there is a good reason for that, which is often left between the lines.</p>

<h2 id="research-amplifies-internal-flaws">Research Amplifies Internal Flaws</h2>

<p>Throughout the years, I’ve often seen data speaking volumes about where the business is failing, where customers are struggling, where the team is faltering &mdash; and where an <strong>urgent turnaround</strong> is necessary. It was right there, in plain sight: clear, loud, and obvious.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://medium.com/shopify-ux/the-design-process-is-a-lie-465a7064a733">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="600"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg"
			
			sizes="100vw"
			alt="Illustration by José Torre."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Good research doesn't just uncover troubles; it also amplifies internal flaws and poor decisions. Wonderful illustration by <a href='https://medium.com/shopify-ux/the-design-process-is-a-lie-465a7064a733'>José Torre</a>. (<a href='https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/1-illustration-jose-torre.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But because it’s so clear, it reflects back, often amplifying all the sharp edges and all the cut corners in all the wrong places. It reflects internal flaws, <strong>wrong assumptions</strong>, and failing projects &mdash; some of them signed off years ago, with secured budgets, big promotions, and approved headcounts. Questioning them means <strong>questioning authority</strong>, and often it’s a tough path to take.</p>

<p>As it turns out, strong data is very, very good at raising <strong>uncomfortable truths</strong> that most companies don’t really want to acknowledge. That’s why, at times, research is deemed “unnecessary,” or why we don’t get access to users, or why <strong>loud voices</strong> always win big arguments.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg"
			
			sizes="100vw"
			alt="UX Research in B2B."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UX Research in B2B: when you don’t have access to users. (<a href='https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/2-ux-research-b2b.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>So even if data is presented with a lot of eagerness, gravity, and passion in that big meeting, it will get questioned, doubted, and explained away. Not because of its flaws, but because of hope, reluctance to change, and layers of <strong>internal politics</strong>.</p>

<p>This shows up most vividly in situations when someone raises concerns about the <strong>validity and accuracy</strong> of research. Frankly, it’s not that somebody is wrong and somebody is right. Both parties just happen to be <strong>right in a different way</strong>.</p>

<h2 id="what-to-do-when-data-disagrees">What To Do When Data Disagrees</h2>

<p>We’ve all heard that data always tells a story. However, it’s <strong>never just a single story</strong>. People are complex, and pointing out a specific truth about them just by looking at numbers is rarely enough.</p>

<p>When data disagrees, it doesn’t mean that either is wrong. It’s just that <strong>different perspectives</strong> reveal different parts of a whole story that isn’t completed yet.</p>














<figure class="
  
  
  ">
  
    <a href="https://medium.com/lexisnexis-design/what-to-do-when-qual-and-quant-disagree-18a535164ca6">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="972"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg"
			
			sizes="100vw"
			alt="Various UX Research methods"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://medium.com/lexisnexis-design/what-to-do-when-qual-and-quant-disagree-18a535164ca6'>What to do when qual and quant disagree</a>, a very practical guide by Archana Shah. (<a href='https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/3-qual-quant-data.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In digital products, most stories have <strong>2 sides</strong>:</p>

<ul>
<li><strong>Quantitative data</strong> ← What/When: behavior patterns at scale.</li>
<li><strong>Qualitative data</strong> ← Why/How: user needs and motivations.</li>
<li>↳ Quant usually comes from analytics, surveys, and experiments.</li>
<li>↳ Qual comes from tests, observations, and open-ended surveys.</li>
</ul>

<p>Risk-averse teams overestimate the <strong>weight of big numbers</strong> in quantitative research. Users exaggerate the frequency and severity of issues that are critical for them. As Archana Shah <a href="https://medium.com/lexisnexis-design/what-to-do-when-qual-and-quant-disagree-18a535164ca6">noted</a>, designers get carried away by users’ <strong>confident responses</strong> and often overestimate what people say and do.</p>

<p>And so, eventually, data coming from different teams paints a different picture. And when it happens, we need to <strong>reconcile and triangulate</strong>. With the former, we track what’s missing, omitted, or overlooked. With the latter, we <strong>cross-validate data</strong> &mdash; e.g., finding pairings of qual/quant streams of data, then clustering them together to see what’s there and what’s missing, and exploring from there.</p>
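<p>The pairing step of triangulation can be sketched as a set operation over findings tagged by theme. This is only an illustration; the themes and values below are hypothetical, and real triangulation involves judgment, not just matching keys:</p>

```python
def triangulate(qual, quant):
    """Cross-validate two streams of findings keyed by theme.
    Returns themes corroborated by both, plus the gaps on each side."""
    qual_themes, quant_themes = set(qual), set(quant)
    return {
        "corroborated": qual_themes & quant_themes,
        "qual_only": qual_themes - quant_themes,   # heard it, not yet measured
        "quant_only": quant_themes - qual_themes,  # measured it, not yet explained
    }

# Hypothetical findings from two teams
qual = {"checkout-confusion": "Users misread the shipping step",
        "trust": "Hesitation around unfamiliar payment logos"}
quant = {"checkout-confusion": 0.34,   # e.g. step-level drop-off rate
         "search-abandonment": 0.21}

result = triangulate(qual, quant)
# "checkout-confusion" is corroborated by both streams;
# the remaining themes mark gaps worth investigating
```

<p>The "only" buckets are where the interesting work starts: a quant-only theme needs qualitative digging to explain it, and a qual-only theme needs measurement to size it.</p>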

<p>And even with all of it in place and data conflicts resolved, we still need to do one more thing to make a strong argument: we need to tell a <strong>damn good story</strong>.</p>

<h2 id="facts-don-t-win-arguments-stories-do">Facts Don’t Win Arguments, Stories Do</h2>

<p>Research isn’t everything. <a href="https://www.linkedin.com/posts/erikahall_tapping-the-sign-again-every-time-i-see-activity-7360805865051865090-uldg">Facts don’t win arguments</a> &mdash; <strong>powerful stories do</strong>. But a story that starts with a spreadsheet isn’t always inspiring or effective. Perhaps it brings a problem into the spotlight, but it doesn’t lead to a resolution.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://medium.com/shopify-ux/the-design-process-is-a-lie-465a7064a733">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="600"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png"
			
			sizes="100vw"
			alt="Illustration by José Torre."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Presenting research is more than presenting findings. It must be wrapped inside an actionable story. Wonderful illustration by <a href='https://medium.com/shopify-ux/the-design-process-is-a-lie-465a7064a733'>José Torre</a>. (<a href='https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/4-illustration-jose-torre.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The very first thing I try to do in that big boardroom meeting is to emphasize <strong>what unites us</strong> &mdash; shared goals, principles, and commitments that are relevant to the topic at hand. Then, I show how new data <strong>confirms or confronts</strong> our commitments, with specific problems we believe we need to address.</p>

<p>When a question about the quality of data comes in, I need to show that it has been <strong>reconciled and triangulated</strong> already and discussed with other teams as well.</p>

<p>A good story has a poignant ending. People need to see an <strong>alternative future</strong> to trust and accept the data &mdash; and a clear and safe path forward to commit to it. So I always try to present options and solutions that we believe will drive change and explain our decision-making behind that.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://ucdc.therectangles.com">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="485"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png"
			
			sizes="100vw"
			alt="User Centered Design Canvas"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A useful little helper to understand what stakeholders truly care about. <a href='https://ucdc.therectangles.com'>User Centered Design Canvas</a> could be applied to stakeholders. (<a href='https://files.smashing.media/articles/how-make-ux-research-hard-to-ignore/5-art-interviewing-stakeholders.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>They also need to believe that this distant future is <strong>within reach</strong>, and that they can pull it off, albeit under a tough timeline or with limited resources.</p>

<p>And: a good story also presents a viable, compelling, <strong>shared goal</strong> that people can rally around and commit to. Ideally, it’s something that has a direct benefit for them and their teams.</p>

<p>These are the ingredients of the story that I always try to keep in my mind when working on that big presentation. And in fact, data is a <strong>starting point</strong>, but it does need a story wrapped around it to be effective.</p>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>There is nothing more disappointing than finding a real problem that real people struggle with and facing the harsh reality of research <strong>not being trusted</strong> or valued.</p>

<p>We’ve all been there before. The best thing you can do is to <strong>be prepared</strong>: have strong data to back you up, include both quantitative and qualitative research &mdash; preferably with video clips from real customers &mdash; but also paint a <strong>viable future</strong> which seems within reach.</p>

<p>And sometimes nothing changes until <strong>something breaks</strong>. When that happens, there often isn’t much you can do other than be prepared for the moment it does.</p>

<blockquote>“Data doesn’t change minds, and facts don’t settle fights. Having answers isn’t the same as learning, and it for sure isn’t the same as making evidence-based decisions.”<br /><br />&mdash; Erika Hall</blockquote>

<h2 id="meet-how-to-measure-ux-and-design-impact">Meet “How To Measure UX And Design Impact”</h2>

<p>You can find more details on <strong>UX Research</strong> in <a href="https://measure-ux.com/"><strong>Measure UX &amp; Design Impact</strong></a> (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 <code>IMPACT</code> to save 20% off today. <a href="https://measure-ux.com/">Jump to the details</a>.</p>

<figure style="margin-bottom:0;padding-bottom:0" class="article__image">
    <a href="https://measure-ux.com/" title="How To Measure UX and Design Impact, with Vitaly Friedman">
    <img width="900" height="466" style="border-radius: 11px" src="https://files.smashing.media/articles/ux-metrics-video-course-release/measure-ux-and-design-impact-course.png" alt="How to Measure UX and Design Impact, with Vitaly Friedman.">
    </a>
</figure>

<div class="book-cta__inverted"><div class="book-cta" data-handler="ContentTabs" data-mq="(max-width: 480px)"><nav class="content-tabs content-tabs--books"><ul><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">
Video + UX Training</button></a></li><li class="content-tab"><a href="#"><button class="btn btn--small btn--white btn--white--bordered">Video only</button></a></li></ul></nav><div class="book-cta__col book-cta__hardcover content-tab--content"><h3 class="book-cta__title"><span>Video + UX Training</span></h3><span class="book-cta__price"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>495<span class="sup">.00</span></span></span> <span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>799<span class="sup">.00</span></span></span></span></span>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3951439" class="btn btn--full btn--medium btn--text-shadow">
Get Video + UX Training<div></div></a><p class="book-cta__desc">25 video lessons (8h) + <a href="https://smashingconf.com/online-workshops/workshops/vitaly-friedman-impact-design/">Live UX Training</a>.<br>100 days money-back-guarantee.</p></div><div class="book-cta__col book-cta__ebook content-tab--content"><h3 class="book-cta__title"><span>Video only</span></h3><div data-audience="anonymous free supporter" data-remove="true"><span class="book-cta__price" data-handler="PriceTag"><span><span class=""><span class="currency-sign">$</span>&nbsp;<span>250<span class="sup">.00</span></span></span><span class="book-cta__price--old"><span class="currency-sign">$</span>&nbsp;<span>395<span class="sup">.00</span></span></span></span></div>
<a href="https://smart-interface-design-patterns.thinkific.com/enroll/3081832?price_id=3950630" class="btn btn--full btn--medium btn--text-shadow">
Get the video course<div></div></a><p class="book-cta__desc" data-audience="anonymous free supporter" data-remove="true">25 video lessons (8h). Updated yearly.<br>Also available as a <a href="https://smart-interface-design-patterns.thinkific.com/enroll/3082557?price_id=3951421">UX Bundle with 2 video courses.</a></p></div><span></span></div></div>

<h2 id="useful-resources">Useful Resources</h2>

<ul>
<li>“<a href="https://www.dscout.com/people-nerds/present-research-for-stakeholders-tips">How to Present Research So Stakeholders Sit Up and Take Action</a>”, by Nikki Anderson</li>
<li>“<a href="https://medium.com/lexisnexis-design/what-to-do-when-qual-and-quant-disagree-18a535164ca6">What To Do When Data Disagrees</a>”, by Subhasree Chatterjee, Archana Shah, Sanket Shukl, and Jason Bressler</li>
<li>“<a href="https://medium.com/shopify-ux/how-to-use-mixed-method-research-to-drive-product-decisions-7ff023e5b107">Mixed-Method UX Research</a>”, by Raschin Fatemi</li>
<li>“<a href="https://medium.com/@jwill7378/confidently-step-into-mixed-method-ux-research-a-step-by-step-framework-for-mixed-method-research-98f4284b8ebe">A Step-by-Step Framework For Mixed-Method Research</a>”, by Jeremy Williams</li>
<li>“<a href="https://dscout.com/people-nerds/mixed-methods-research">The Ultimate Guide To Mixed Methods</a>”, by Ben Wiedmaier</li>
<li><a href="https://www.linkedin.com/posts/vitalyfriedman_ux-surveys-activity-7222861773375180800-O0c0">Survey Design Cheatsheet</a>, by yours truly</li>
<li><a href="https://www.linkedin.com/posts/vitalyfriedman_ux-design-research-activity-7227973209839538177-P3iV">Useful Calculators For UX Research</a>, by yours truly</li>
<li><a href="https://vimeo.com/188285898?fl=pl&amp;fe=vl">Beyond Measure</a>, by Erika Hall</li>
</ul>

<p><strong>Useful Books</strong></p>

<ul>
<li><em>Just Enough Research</em>, by Erika Hall</li>
<li><em>Designing Surveys That Work</em>, by Caroline Jarrett</li>
<li><em>Designing Quality Survey Questions</em>, by Sheila B. Robinson</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Yegor Gilyov</author><title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title><link>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</link><pubDate>Fri, 03 Oct 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</guid><description>Ready to move beyond static mockups? Here is a practical, step-by-step guide to Intent Prototyping &amp;mdash; a disciplined method that uses AI to turn your design intent (UI sketches, conceptual models, and user flows) directly into a live prototype, making it your primary canvas for ideation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/" />
              <title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</h1>
                  
                    
                    <address>Yegor Gilyov</address>
                  
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-published">2025-10-03T10:00:00+00:00</time>
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>In <strong><a href="https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/">Part 1</a></strong> of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:</p>

<blockquote>How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?</blockquote>

<p>In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.</p>

<p>The answer is a more disciplined process I call <strong>Intent Prototyping</strong> (kudos to Marco Kotrotsos, who coined <a href="https://kotrotsos.medium.com/intent-oriented-programming-bridging-human-thought-and-ai-machine-execution-3a92373cc1b6">Intent-Oriented Programming</a>). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit <em>intent</em> at the very center of the process. It receives a holistic expression of <em>intent</em> (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="491"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg"
			
			sizes="100vw"
			alt="Diagram showing sketches, a conceptual model, and user flows as inputs to Intent Prototyping, which outputs a live prototype."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Intent Prototyping workflow. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This method addresses the concerns we discussed in Part 1:</p>

<ul>
<li><strong>Unlike static mockups,</strong> the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.</li>
<li><strong>Unlike a vibe-coded prototype</strong>, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.</li>
</ul>

<p>This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, with a speed and flexibility that were previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.</p>


<h2 id="my-workflow">My Workflow</h2>

<p>To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this <a href="https://github.com/YegorGilyov/reality-check">GitHub repository</a>.</p>

<h3 id="step-1-expressing-an-intent">Step 1: Expressing An Intent</h3>

<p>Imagine we’ve already done proper research. Having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="583"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png"
			
			sizes="100vw"
			alt="A rough sketch of screens to manage product ideas and reality checks."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A low-fidelity sketch of the initial idea. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep the sketch rough, as visual details are not something we need to focus on at this stage. Nor do we want to get stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to that goal.</p>

<p>What we need to move forward is to add just enough detail to those sketches that they can serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This means explaining the following:</p>

<ul>
<li>Navigational paths (what clicking here takes you to).</li>
<li>Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).</li>
<li>What parts might make sense to build as reusable components.</li>
<li>Which components from the design system (I’m using <a href="https://ant.design/">Ant Design Library</a>) should be used.</li>
<li>Any other comments that help understand how this thing should work (while sketches illustrate how it should look).</li>
</ul>

<p>Having added all those details, we end up with such an annotated sketch:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png"
			
			sizes="100vw"
			alt="The initial sketch with annotations specifying components, navigation, and interaction details."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The sketch annotated with details. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our <em>intent</em> will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.</p>

<p>For tasks of this sort, my LLM of choice is Gemini 2.5 Pro. What is important is that it is a multimodal model that accepts not only text but also images as input (GPT-5 and Claude 4 also meet that criterion). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="579"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png"
			
			sizes="100vw"
			alt="Screenshot of Google AI Studio with an annotated sketch as input."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Generating a conceptual model from the sketch using Google AI Studio. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Note</strong>: <em>All the prompts that I use here and below can be found in the <a href="#appendices">Appendices</a>. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.</em></p>

<p>As a result, Gemini gives us a description and the following diagram:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="480"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg"
			
			sizes="100vw"
			alt="UML class diagram showing two connected entities: “ProductIdea” and “RealityCheck”."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UML class diagram. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing <em>intent</em>, along with the Flow and Visualization.</p>

<p>As a result of this step, our <em>intent</em> is fully expressed in two files: <code>Sketch.png</code> and <code>Model.md</code>. This will be our durable source of truth.</p>
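<p>To make the conceptual model tangible, here is a hypothetical sketch of what the description in <code>Model.md</code> might boil down to, expressed as TypeScript types. The entity names follow the UML diagram above (<code>ProductIdea</code>, <code>RealityCheck</code>); the specific attributes and status values are illustrative assumptions, not taken from the actual repository.</p>

```typescript
// Hypothetical rendering of the conceptual model as TypeScript types.
// Entity names follow the UML diagram (ProductIdea, RealityCheck);
// the attributes and status values below are illustrative assumptions.

type RealityCheckStatus = "planned" | "in-progress" | "validated" | "invalidated";

interface ProductIdea {
  id: string;
  title: string;
  description: string;
  createdAt: string; // ISO 8601 timestamp
}

interface RealityCheck {
  id: string;
  ideaId: string; // each RealityCheck belongs to exactly one ProductIdea
  hypothesis: string;
  status: RealityCheckStatus;
}
```

<p>Writing the model down in this form, even informally, makes the one-to-many relationship between ideas and reality checks explicit before any UI is generated.</p>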

<h3 id="step-2-preparing-a-spec-and-a-plan">Step 2: Preparing A Spec And A Plan</h3>

<p>The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.</p>

<p>I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see <a href="#appendices">Appendices 2 and 3</a>). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see <a href="#appendices">Appendices 8, 9, and 10</a>). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.</p>

<p>As a result, Gemini provides us with content for <code>DAL.md</code> and <code>UI.md</code>. Although in most cases the result is reliable enough, you might want to scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don’t have such skills, don’t get discouraged. The good news is that if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.</p>
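<p>If you are curious what a prototyping-grade Data Access Layer can describe in practice, here is a minimal, framework-free sketch: an in-memory store with basic CRUD operations. The names (<code>IdeaStore</code> and so on) are hypothetical and not taken from the project’s actual <code>DAL.md</code>; a real spec would also cover relationships and persistence.</p>

```typescript
// A minimal, framework-free sketch of the kind of store a DAL.md spec
// might describe for a prototype: in-memory, with basic CRUD operations.
// All names here are illustrative assumptions, not the article's code.

interface ProductIdea {
  id: string;
  title: string; // pared down to the minimum needed for the sketch
}

class IdeaStore {
  private ideas = new Map<string, ProductIdea>();
  private nextId = 1;

  // Create a new idea and assign it a sequential id.
  create(title: string): ProductIdea {
    const idea: ProductIdea = { id: String(this.nextId++), title };
    this.ideas.set(idea.id, idea);
    return idea;
  }

  // Return all ideas as an array (insertion order).
  list(): ProductIdea[] {
    return [...this.ideas.values()];
  }

  // Update an existing idea's title; fail loudly if the id is unknown.
  update(id: string, title: string): void {
    const idea = this.ideas.get(id);
    if (!idea) throw new Error(`No idea with id ${id}`);
    idea.title = title;
  }

  // Remove an idea; deleting a missing id is a no-op.
  remove(id: string): void {
    this.ideas.delete(id);
  }
}
```

<p>Because the store lives behind a small interface like this, the UI layer never touches storage details directly, which is what makes it possible to swap the in-memory implementation for a real backend later without rewriting the screens.</p>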

<p>It’s important to remember that by their nature, <strong>LLMs are not deterministic</strong> and, to put it simply, can be forgetful about small details, especially when it comes to details in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.</p>

<p>Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.</p>

<p>Once we have <code>Sketch.png</code>, <code>Model.md</code>, <code>DAL.md</code>, <code>UI.md</code>, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together and all layers are stacked correctly.</p>

<p>One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in <a href="#appendices">Appendices 4 and 5</a>.</p>

<h3 id="step-3-executing-the-plan">Step 3: Executing The Plan</h3>

<p>To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.</p>

<p>However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.</p>

<p>My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.</p>

<p>Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.</p>
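<p>For reference, scaffolding such a template usually takes just a few commands (the project name <code>reality-check</code> is only an example here):</p>

```shell
# Scaffold an empty React + TypeScript project with Vite
npm create vite@latest reality-check -- --template react-ts
cd reality-check

# Install the Ant Design component library referenced in the sketches
npm install antd

# Start the local dev server to verify the template runs
npm run dev
```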

<p>Then we put all our files into that project:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="666"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png"
			
			sizes="100vw"
			alt="A file directory showing the docs folder containing DAL.md, Model.md, Sketch.png, and UI.md."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Project structure with design intent and spec files. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="419"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png"
			
			sizes="100vw"
			alt="Screenshot of a terminal showing the Gemini CLI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Gemini CLI. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>And we send the prompt to build the Data Access Layer (see <a href="#appendices">Appendix 6</a>). That prompt implies step-by-step execution, so upon completion of each step, I send the following:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. 
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>As the last task in the plan, the agent builds a special page that exposes all the capabilities of our Data Access Layer, so that we can test them manually. It may look like this:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="572"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png"
			
			sizes="100vw"
			alt="A basic webpage with forms and buttons to test the Data Access Layer’s CRUD functions."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The AI-generated test page for the Data Access Layer. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.</p>

<p>And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see <a href="#appendices">Appendix 7</a>). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from <code>UI-plan.md</code>. Although the sketch has been uploaded to the model context and Gemini generally tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="320"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png"
			
			sizes="100vw"
			alt="A before-and-after comparison showing the UI&#39;s visual improvement."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Refining the AI-generated UI to match the sketch. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once I’m happy with the result of a step, I ask Gemini to move on:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.  
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>Before long, the result looks like this, and in every detail it works exactly as we <em>intended</em>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="486"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png"
			
			sizes="100vw"
			alt="Screenshots of the final, polished application UI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The final interactive prototype. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The prototype is up and running and looking nice. Does that mean our work is done? Surely not; the most fascinating part is just beginning.</p>

<div class="partners__lead-place"></div>

<h3 id="step-4-learning-and-iterating">Step 4: Learning And Iterating</h3>

<p>It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.</p>

<p>And as soon as we learn something new, we iterate. Based on that new input, we adjust or extend the sketches and the conceptual model, update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.</p>

<h3 id="is-this-workflow-too-heavy">Is This Workflow Too Heavy?</h3>

<p>This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:</p>

<ul>
<li>In practice, only the first step requires real effort, along with the learning in the last step. AI does most of the work in between; you just need to keep an eye on it.</li>
<li>Individual iterations don’t need to be big. You can start with a <a href="https://wiki.c2.com/?WalkingSkeleton">Walking Skeleton</a>: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.</li>
<li>And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.</li>
</ul>

<h2 id="intent-prototyping-vs-other-methods">Intent Prototyping Vs. Other Methods</h2>

<p>No single method fits all situations, and Intent Prototyping is no exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The table below gives you a way to make this choice clearer. It puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.</p>

<table class="tablesaw break-out" style="grid-column: 3 / 18; font-size: 13pt;">
    <thead>
        <tr>
            <th>Method/Tool</th>
            <th>Goal</th>
            <th>Risks it is best suited to mitigate</th>
            <th width="300">Examples</th>
            <th>Why</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Intent Prototyping</td>
            <td>To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.</td>
            <td>Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.</td>
            <td><ul><li>A CRM (Customer Relationship Management system).</li><li>A Resource Management Tool.</li><li>A No-Code Integration Platform (admin’s UI).</li></ul></td>
            <td>It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.</td>
        </tr>
        <tr>
            <td>Vibe Coding (Conversational)</td>
            <td>To rapidly explore interactive ideas through improvisation.</td>
            <td>Losing momentum because of analysis paralysis.</td>
            <td><ul><li>An interactive data table with live sorting/filtering.</li><li>A novel navigation concept.</li><li>A proof-of-concept for a single, complex component.</li></ul></td>
            <td>It has the smallest loop between an idea conveyed in natural language and an interactive outcome.</td>
        </tr>
        <tr>
            <td>Axure</td>
            <td>To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.</td>
            <td>Designing flows that break when users don’t follow the “happy path.”</td>
            <td><ul><li>A multi-step e-commerce checkout.</li><li>A software configuration wizard.</li><li>A dynamic form with dependent fields.</li></ul></td>
            <td>It’s made to create complex <code>if-then</code> logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.</td>
        </tr>
        <tr>
            <td>Figma</td>
            <td>To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.</td>
            <td>Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand.</td>
            <td><ul><li>A marketing landing page.</li><li>A user onboarding flow.</li><li>Presenting a new visual identity.</li></ul></td>
            <td>It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.</td>
        </tr>
        <tr>
            <td>ProtoPie, Framer</td>
            <td>To make high-fidelity micro-interactions feel just right.</td>
            <td>Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.</td>
            <td><ul><li>A custom pull-to-refresh animation.</li><li>A fluid drag-and-drop interface.</li><li>An animated chart or data visualization.</li></ul></td>
            <td>These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.</td>
        </tr>
        <tr>
            <td>Low-code / No-code Tools (e.g., Bubble, Retool)</td>
            <td>To create a working, data-driven app as quickly as possible.</td>
            <td>The application will never be built because traditional development is too expensive.</td>
            <td><ul><li>An internal inventory tracker.</li><li>A customer support dashboard.</li><li>A simple directory website.</li></ul></td>
            <td>They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.</td>
        </tr>
    </tbody>
</table>

<p><br /></p>

<p>The key takeaway is that each method is a <strong>specialized tool for mitigating a specific type of risk</strong>. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.</p>

<div class="partners__lead-place"></div>

<h2 id="bringing-it-all-together">Bringing It All Together</h2>

<p>The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster &mdash; it’s a <strong>fundamental shift in how we design</strong>. By putting a clear, unambiguous <em>intent</em> at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.</p>

<p>There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design <em>intent</em> makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented <em>intent</em> becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.</p>

<p>Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating <strong>pictures of a product</strong> and empowers us to become architects of <strong>blueprints for a system</strong>. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.</p>

<h3 id="appendices">Appendices</h3>

<p>You can find the full <strong>Intent Prototyping Starter Kit</strong>, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this <a href="https://github.com/YegorGilyov/intent-prototyping-starter-kit">GitHub repository</a>.</p>

<div class="js-table-accordion accordion book__toc" id="TOC" aria-multiselectable="true">
    <dl class="accordion-list" style="margin-bottom: 1em" data-handler="Accordion">
          <dt tabindex="0" class="accordion-item" id="accordion-item-0" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 1: Sketch to UML Class Diagram
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-0" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the sketch carefully. There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Generate the conceptual model description in the Mermaid format using a UML class diagram.

&#35;&#35; Ground Rules

- Every entity must have the following attributes:
    - `id` (string)
    - `createdAt` (string, ISO 8601 format)
    - `updatedAt` (string, ISO 8601 format)
- Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
- Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. 
- Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be `"1" -- "0..*"`, it must be `"0..1" -- "0..*"`).
- Use only valid syntax in the Mermaid diagram.
- Do not include enumerations in the Mermaid diagram.
- Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).

&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for entity names.
- Use camelCase for attributes and relationships.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail on visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `Model.md`.
</code></pre>
</div>
</p>
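<p>To make the expected output concrete, here is a hedged example of the kind of diagram this prompt might produce, for a hypothetical app with projects and tasks. Both entities and their relationship are invented for illustration; note the cardinality is declared optional on both sides, as the ground rules require:</p>

```markdown
classDiagram
    class Project {
        string id
        string name
        string createdAt
        string updatedAt
    }
    class Task {
        string id
        string title
        boolean isCompleted
        string createdAt
        string updatedAt
    }
    Project "0..1" -- "0..*" Task : contains
```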
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-1" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 2: Sketch to DAL Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-1" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. 

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `Model.md`: the conceptual model
- `Sketch.png`: the UI sketch

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; Data Access Layer Specification

This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. Type Definitions

Location: `src/types/entities.ts`

&#35;&#35;&#35; 1.1. `BaseEntity`

A shared interface that all entities should extend.

[TypeScript interface definition]

&#35;&#35;&#35; 1.2. `[Entity Name]`

The interface for the [Entity Name] entity.

[TypeScript interface definition]

&#35;&#35; 2. Zustand Stores

&#35;&#35;&#35; 2.1. Store for `[Entity Name]`

&#42;&#42;Location:&#42;&#42; `src/stores/[Entity Name (plural)].ts`

The Zustand store will manage the state of all [Entity Name] items.

&#42;&#42;Store State (`[Entity Name]State`):&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Store Implementation (`use[Entity Name]Store`):&#42;&#42;

- The store will be created using `create&lt;[Entity Name]State&gt;()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record&lt;string, [Entity]&gt;`) for O(1) access.

&#42;&#42;Actions:&#42;&#42;

- &#42;&#42;`add[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`update[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`remove[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`doSomethingElseWith[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
    
&#35;&#35; 3. Custom Hooks

&#35;&#35;&#35; 3.1. `use[Entity Name (plural)]`

&#42;&#42;Location:&#42;&#42; `src/hooks/use[Entity Name (plural)].ts`

The hook will be the primary interface for UI components to interact with [Entity Name] data.

&#42;&#42;Hook Return Value:&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Hook Implementation:&#42;&#42;

[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]

```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `DAL.md`.
</code></pre>
</div>
</p>
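<p>For readers unfamiliar with the pattern this template encodes, here is a minimal, dependency-free TypeScript sketch of the state shape and actions it describes. The <code>Task</code> entity and its fields are invented for illustration, and the real generated store would use Zustand’s <code>create()</code> with the <code>persist</code> middleware rather than a plain object:</p>

```typescript
// Dependency-free sketch of the state shape and actions the spec template
// describes. The "Task" entity is hypothetical; a real store would be built
// with Zustand's create() and the persist middleware (localStorage).
interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601
  updatedAt: string; // ISO 8601
}

interface Task extends BaseEntity {
  title: string;
  isCompleted: boolean;
}

// A dictionary keyed by id for O(1) access, as the spec requires
type TaskState = { tasks: Record<string, Task> };

const state: TaskState = { tasks: {} };
let nextId = 1;

function addTask(title: string): Task {
  const now = new Date().toISOString();
  const task: Task = {
    id: String(nextId++),
    title,
    isCompleted: false,
    createdAt: now,
    updatedAt: now,
  };
  state.tasks[task.id] = task;
  return task;
}

function updateTask(
  id: string,
  changes: Partial<Omit<Task, "id" | "createdAt">>
): void {
  const task = state.tasks[id];
  if (!task) return;
  state.tasks[id] = { ...task, ...changes, updatedAt: new Date().toISOString() };
}

function removeTask(id: string): void {
  delete state.tasks[id];
}
```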
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-2" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 3: Sketch to UI Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-2" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully: 

- `Sketch.png`: the UI sketch
  - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- `Model.md`: the conceptual model
- `DAL.md`: the Data Access Layer spec

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Generate the complete markdown content for a new file, `UI.md`.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; UI Layer Specification

This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. High-Level Structure

The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. 

&#35;&#35;&#35; 1.1. `App` Component

The root component that sets up routing and global providers.

-   &#42;&#42;Location&#42;&#42;: `src/App.tsx`
-   &#42;&#42;Purpose&#42;&#42;: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
-   &#42;&#42;Composition&#42;&#42;:
  -   Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
  -   Renders `[Page Name]`.

&#35;&#35; 2. Pages

&#35;&#35;&#35; 2.1. `[Page Name]`

-   &#42;&#42;Location:&#42;&#42; `src/pages/PageName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Briefly describe the main goal and function of this page]
-   &#42;&#42;Data Access:&#42;&#42;
  [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this page using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this page]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with this page] 
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this page should work]

&#35;&#35; 3. Components

&#35;&#35;&#35; 3.1. `[Component Name]`

-   &#42;&#42;Location:&#42;&#42; `src/components/ComponentName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Explain what this component does and where it's used]
-   &#42;&#42;Props:&#42;&#42;
  [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
-   &#42;&#42;Data Access:&#42;&#42;
    [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this component using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this component]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with the component]
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this component should work]
  
```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail on the visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `UI.md`.
</code></pre>
</div>
</p>
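<p>As a hedged illustration of the “Props should be minimal” guidance in this template, a spec for a hypothetical <code>TaskCard</code> component might pass only the entity’s id and lean on a hook for the data itself, rather than drilling the whole entity down. The component name, hook, and fields below are all invented; the data-access hook is stubbed as a plain function so the sketch stays self-contained:</p>

```typescript
// Minimal props: the component receives only the entity id.
interface TaskCardProps {
  taskId: string;
}

interface Task {
  id: string;
  title: string;
}

// Stand-in for a data-access hook such as use[Entity Name (plural)]
// from DAL.md, reduced to a plain function for this sketch.
function useTasksStub(): { getTask: (id: string) => Task | undefined } {
  const tasks: Record<string, Task> = { "1": { id: "1", title: "Draft spec" } };
  return { getTask: (id) => tasks[id] };
}

// What the component body would do with its props (rendering omitted):
function taskCardTitle(props: TaskCardProps): string {
  const { getTask } = useTasksStub();
  return getTask(props.taskId)?.title ?? "Unknown task";
}
```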
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-3" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 4: DAL Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-3" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `DAL.md`: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a Data Access Layer according to the spec. 

Each task should:

- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria

The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. 

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.

&#35;&#35; Final Instructions
 
- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `DAL-plan.md`.
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 5: UI Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `UI.md`: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- `Sketch.png`: Contains important information about the layout and style; it complements the UI Layer Specification. The final UI must be as close to this sketch as possible.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a UI layer according to the spec and the sketch. 

Each task must:

- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.

&#35;&#35; Final Instructions

- Note that we are not starting from scratch; the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data; do not use mock data (the Data Access Layer has already been built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `UI-plan.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>         
          <dt tabindex="0" class="accordion-item" id="accordion-item-5" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 6: DAL Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-5" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a Data Access Layer for this application according to the spec and following the plan. 

- Complete one task from the plan at a time. 
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 
- Do not do anything else. At this point, we are focused on building the Data Access Layer.

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec.
- Do not start the development server; I'll start it myself.
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-6" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 7: UI Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-6" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style; it complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: 

- Complete one task from the plan at a time. 
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so. 

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
- Follow Ant Design's default styles and components. 
- Do not touch the data access layer: it's ready and it's perfect. 
- Do not start the development server; I'll start it myself.
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-7" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 8: TS-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-7" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: TypeScript Best Practices

&#35;&#35; Type System & Type Safety

- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.

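To make the last two rules concrete, here is a minimal sketch (the `PostStatus` name and its values are illustrative, not part of any spec):

```typescript
// A const object with `as const` replaces an enum: it erases to a plain
// object at runtime, and its values form a string-literal union type.
const PostStatus = {
  Draft: 'draft',
  Published: 'published',
} as const;

type PostStatus = (typeof PostStatus)[keyof typeof PostStatus]; // 'draft' | 'published'

// A type guard narrows `unknown` input before it enters typed code.
function isPostStatus(value: unknown): value is PostStatus {
  return typeof value === 'string' &&
    (Object.values(PostStatus) as string[]).includes(value);
}
```

Because the object erases to plain JavaScript, this pattern also avoids the extra runtime code that `enum` generates.
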
&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Suffix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

&#35;&#35; Code Structure & Patterns

- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

&#35;&#35; Performance & Error Handling

- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.

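As a sketch of these rules (the `EntityNotFoundError` name and the store shape are hypothetical; error subclasses are the customary exception to the no-classes rule, since typed catch clauses rely on `instanceof`):

```typescript
// Domain-specific error type, so catch blocks can distinguish failures.
class EntityNotFoundError extends Error {
  constructor(public readonly entityId: string) {
    super(`Entity ${entityId} not found`);
    this.name = 'EntityNotFoundError';
  }
}

function removeEntity(store: Record<string, unknown>, id: string): void {
  if (!(id in store)) {
    throw new EntityNotFoundError(id); // edge case handled explicitly
  }
  delete store[id];
}

// Typed catch clause: `unknown` forces an instanceof check before use.
function tryRemove(store: Record<string, unknown>, id: string): boolean {
  try {
    removeEntity(store, id);
    return true;
  } catch (error: unknown) {
    if (error instanceof EntityNotFoundError) {
      console.error(`Could not remove ${error.entityId}`);
      return false;
    }
    throw error; // rethrow anything we did not anticipate
  }
}
```
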
&#35;&#35; Project Organization

- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

&#35;&#35; Other Rules

- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-8" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 9: React-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-8" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: React Best Practices

&#35;&#35; Component Structure

- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components

&#35;&#35; React Patterns

- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components

&#35;&#35; React Performance

- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

&#35;&#35; React Project Structure

```
/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces
```
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-9" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 10: Zustand-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-9" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: Zustand Best Practices

&#35;&#35; Core Principles

- &#42;&#42;Implement a data layer&#42;&#42; for this React application following this specification carefully and to the letter.
- &#42;&#42;Complete separation of concerns&#42;&#42;: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- &#42;&#42;Shared state architecture&#42;&#42;: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

&#35;&#35; Technology Stack

- &#42;&#42;State management&#42;&#42;: Use Zustand for state management with automatic localStorage persistence via the `persist` middleware.

&#35;&#35; Store Architecture

- &#42;&#42;Base entity:&#42;&#42; Implement a BaseEntity interface with common properties that all entities extend:
```typescript 
export interface BaseEntity { 
  id: string; 
  createdAt: string; // ISO 8601 format 
  updatedAt: string; // ISO 8601 format 
}
```
- &#42;&#42;Entity-specific stores&#42;&#42;: Create separate Zustand stores for each entity type.
- &#42;&#42;Dictionary-based storage&#42;&#42;: Use dictionary/map structures (`Record<string, Entity>`) rather than arrays for O(1) access by ID.
- &#42;&#42;Handle relationships&#42;&#42;: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.

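Stripped of the Zustand and persistence wiring, the storage shape described above might look like this (the `Category` entity is illustrative):

```typescript
import { randomUUID } from 'node:crypto';

interface BaseEntity {
  id: string;
  createdAt: string; // ISO 8601 format
  updatedAt: string; // ISO 8601 format
}

interface Category extends BaseEntity {
  name: string;
}

// Dictionary-based storage: O(1) access by ID, no array scans.
const categories: Record<string, Category> = {};

function addCategory(input: { name: string }): string {
  const id = randomUUID();
  const now = new Date().toISOString();
  categories[id] = { id, name: input.name, createdAt: now, updatedAt: now };
  return id; // `add` returns the generated ID, per the standards below
}
```
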
&#35;&#35; Hook Layer

The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

&#35;&#35;&#35; Core Principles

1.  &#42;&#42;One Hook Per Entity&#42;&#42;: There will be a single, comprehensive custom hook for each entity (e.g., `useBlogPosts`, `useCategories`). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2.  &#42;&#42;Return reactive data, not getter functions&#42;&#42;: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3.  &#42;&#42;Expose Dictionaries for O(1) Access&#42;&#42;: To provide simple and direct access to data, every hook will return a dictionary (`Record<string, Entity>`) of the relevant items.

&#35;&#35;&#35; The Standard Hook Pattern

Every entity hook will follow this implementation pattern:

1.  &#42;&#42;Subscribe&#42;&#42; to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2.  &#42;&#42;Filter&#42;&#42; the data based on the parameters passed into the hook. This logic will be memoized with `useMemo` for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3.  &#42;&#42;Return a Consistent Shape&#42;&#42;: The hook will always return an object containing:
    &#42;   A &#42;&#42;filtered and sorted array&#42;&#42; (e.g., `blogPosts`) for rendering lists.
    &#42;   A &#42;&#42;filtered dictionary&#42;&#42; (e.g., `blogPostsDict`) for convenient `O(1)` lookup within the component.
    &#42;   All necessary &#42;&#42;action functions&#42;&#42; (`add`, `update`, `remove`) and &#42;&#42;relationship operations&#42;&#42;.
    &#42;   All necessary &#42;&#42;helper functions&#42;&#42; and &#42;&#42;derived data objects&#42;&#42;. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.

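Setting the React wiring aside (the store subscription and the `useMemo` step), the filter-and-shape logic of steps 2 and 3 can be sketched as a pure function; the `BlogPost` fields and the sort order are illustrative:

```typescript
interface BlogPost {
  id: string;
  title: string;
  categoryId: string;
  createdAt: string;
}

// Pure derivation: given the full dictionary from the store and an
// optional filter, return the consistent { array, dictionary } shape.
function deriveBlogPosts(
  all: Record<string, BlogPost>,
  filter?: { categoryId?: string },
) {
  const blogPosts = Object.values(all)
    .filter((post) => !filter?.categoryId || post.categoryId === filter.categoryId)
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt)); // newest first
  const blogPostsDict: Record<string, BlogPost> = Object.fromEntries(
    blogPosts.map((post) => [post.id, post] as const),
  );
  return { blogPosts, blogPostsDict };
}
```

In the real hook, `all` comes from the store subscription, so any change to the underlying data recomputes both the array and the dictionary automatically.
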
&#35;&#35; API Design Standards

- &#42;&#42;Object Parameters&#42;&#42;: Use object parameters instead of multiple direct parameters for better extensibility:
```typescript
// ✅ Preferred
add({ title, categoryIds })

// ❌ Avoid
add(title, categoryIds)
```
- &#42;&#42;Internal Methods&#42;&#42;: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

&#35;&#35; State Validation Standards

- &#42;&#42;Existence checks&#42;&#42;: All `update` and `remove` operations should validate entity existence before proceeding.
- &#42;&#42;Relationship validation&#42;&#42;: Verify both entities exist before establishing relationships between them.

&#35;&#35; Error Handling Patterns

- &#42;&#42;Operation failures&#42;&#42;: Define behavior when operations fail (e.g., updating non-existent entities).
- &#42;&#42;Graceful degradation&#42;&#42;: Define how helper functions should handle missing related entities.

&#35;&#35; Other Standards

- &#42;&#42;Secure ID generation&#42;&#42;: Use `crypto.randomUUID()` for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- &#42;&#42;Return type consistency&#42;&#42;: `add` operations return generated IDs for component workflows requiring immediate entity access, while `update` and `remove` operations return `void` to maintain clean modification APIs.
</code></pre>
</div></p>
             </div>
         </dd>    
    <span></span></dl>
</div>
                

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lyndon Cerejo</author><title>From Prompt To Partner: Designing Your Custom AI Assistant</title><link>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</link><pubDate>Fri, 26 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</guid><description>What if your best AI prompts didn’t disappear into your unorganized chat history, but came back tomorrow as a reliable assistant? In this article, you’ll learn how to turn one-off “aha” prompts into reusable assistants that are tailored to your audience, grounded in your knowledge, and consistent every time, saving you (and your team) from typing the same 448-word prompt ever again. No coding, just designing, and by the end, you’ll have a custom AI assistant that can augment your team.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/" />
              <title>From Prompt To Partner: Designing Your Custom AI Assistant</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Prompt To Partner: Designing Your Custom AI Assistant</h1>
                  
                    
                    <address>Lyndon Cerejo</address>
                  
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-published">2025-09-26T10:00:00+00:00</time>
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-modified">2026-02-09T03:03:08+00:00</time>
                </header>
                
                

<p>In “<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.</p>

<p>AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function &mdash; allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained for specific tasks: they don’t need repeated instructions or background information, and they can support not just you but your entire team.</p>

<h2 id="why-build-your-own-assistant">Why Build Your Own Assistant?</h2>

<p>If you’ve ever copied and pasted the same mega-prompt for the n<sup>th</sup> time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.</p>

<p>Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in <strong>your voice</strong>, with <strong>your context and constraints</strong> baked in. Instead of writing new prompts each time, repeatedly copy-pasting your structured prompts, or spending cycles bending a public AI Assistant to your needs, your own AI Assistant lets you and others get consistent, repeatable results faster.</p>

<h3 id="benefits-of-reusing-prompts-even-your-own">Benefits Of Reusing Prompts, Even Your Own</h3>

<p>Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:</p>

<ul>
<li><strong>Focused on a real repeating problem</strong><br />
A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).</li>
<li><strong>Customized for your context</strong><br />
Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.</li>
<li><strong>Consistency at scale</strong><br />
You can use the <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/#anatomy-structure-it-like-a-designer">WIRE+FRAME prompt framework</a> to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.</li>
<li><strong>Codifying expertise</strong><br />
Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).</li>
<li><strong>Faster ramp-up for teammates</strong><br />
Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.</li>
</ul>


<h3 id="reasons-for-your-own-ai-assistant-instead-of-public-ai-assistants">Reasons For Your Own AI Assistant Instead Of Public AI Assistants</h3>

<p>Public AI assistants are like stock templates. They are more focused than the generic AI platform and make useful starting points, but if you want something tailored to your needs and your team, you should build your own.</p>

<p>A few reasons for building your AI Assistant instead of using a public assistant someone else created include:</p>

<ul>
<li><strong>Fit</strong>: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.</li>
<li><strong>Trust &amp; Security</strong>: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.</li>
<li><strong>Evolution</strong>: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog &mdash; things a public bot won’t do for you.</li>
</ul>

<p>Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:</p>

<p><em>Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.</em></p>

<p><strong><em>Note</em></strong>: <em>We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM used, its training, mood, and flair for creative hallucinations.</em></p>

<h3 id="when-not-to-build-an-ai-assistant-yet">When Not To Build An AI Assistant (Yet)</h3>

<p>An AI Assistant is great when the <em>same</em> audience has the <em>same</em> problem <em>often</em>. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:</p>

<ul>
<li><strong>One-off or rare tasks</strong><br />
If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.</li>
<li><strong>Sensitive or regulated data</strong><br />
If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).</li>
<li><strong>Heavy orchestration or logic</strong><br />
Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.</li>
<li><strong>Real-time information</strong><br />
AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.</li>
<li><strong>High-stakes outputs</strong><br />
For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.</li>
<li><strong>No measurable win</strong><br />
If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.</li>
</ul>
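<p>As a concrete hedge for the sensitive-data point above, you can run a redaction pass over exported feedback before anything reaches a shared assistant. Below is a minimal, illustrative Python sketch; the regex patterns for emails and US-style phone numbers are assumptions, not an exhaustive PII filter, and an approved enterprise tool remains the safer option for regulated data.</p>

```python
import re

# Illustrative patterns only -- real PII detection needs a vetted tool.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with placeholder tokens before upload."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Ping me at jane.doe@example.com or 555-123-4567."))
```

<p>Run something like this over each feedback export, then spot-check the result before uploading it as a knowledge file.</p>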

<p>Just because these are signs not to build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit the decision when you notice that you’re reusing the same prompt weekly, multiple teammates ask for it, or manual copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay for itself quickly.</p>

<p>In a nutshell, build an AI Assistant when you can name the problem, the audience, the frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.</p>

<h2 id="as-always-start-with-the-user">As Always, Start with the User</h2>

<p>This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.</p>

<ul>
<li>Who will use this assistant?</li>
<li>What’s the specific pain or task they struggle with today?</li>
<li>What language, tone, and examples will feel natural to them?</li>
</ul>

<p>Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.</p>

<h2 id="from-prompt-to-assistant">From Prompt To Assistant</h2>

<p>You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.</p>

<ul>
<li><strong>M: Map your prompt</strong><br />
Port your successful WIRE+FRAME prompt into the AI assistant.</li>
<li><strong>A: Add knowledge and training</strong><br />
Ground the assistant in <em>your</em> world. Upload knowledge files, examples, or guides that make it uniquely yours.</li>
<li><strong>T: Tailor for audience</strong><br />
Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.</li>
<li><strong>C: Check, test, and refine</strong><br />
Test the preview with different inputs and refine until you get the results you expect.</li>
<li><strong>H: Hand off and maintain</strong><br />
Set sharing options and permissions, share the link, and maintain it.</li>
</ul>

<p>A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:</p>

<ul>
<li><strong>Prototype Prodigy</strong>: Transform rough ideas into prototypes and export them into Figma to refine.</li>
<li><strong>Critique Coach</strong>: Review wireframes or mockups and point out accessibility and usability gaps.</li>
</ul>

<p>But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: <em>“An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”</em></p>

<p>And that’s the one we will build in this article &mdash; say hello to <strong>Insight Interpreter.</strong></p>


<h2 id="walkthrough-insight-interpreter">Walkthrough: Insight Interpreter</h2>

<p>Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insight Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a> into a CustomGPT.</p>

<p>When you start building a CustomGPT by visiting <a href="https://chat.openai.com/gpts/editor?utm_source=chatgpt.com">https://chat.openai.com/gpts/editor</a>, you’ll see two paths:</p>

<ul>
<li><strong>Conversational interface</strong><br />
Vibe-chat your way through setup. It’s easy and quick, but as with unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.</li>
<li><strong>Configure interface</strong><br />
The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.</li>
</ul>

<p>The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="451"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png"
			
			sizes="100vw"
			alt="CustomGPT Configure Interface"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      CustomGPT Configure Interface. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="m-map-your-prompt">M: Map Your Prompt</h3>

<p>Paste your full WIRE+FRAME prompt into the <em>Instructions</em> section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:</p>

<ul>
<li><strong>W</strong>ho &amp; What: The AI persona and the core deliverable (<em>“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”</em>).</li>
<li><strong>I</strong>nput Context: Background or data scope to frame the task (<em>“…analyzing customer feedback uploaded from sources such as…”</em>).</li>
<li><strong>R</strong>ules &amp; Constraints: Boundaries (<em>“…do not fabricate pain points, representative quotes, journey stages, or patterns…”</em>).</li>
<li><strong>E</strong>xpected Output: Format and fields of the deliverable (<em>“…a structured list of themes. For each theme, include…”</em>).</li>
<li><strong>F</strong>low: Explicit, ordered sub-tasks (<em>“Recommended flow of tasks: Step 1…”</em>).</li>
<li><strong>R</strong>eference Voice: Tone, mood, or reference (<em>“…concise, pattern-driven, and objective…”</em>).</li>
<li><strong>A</strong>sk for Clarification: Ask questions if unclear (<em>“…if data is missing or unclear, ask before continuing…”</em>).</li>
<li><strong>M</strong>emory: Memory to recall earlier definitions (<em>“Unless explicitly instructed otherwise, keep using this process…”</em>).</li>
<li><strong>E</strong>valuate &amp; Iterate: Have the AI self-critique outputs (<em>“…critically evaluate…suggest improvements…”</em>).</li>
</ul>

<p>If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective <em>Instructions</em> sections.</p>

<h3 id="a-add-knowledge-and-training">A: Add Knowledge And Training</h3>

<p>In the knowledge section, upload up to 20 clearly labeled files that will help the CustomGPT respond effectively. Keep files small and versioned: <em>reviews_Q2_2025.csv</em> beats <em>latestfile_final2.csv</em>. For our prompt, which analyzes customer feedback, generates themes organized by customer journey stage, and rates them by severity and effort, files could include:</p>

<ul>
<li>Taxonomy of themes;</li>
<li>Instructions on parsing uploaded data;</li>
<li>Examples of real UX research reports using this structure;</li>
<li>Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;</li>
<li>Customer journey map stages;</li>
<li>Customer feedback file templates (not actual data).</li>
</ul>
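<p>The scoring guidelines are worth spelling out precisely, because they are what keeps severity and effort ratings consistent across runs. As a purely illustrative sketch of how such a guideline could rank themes once they are rated, assuming a simple impact heuristic (severity times frequency, discounted by effort) that is not part of the article’s prompt:</p>

```python
# Hypothetical prioritization heuristic for rated themes.
# Field names (severity, frequency, effort) mirror the assistant's output
# fields; the formula itself is an illustrative assumption.
def priority(theme: dict) -> float:
    """Higher severity and frequency raise priority; higher effort lowers it."""
    return theme["severity"] * theme["frequency"] / theme["effort"]

themes = [
    {"name": "Confusing checkout", "severity": 5, "frequency": 40, "effort": 3},
    {"name": "Slow search", "severity": 3, "frequency": 80, "effort": 2},
    {"name": "FAQ typo", "severity": 1, "frequency": 5, "effort": 1},
]

ranked = sorted(themes, key=priority, reverse=True)
for t in ranked:
    print(f'{t["name"]}: {priority(t):.1f}')
```

<p>Writing the rule down in your scoring-guideline file, in plain language, gives the CustomGPT the same tie-breaking logic every time instead of letting it improvise.</p>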

<p>An example of a file to help it parse uploaded data is shown below:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="447"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png"
			
			sizes="100vw"
			alt="GPT file parsing instructions"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="t-tailor-for-audience">T: Tailor For Audience</h3>

<ul>
<li><strong>Audience tailoring</strong><br />
If you’re building this for others, your prompt should already address tone in the “Reference Voice” section. If it doesn’t, do it now, so the CustomGPT is tailored to the tone and expertise level of the people who will use it. In addition, use the <em>Conversation starters</em> section to add a few example prompts, worded for your users, to help them get going. For instance, for our Insights Interpreter, “Analyze feedback from the attached file” is more self-explanatory for anyone than “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.</li>
<li><strong>Functional tailoring</strong><br />
Fill in the CustomGPT name, icon, description, and capabilities.

<ul>
<li><em>Name</em>: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter &mdash; Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.</li>
<li><em>Icon</em>: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.</li>
<li><em>Description</em>: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.</li>
<li><em>Recommended Model</em>: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.</li>
<li><em>Capabilities</em>: Turn off anything you won’t need. We’ll turn off “Web Search” so the CustomGPT focuses only on uploaded data rather than expanding its search online, and turn on “Code Interpreter &amp; Data Analysis” so it can understand and process uploaded files. “Canvas” lets users edit writing tasks with the GPT on a shared canvas; turn on “Image generation” only if the CustomGPT needs to create images.</li>
<li><em>Actions</em>: These make <a href="https://platform.openai.com/docs/actions/introduction">third-party APIs</a> available to the CustomGPT; advanced functionality we don’t need here.</li>
<li><em>Additional Settings</em>: Sneakily hidden and opted in by default; this is where I opt out of having my conversations used to train OpenAI’s models.</li>
</ul></li>
</ul>

<h3 id="c-check-test-refine">C: Check, Test &amp; Refine</h3>

<p>Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.</p>

<p>Use the <em>Preview</em> panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.</p>

<p>When things don’t look right, here are quick debugging fixes:</p>

<ul>
<li><strong>Generic answers?</strong><br />
Tighten <em>Input Context</em> or update the knowledge files.</li>
<li><strong>Hallucinations?</strong><br />
Revisit your <em>Rules</em> section. Turn off web browsing if you don’t need external data.</li>
<li><strong>Wrong tone?</strong><br />
Strengthen <em>Reference Voice</em> or swap in clearer examples.</li>
<li><strong>Inconsistent?</strong><br />
Test across models in preview and set the most reliable one as “Recommended.”</li>
</ul>

<h3 id="h-hand-off-and-maintain">H: Hand Off And Maintain</h3>

<p>When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:</p>

<ul>
<li><strong>Only me</strong>: Private use. Perfect if you’re still experimenting or keeping it personal.</li>
<li><strong>Anyone with the link</strong>: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.</li>
<li><strong>GPT Store</strong>: Fully public. Your assistant is listed and findable by anyone browsing the store. <em>(This is the option we’ll use.)</em></li>
<li><strong>Business workspace</strong> (if you’re on GPT Business): Share with others within your business account only &mdash; the easiest way to keep it in-house and controlled.</li>
</ul>

<p>But handoff doesn’t end with hitting publish; you should maintain the assistant to keep it relevant and useful:</p>

<ul>
<li><strong>Collect feedback</strong>: Ask teammates what worked, what didn’t, and what they had to fix manually.</li>
<li><strong>Iterate</strong>: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: <a href="https://chatgpt.com/gpts/mine">https://chatgpt.com/gpts/mine</a></li>
<li><strong>Track changes</strong>: Keep a simple changelog (date, version, updates) for traceability.</li>
<li><strong>Refresh knowledge</strong>: Update knowledge files and examples on a regular cadence so answers don’t go stale.</li>
</ul>

<p>And that’s it! <a href="https://go.cerejo.com/insights-interpreter">Our Insights Interpreter is now live!</a></p>

<p>Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="325"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png"
			
			sizes="100vw"
			alt="Results of the structured WIRE&#43;FRAME prompt from the previous article"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the structured WIRE+FRAME prompt from the previous article. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png'>Large preview</a>)
    </figcaption>
  
</figure>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="276"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png"
			
			sizes="100vw"
			alt="Results of the Insights Interpreter CustomGPT based on the same prompt"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the Insights Interpreter CustomGPT based on the same prompt. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The results are similar, with slight differences, and that’s expected. Compare them carefully and you’ll see the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of this comes from the CustomGPT’s knowledge and training files, including instructions, examples, and guardrails, which now act as always-on guidance.</p>

<p>Keep in mind that generative AI is, by nature, non-deterministic: even with the same data, you won’t get identical wording every time. In addition, the underlying models and their capabilities change rapidly. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare outputs for structure, priorities, and evidence rather than exact wording.</p>
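<p>One way to make “compare for structure, not wording” actionable is to reduce each run’s output to the fields that should stay stable and diff those. A minimal sketch, where the field names are assumptions standing in for whatever your output format uses:</p>

```python
# Compare two assistant runs on structure (journey stage + severity),
# ignoring the wording of the theme text. Field names are illustrative.
def structure(themes):
    """Reduce a run's output to the parts expected to stay stable."""
    return sorted((t["stage"], t["severity"]) for t in themes)

run_a = [{"stage": "Checkout", "severity": 5,
          "theme": "Payment form confuses users"}]
run_b = [{"stage": "Checkout", "severity": 5,
          "theme": "Users struggle with the payment form"}]

print(structure(run_a) == structure(run_b))  # wording differs, structure matches
```

<p>If two runs disagree here, rather than in phrasing, that’s the signal to tighten instructions or knowledge files.</p>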

<p>While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own, with exactly the tone, context, and output formats you or your team needs. That’s how you get the real AI Assistant you need!</p>


<h2 id="inspiration-for-other-ai-assistants">Inspiration For Other AI Assistants</h2>

<p>We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:</p>

<ul>
<li><strong>Workshop Wizard</strong>: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.</li>
<li><strong>Research Roundup Buddy</strong>: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.</li>
<li><strong>Persona Refresher</strong>: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).</li>
<li><strong>Content Checker</strong>: Proofs copy for tone, accessibility, and reading level before it ever hits your site.</li>
<li><strong>Trend Tamer</strong>: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.</li>
<li><strong>Microcopy Provocateur</strong>: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or calls to action.</li>
<li><strong>Ethical UX Debater</strong>: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.</li>
</ul>

<p>The best AI Assistants come from carefully inspecting your workflow for regular, repetitive tasks where AI can augment your work. Then follow the steps above to build a team of customized AI assistants.</p>

<h2 id="ask-me-anything-about-assistants">Ask Me Anything About Assistants</h2>

<ul>
<li><strong>What are some limitations of a CustomGPT?</strong><br />
Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are trained on vast amounts of data and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern deliver better and more relevant results by treating your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.</li>
<li><strong>Can I copy someone else’s public CustomGPT and tweak it?</strong><br />
Not with CustomGPTs. If another CustomGPT inspires you, look at how it’s framed and rebuild your own using WIRE+FRAME &amp; MATCH; that way, you make it your own and have full control of the instructions, files, and updates. You can, however, do this with Google’s equivalent, Gemini Gems. Shared Gems behave similarly to shared Google Docs: once shared, a Gem’s instructions and uploaded files can be viewed by any user with access to it, and any user with edit access can also update or delete the Gem.</li>
<li><strong>How private are my uploaded files?</strong><br />
The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.</li>
<li><strong>How many files can I upload, and does size matter?</strong><br />
Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.</li>
<li><strong>What’s the difference between a CustomGPT and a Project?</strong><br />
A CustomGPT is a focused assistant, like an intern trained to do one role well (such as “Insight Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists; Projects are containers. If you want something reusable, shareable, and role-specific, go with a CustomGPT. If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.</li>
</ul>

<h2 id="from-reading-to-building">From Reading To Building</h2>

<p>In this AI x Design series, we’ve gone from messy prompting (“<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”) to a structured prompt framework, WIRE+FRAME (“<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”). And now, in this article, your very own reusable AI sidekick.</p>

<p>CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in <em>how</em> you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They <strong>extend your craft</strong>, <strong>codify your expertise</strong>, and give your team leverage that generic AI models can’t.</p>

<p>Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item></channel></rss>