Articles on Smashing Magazine — For Web Designers And Developers https://www.smashingmagazine.com/ Recent content in Articles on Smashing Magazine — For Web Designers And Developers Mon, 09 Feb 2026 03:03:10 GMT https://validator.w3.org/feed/docs/rss2.html manual en Articles on Smashing Magazine — For Web Designers And Developers https://www.smashingmagazine.com/images/favicon/app-icon-512x512.png https://www.smashingmagazine.com/ All rights reserved 2026, Smashing Media AG Development Design UX Mobile Front-end <![CDATA[CSS <code>@scope</code>: An Alternative To Naming Conventions And Heavy Abstractions]]> https://smashingmagazine.com/2026/02/css-scope-alternative-naming-conventions/ https://smashingmagazine.com/2026/02/css-scope-alternative-naming-conventions/ Thu, 05 Feb 2026 08:00:00 GMT When learning the principles of basic CSS, one is taught to write modular, reusable, and descriptive styles to ensure maintainability. But when developers become involved with real-world applications, it often feels impossible to add UI features without styles leaking into unintended areas.

This issue often snowballs into a self-perpetuating loop: styles that are theoretically scoped to one element or class start showing up where they don’t belong. This forces the developer to write even more specific selectors to override the leaked styles, which then accidentally override global styles, and so on.

Rigid class name conventions, such as BEM, are one theoretical solution to this issue. The BEM (Block, Element, Modifier) methodology is a systematic way of naming CSS classes to ensure reusability and structure within CSS files. Naming conventions like this can reduce cognitive load by leveraging domain language to describe elements and their state, and if implemented correctly, can make styles for large applications easier to maintain.

In the real world, however, it doesn’t always work out like that. Priorities can change, and with change, implementation becomes inconsistent. Small changes to the HTML structure can require many CSS class name revisions. With highly interactive front-end applications, class names following the BEM pattern can become long and unwieldy (e.g., app-user-overview__status--is-authenticating), and not fully adhering to the naming rules breaks the system’s structure, thereby negating its benefits.

Given these challenges, it’s no wonder that developers have turned to utility frameworks, with Tailwind being the most popular. Rather than trying to fight what seems like an unwinnable specificity war between styles, it is easier to give up on the CSS Cascade and use tools that guarantee complete isolation.

Developers Lean More On Utilities

How do we know that some developers are keen on avoiding cascaded styles? It’s the rise of “modern” front-end tooling — like CSS-in-JS frameworks — designed specifically for that purpose. Working with isolated styles that are tightly scoped to specific components can seem like a breath of fresh air. It removes the need to name things — still one of the most hated and time-consuming front-end tasks — and allows developers to be productive without fully understanding or leveraging the benefits of CSS inheritance.

But ditching the CSS Cascade comes with its own problems. For instance, composing styles in JavaScript requires heavy build configurations and often leads to styles awkwardly intermingling with component markup or HTML. Instead of carefully considered naming conventions, we allow build tools to autogenerate selectors and identifiers for us (e.g., .jsx-3130221066), requiring developers to keep up with yet another pseudo-language. (As if the cognitive load of understanding what all your component’s useEffects do weren’t already enough!)

Abstracting the job of naming classes away to tooling also means that basic debugging is often constrained to development builds of a specific application version, rather than leveraging native browser features that support live debugging, such as Developer Tools.

It’s almost like we need to develop tools to debug the tools we’re using to abstract what the web already provides — all for the sake of running away from the “pain” of writing standard CSS.

Luckily, modern CSS features not only make writing standard CSS more flexible but also give developers like us a great deal more power to manage the cascade and make it work for us. CSS Cascade Layers are a great example, but there’s another feature that receives surprisingly little attention — although that is changing now that it has recently become Baseline compatible.
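As a quick illustration of that first point (a minimal sketch, not tied to any of the examples below), Cascade Layers let authors declare precedence explicitly, so a later layer wins regardless of how specific the selectors in earlier layers are:

```css
/* Layers are ranked by declaration order: for normal declarations,
   later layers beat earlier ones, independent of selector specificity. */
@layer reset, components, overrides;

@layer reset {
  button { all: unset; }
}

@layer components {
  .button { padding: 0.5rem 1rem; }
}

@layer overrides {
  /* Beats .button above, even though this selector is less specific */
  button { cursor: pointer; }
}
```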

The CSS @scope At-Rule

I consider the CSS @scope at-rule to be a potential cure for the sort of style-leak-induced anxiety we’ve covered, one that does not force us to compromise native web advantages for abstractions and extra build tooling.

“The @scope CSS at-rule enables you to select elements in specific DOM subtrees, targeting elements precisely without writing overly-specific selectors that are hard to override, and without coupling your selectors too tightly to the DOM structure.”

MDN

In other words, we can work with isolated styles in specific instances without sacrificing inheritance, cascading, or even the basic separation of concerns that has been a long-running guiding principle of front-end development.

Plus, it has excellent browser coverage. In fact, Firefox 146 added support for @scope in December, making it Baseline compatible for the first time. Here is a simple comparison between a button using the BEM pattern versus the @scope rule:

<!-- BEM --> 
<button class="button button--primary">
  <span class="button__text">Click me</span>
  <span class="button__icon">→</span>
</button>

<style>
  .button .button__text { /* button text styles */ }
  .button .button__icon { /* button icon styles */ }
  .button--primary { /* primary button styles */ }
</style>
<!-- @scope --> 
<button class="primary-button">
  <span>Click me</span>
  <span>→</span>
</button>

<style>
  @scope (.primary-button) {
    span:first-child { /* button text styles */ }
    span:last-child { /* button icon styles */ }
  }
</style>

The @scope rule allows for precision with less complexity. The developer no longer needs to create boundaries using class names, which, in turn, allows them to write selectors based on native HTML elements, thereby eliminating the need for prescriptive CSS class name patterns. By simply removing the need for class name management, @scope can alleviate the fear associated with CSS in large projects.

Basic Usage

To get started, add the @scope rule to your CSS and insert a root selector to which styles will be scoped:

@scope (<selector>) {
  /* Styles scoped to the <selector> */
}

So, for example, if we were to scope styles to a <nav> element, it may look something like this:

@scope (nav) {
  a { /* Link styles within nav scope */ }

  a:active { /* Active link styles */ }

  a:active::before { /* Active link with pseudo-element for extra styling */ }

  @media (max-width: 768px) {
    a { /* Responsive adjustments */ }
  }
}

This, on its own, is not a groundbreaking feature. However, a second argument can be added to the scope to create a lower boundary, effectively defining the scope’s start and end points.

/* Any <a> element inside <ul> will not have the styles applied */
@scope (nav) to (ul) {
  a {
    font-size: 14px;
  }
}

This practice is called donut scoping. Without @scope, achieving the same effect requires workarounds: a series of similar, highly specific selectors coupled tightly to the DOM structure, a :not() pseudo-class, or extra class names assigned to the <a> elements within the <nav> to handle the differing CSS.

Compared to those other approaches, the @scope method is much more concise. More importantly, it removes the risk of broken styles if class names change or are misused, or if the HTML structure is modified. Now that @scope is Baseline compatible, we no longer need workarounds!
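For comparison, here is a rough sketch of what the :not()-based workaround might look like without @scope (the exact selectors depend on your markup):

```css
/* Without @scope: exclude links inside <ul> by hand.
   This breaks silently if the nav's internal structure changes. */
nav a:not(nav ul a) {
  font-size: 14px;
}
```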

We can take this idea further with multiple end boundaries to create a “style figure eight”:

/* Any <a> or <p> element inside <aside> or <nav> will not have the styles applied */
@scope (main) to (aside, nav) {
  a {
    font-size: 14px;
  }
  p {
    line-height: 16px;
    color: darkgrey;
  }
}

Compare that to a version handled without the @scope rule, where the developer has to “reset” styles to their defaults:

main a {
  font-size: 14px;
}

main p {
  line-height: 16px;
  color: darkgrey;
}

main aside a,
main nav a {
  font-size: inherit; /* or whatever the default should be */
}

main aside p,
main nav p {
  line-height: inherit; /* or whatever the default should be */
  color: inherit; /* or a specific color */
}

Check out the following example. Do you notice how simple it is to target some nested selectors while exempting others?

See the Pen @scope example [forked] by Blake Lundquist.

Consider a scenario where unique styles need to be applied to slotted content within web components. Slotted content is projected into a component’s shadow tree, but it remains part of the light DOM and still inherits styles from the parent document. The developer might want to implement different styles depending on which web component the content is slotted into:

<!-- Same <user-card> content, different contexts -->
<product-showcase>
  <user-card slot="reviewer">
    <img src="avatar.jpg" slot="avatar">
    <span slot="name">Jane Doe</span>
  </user-card>
</product-showcase>

<team-roster>
  <user-card slot="member">
    <img src="avatar.jpg" slot="avatar">
    <span slot="name">Jane Doe</span>
  </user-card>
</team-roster>

In this example, the developer might want the <user-card> to have distinct styles only if it is rendered inside <team-roster>:

@scope (team-roster) {
  user-card {
    display: inline-flex;
    align-items: center;
    gap: 0.5rem;
  }

  user-card img {
    border-radius: 50%;
    width: 40px;
    height: 40px;
  }
}

More Benefits

There are additional ways that @scope can remove the need for class management without resorting to utilities or JavaScript-generated class names. For example, @scope opens up the possibility to easily target descendants of any selector, not just class names:

/* Only <div> elements with a direct <button> child become scoping roots */
@scope (div:has(> button)) {
  p {
    font-size: 14px;
  }
}

And they can be nested, creating scopes within scopes:

@scope (main) {
  p {
    font-size: 16px;
    color: black;
  }
  @scope (section) {
    p {
      font-size: 14px;
      color: blue;
    }
    @scope (.highlight) {
      p {
        background-color: yellow;
        font-weight: bold;
      }
    }
  }
}

Plus, the root scope can be easily referenced within the @scope rule:

/* Applies to elements inside direct child <section> elements of <main>, but stops at any <aside> that is a direct child of those sections */
@scope (main > section) to (:scope > aside) {
  p {
    background-color: lightblue;
    color: blue;
  }
  /* Applies to <ul> elements that are immediate siblings of the scope root */
  :scope + ul {
    list-style: none;
  }
}

The @scope at-rule also introduces a new proximity dimension to CSS specificity resolution. In traditional CSS, when two selectors match the same element, the selector with the higher specificity wins. With @scope, when two competing selectors have equal specificity, the one whose scope root is closer to the matched element wins. This eliminates the need to override parent styles by artificially increasing specificity, since inner components naturally supersede outer element styles.

<style>
  @scope (.container) {
    .title { color: green; } 
  }
  /* The <h2> is closer to .container than to .sidebar, so "color: green" wins. */
  @scope (.sidebar) {
    .title { color: red; }
  }
</style>

<div class="sidebar">
  <div class="container">
    <h2 class="title">Hello</h2>
  </div>
</div>

Conclusion

Utility-first CSS frameworks, such as Tailwind, work well for prototyping and smaller projects. Their benefits quickly diminish, however, when used in larger projects involving more than a couple of developers.

Front-end development has become increasingly overcomplicated in the last few years, and CSS is no exception. While the @scope rule isn’t a cure-all, it can reduce the need for complex tooling. When used in place of, or alongside strategic class naming, @scope can make it easier and more fun to write maintainable CSS.

Further Reading

]]>
hello@smashingmagazine.com (Blake Lundquist)
<![CDATA[Combobox vs. Multiselect vs. Listbox: How To Choose The Right One]]> https://smashingmagazine.com/2026/02/combobox-vs-multiselect-vs-listbox/ https://smashingmagazine.com/2026/02/combobox-vs-multiselect-vs-listbox/ Tue, 03 Feb 2026 10:00:00 GMT So what’s the difference between combobox, multiselect, listbox, and dropdown? While all these UI components might appear similar, they serve different purposes. The choice often comes down to the number of available options and their visibility.

Let’s see how they differ, what purpose they serve, and how to choose the right one — avoiding misunderstandings and wrong expectations along the way.

Not All List Patterns Are The Same

All the UI components highlighted above have exactly one thing in common: they support users’ interactions with lists. However, they do so slightly differently.

Let’s take a look at each, one by one:

  • Dropdown → list is hidden until it’s triggered.
  • Combobox → type to filter + select 1 option.
  • Multiselect → type to filter + select many options.
  • Listbox → all list options visible by default (+ scroll).
  • Dual listbox → move items between 2 listboxes.

In other words, Combobox combines a text input field with a dropdown list, so users can type to filter and select a single option. With Multiselect, users can select many options (often displayed as pills or chips).

Listboxes display all options by default, often with scrolling. They’re helpful when users need to see all available choices immediately. A dual listbox (also called a transfer list) is a variation of a listbox that allows users to move items between two listboxes (left ↔ right), typically for bulk selection.
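As a point of reference (a minimal sketch, not from the patterns above), the native HTML listbox is simply a `<select>` with the `size` and `multiple` attributes, which keeps all options visible at once:

```html
<!-- A native listbox: all options visible, multi-selection enabled -->
<label for="toppings">Toppings</label>
<select id="toppings" name="toppings" size="5" multiple>
  <option>Cheese</option>
  <option>Mushrooms</option>
  <option>Onions</option>
  <option>Peppers</option>
  <option>Olives</option>
</select>
```

Custom combobox and multiselect implementations typically build on this behavior with a filtering text input layered on top.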

Never Hide Frequently Used Options

As mentioned above, the choice of the right UI component depends on 2 factors: how many list options are available, and if all these options need to be visible by default. All lists could have tree structures, nesting, and group selection, too.

There is one principle that I’ve been following for years for any UI component: never hide frequently used options. If users rely on a particular selection frequently, there is very little value in hiding it from them.

We could either make it pre-selected, or (if there are only 2–3 frequently used options) show them as chips or buttons, and then show the rest of the list on interaction. In general, it’s a good idea to always display popular options — even if it might clutter the UI.

How To Choose Which?

Not every list needs a complex selection method. For lists with fewer than 5 items, simple radio buttons or checkboxes usually work best. But if users need to select from a large list of options (e.g., 200+ items), combobox + multiselect are helpful because of the faster filtering (e.g., country selection).

Listboxes are helpful when people need to access many options at once, especially if they need to choose many options from that list as well. They could be helpful for frequently used filters.

Dual listbox is often overlooked and ignored. But it can be very helpful for complex tasks, e.g., bulk selection, or assigning roles, tasks, and responsibilities. It’s the only UI component that allows users to review their full selection side-by-side with the source list before committing.

In fact, dual listbox is often faster, more accurate, and more accessible than drag-and-drop.

Usability Considerations

One important note to keep in mind is that all list types need to support keyboard navigation (e.g., ↑/↓ arrow keys) for accessibility. Some people will almost always rely upon the keyboard to select options once they start typing.

Beyond that:

  • For lists with 7+ options, consider adding “Select All” and “Clear All” functionality to streamline user interaction.
  • For lengthy lists with a combobox, expose all options to users on click/tap, as otherwise they might never be seen.
  • Most importantly, don’t display non-interactive elements as buttons to avoid confusion — and don’t display interactive elements as static labels.

Wrapping Up: Not Everything Is A Dropdown

Names matter. A vertical list of options is typically described as a “dropdown” — but often it’s a bit too generic to be meaningful. “Dropdown” hints that the list is hidden by default. “Multiselect” implies multi-selection (checkbox) within a list. “Combobox” implies text input. And “Listbox” is simply a list of selectable items, visible at all times.

The goal isn’t to be consistent with the definitions above for the sake of it. But rather to align intentions — speak the same language when deciding on, designing, building, and then using these UI components.

It should work for everyone — designers, engineers, and end users — as long as static labels don’t look like interactive buttons, and radio buttons don’t act like checkboxes.

Meet “Design Patterns For AI Interfaces”

Meet Design Patterns For AI Interfaces, Vitaly’s new video course with practical examples from real-life products — with a live UX training happening soon. Jump to a free preview.

Meet Design Patterns For AI Interfaces, Vitaly’s video course on interface design & UX.

Video + UX Training

$ 450.00 $ 799.00 Get Video + UX Training

30 video lessons (10h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 275.00 $ 395.00
Get the video course

30 video lessons (10h). Updated yearly.
Also available as a UX Bundle with 3 video courses.

Useful Resources ]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Short Month, Big Ideas (February 2026 Wallpapers Edition)]]> https://smashingmagazine.com/2026/01/desktop-wallpaper-calendars-february-2026/ https://smashingmagazine.com/2026/01/desktop-wallpaper-calendars-february-2026/ Sat, 31 Jan 2026 09:00:00 GMT Sometimes, the best inspiration lies right in front of us. With that in mind, we embarked on our wallpapers adventure more than 14 years ago. The idea: to provide you with a new collection of unique and inspiring desktop wallpapers every month. This February is no exception, of course.

For this post, artists and designers from across the globe once again got their ideas flowing and designed wallpapers to bring some good vibes to your desktops and home screens. All of them come in a variety of screen resolutions and can be downloaded for free. A huge thank-you to everyone who shared their design with us this month — this post wouldn’t exist without your kind support!

If you too would like to get featured in one of our next wallpapers posts, please don’t hesitate to submit your design. We are always looking for creative talent and can’t wait to see your story come to life!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us but rather designed from scratch by the artists themselves.

Eternal Tech

“The first one being older than 100 years, radio is still connecting people, places, and events.” — Designed by Ginger It Solutions from Serbia.

Coffee Break

Designed by Ricardo Gimenes from Spain.

Mosa-hic

“Small colored squares make me think of a mosaic, but the squares are not precisely tiled, so I call it a ‘mosa-hic,’ like the ‘hic hic’ sound someone makes when they’ve had a bit too much to drink.” — Designed by Philippe Brouard from France.

Search Within

“I used the search-bar metaphor to reflect a daily habit and transform it into a moment of introspection, reminding myself to pause and look inward.” — Designed by Hitesh Puri from India, Delhi.

The Lighthouse Of Mystery

“We continue the film saga. This time, we go to the mysterious Shutter Island, a lighthouse with many mysteries that will absorb you.” — Designed by Veronica Valenzuela from Spain.

That’s All Folks

Designed by Ricardo Gimenes from Spain.

Fall In Love With Yourself

“We dedicate February to Frida Kahlo to illuminate the world with color. Fall in love with yourself, with life, and then with whoever you want.” — Designed by Veronica Valenzuela from Spain.

Plants

“I wanted to draw some very cozy place, both realistic and cartoonish, filled with little details. A space with a slightly unreal atmosphere that some great shops or cafes have. A mix of plants, books, bottles, and shelves seemed like a perfect fit. I must admit, it took longer to draw than most of my other pictures! But it was totally worth it. Watch the making-of.” — Designed by Vlad Gerasimov from Georgia.

Farewell, Winter

“Although I love winter (mostly because of the fun winter sports), there are other great activities ahead. Thanks, winter, and see you next year!” — Designed by Igor Izhik from Canada.

Love Is In The Play

“Forget Lady and the Tramp and their spaghetti kiss, ’cause Snowflake and Cloudy are enjoying their bliss. The cold and chilly February weather made our kitties knit themselves a sweater. Knitting and playing, the kitties tangled in the yarn and fell in love in your neighbor’s barn.” — Designed by PopArt Studio from Serbia.

True Love

Designed by Ricardo Gimenes from Spain.

February Ferns

Designed by Nathalie Ouederni from France.

The Great Beyond

Designed by Lars Pauwels from Belgium.

It’s A Cupcake Kind Of Day

“Sprinkles are fun, festive, and filled with love… especially when topped on a cupcake! Everyone is creative in their own unique way, so why not try baking some cupcakes and decorating them for your sweetie this month? Something homemade, like a cupcake or DIY craft, is always a sweet gesture.” — Designed by Artsy Cupcake from the United States.

Mochi

Designed by Ricardo Gimenes from Spain.

Balloons

Designed by Xenia Latii from Germany.

Magic Of Music

Designed by Vlad Gerasimov from Georgia.

Love Angel Vader

“Valentine’s Day is coming? Noooooooooooo!” — Designed by Ricardo Gimenes from Spain.

Principles Of Good Design

“The simplicity seen in the work of Dieter Rams which has ensured his designs from the 50s and 60s still hold a strong appeal.” — Designed by Vinu Chaitanya from India.

Time Thief

“Who has stolen our time? Maybe the time thief, so be sure to enjoy the other 28 days of February.” — Designed by Colorsfera from Spain.

Dark Temptation

“A dark romantic feel, walking through the city on a dark and rainy night.” — Designed by Matthew Talebi from the United States.

The Bathman

Designed by Ricardo Gimenes from Spain.

Snowy Sunset

Designed by Nathalie Croze from France.

Like The Cold Side Of A Pillow

Designed by Sarah Tanner from the United States.

Ice Cream Love

“My inspiration for this wallpaper is the biggest love someone can have in life: the love for ice cream!” — Designed by Zlatina Petrova from Bulgaria.

Febrewery

“I live in Madison, WI, which is famous for its breweries. Wisconsin even named their baseball team “The Brewers.” If you like beer, brats, and lots of cheese, it’s the place for you!” — Designed by Danny Gugger from the United States.

Share The Same Orbit!

“I prepared a simple and chill layout design for February called ‘Share The Same Orbit!’ which suggests to share the love orbit.” — Designed by Valentin Keleti from Romania.

Febpurrary

“I was doodling pictures of my cat one day and decided I could turn it into a fun wallpaper — because a cold, winter night in February is the perfect time for staying in and cuddling with your cat, your significant other, or both!” — Designed by Angelia DiAntonio from Ohio, USA.

Snow

Designed by Elise Vanoorbeek from Belgium.

Dog Year Ahead

Designed by PopArt Studio from Serbia.

French Fries

Designed by Doreen Bethge from Germany.

“Greben” Icebreaker

“Danube is Europe’s second largest river, connecting ten different countries. In these cold days, when ice paralyzes rivers and closes waterways, a small but brave icebreaker called Greben (Serbian word for ‘reef’) seems stronger than winter. It cuts through the ice on Đerdap gorge (Iron Gate) — the longest and biggest gorge in Europe — thus helping the production of electricity in the power plant. This is our way to give thanks to Greben!” — Designed by PopArt Studio from Serbia.

Out There, There’s Someone Like You

“I am a true believer that out there in this world there is another person who is just like us, the problem is to find her/him.” — Designed by Maria Keller from Mexico.

On The Light Side

Designed by Ricardo Gimenes from Spain.

Get Featured Next Month

Feeling inspired? We’ll publish the March wallpapers on February 28, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[Practical Use Of AI Coding Tools For The Responsible Developer]]> https://smashingmagazine.com/2026/01/practical-use-ai-coding-tools-responsible-developer/ https://smashingmagazine.com/2026/01/practical-use-ai-coding-tools-responsible-developer/ Fri, 30 Jan 2026 13:00:00 GMT Over the last two years, my team at Work & Co and I have been testing out and gradually integrating AI coding tools like Copilot, Cursor, Claude, and ChatGPT to help us ship web experiences that are used by the masses. Admittedly, after some initial skepticism and a few aha moments, various AI tools have found their way into my daily use. Over time, the list of applications where we found it made sense to let AI take over started to grow, so I decided to share some practical use cases for AI tools for what I call the “responsible developer”.

What do I mean by a responsible developer?

We have to make sure that we deliver quality code as expected by our stakeholders and clients. Our contributions (i.e., pull requests) should not become a burden on our colleagues who will have to review and test our work. Also, in case you work for a company: The tools we use need to be approved by our employer. Sensitive aspects like security and privacy need to be handled properly: Don’t paste secrets, customer data (PII), or proprietary code into tools without policy approval. Treat it like code from a stranger on the internet. Always test and verify.

Note: This article assumes some very basic familiarity with AI coding tools like Copilot inside VSCode or Cursor. If all of this sounds totally new and unfamiliar to you, the Github Copilot video tutorials can be a fantastic starting point for you.

Helpful Applications Of AI Coding Tools

Note: The following examples will mainly focus on working in JavaScript-based web applications like React, Vue, Svelte, or Angular.

Getting An Understanding Of An Unfamiliar Codebase

It’s not uncommon to work on established codebases, and joining a large legacy codebase can be intimidating. Simply open your project and your AI agent (in my case, Copilot Chat in VSCode) and start asking questions just like you would ask a colleague. In general, I like to talk to any AI agent just as I would to a fellow human.

Here is a more refined example prompt:

“Give me a high-level architecture overview: entrypoints, routing, auth, data layer, build tooling. Then list 5 files to read in order. Treat explanations as hypotheses and confirm by jumping to referenced files.”

You can keep asking follow-up questions like “How does the routing work in detail?” or “Talk me through the authentication process and methods” and it will lead you to helpful directions to shine some light into the dark of an unfamiliar codebase.

Triaging Breaking Changes When Upgrading Dependencies

Updating npm packages, especially when they come with breaking changes, can be tedious and time-consuming work and can leave you debugging a fair number of regressions. I recently had to upgrade the data visualization library plotly.js by one major release, from version 2 to 3, and as a result, the axis labeling in some of the graphs stopped working.

I went on to ask ChatGPT:

“I updated my Angular project that uses Plotly. I updated the plotly.js-dist package from version 2.35.2 to 3.1.0 — and now the labels on the x and y axis are gone. What happened?”

The agent came back with a solution promptly (see for yourself below).

Note: I still verified the explanation against the official migration guide before shipping the fix.

Replicating Refactors Safely Across Files

Growing codebases most certainly unveil opportunities for code consolidation. For example, you notice code duplication across files that can be extracted into a single function or component. As a result, you decide to create a shared component that can be included instead and perform that refactor in one file. Now, instead of manually carrying out those changes to your remaining files, you ask your agent to roll out the refactor for you.

Agents let you select multiple files as context. Once the refactor for one file is done, I can add both the refactored and untouched files into context and prompt the agent to roll out the changes to other files like this: “Replicate the changes I made in file A to file B as well”.

Implementing Features In Unfamiliar Technologies

One of my favorite aha-moments using AI coding tools was when it helped me create a quite complex animated gradient animation in GLSL, a language I have been fairly unfamiliar with. On a recent project, our designers came up with an animated gradient as a loading state on a 3D object. I really liked the concept and wanted to deliver something unique and exciting to our clients. The problem: I only had two days to implement it, and GLSL has quite the steep learning curve.

Again, an AI tool (in this case, ChatGPT) came in handy, and I started quite simply prompting it to create a standalone HTML file for me that renders a canvas and a very simple animated color gradient. Step after step, I prompted the AI to add more finesse to it until I arrived at a decent result so I could start integrating the shader into my actual codebase.

The end result: Our clients were super happy, and we delivered a complex feature in a small amount of time thanks to AI.

Writing Tests

In my experience, there’s rarely enough time on projects to continuously write and maintain a proper suite of unit and integration tests, and on top of that, many developers don’t really enjoy the task of writing tests. Prompting your AI helper to set up and write tests for you is entirely possible and can be done in a small amount of time. Of course, you, as a developer, should still make sure that your tests actually take a look at the critical parts of your application and follow sensible testing principles, but you can “outsource” the writing of the tests to our AI helper.

Example prompt:

“Write unit tests for this function using Jest. Cover happy path, edge cases, and failure modes. Explain why each test exists.”

You can even pass along testing guru Kent C. Dodds’ testing best practices as guidelines to your agent, like below:

Internal Tooling

Somewhat similar to the shader example mentioned earlier, I was recently tasked to analyze code duplication in a codebase and compare before and after a refactor. Certainly not a trivial task if you don’t want to go the time-consuming route of comparing files manually. With the help of Copilot, I created a script that analyzed code duplication for me, arranged and ordered the output in a table, and exported it to Excel. Then I took it a step further. When our code refactor was done, I prompted the agent to take my existing Excel sheet as the baseline, add in the current state of duplication in separate columns, and calculate the delta.

Updating Code Written A Long Time Ago

Recently, an old client of mine hit me up, as over time, a few features weren’t working properly on his website anymore.

The catch: The website was built almost ten years ago, the JavaScript and SCSS relied on rather old build tools like RequireJS, and the setup required an older version of Node.js that wouldn’t even run on my 2025 MacBook.

Updating the whole build process by hand would have taken me days, so I decided to prompt the AI agent, “Can you update the JS and SCSS build process to a lean 2025 stack like Vite?” It sure did, and after around an hour of refining with the agent, I had my SCSS and JS build switched to Vite, and I was able to focus on actual bugfixing. Just make sure to properly validate the output and compiled files when doing such integral changes to your build process.

Summarizing And Drafting

Would you like to summarize all your recent code changes into a one-sentence commit message, or boil a long list of commits down to three bullet points? No problem: let the AI take care of it, but please make sure to proofread the result.

An example prompt is as simple as messaging a fellow human: “Please sum up my recent changes in concise bullet points”.

My advice here would be to use GPT for writing with caution, and as with code, please check the output before sending or submitting.

Recommendations And Best Practices

Prompting

One of the not-so-obvious benefits of using AI is that the more specific and tailored your prompts are, the better the output. The process of prompting an AI agent forces us to formulate our requirements as specifically as possible before we write any code. This is why, as a general rule, I highly recommend being as specific as possible with your prompting.

Ryan Florence, co-author of Remix, suggests a simple yet powerful way to improve this process by finishing your initial prompt with the sentence:

“Before we start, do you have any questions for me?”

At this point, the AI usually comes back with helpful questions where you can clarify your specific intent, guiding the agent to provide you with a more tailored approach for your task.

Use Version Control And Work In Digestible Chunks

Using version control like Git not only comes in handy when collaborating as a team on a single codebase, but also provides you, as an individual contributor, with stable points to roll back to in an emergency. Due to its non-deterministic nature, AI can sometimes go rogue, making changes that are simply not helpful for what you are trying to achieve and eventually breaking things irreparably.

Splitting up your work into multiple commits will help you create stable points that you can revert to in case things go sideways. And your teammates will thank you as well, as they will have an easier time reviewing your code when it is split up into semantically well-structured chunks.
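As a sketch of that workflow (the repository, file, and commit messages below are made up for the demo), the sequence creates small commits and rolls back a bad one with git revert, which records the rollback as a new commit instead of rewriting history:

```shell
# Demo in a throwaway repo; in a real project you'd skip these setup lines.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Chunk 1: a stable point to return to.
echo "step one" > feature.txt
git add feature.txt
git commit -qm "feat: scaffold feature"

# Chunk 2: the AI-assisted change that turns out to be broken.
echo "step two (broken)" >> feature.txt
git commit -qam "feat: extend feature (AI-assisted)"

# Undo only the bad chunk; history stays intact for reviewers.
git revert --no-edit HEAD
cat feature.txt
```

Because each chunk is its own commit, reverting the broken one leaves the earlier, working state untouched.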

Review Thoroughly

This is more of a general best practice, but in my opinion, it becomes even more important when using AI tools for development work: Be the first critical reviewer of your code. Make sure to take some time to go over your changes line by line, just like you would review someone else’s code, and only submit your work once it passes your own self-review.

“Two things are both true to me right now: AI agents are amazing and a huge productivity boost. They are also massive slop machines if you turn off your brain and let go completely.”

— Armin Ronacher in his blog post Agent Psychosis: Are We Going Insane?
Conclusion And Critical Thoughts

In my opinion, AI coding tools can improve our productivity as developers on a daily basis and free up mental capacity for more planning and high-level thinking. They force us to articulate our desired outcome with meticulous detail.

Any AI can, at times, hallucinate, which basically means it lies in a confident tone. So please make sure to check and test, especially when you are in doubt. AI is not a silver bullet, and I believe excellence and the ability to solve problems as a developer will never go out of fashion.

For developers who are just starting out in their career, these tools can be highly tempting to do the majority of the work for them. What may get lost here is the often draining and painful work through bugs and issues that are tricky to debug and solve, aka “the grind”. Even Cursor AI’s very own Lee Robinson questions this in one of his posts.

AI coding tools are evolving at a fast pace, and I am excited for what will come next. I hope you found this article and its tips helpful and are excited to try out some of these for yourself.

]]>
hello@smashingmagazine.com (Stefan Kaltenegger)
<![CDATA[Unstacking CSS Stacking Contexts]]> https://smashingmagazine.com/2026/01/unstacking-css-stacking-contexts/ https://smashingmagazine.com/2026/01/unstacking-css-stacking-contexts/ Tue, 27 Jan 2026 10:00:00 GMT Have you ever set z-index: 99999 on an element in your CSS, and it doesn’t come out on top of other elements? A value that large should easily place the element visually on top of anything else, assuming the other elements have either a lower z-index value or none at all.

A webpage is usually represented in a two-dimensional space; however, applying specific CSS properties introduces an imaginary z-axis that conveys depth. This axis is perpendicular to the screen, and along it, the user perceives the order of elements, one on top of another. The CSS properties that establish this ordering combine to form what we call a stacking context.

We’re going to talk about how elements are “stacked” on a webpage, what controls the stacking order, and practical approaches to “unstack” elements when needed.

About Stacking Contexts

Imagine your webpage as a desk. As you add HTML elements, you’re laying pieces of paper, one after the other, on the desk. The last piece of paper placed is equivalent to the most recently added HTML element, and it sits on top of all the other papers placed before it. This is the normal document flow, even for nested elements. The desk itself represents the root stacking context, formed by the <html> element, which contains everything else.

Now, specific CSS properties come into play.

Properties like position (with z-index), opacity, transform, and contain act like a folder. This folder takes an element and all of its children, extracts them from the main stack, and groups them into a separate sub-stack, creating what we call a stacking context. For positioned elements, this happens when we declare a z-index value other than auto. For properties like opacity, transform, and filter, the stacking context is created automatically when specific values are applied.

Here’s the key: once a piece of paper (i.e., a child element) is inside a folder (i.e., the parent’s stacking context), it can never exit that folder or be placed between papers in a different folder. Its z-index is now only relevant inside its own folder.

In the illustration below, Paper B is now within the stacking context of Folder B, and can only be ordered with other papers in the folder.

Imagine, if you will, that you have two folders on your desk:

<div class="folder-a">Folder A</div>
<div class="folder-b">Folder B</div>
.folder-a { position: relative; z-index: 1; }
.folder-b { position: relative; z-index: 2; }

Let’s update the markup a bit. Inside Folder A is a special page, z-index: 9999. Inside Folder B is a plain page, z-index: 5.

<div class="folder-a">
   <div class="special-page">Special Page</div>
</div>

<div class="folder-b">
  <div class="plain-page">Plain Page</div>
</div>
.special-page { position: relative; z-index: 9999; }
.plain-page { position: relative; z-index: 5; }

Which page is on top?

It’s the .plain-page in Folder B. The browser ignores the child papers and stacks the two folders first. It sees Folder B (z-index: 2) and places it on top of Folder A (z-index: 1) because two is greater than one. Meanwhile, the .special-page element is at the bottom of the visible stack, even though its z-index is set to a far higher value.

Stacking contexts can also be nested (folders inside folders), creating a “family tree.” The same principle applies: a child can never escape its parent’s folder.

Now that you get how stacking contexts behave like folders that group and reorder layers, it’s worth asking: why do certain properties — like transform and opacity — create new stacking contexts?

Here’s the thing: these properties don’t create stacking contexts because of how they look; they do it because of how the browser works under the hood. When you apply transform, opacity, filter, or perspective, you’re telling the browser, “Hey, this element might move, rotate, or fade, so be ready!”

When you use these properties, the browser creates a new stacking context to manage rendering more efficiently. This allows the browser to handle animations, transforms, and visual effects independently, reducing the need to recalculate how these elements interact with the rest of the page. Think of it as the browser saying, “I’ll handle this folder separately so I don’t have to reshuffle the entire desk every time something inside it changes.”

But there’s a side effect. Once the browser lifts an element into its own layer, it must “flatten” everything within it, creating a new stacking context. It’s like taking a folder off the desk to handle it separately; everything inside that folder gets grouped, and the browser now treats it as a single unit when deciding what sits on top of what.

So even though the transform and opacity properties might not appear to affect the way that elements stack visually, they do, and it’s for performance optimisation. Several other CSS properties can also create stacking contexts for similar reasons. MDN provides a complete list if you want to dig deeper. There are quite a few, which only illustrates how easy it is to inadvertently create a stacking context without knowing it.
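If you’re debugging by hand, the logic behind those checks can be sketched as a small helper. This version covers only a handful of the common triggers (the full list on MDN is much longer), and it takes a plain object of computed style values rather than a live element so that the sketch is easy to test:

```javascript
// Partial check for stacking-context creation, based on a few common
// triggers. `style` is a plain object of computed values, e.g.
// { position: "relative", zIndex: "3" }. Not exhaustive: see MDN.
function createsStackingContext(style) {
  const {
    position = "static",
    zIndex = "auto",
    opacity = "1",
    transform = "none",
    filter = "none",
    isolation = "auto",
  } = style;

  // relative/absolute only create a context with a non-auto z-index...
  if ((position === "relative" || position === "absolute") && zIndex !== "auto") return true;
  // ...while fixed and sticky always do.
  if (position === "fixed" || position === "sticky") return true;
  if (Number(opacity) < 1) return true;
  if (transform !== "none" || filter !== "none") return true;
  if (isolation === "isolate") return true;
  return false;
}

console.log(createsStackingContext({ position: "relative" }));          // false: z-index is auto
console.log(createsStackingContext({ opacity: "0.99" }));               // true
console.log(createsStackingContext({ transform: "translateX(10px)" })); // true
```

In a browser, you’d feed it values from getComputedStyle(element) while walking up element.parentElement to find the ancestor creating the offending context.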

The “Unstacking” Problem

Stacking issues can arise for many reasons, but some are more common than others. Modal components are a classic example: opening one must place it on a top layer above all other elements, and closing it must remove it from that layer again.

I’m pretty confident that all of us have run into a situation where we open a modal and, for whatever reason, it doesn’t appear. It’s not that it didn’t open properly, but that it is out of view in a lower layer of the stacking context.

This leaves you to wonder “how come?” since you set:

.overlay {
  position: fixed; /* creates the stacking context */
  z-index: 1; /* puts the element on a layer above everything else */
  inset: 0; 
  width: 100%; 
  height: 100vh; 
  overflow: hidden;
  background-color: #00000080;
}

This looks correct, but if the parent element containing the modal trigger is a child element within another parent element that’s also set to z-index: 1, that technically places the modal in a sublayer obscured by the main folder. Let’s look at that specific scenario and a couple of other common stacking-context pitfalls. I think you’ll see not only how easy it is to inadvertently create stacking contexts, but also how easy it is to mismanage them; how you return to a managed state depends on the situation.

Scenario 1: The Trapped Modal

In this scenario, you can immediately see the modal trapped in a low-level layer and identify the parent responsible for trapping it.

Browser Extensions

Smart developers have built extensions to help. Tools like this “CSS Stacking Context Inspector” Chrome extension add an extra z-index tab to your DevTools to show you information about elements that create a stacking context.

IDE Extensions

You can even spot issues during development with an extension like this one for VS Code, which highlights potential stacking context issues directly in your editor.

Unstacking And Regaining Control

After we’ve identified the root cause, the next step is to deal with it. There are several approaches you can take to tackle this problem, and I’ll list them in order. You can choose any of them in any situation, though; none of them conflicts with or precludes another.

Change The HTML Structure

This is considered the optimal fix. For you to run into a stacking context issue, you must have placed some elements in funny positions within your HTML. Restructuring the page will help you reshape the DOM and eliminate the stacking context problem. Find the problematic element and remove it from the trapping element in the HTML markup. For instance, we can solve the first scenario, “The Trapped Modal,” by moving the .modal-container out of the header and placing it in the <body> element by itself.

<header class="header">
  <h2>Header</h2>
  <button id="open-modal">Open Modal</button>
  <!-- Former position -->
</header>
<main class="content">
  <h1>Main Content</h1>
  <p>This content has a z-index of 2 and will still not cover the modal.</p>
</main>

<!-- New position  -->
<div id="modal-container" class="modal-container">
  <div class="modal-overlay"></div>
  <div class="modal-content">
    <h3>Modal Title</h3>
    <p>Now, I'm not behind anything. I've gotten a better position as a result of DOM restructuring.</p>
    <button id="close-modal">Close</button>
  </div>
</div>

When you click the “Open Modal” button, the modal is positioned in front of everything else as it’s supposed to be.

See the Pen Scenario 1: The Trapped Modal (Solution) [forked] by Shoyombo Gabriel Ayomide.

Adjust The Parent Stacking Context In CSS

What if the element is one you can’t move without breaking the layout? Then it’s better to address the issue at its source: the parent that establishes the context. Find the CSS property (or properties) responsible for triggering the context and remove it. If it has a purpose and cannot be removed, give the parent a higher z-index value than its sibling elements to lift the entire container. With a higher z-index value, the parent container moves to the top, and its children appear closer to the user.

Based on what we learned in “The Submerged Dropdown” scenario, we can’t move the dropdown out of the navbar; it wouldn’t make sense. However, we can increase the z-index value of the .navbar container to be greater than the .content element’s z-index value.

.navbar {
  background: #333;
  /* z-index: 1; */
  z-index: 3;
  position: relative;
}

With this change, the .dropdown-menu now appears in front of the content without any issue.

See the Pen Scenario 2: The Submerged Dropdown (Solution) [forked] by Shoyombo Gabriel Ayomide.

Try Portals, If Using A Framework

In frameworks like React or Vue, a Portal is a feature that lets you render a component outside its normal parent hierarchy in the DOM. Portals are like a teleportation device for your components. They let you render a component’s HTML anywhere in the document (typically right into document.body) while keeping it logically connected to its original parent for props, state, and events. This is perfect for escaping stacking context traps since the rendered output literally appears outside the problematic parent container.

ReactDOM.createPortal(
  <ToolTip />,
  document.body
);

This ensures your tooltip or dropdown content isn’t hidden behind its parent, even if the parent has overflow: hidden or a lower z-index.

In the “The Clipped Tooltip” scenario we looked at earlier, I used a Portal to rescue the tooltip from the overflow: hidden clip by placing it in the document body and positioning it above the trigger within the container.

See the Pen Scenario 3: The Clipped Tooltip (Solution) [forked] by Shoyombo Gabriel Ayomide.

Introducing Stacking Context Without Side Effects

All the approaches explained in the previous section are aimed at “unstacking” elements from problematic stacking contexts, but there are some situations where you’ll actually need or want to create a stacking context.

Creating a new stacking context is easy, but most approaches come with side effects. The exception is isolation: isolate. When applied to an element, it creates a new stacking context without changing anything else; the element’s children are then stacked relative to that context rather than being influenced by elements outside of it. A classic use case involves a child element with a negative z-index value, such as z-index: -1.

Imagine you have a .card component. You want to add a decorative shape that sits behind the .card’s text, but on top of the card’s background. Without a stacking context on the card, z-index: -1 sends the shape to the bottom of the root stacking context (the whole page). This makes it disappear behind the .card’s white background:

See the Pen Negative z-index (problem) [forked] by Shoyombo Gabriel Ayomide.

To solve this, we declare isolation: isolate on the parent .card:

See the Pen Negative z-index (solution) [forked] by Shoyombo Gabriel Ayomide.

Now, the .card element itself becomes a stacking context. When its child element — the decorative shape created with the ::before pseudo-element — has z-index: -1, it goes to the very bottom of the parent’s stacking context. It sits perfectly behind the text and on top of the card’s background, as intended.
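Put together, the pattern looks roughly like this; the selectors, colors, and sizes are assumptions based on the demo’s description, not its exact code:

```css
/* Assumed structure: .card contains text plus a decorative ::before shape. */
.card {
  position: relative;
  background: #fff;
  isolation: isolate; /* new stacking context, no other side effects */
}

.card::before {
  content: "";
  position: absolute;
  top: -0.5rem;
  right: 1rem;
  width: 4rem;
  height: 4rem;
  background: gold; /* decorative shape */
  z-index: -1; /* bottom of .card's context: behind the text, above the card's background */
}
```

Without the isolation: isolate line, the same z-index: -1 would drop the shape into the root stacking context, behind the card’s white background.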

Conclusion

Remember: the next time your z-index seems out of control, it’s likely a trapped stacking context.

]]>
hello@smashingmagazine.com (Gabriel Shoyombo)
<![CDATA[Beyond Generative: The Rise Of Agentic AI And User-Centric Design]]> https://smashingmagazine.com/2026/01/beyond-generative-rise-agentic-ai-user-centric-design/ https://smashingmagazine.com/2026/01/beyond-generative-rise-agentic-ai-user-centric-design/ Thu, 22 Jan 2026 13:00:00 GMT Agentic AI stands ready to transform customer experience and operational efficiency, necessitating a new strategic approach from leadership. This evolution in artificial intelligence empowers systems to plan, execute, and persist in tasks, moving beyond simple recommendations to proactive action. For UX teams, product managers, and executives, understanding this shift is crucial for unlocking opportunities in innovation, streamlining workflows, and redefining how technology serves people.

It’s easy to confuse Agentic AI with Robotic Process Automation (RPA), which is technology that focuses on rules-based tasks performed on computers. The distinction lies in rigidity versus reasoning. RPA is excellent at following a strict script: if X happens, do Y. It mimics human hands. Agentic AI mimics human reasoning. It does not follow a linear script; it creates one.

Consider a recruiting workflow. An RPA bot can scan a resume and upload it to a database. It performs a repetitive task perfectly. An Agentic system looks at the resume, notices the candidate lists a specific certification, cross-references that with a new client requirement, and decides to draft a personalized outreach email highlighting that match. RPA executes a predefined plan; Agentic AI formulates the plan based on a goal. This autonomy separates agents from the predictive tools we have used for the last decade.

Another example is managing meeting conflicts. A predictive model integrated into your calendar might analyze your meeting schedule and the schedules of your colleagues. It could then suggest potential conflicts, such as two important meetings scheduled at the same time, or a meeting scheduled when a key participant is on vacation. It provides you with information and flags potential issues, but you are responsible for taking action.

An agentic AI, in the same scenario, would go beyond just suggesting conflicts to avoid. Upon identifying a conflict with a key participant, the agent could act by:

  • Checking the availability of all necessary participants.
  • Identifying alternative time slots that work for everyone.
  • Sending out proposed new meeting invitations to all attendees.
  • If the conflict is with an external participant, the agent could draft and send an email explaining the need to reschedule and offering alternative times.
  • Updating your calendar and the calendars of your colleagues with the new meeting details once confirmed.

This agentic AI understands the goal (resolving the meeting conflict), plans the steps (checking availability, finding alternatives, sending invites), executes those steps, and persists until the conflict is resolved, all with minimal direct user intervention. This demonstrates the “agentic” difference: the system takes proactive steps for the user, rather than just providing information to the user.

Agentic AI systems understand a goal, plan a series of steps to achieve it, execute those steps, and even adapt if things go wrong. Think of it like a proactive digital assistant. The underlying technology often combines large language models (LLMs) for understanding and reasoning, with planning algorithms that break down complex tasks into manageable actions. These agents can interact with various tools, APIs, and even other AI models to accomplish their objectives, and critically, they can maintain a persistent state, meaning they remember previous actions and continue working towards a goal over time. This makes them fundamentally different from typical generative AI, which usually completes a single request and then resets.
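As a toy illustration of that loop (every name below is made up; a real agent would back the planner and tools with an LLM and external APIs), the goal-plan-execute-adapt cycle with persistent state might look like:

```javascript
// Toy sketch: understand a goal, plan steps, execute them,
// adapt on failure, and keep persistent state across attempts.
function runAgent(goal, tools) {
  const state = { goal, completed: [], attempts: 0 }; // persists across retries
  let plan = [...goal.steps];                          // "planning" stub

  while (plan.length > 0 && state.attempts < 10) {     // cap to avoid looping forever
    const step = plan[0];
    state.attempts++;
    if (tools[step]()) {
      state.completed.push(step);
      plan.shift();                                    // step done, move on
    } else {
      plan = [`retry:${step}`, ...plan.slice(1)];      // adapt: swap in a fallback step
      tools[`retry:${step}`] = () => true;             // stub fallback always succeeds
    }
  }
  return state;
}

// Usage: one step fails once; the agent adapts and finishes the goal.
let flaky = 0;
const result = runAgent(
  { steps: ["checkAvailability", "sendInvites"] },
  {
    checkAvailability: () => true,
    sendInvites: () => flaky++ > 0, // fails on the first call
  }
);
console.log(result.completed); // ["checkAvailability", "retry:sendInvites"]
```

The interesting part is the else branch: unlike an RPA script, the agent revises its own plan when a step fails instead of stopping at the error.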

A Simple Taxonomy of Agentic Behaviors

We can categorize agent behavior into four distinct modes of autonomy. While these often look like a progression, they function as independent operating modes. A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.

We derived these levels by adapting industry standards for autonomous vehicles (SAE levels) to digital user experience contexts.

Observe-and-Suggest

The agent functions as a monitor. It analyzes data streams and flags anomalies or opportunities, but takes zero action.

Differentiation
Unlike the next level, the agent generates no complex plan. It points to a problem.

Example
A DevOps agent notices a server CPU spike and alerts the on-call engineer. It does not know how to fix it, nor does it attempt to, but it knows something is wrong.

Implications for design and oversight
At this level, design and oversight should prioritize clear, non-intrusive notifications and a well-defined process for users to act on suggestions. The focus is on empowering the user with timely and relevant information without taking control. UX practitioners should focus on making suggestions clear and easy to understand, while product managers need to ensure the system provides value without overwhelming the user.

Plan-and-Propose

The agent identifies a goal and generates a multi-step strategy to achieve it. It presents the full plan for human review.

Differentiation
The agent acts as a strategist. It does not execute; it waits for approval on the entire approach.

Example
The same DevOps agent notices the CPU spike, analyzes the logs, and proposes a remediation plan:

  1. Spin up two extra instances.
  2. Restart the load balancer.
  3. Archive old logs.

The human reviews the logic and clicks “Approve Plan”.

Implications for design and oversight
For agents that plan and propose, design must ensure the proposed plans are easily understandable and that users have intuitive ways to modify or reject them. Oversight is crucial in monitoring the quality of proposals and the agent’s planning logic. UX practitioners should design clear visualizations of the proposed plans, and product managers must establish clear review and approval workflows.

Act-with-Confirmation

The agent completes all preparation work and places the final action in a staged state. It effectively holds the door open, waiting for a nod.

Differentiation
This differs from “Plan-and-Propose” because the work is already done and staged. It reduces friction. The user confirms the outcome, not the strategy.

Example
A recruiting agent drafts five interview invitations, finds open times on calendars, and creates the calendar events. It presents a “Send All” button. The user provides the final authorization to trigger the external action.

Implications for design and oversight
When agents act with confirmation, the design should provide transparent and concise summaries of the intended action, clearly outlining potential consequences. Oversight needs to verify that the confirmation process is robust and that users are not being asked to blindly approve actions. UX practitioners should design confirmation prompts that are clear and provide all necessary information, and product managers should prioritize a robust audit trail for all confirmed actions.

Act-Autonomously

The agent executes tasks independently within defined boundaries.

Differentiation
The user reviews the history of actions, not the actions themselves.

Example
The recruiting agent sees a conflict, moves the interview to a backup slot, updates the candidate, and notifies the hiring manager. The human only sees a notification: Interview rescheduled to Tuesday.

Implications for design and oversight
For autonomous agents, the design needs to establish clear pre-approved boundaries and provide robust monitoring tools. Oversight requires continuous evaluation of the agent’s performance within these boundaries, a critical need for robust logging, clear override mechanisms, and user-defined kill switches to maintain user control and trust. UX practitioners should focus on designing effective dashboards for monitoring autonomous agent behavior, and product managers must ensure clear governance and ethical guidelines are in place.

Let’s look at a real-world application in HR technology to see these modes in action. Consider an “Interview Coordination Agent” designed to handle the logistics of hiring.

  • In Suggest Mode
    The agent notices an interviewer is double-booked. It highlights the conflict on the recruiter’s dashboard: “Warning: Sarah is double-booked for the 2 PM interview.”
  • In Plan Mode
    The agent analyzes Sarah’s calendar and the candidate’s availability. It presents a solution: “I recommend moving the interview to Thursday at 10 AM. This requires moving Sarah’s 1:1 with her manager.” The recruiter reviews this logic.
  • In Confirmation Mode
    The agent drafts the emails to the candidate and the manager. It populates the calendar invites. The recruiter sees a summary: “Ready to reschedule to Thursday. Send updates?” The recruiter clicks “Confirm.”
  • In Autonomous Mode
    The agent handles the conflict instantly. It respects a pre-set rule: “Always prioritize candidate interviews over internal 1:1s.” It moves the meeting and sends the notifications. The recruiter sees a log entry: “Resolved schedule conflict for Candidate B.”
Research Primer: What To Research And How

Developing effective agentic AI demands a distinct research approach compared to traditional software or even generative AI. The autonomous nature of AI agents, their ability to make decisions, and their potential for proactive action necessitate specialized methodologies for understanding user expectations, mapping complex agent behaviors, and anticipating potential failures. The following research primer outlines key methods to measure and evaluate these unique aspects of agentic AI.

Mental-Model Interviews

These interviews uncover users’ preconceived notions about how an AI agent should behave. Instead of simply asking what users want, the focus is on understanding their internal models of the agent’s capabilities and limitations. We should avoid using the word “agent” with participants: it carries sci-fi baggage and is too easily confused with a human agent offering support or services. Instead, frame the discussion around “assistants” or “the system.”

We need to uncover where users draw the line between helpful automation and intrusive control.

  • Method: Ask users to describe, draw, or narrate their expected interactions with the agent in various hypothetical scenarios.
  • Key Probes (reflecting a variety of industries):
    • To understand the boundaries of desired automation and potential anxieties around over-automation, ask:
      • If your flight is canceled, what would you want the system to do automatically? What would worry you if it did that without your explicit instruction?
    • To explore the user’s understanding of the agent’s internal processes and necessary communication, ask:
      • Imagine a digital assistant is managing your smart home. If a package is delivered, what steps do you imagine it takes, and what information would you expect to receive?
    • To uncover expectations around control and consent within a multi-step process, ask:
      • If you ask your digital assistant to schedule a meeting, what steps do you envision it taking? At what points would you want to be consulted or given choices?
  • Benefits of the method: Reveals implicit assumptions, highlights areas where the agent’s planned behavior might diverge from user expectations, and informs the design of appropriate controls and feedback mechanisms.

Agent Journey Mapping

Similar to traditional user journey mapping, agent journey mapping specifically focuses on the anticipated actions and decision points of the AI agent itself, alongside the user’s interaction. This helps to proactively identify potential pitfalls.

  • Method: Create a visual map that outlines the various stages of an agent’s operation, from initiation to completion, including all potential actions, decisions, and interactions with external systems or users.
  • Key Elements to Map:
    • Agent Actions: What specific tasks or decisions does the agent perform?
    • Information Inputs/Outputs: What data does the agent need, and what information does it generate or communicate?
    • Decision Points: Where does the agent make choices, and what are the criteria for those choices?
    • User Interaction Points: Where does the user provide input, review, or approve actions?
    • Points of Failure: Crucially, identify specific instances where the agent could misinterpret instructions, make an incorrect decision, or interact with the wrong entity.
      • Examples: Incorrect recipient (e.g., sending sensitive information to the wrong person), overdraft (e.g., an automated payment exceeding available funds), misinterpretation of intent (e.g., booking a flight for the wrong date due to ambiguous language).
    • Recovery Paths: How can the agent or user recover from these failures? What mechanisms are in place for correction or intervention?
  • Benefits of the method: Provides a holistic view of the agent’s operational flow, uncovers hidden dependencies, and allows for the proactive design of safeguards, error handling, and user intervention points to prevent or mitigate negative outcomes.

Simulated Misbehavior Testing

This approach is designed to stress-test the system and observe user reactions when the AI agent fails or deviates from expectations. It’s about understanding trust repair and emotional responses in adverse situations.

  • Method: In controlled lab studies, deliberately introduce scenarios where the agent makes a mistake, misinterprets a command, or behaves unexpectedly.
  • Types of “Misbehavior” to Simulate:
    • Command Misinterpretation: The agent performs an action slightly different from what the user intended (e.g., ordering two items instead of one).
    • Information Overload/Underload: The agent provides too much irrelevant information or not enough critical details.
    • Unsolicited Action: The agent takes an action the user explicitly did not want or expect (e.g., buying stock without approval).
    • System Failure: The agent crashes, becomes unresponsive, or provides an error message.
    • Ethical Dilemmas: The agent makes a decision with ethical implications (e.g., prioritizing one task over another based on an unforeseen metric).
  • Observation Focus:
    • User Reactions: How do users react emotionally (frustration, anger, confusion, loss of trust)?
    • Recovery Attempts: What steps do users take to correct the agent’s behavior or undo its actions?
    • Trust Repair Mechanisms: Do the system’s built-in recovery or feedback mechanisms help restore trust? How do users want to be informed about errors?
    • Mental Model Shift: Does the misbehavior alter the user’s understanding of the agent’s capabilities or limitations?
  • Benefits of the method: Crucial for identifying design gaps related to error recovery, feedback, and user control. It provides insights into how resilient users are to agent failures and what is needed to maintain or rebuild trust, leading to more robust and forgiving agentic systems.

By integrating these research methodologies, UX practitioners can move beyond simply making agentic systems usable to making them trusted, controllable, and accountable, fostering a positive and productive relationship between users and their AI agents. Note that these aren’t the only methods relevant to exploring agentic AI effectively. Many other methods exist, but these are most accessible to practitioners in the near term. I’ve previously covered the Wizard of Oz method, a slightly more advanced method of concept testing, which is also a valuable tool for exploring agentic AI concepts.

Ethical Considerations In Research Methodology

When researching agentic AI, particularly when simulating misbehavior or errors, ethical considerations are essential. There are many publications focusing on ethical UX research, including an article I wrote for Smashing Magazine, these guidelines from the UX Design Institute, and this page from the Inclusive Design Toolkit.

Key Metrics For Agentic AI

You’ll need a comprehensive set of key metrics to effectively assess the performance and reliability of agentic AI systems. These metrics provide insights into user trust, system accuracy, and the overall user experience. By tracking these indicators, developers and designers can identify areas for improvement and ensure that AI agents operate safely and efficiently.

1. Intervention Rate
For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window (e.g., 24 hours), we count that as acceptance. We track the Intervention Rate: how often does a human jump in to stop or correct the agent? A high intervention rate signals a misalignment in trust or logic.

2. Frequency of Unintended Actions per 1,000 Tasks
This critical metric quantifies the number of actions performed by the AI agent that were not desired or expected by the user, normalized per 1,000 completed tasks. A low frequency of unintended actions signifies a well-aligned AI that accurately interprets user intent and operates within defined boundaries. This metric is closely tied to the AI’s understanding of context, its ability to disambiguate commands, and the robustness of its safety protocols.

3. Rollback or Undo Rates
This metric tracks how often users need to reverse or undo an action performed by the AI. High rollback rates suggest that the AI is making frequent errors, misinterpreting instructions, or acting in ways that are not aligned with user expectations. Analyzing the reasons behind these rollbacks can provide valuable feedback for improving the AI’s algorithms, understanding of user preferences, and its ability to predict desirable outcomes.

To understand why, you must implement a microsurvey on the undo action. For example, when a user reverses a scheduling change, a simple prompt can ask: “Wrong time? Wrong person? Or did you just want to do it yourself?”, allowing the user to click the option that best corresponds to their reasoning.

4. Time to Resolution After an Error
This metric measures the duration it takes for a user to correct an error made by the AI or for the AI system itself to recover from an erroneous state. A short time to resolution indicates an efficient and user-friendly error recovery process, which can mitigate user frustration and maintain productivity. This includes the ease of identifying the error, the accessibility of undo or correction mechanisms, and the clarity of error messages provided by the AI.

Collecting these metrics requires instrumenting your system to track Agent Action IDs. Every distinct action the agent takes, such as proposing a schedule or booking a flight, must generate a unique ID that persists in the logs. To measure the Intervention Rate, we do not look for an immediate user reaction. We look for the absence of a counter-action within a defined window. If an Action ID is generated at 9:00 AM and no human user modifies or reverts that specific ID by 9:00 AM the next day, the system logically tags it as Accepted. This allows us to quantify success based on user silence rather than active confirmation.

For Rollback Rates, raw counts are insufficient because they lack context. To capture the underlying reason, you must implement intercept logic on your application’s Undo or Revert functions. When a user reverses an agent-initiated action, trigger a lightweight microsurvey. This can be a simple three-option modal asking the user to categorize the error as factually incorrect, lacking context, or a simple preference to handle the task manually. This combines quantitative telemetry with qualitative insight. It enables engineering teams to distinguish between a broken algorithm and a user preference mismatch.

These metrics, when tracked consistently and analyzed holistically, provide a robust framework for evaluating the performance of agentic AI systems, allowing for continuous improvement in control, consent, and accountability.

Designing Against Deception

As agents become increasingly capable, we face a new risk: Agentic Sludge. Traditional sludge creates friction that makes it hard to cancel a subscription or delete an account. Agentic sludge acts in reverse. It removes friction to a fault, making it too easy for a user to agree to an action that benefits the business rather than their own interests.

Consider an agent assisting with travel booking. Without clear guardrails, the system might prioritize a partner airline or a higher-margin hotel. It presents this choice as the optimal path. The user, trusting the system’s authority, accepts the recommendation without scrutiny. This creates a deceptive pattern where the system optimizes for revenue under the guise of convenience.

The Risk Of Falsely Imagined Competence

Deception may not stem from malicious intent. It often manifests in AI as Imagined Competence. Large Language Models frequently sound authoritative even when incorrect. They present a false booking confirmation or an inaccurate summary with the same confidence as a verified fact. Users may naturally trust this confident tone. This mismatch creates a dangerous gap between system capability and user expectations.

We must design specifically to bridge this gap. If an agent fails to complete a task, the interface must signal that failure clearly. If the system is unsure, it must express uncertainty rather than masking it with polished prose.

Transparency Via Primitives

The antidote to both sludge and hallucination is provenance. Every autonomous action requires a specific metadata tag explaining the origin of the decision. Users need the ability to inspect the logic chain behind the result.

To achieve this, we must translate primitives into practical answers. In software engineering, primitives refer to the core units of information or actions an agent performs. To the engineer, this looks like an API call or a logic gate. To the user, it must appear as a clear explanation.

The design challenge lies in mapping these technical steps to human-readable rationales. If an agent recommends a specific flight, the user needs to know why. The interface cannot hide behind a generic suggestion. It must expose the underlying primitive: Logic: Cheapest_Direct_Flight or Logic: Partner_Airline_Priority.

Figure 4 illustrates this translation flow. We take the raw system primitive — the actual code logic — and map it to a user-facing string. For instance, a primitive that checks a calendar to schedule a meeting becomes a clear statement: I’ve proposed a 4 PM meeting.

This level of transparency ensures the agent’s actions appear logical and beneficial. It allows the user to verify that the agent acted in their best interest. By exposing the primitives, we transform a black box into a glass box, ensuring users remain the final authority on their own digital lives.

Setting The Stage For Design

Building an agentic system requires a new level of psychological and behavioral understanding. It forces us to move beyond conventional usability testing and into the realm of trust, consent, and accountability. The research methods we’ve discussed, from probing mental models to simulating misbehavior and establishing new metrics, provide a necessary foundation. These practices are the essential tools for proactively identifying where an autonomous system might fail and, more importantly, how to repair the user-agent relationship when it does.

The shift to agentic AI is a redefinition of the user-system relationship. We are no longer designing for tools that simply respond to commands; we are designing for partners that act on our behalf. This changes the design imperative from efficiency and ease of use to transparency, predictability, and control.

When an AI can book a flight or trade a stock without a final click, the design of its “on-ramps” and “off-ramps” becomes paramount. It is our responsibility to ensure that users feel they are in the driver’s seat, even when they’ve handed over the wheel.

This new reality also elevates the role of the UX researcher. We become the custodians of user trust, working collaboratively with engineers and product managers to define and test the guardrails of an agent’s autonomy. Beyond being researchers, we become advocates for user control, transparency, and the ethical safeguards within the development process. By translating primitives into practical questions and simulating worst-case scenarios, we can build robust systems that are both powerful and safe.

This article has outlined the “what” and “why” of researching agentic AI. It has shown that our traditional toolkits are insufficient and that we must adopt new, forward-looking methodologies. The next article will build upon this foundation, providing the specific design patterns and organizational practices that make an agent’s utility transparent to users, ensuring they can harness the power of agentic AI with confidence and control. The future of UX is about making systems trustworthy.

For additional understanding of agentic AI, you can explore the following resources:

]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[Rethinking “Pixel Perfect” Web Design]]> https://smashingmagazine.com/2026/01/rethinking-pixel-perfect-web-design/ https://smashingmagazine.com/2026/01/rethinking-pixel-perfect-web-design/ Tue, 20 Jan 2026 10:00:00 GMT It’s 2026. We are operating in an era of incredible technological leaps, where advanced tooling and AI-enhanced workflows have fundamentally transformed how we design, build, and bridge the gap between the two. The web is moving faster than ever, with groundbreaking features and standards emerging almost daily.

Yet, in the middle of this high-speed evolution, there’s one thing we’ve been carrying with us since the early days of print, a phrase that feels increasingly out of sync with our modern reality: “Pixel Perfect.”

I’ll be honest, I’m not a fan. In fact, I believe the idea that we can have pixel-perfection in our designs has become misleading, vague, and ultimately counterproductive to the way we build for the modern web. As a community of developers and designers, it’s time we take a hard look at this legacy concept, understand why it’s failing us, and redefine what “perfection” actually looks like in a multi-device, fluid world.

A Brief History Of A Rigid Mindset

To understand why many of us still aim for pixel perfection today, we have to look back at where it all began. It didn’t start on the web; it arrived as a stowaway from the era when layout software first allowed us to design for print on a personal computer, and from GUI design of the late 1980s and ’90s.

In the print industry, perfection was absolute. Once a design was sent to the press, every dot of ink had a fixed, unchangeable position on a physical page. When designers transitioned to the early web, they brought this “printed page” mentality with them. The goal was simple: The website must be an exact, pixel-for-pixel replica of the static mockup created in design applications like Photoshop and QuarkXPress.

I’m old enough to remember working with talented designers who had spent their entire careers in the print world. They would hand over web designs and, with total sincerity, insist on discussing the layout in centimeters and inches. To them, the screen was just another piece of paper, albeit one that glowed.

In those days, we “tamed” the web to achieve this. We used table-based layouts, nested three levels deep, and stretched 1×1 pixel “spacer GIFs” to create precise gaps. We designed for a single, “standard” resolution (usually 800×600) because, back then, we could actually pretend we knew exactly what the user was seeing.

<!-- A typical "Pixel Perfect" layout from 1998 -->
<table width="800" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td width="150" valign="top" bgcolor="#CCCCCC">
      <img src="spacer.gif" width="150" height="1"> <!-- Sidebar -->
    </td>
    <td width="10"><img src="spacer.gif" width="10" height="1"></td>
    <td width="640" valign="top">
      <!-- Content goes here -->
    </td>
  </tr>
</table>
Cracks In The Foundation

The first major challenge to the fixed-table mindset came as early as 2000. In his seminal article, “A Dao of Web Design”, John Allsopp argued that by trying to force the web into the constraints of print, we were missing the point of the medium entirely. He called the quest for pixel-perfection a “ritual” that ignored the web’s inherent fluidity.

When a new medium borrows from an existing one, some of what it borrows makes sense, but much of the borrowing is thoughtless, “ritual,” and often constrains the new medium. Over time, the new medium develops its own conventions, throwing off existing conventions that don’t make sense.

Nonetheless, “pixel-perfection” refused to die. While its meaning has shifted and morphed over the decades, it has rarely been well-defined. Many have tried, such as in 2010 when the design agency ustwo released the Pixel Perfect Precision (PPP) (PDF) handbook. But that same year, Responsive Web Design also gained massive momentum, effectively killing the idea that a website could look identical on every screen.

Yet, here we are, still using a term born from the limitations of ’90s-era monitors to describe the complex interfaces of 2026.

Note: Before we continue, it’s important to acknowledge the exceptions. There are, of course, scenarios where pixel precision is non-negotiable. Icon grids, sprite sheets, canvas rendering, game engines, or bitmap exports often require exact, pixel-level control to function correctly. These, however, are specialized technical requirements, not a general rule for modern UI development.
Why “Pixel Perfect” Is Failing The Modern Web

In our current landscape, clinging to the idea of “pixel perfection” isn’t just anachronistic, it’s actively harmful to the products we build. Here is why.

It Is Fundamentally Vague

Let’s start with a simple question: When a designer asks for a “pixel-perfect” implementation, what are they actually asking for? Is it the colors, the spacing, the typography, the borders, the alignment, the shadows, the interactions? Take a moment to think about it.

If your answer is “everything”, then you’ve just identified the core issue.

The term “pixel-perfect” is so all-encompassing that it lacks any real technical specificity. It’s a blanket statement that masks a lack of clear requirements. When we say “make it pixel perfect,” we aren’t giving a directive; we’re expressing a feeling.

The Multi-Surface Reality

The concept of a “standard screen size” is now a relic of the past. We are building for an almost infinite variety of viewports, resolutions, and aspect ratios, and this reality is not likely to change any time soon. Plus, the web is no longer confined to a flat, rectangular piece of glass; it can be on a foldable phone that changes aspect ratios mid-session, or on a spatial interface projected into a room.

Every Internet-connected device has its own pixel density, scaling factors, and rendering quirks.

A design that is “perfect” on one set of pixels is, by definition, imperfect on another. Striving for a single, static “perfection” ignores the fluid, adaptive nature of the modern web. When the canvas is constantly shifting, the very idea of a fixed pixel implementation becomes a technical impossibility.

The Dynamic Nature Of Content

A static mockup is a snapshot of a single state with a specific set of data. But content is rarely static like that in the real world. Localization is a prime example: a label that fits perfectly inside a button component in English might overflow the container in German or require a different font entirely for CJK languages.

Beyond text length, localization brings changes in currency symbols, date formats, and numeric systems. Any of these variables can significantly impact a page layout. If a design is built to be “pixel-perfect” based on a specific string of text, it is inherently fragile. A pixel-perfect layout completely collapses the moment content changes.
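One way to express this resilience in CSS is to let a component size itself from its content rather than from the mockup’s particular string. A minimal sketch (the class name and values are hypothetical):

```css
/* Fragile: sized to fit one specific English label */
.button {
  width: 120px;
}

/* Resilient: the button grows with whatever label it holds */
.button {
  width: fit-content;
  min-width: 6rem;
  padding-inline: 1rem;
}
```

The second version survives a longer German label or a different script without any per-locale overrides.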

Accessibility Is The Real Perfection

True perfection means a site that works for everyone. If a layout is so rigid that it breaks when a user increases their font size or forces a high-contrast mode, it isn’t perfect — it’s broken. “Pixel perfect” often prioritizes visual aesthetics over functional accessibility, creating barriers for users who don’t fit the “standard” profile.
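As a small illustration (the class name and values are hypothetical), compare a layout that fights user settings with one that respects them:

```css
/* Rigid: clips content when the user enlarges their text */
.card {
  height: 120px;
  font-size: 14px;
}

/* Forgiving: relative units scale with user preferences,
   and min-height lets the box grow instead of clipping */
.card {
  min-height: 7.5rem;
  font-size: 0.875rem;
}
```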

Think Systems, Not Pages

We no longer build pages; we build design systems. We create components that must work in isolation and in a variety of contexts, whether in headers, in sidebars, or in dynamic grids. Trying to match a component to a specific pixel coordinate in a static mockup is a fool’s errand.

A pure “pixel-perfect” approach treats every instance as a unique snowflake, which is the antithesis of a scalable, component-based architecture. It forces developers to choose between following a static image and maintaining the integrity of the system.

Perfection Is Technical Debt

When we prioritize exact visual matching over sound engineering, we aren’t just making a design choice; we are incurring technical debt. Chasing that last pixel often forces developers to bypass the browser’s natural layout engine.

Working in exact units leads to “magic numbers”, those arbitrary margin-top: 3px or left: -1px hacks, sprinkled throughout the codebase to force an element into a specific position on a specific screen. This creates a fragile, brittle architecture, leading to a never-ending cycle of “visual bug” tickets.

/* The "Pixel Perfect" Hack */
.card-title {
  margin-top: 13px; /* Matches the mockup exactly on 1440px */
  margin-left: -2px; /* Optical adjustment for a specific font */
}
/* The "Design Intent" Solution */
.card-title {
  margin-top: var(--space-m); /* Part of a consistent scale */
  align-self: start; /* Logical alignment */
}

By insisting on pixel-perfection, we are building a foundation that is difficult to automate, difficult to refactor, and ultimately, more expensive to maintain. We have much more flexible ways to calculate sizing in CSS, thanks to relative units.

Moving From Pixels To Intent

So far, I’ve spent a lot of time talking about what we shouldn’t do. But let’s be clear: Moving away from “pixel perfection” isn’t an excuse for sloppy implementation or a “close enough” attitude. We still need consistency, we still want our products to look and feel high-quality, and we still need a shared methodology for achieving that.

So, if “pixel perfection” is no longer a viable goal, what should we be striving for?

The answer, I believe, lies in shifting our focus from individual pixels to design intent. In a fluid world, perfection isn’t about matching a static image, but ensuring that the core logic and visual integrity of the design are preserved across every possible context.

Design Intent Over Static Values

Instead of asking for a margin: 24px in a design, we should be asking: Why is this margin here? Is it to create a visual separation between sections? Is it part of a consistent spacing scale? When we understand the intent, we can implement it using fluid units and functions (like rem and clamp(), respectively) and use advanced tools, like CSS Container Queries, that allow the design to breathe and adapt while still feeling “right”.

/* Intent: A heading that scales smoothly with the viewport */
h1 {
  font-size: clamp(2rem, 5vw + 1rem, 4rem);
}
/* Intent: Change layout based on the component's own width, not the screen */
.card-container {
  container-type: inline-size;
}
@container (min-width: 400px) {
  .card {
    display: grid;
    grid-template-columns: 1fr 2fr;
  }
}

Speaking In Tokens

Design tokens are the bridge between design and code. When a designer and developer agree on a token like --spacing-large instead of 32px, they aren’t just syncing values, but instead syncing logic. This ensures that even if the underlying value changes to accommodate a specific condition, the relationship between elements remains perfect.

:root {
  /* The logic is defined once */
  --color-primary: #007bff;
  --spacing-unit: 8px;
  --spacing-large: calc(var(--spacing-unit) * 4);
}

/* And reused everywhere */
.button {
  background-color: var(--color-primary);
  padding: var(--spacing-large);
}

Fluidity As A Feature, Not A Bug

We need to stop viewing the web’s flexibility as something to be tamed and start seeing that flexibility as its greatest strength. A “perfect” implementation is one that looks intentional at 320px, 1280px, and even in a 3D spatial environment. This means embracing intrinsic web design based on an element’s natural size in any context — and using modern CSS tools to create layouts that “know” how to arrange themselves based on the available space.
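One classic example of a layout that “knows” how to arrange itself (a sketch with hypothetical values) is a grid that decides its own column count from the available space:

```css
/* Each column is at least 16rem when there is room,
   and the browser decides how many columns fit */
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(min(100%, 16rem), 1fr));
  gap: 1rem;
}
```

No breakpoints are needed; the same rule looks intentional at 320px and at 1280px.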

Death To The “Handover”

In this intent-driven world, the “handover” of traditional design assets has become another relic of the past. We no longer pass static Photoshop files across a digital wall and hope for the best. Instead, we work within living design systems.

Modern tooling allows designers to specify behaviors, not just positions. When a designer defines a component, they aren’t just drawing a box; they’re defining its constraints, its fluid scales, and its relationship to the content. As developers, our job is to implement that logic.

The conversation has shifted from “Why is this three pixels off?” to “How should this component behave when the container shrinks?” and “What happens to the hierarchy when the text is translated to a longer language?”

Better Language, Better Outcomes

Speaking of conversations, when we aim for “pixel perfection”, we set ourselves up for friction. Mature teams have long moved past this binary “match-or-fail” mindset towards a more descriptive vocabulary that reflects the complexity of our work.

By replacing “pixel perfect” with more precise terms, we create shared expectations and eliminate pointless arguments. Here are a few phrases that have served me well for productive discussions around intent and fluidity:

  • “Visually consistent with the design system.”
    Instead of matching a specific mockup, we ensure the implementation follows the established rules of our system.
  • “Matches spacing and hierarchy.”
    We focus on the relationships and rhythm between elements rather than their absolute coordinates.
  • “Preserves proportions and alignment logic.”
    We ensure that the intent of the layout remains intact, even as it scales and shifts.
  • “Acceptable variance across platforms.”
    We acknowledge that a site will look different, within a defined and agreed-upon range of variation, and that’s okay as long as the experience remains high-quality.

Language creates reality. Clear language doesn’t just improve the code, but the relationship between designers and developers. It moves us toward a shared ownership of the final, living product. When we speak the same language, “perfection” stops being a demand and starts being a collaborative achievement.

A Note To My Design Colleagues

When you hand over a design, don’t give us a fixed width, but a set of rules. Tell us what should stretch, what should stay fixed, and what should happen when the content inevitably overflows. Your “perfection” lies in the logic you define, not the pixels you draw.

The New Standard Of Excellence

The web was never meant to be a static gallery of frozen pixels. It was born to be a messy, fluid, and gloriously unpredictable medium. When we cling to an outdated model of “pixel perfection”, we are effectively trying to put a leash on a hurricane. It’s unnatural in today’s front-end landscape.

In 2026, we have the tools to build interfaces that think, adapt, and breathe. We have AI that can generate layouts in seconds and spatial interfaces that defy the very concept of a “screen”. In this world, perfection isn’t a fixed coordinate but a promise; it’s the promise that no matter who is looking, or what they are looking through, the soul of the design remains intact.

So, let’s bury the term once and for all. Let’s leave the centimeters to the architects and the spacer GIFs to the digital museums. If you want something to look exactly the same for the next hundred years, carve it in stone or print it on a high-quality cardstock. But if you want to build for the web, embrace the chaos.

Stop counting pixels. Start building intent.

]]>
hello@smashingmagazine.com (Amit Sheen)
<![CDATA[Smashing Animations Part 8: Theming Animations Using CSS Relative Colour]]> https://smashingmagazine.com/2026/01/smashing-animations-part-8-css-relative-colour/ https://smashingmagazine.com/2026/01/smashing-animations-part-8-css-relative-colour/ Wed, 14 Jan 2026 10:00:00 GMT I’ve recently refreshed the animated graphics on my website with a new theme and a group of pioneering characters, putting into practice plenty of the techniques I shared in this series. A few of my animations change appearance when someone interacts with them or at different times of day.

The colours in the graphic atop my blog pages change from morning until night every day. Then, there’s the snow mode, which adds chilly colours and a wintery theme, courtesy of an overlay layer and a blending mode.

While working on this, I started to wonder whether CSS relative colour values could give me more control while also simplifying the process.

Note: In this tutorial, I’ll focus on relative colour values and the OKLCH colour space for theming graphics and animations. If you want to dive deep into relative colour, Ahmad Shadeed created a superb interactive guide. As for colour spaces, gamuts, and OKLCH, our own Geoff Graham wrote about them.

Repeated use of elements was key. Backgrounds were reused whenever possible, with zooms and overlays helping construct new scenes from the same artwork. It was born of necessity, but it also encouraged thinking in terms of series rather than individual scenes.

The Problem With Manually Updating Colour Palettes

Let’s get straight to my challenge. In Toon Titles like this one — based on the 1959 Yogi Bear Show episode “Lullabye-Bye Bear” — and my work generally, palettes are limited to a select few colours.

I create shades and tints from what I call my “foundation” colour to expand the palette without adding more hues.

In Sketch, I work in the HSL colour space, so this process involves increasing or decreasing the lightness value of my foundation colour. Honestly, it’s not an arduous task — but choosing a different foundation colour requires creating a whole new set of shades and tints. Doing that manually, again and again, quickly becomes laborious.

I mentioned the HSL — H (hue), S (saturation), and L (lightness) — colour space, but that’s just one of several ways to describe colour.

RGB — R (red), G (green), B (blue) — is probably the most familiar, at least in its Hex form.

There’s also LAB — L (lightness), A (green–red), B (blue–yellow) — and the newer, but now widely supported LCH — L (lightness), C (chroma), H (hue) — model in its OKLCH form. With LCH — specifically OKLCH in CSS — I can adjust the lightness value of my foundation colour.

Or I can alter its chroma. LCH chroma and HSL saturation both describe the intensity or richness of a colour, but they do so in different ways. LCH gives me a wider range and more predictable blending between colours.

I can also alter the hue to create a palette of colours that share the same lightness and chroma values. In both HSL and LCH, the hue spectrum starts at red, moves through green and blue, and returns to red.
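For instance, a palette of rotated hues sharing one lightness and chroma might look like this in OKLCH (the values are illustrative, not taken from my artwork):

```css
:root {
  /* Same lightness and chroma, three hues 60 degrees apart */
  --hue-a: oklch(0.78 0.1 200);
  --hue-b: oklch(0.78 0.1 260);
  --hue-c: oklch(0.78 0.1 320);
}
```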

Why OKLCH Changed How I Think About Colour

Browser support for the OKLCH colour space is now widespread, even if design tools — including Sketch — haven’t caught up. Fortunately, that shouldn’t stop you from using OKLCH. Browsers will happily convert Hex, HSL, LAB, and RGB values into OKLCH for you. You can define a CSS custom property with a foundation colour in any space, including Hex:

/* Foundation colour */
--foundation: #5accd6;

Any colours derived from it will be converted into OKLCH automatically:

--foundation-light: oklch(from var(--foundation) [...]);
--foundation-mid: oklch(from var(--foundation) [...]);
--foundation-dark: oklch(from var(--foundation) [...]);
Relative Colour As A Design System

Think of relative colour as saying: “Take this colour, tweak it, then give me the result.” There are two ways to adjust a colour: absolute changes and proportional changes. They look similar in code, but behave very differently once you start swapping foundation colours. Understanding that difference is what can turn using relative colour into a system.

/* Foundation colour */
--foundation: #5accd6;

For example, the lightness value of my foundation colour is 0.7837, while a darker version has a value of 0.5837. To calculate the difference, I subtract the lower value from the higher one and apply the result using a calc() function:

--foundation-dark: 
  oklch(from var(--foundation)
  calc(l - 0.20) c h);

To achieve a lighter colour, I add the difference instead:

--foundation-light:
  oklch(from var(--foundation)
  calc(l + 0.10) c h);

Chroma adjustments follow the same process. To reduce the intensity of my foundation colour from 0.1035 to 0.0035, I subtract one value from the other:

oklch(from var(--foundation)
l calc(c - 0.10) h);

To create a palette of hues, I calculate the difference between the hue value of my foundation colour (200) and my new hue (260):

oklch(from var(--foundation)
l c calc(h + 60));

Those calculations are absolute. When I subtract a fixed amount, I’m effectively saying, “Always subtract this much.” The same applies when adding fixed values:

calc(c - 0.10)
calc(c + 0.10)

I learned the limits of this approach the hard way. When I relied on subtracting fixed chroma values, colours collapsed towards grey as soon as I changed the foundation. A palette that worked for one colour fell apart for another.

Multiplication behaves differently. When I multiply chroma, I’m telling the browser: “Reduce this colour’s intensity by a proportion.” The relationship between colours remains intact, even when the foundation changes:

calc(c * 0.10)

My Move It, Scale It, Rotate It Rules

  • Move lightness (add or subtract),
  • Scale chroma (multiply),
  • Rotate hue (add or subtract degrees).

I scale chroma because I want intensity changes to stay proportional to the base colour. Hue relationships are rotational, so multiplying hue makes no sense. Lightness is perceptual and absolute — multiplying it often produces odd results.
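To see why this matters, here is a small illustrative sketch. The colour values are hypothetical, not taken from my palette; they simply show how the same fixed subtraction that works for a vivid foundation collapses a muted one to grey, while multiplication keeps both proportional:

```css
:root {
  /* Two hypothetical foundations: one vivid, one already muted */
  --vivid: oklch(0.78 0.15 200);
  --muted: oklch(0.78 0.03 200);

  /* Absolute: c - 0.10 leaves 0.05 for the vivid colour,
     but the muted one clamps at 0 and collapses to grey */
  --vivid-soft-abs: oklch(from var(--vivid) l calc(c - 0.10) h);
  --muted-soft-abs: oklch(from var(--muted) l calc(c - 0.10) h);

  /* Proportional: c * 0.5 gives 0.075 and 0.015,
     so both colours keep a trace of their original intensity */
  --vivid-soft-rel: oklch(from var(--vivid) l calc(c * 0.5) h);
  --muted-soft-rel: oklch(from var(--muted) l calc(c * 0.5) h);
}
```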

From One Colour To An Entire Theme

Relative colour allows me to define a foundation colour and generate every other colour I need — fills, strokes, gradient stops, shadows — from it. At that point, colour stops being a palette and starts being a system.

SVG illustrations tend to reuse the same few colours across fills, strokes, and gradients. Relative colour lets you define those relationships once and reuse them everywhere — much like animators reused backgrounds to create new scenes.

Change the foundation colour once, and every derived colour updates automatically, without recalculating anything by hand. Outside of animated graphics, I could use this same approach to define colours for the states of interactive elements such as buttons and links.
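As a quick sketch of that idea, assuming a hypothetical .button class and offsets chosen only for illustration:

```css
.button {
  --foundation: #5accd6;
  background: var(--foundation);
}

/* Lighter on hover: move lightness */
.button:hover {
  background: oklch(from var(--foundation) calc(l + 0.08) c h);
}

/* Darker and proportionally less intense when pressed:
   move lightness, scale chroma */
.button:active {
  background: oklch(from var(--foundation) calc(l - 0.12) calc(c * 0.8) h);
}
```

Change --foundation once, and every state recalculates itself.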

The foundation colour I used in my “Lullabye-Bye Bear” Toon Title is a cyan-looking blue. The background is a radial gradient between my foundation and a darker version.

To create alternative versions with entirely different moods, I only need to change the foundation colour:

--foundation: #5accd6;
--grad-end: var(--foundation);
--grad-start: oklch(from var(--foundation)
  calc(l - 0.2357) calc(c * 0.833) h);

To bind those custom properties to my SVG gradient without duplicating colour values, I replaced hard-coded stop-color values with inline styles:

<defs>
  <radialGradient id="bg-grad" […]>
    <stop offset="0%" style="stop-color: var(--grad-end);" />
    <stop offset="100%" style="stop-color: var(--grad-start);" />
  </radialGradient>
</defs>
<path fill="url(#bg-grad)" d="[...]"/>

Next, I needed to ensure that my Toon Text always contrasts with whatever foundation colour I choose. A 180deg hue rotation produces a complementary colour that certainly pops — but can vibrate uncomfortably:

.text-light {
  fill: oklch(from var(--foundation)
    l c calc(h + 180));
}

A 90° shift produces a vivid secondary colour without being fully complementary:

.text-light {
  fill: oklch(from var(--foundation)
    l c calc(h - 90));
}

My recreation of Quick Draw McGraw’s 1959 Toon Title “El Kabong” uses the same techniques but with a more varied palette. For example, there’s another radial gradient between the foundation colour and a darker shade.

The building and tree in the background are simply different shades of the same foundation colour. For those paths, I needed two additional fill colours:

.bg-mid {
  fill: oklch(from var(--foundation)
    calc(l - 0.04) calc(c * 0.91) h);
}

.bg-dark {
  fill: oklch(from var(--foundation)
    calc(l - 0.12) calc(c * 0.64) h);
}

When The Foundations Start To Move

So far, everything I’ve shown has been static. Even when someone uses a colour picker to change the foundation colour, that change happens instantly. But animated graphics rarely stand still — the clue is in the name. So, if colour is part of the system, there’s no reason it can’t animate, too.

To animate the foundation colour, I first need to split it into its OKLCH channels — lightness, chroma, and hue. There’s an important extra step, though: I need to register those values as typed custom properties. But what does that mean?

By default, a browser doesn’t know whether a CSS custom property value represents a colour, length, number, or something else entirely. That often means custom properties can’t be interpolated smoothly during animation and instead jump from one value to the next.

Registering a custom property tells the browser the type of value it represents and how it should behave over time. In this case, I want the browser to treat my colour channels as numbers so they can be animated smoothly.

@property --f-l {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.40;
}

@property --f-c {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.11;
}

@property --f-h {
  syntax: "<number>";
  inherits: true;
  initial-value: 305;
}

Once registered, these custom properties behave like native CSS. The browser can interpolate them frame-by-frame. I then rebuild the foundation colour from those channels:

--foundation: oklch(var(--f-l) var(--f-c) var(--f-h));

This makes the foundation colour animatable, just like any other numeric value. Here’s a simple “breathing” animation that gently shifts lightness over time:

@keyframes breathe {
  0%, 100% { --f-l: 0.36; }
  50% { --f-l: 0.46; }
}

.toon-title {
  animation: breathe 10s ease-in-out infinite;
}

Because every other colour in fills, gradients, and strokes is derived from --foundation, they all animate together, and nothing needs to be updated manually.

One Animated Colour, Many Effects

At the start of this process, I wondered whether CSS relative colour values could offer more possibilities while also making them simpler to implement. I recently added a new gold mine background to my website’s contact page, and the first iteration included oil lamps that glow and swing.

I wanted to explore how animating CSS relative colours could make the mine interior more realistic by tinting it with colours from the lamps. I wanted them to affect the world around them, the way real light does. So, rather than animating multiple colours, I built a tiny lighting system that animates just one colour.

My first task was to slot an overlay layer between the background and my lamps:

<path 
  id="overlay"
  fill="var(--overlay-tint)" 
  [...] 
  style="mix-blend-mode: color"
/>

I used mix-blend-mode: color because that tints what’s beneath it while preserving the underlying luminance. As I only want the overlay to be visible when animations are turned on, I made the overlay opt-in:

.svg-mine #overlay {
  display: none;
}

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] #overlay {
    display: block;
    opacity: 0.5;
  }
}

The overlay was in place, but not yet connected to the lamps. I needed a light source. My lamps are simple, and each one contains a circle element that I blurred with a filter. The filter produces a very soft blur over the entire circle.

<filter id="lamp-glow-1" x="-120%" y="-120%" width="340%" height="340%">
  <feGaussianBlur in="SourceGraphic" stdDeviation="56"/>
</filter>

Instead of animating the overlay and lamps separately, I animate a single “flame” colour token and derive everything else from that. First, I register three typed custom properties for OKLCH channels:

@property --fl-l {
  syntax: "<number>"; 
  inherits: true;
  initial-value: 0.86;
}
@property --fl-c {
  syntax: "<number>";
  inherits: true;
  initial-value: 0.12;
}
@property --fl-h {
  syntax: "<number>";
  inherits: true;
  initial-value: 95;
}

I animated those channels, deliberately pushing a few frames towards orange so the flicker reads clearly as firelight:

@keyframes flame {
  0%, 100% { --fl-l: 0.86; --fl-c: 0.12; --fl-h: 95; }
  6% { --fl-l: 0.91; --fl-c: 0.10; --fl-h: 92; }
  12% { --fl-l: 0.83; --fl-c: 0.14; --fl-h: 100; }
  18% { --fl-l: 0.88; --fl-c: 0.11; --fl-h: 94; }
  24% { --fl-l: 0.82; --fl-c: 0.16; --fl-h: 82; }
  30% { --fl-l: 0.90; --fl-c: 0.12; --fl-h: 90; }
  36% { --fl-l: 0.79; --fl-c: 0.17; --fl-h: 76; }
  44% { --fl-l: 0.87; --fl-c: 0.12; --fl-h: 96; }
  52% { --fl-l: 0.81; --fl-c: 0.15; --fl-h: 102; }
  60% { --fl-l: 0.89; --fl-c: 0.11; --fl-h: 93; }
  68% { --fl-l: 0.83; --fl-c: 0.16; --fl-h: 85; }
  76% { --fl-l: 0.91; --fl-c: 0.10; --fl-h: 91; }
  84% { --fl-l: 0.85; --fl-c: 0.14; --fl-h: 98; }
  92% { --fl-l: 0.80; --fl-c: 0.17; --fl-h: 74; }
}

Then I scoped that animation to the SVG, so the shared variables are available to both the lamps and my overlay:

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] {
    animation: flame 3.6s infinite linear;
    isolation: isolate;

    /* Build a flame colour from animated channels */
    --flame: oklch(var(--fl-l) var(--fl-c) var(--fl-h));

    /* Lamp colour derived from flame */
    --lamp-core: oklch(from var(--flame) calc(l + 0.05) calc(c * 0.70) h);

    /* Overlay tint derived from the same flame */
    --overlay-tint: oklch(from var(--flame)
      calc(l + 0.06) calc(c * 0.65) calc(h - 10));
  }
}

Finally, I applied those derived colours to the glowing lamps and the overlay they affect:

@media (prefers-reduced-motion: no-preference) {
  .svg-mine[data-animations=on] #mine-lamp-1 > circle,
  .svg-mine[data-animations=on] #mine-lamp-2 > circle {
    fill: var(--lamp-core);
  }

  .svg-mine[data-animations=on] #overlay {
    display: block;
    fill: var(--overlay-tint);
    opacity: 0.5;
  }
}

When the flame shifts toward orange, the lamps warm up, and the scene warms with them. When the flame cools, everything settles together. The best part is that nothing is written manually. If I change the foundation colour or tweak the flame animation ranges, the entire lighting system updates simultaneously.

You can see the final result on my website.

Reuse, Repurpose, Revisited

Those Hanna-Barbera animators were forced to repurpose elements out of necessity, but I reuse colours because it makes my work more consistent and easier to maintain. CSS relative colour values allow me to:

  • Define a single foundation colour,
  • Describe how other colours relate to it,
  • Reuse those relationships everywhere, and
  • Animate the system by changing one value.

Relative colour doesn’t just make theming easier. It encourages a way of thinking where colour, like motion, is intentional — and where changing one value can transform an entire scene without rewriting the work beneath it.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[UX And Product Designer’s Career Paths In 2026]]> https://smashingmagazine.com/2026/01/ux-product-designer-career-paths/ https://smashingmagazine.com/2026/01/ux-product-designer-career-paths/ Mon, 12 Jan 2026 10:00:00 GMT As the new year begins, I often find myself in a strange place — reflecting on the previous year or looking forward to the year ahead. And as I speak with colleagues and friends around that time, it typically doesn’t take long for a conversation about career trajectory to emerge.

So I thought I’d share a few thoughts on how to shape your career path as we are looking ahead to 2026. Hopefully you’ll find it useful.

Run A Retrospective For Last Year

To be honest, for many years, I was mostly reacting. Life was happening to me, rather than me shaping the life that I was living. I was making progress reactively and I was looking out for all kinds of opportunities. It was easy and quite straightforward — I was floating and jumping between projects and calls and making things work as I was going along.

Years ago, my wonderful wife introduced one little annual ritual which changed that dynamic entirely. By the end of each year, we sit with nothing but paper and pencil and run a thorough retrospective of the past year — successes, mistakes, good moments, bad moments, things we loved, and things we wanted to change.

We look back at our memories, projects, and events that stood out that year. And then we take notes for where we stand in terms of personal growth, professional work, and social connections — and how we want to grow.

These are the questions I’m trying to answer there:

  • What did I find most rewarding and fulfilling last year?
  • What fears and concerns slowed me down the most?
  • What could I leave behind, give away or simplify?
  • What tasks would be good to delegate or automate?
  • What are my 3 priorities to grow this upcoming year?
  • What times do I block in my calendar for my priorities?

It probably sounds quite cliché, but these 4–5 hours of our time every year set a foundation for changes to introduce for the next year. This little exercise shapes the trajectory that I’ll be designing and prioritizing next year. I can’t recommend it enough.

UX Skills Self-Assessment Matrix

Another little tool that I found helpful for professional growth is UX Skills Self-Assessment Matrix (Figma template) by Maigen Thomas. It’s a neat little tool that’s designed to help you understand what you’d like to do more of, what you’d prefer to do less, and where your current learning curve lies vs. where you feel confident in your expertise.

The exercise typically takes around 20–30 minutes, and it helps identify the UX skills in your sweet spot — typically the upper half of the canvas. You’ll also pinpoint areas where you’re improving and those you’re already pretty good at. It’s a neat reality check — and a great reminder once you review it year after year. Highly recommended!

UX Career Levels For Design Systems Teams

A while back, Javier Cuello put together a Career Levels For Design System Teams (Figma Kit), a neat little helper for product designers looking to transition into design systems teams or managers building a career matrix for them. The model maps progression levels (Junior, Semi-Senior, Senior, and Staff) to key development areas, with skills and responsibilities required at each stage.

What I find quite valuable in Javier’s model is the mapping of strategy and impact, along with systematic thinking and governance. While as designers we often excel at tactical design — from elegant UI components to file organization in Figma — we often lag a little bit behind in strategic decisions.

To a large extent, the difference between levels of seniority is moving from tactical initiatives to strategic decisions. It’s proactively looking for organizational challenges that a system can help with. It’s finding and inviting key people early. It’s also about embedding yourself in other teams when needed.

But it’s also keeping an eye out for situations when design systems fail, and paving the way to make it more difficult to fail. And: adapting the workflow around the design system to ship on a tough deadline when needed, but with a viable plan of action on how and when to pay back accumulating UX debt.

Find Your Product Design Career Path

When we speak about career trajectory, it’s almost always assumed that the career progression inevitably leads to management. However, this hasn’t been a path I preferred, and it isn’t always the ideal path for everyone.

Personally, I prefer to work on intricate fine details of UX flows and deep dive into complex UX challenges. However, eventually it might feel like you’ve stopped growing — perhaps you’ve hit a ceiling in your organization, or you have little room for exploration and learning. So where do you go from there?

A helpful model to think about your next steps is to consider Ryan Ford’s Mirror Model. It explores career paths and expectations that you might want to consider to advocate for a position or influence that you wish to achieve next.

That’s typically something you might want to study and decide on your own first, and then bring it up for discussion. Usually, there are internal opportunities out there. So before changing the company, you can switch teams, or you could shape a more fulfilling role internally.

You just need to find it first. Which brings us to the next point.

Proactively Shaping Your Role

I keep reminding myself of Jason Mesut’s observation that when we speak about career ladders, it assumes that we can either go up, down, or fall off. But in reality, you can move up, move down, and move sideways. As Jason says, “promoting just the vertical progression doesn’t feel healthy, especially in such a diverse world of work, and diverse careers ahead of us all.”

So, in the attempt to climb up, perhaps consider also moving sideways. Zoom out and explore where your interests are. Focus on the much-needed intersection between business needs and user needs. Between problem space and solution space. Between strategic decisions and operations. Then zoom in. In the end, you might not need to climb anything — but rather just find that right spot that brings your expertise to light and makes the biggest impact.

Sometimes these roles might involve acting as a “translator” between design and engineering, specializing in UX and accessibility. They could also involve automating design processes with AI, improving workflow efficiency, or focusing on internal search UX or legacy systems.

These roles are never advertised, but they have a tremendous impact on a business. If you spot such a gap and proactively bring it to senior management, you might be able to shape a role that brings your strengths into the spotlight, rather than trying to fit into a predefined position.

What About AI?

One noticeable skill that is worth sharpening is, of course, around designing AI experiences. The point isn’t about finding ways to replace design work with AI automation. Today, it seems like people crave nothing more than actual human experience — created by humans, with attention to humans’ needs and intentions, designed and built and tested with humans, embedding human values and working well for humans.

If anything, we should be more obsessed with humans, not with AI. If anything, AI amplifies the need for authenticity, curation, critical thinking, and strategy. And that’s a skill that will be very much needed in 2026. We need designers who can design beautiful AI experiences (and frankly, I do have a whole course on that) — experiences people understand, value, use, and trust.

No technology can create clarity, structure, trust, and care out of poor content, poor metadata, and poor value for end users. If we understand the fundamentals of good design, and then design with humans in mind, and consider humans’ needs and wants and struggles, we can help users and businesses bridge that gap in a way AI never could. And that’s what you and perhaps your renewed role could bring to the table.

Wrapping Up

The most important thing about all these little tools and activities is that they help you get more clarity. Clarity on where you currently stand and where you actually want to grow towards.

These are wonderful conversation starters to help you find a path you’d love to explore, on your own or with your manager. However, just one thing I’d love to emphasize:

Absolutely, feel free to refine the role to amplify your strengths, rather than finding a way to match a particular role perfectly.

Don’t forget: you bring incredible value to your team and to your company. Sometimes it just needs to be highlighted or guided to the right spot to bring it into the spotlight.

You’ve got this — and happy 2026! ✊🏼✊🏽✊🏾

Meet “Design Patterns For AI Interfaces”

Meet design patterns that work for AI products in Design Patterns For AI Interfaces, Vitaly’s shiny new video course with practical examples from real-life products — with a live UX training happening soon. Jump to a free preview. Use code SNOWFLAKE to save 20% off!

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Penpot Is Experimenting With MCP Servers For AI-Powered Design Workflows]]> https://smashingmagazine.com/2026/01/penpot-experimenting-mcp-servers-ai-powered-design-workflows/ https://smashingmagazine.com/2026/01/penpot-experimenting-mcp-servers-ai-powered-design-workflows/ Thu, 08 Jan 2026 08:00:00 GMT This article is a sponsored by Penpot

Imagine that your Penpot file contains a full icon set in addition to the design itself, which uses some but not all of those icons. If you were to ask an AI such as Claude or Gemini to export only the icons that are being used, it wouldn’t be able to do that. It’s not able to interact with Penpot files.

However, a Penpot MCP server can. It can perform a handpicked number of operations under set rules and permissions, especially since Penpot has an extensive API and even more so because it’s open-source.

The AI’s job is simply to understand your intent, choose the right operation for the MCP server to perform (an export in this case), and pass along any parameters (i.e., icons that are being used). The MCP server then translates this into a structured API request and executes it.

It might help to think of AI as a server in a restaurant that takes your order, the MCP server as both the menu and chef, and the API request as (hopefully) a hot pizza pie on a warm plate.

Why MCP servers, exactly? Well, Penpot isn’t able to understand your intent because it’s not an LLM, nor does it allow third-party LLMs to interact with your Penpot files, for the security and privacy of your Penpot data. Penpot MCP servers, however, do act as a secure bridge, translating AI intent into API requests using your Penpot files and data as context.

What’s even better is that because Penpot takes a design-expressed-as-code approach, designs can be programmatically created, edited, and analyzed on a granular level. It’s more contextual, more particular, and therefore more powerful in comparison to what other MCP servers offer, and far more thoughtful than the subpar ‘Describe → Generate’ AI workflow that I don’t think anybody really wants. Penpot’s AI whitepaper describes this as the bad approach and the ‘Convert to Code’ approach as the ugly approach, whereas MCP servers are more refined and adaptable.

Features And Technical Details

Before we move on to use cases, here are some features and technical details that further explain how Penpot MCP servers work:

  • Complies with MCP standards;
  • Integrates with the Penpot API for real-time design data;
  • Includes a Python SDK, REST API, plugin system, and CLI tools;
  • Works with any MCP-enabled AI assistant (Claude in VS Code, Claude in Cursor, Claude Desktop, etc.);
  • Supports sharing design context with AI models, and letting them see and understand components;
  • Facilitates communication with Penpot using natural language.

What, then, could MCP servers enable us to do in Penpot, and what have existing experiments already achieved? Let’s take a look.

Penpot MCP Server Use-Cases

If you just want to skip to what Penpot MCP servers can do, Penpot have a few MCP demos stashed in a Google Drive that are more than worth watching. Penpot CEO Pablo Ruiz-Múzquiz mentioned that videos 03, 04, 06, 08, and 12 are their favorites.

An even faster way to summarize MCP servers is to watch the unveiling from Penpot Fest 2025.

Otherwise, let’s take a look at some of the more refined examples that Penpot demonstrated in their public showcase.

Design-to-Code and Back Again (and More)

Following on from what I was saying earlier about how Penpot designs are expressed as code, this means that MCP servers can be used to convert design to code using AI, but also code to design, design to documentation, documentation to design system elements, design to code again based on said design system, and then completely new components based on said design system.

It sounds surreal, but the demo below shows off this transmutability, and it’s not from vague instruction but rather previous design choices, regardless of how they were expressed (design, code, or documentation). There are no surprises — these are simply the decisions that you would’ve made anyway based on previous decisions, executed swiftly.

In the demo, Juan de la Cruz García, Designer at Penpot, frictionlessly transmutes some simple components into documentation, design system elements, code, new components, and even a complete Storybook project like a piece of Play-Doh:

Design-to-Code, Design/Code Validation, And Simple Operations

In a similar demo below, Dominik Jain, Co-Founder at Oraios AI, creates a Node.js web app based on the design before updating the frontend styles, saves names and identifiers to memory to ensure smooth design-to-code translation before checking it for consistency, adds a comment next to the selected shape in Penpot, and then replaces a scribble in Penpot with an adapted component. There’s a lot happening here, but you can see exactly what Dominik is typing into Claude Desktop as well as Claude’s responses, and it’s very robust:

By the way, the previous demo used Claude in VS Code, so I should note that Penpot MCP servers are LLM-agnostic. Your tech stack is totally up to you. IvanTheGeek managed to set up their MCP server with the JetBrains Rider IDE and Junie AI.

More Use Cases

Translate a Penpot board to production-ready semantic HTML and modular CSS while leveraging any Penpot design tokens (remember that Penpot designs are already expressed as code, so this isn’t a shot in the dark):

Generate an interactive web prototype without changing the existing HTML:

As shown earlier, convert a scribble into a component, leveraging existing design and/or design system elements:

Create design system documentation from a Penpot file:

And here are some more use-cases from Penpot and the community:

  • Advanced exports,
  • Search for design elements using natural language,
  • Pull data from external APIs using natural language,
  • Easily connect Penpot to other external tools,
  • Saving repetitive tasks to memory and executing them,
  • Visual regression testing,
  • Design consistency and redundancy checking,
  • Accessibility and usability analysis and feedback,
  • Design system compliance checking,
  • Guideline compliance checking (brand, content, etc.),
  • Monitor adoption and usage with design analytics,
  • Automatically keep documentation in sync with design,
  • Design file organization (e.g., tagging/categorization).

Essentially, Penpot MCP servers lead the way to an infinite number of workflows thanks to the efficiency and ease of your chosen LLM/LLM client, but without exposing your data to it.

What Would You Use MCP Servers For?

Penpot MCP servers aren’t even at the beta stage, but it is an active experiment that you can be a part of. Penpot users have already begun exploring use cases for MCP servers, but Penpot wants to see more. To ensure that the next generation of design tools meets the needs of designers, developers, and product teams in general, they must be built collectively and collaboratively, especially where AI is concerned.

Note: Penpot is looking for beta testers eager to explore, experiment with, and help refine Penpot’s MCP Server. To join, write to support@penpot.app with the subject line “MCP beta test volunteer.”

Is there anything that you feel Penpot MCP servers could do that current tools aren’t able to do well enough, fast enough, or aren’t able to do at all?

You can learn how to set up a Penpot MCP server right here and start tinkering today, or if your brain’s buzzing with ideas already, Penpot want you to join the discussion, share your feedback, and talk about your use-cases. Alternatively, the comment section right below isn’t a bad place to start either!

]]>
hello@smashingmagazine.com (Daniel Schwarz)
<![CDATA[Pivoting Your Career Without Starting From Scratch]]> https://smashingmagazine.com/2026/01/pivoting-career-without-starting-from-scratch/ https://smashingmagazine.com/2026/01/pivoting-career-without-starting-from-scratch/ Wed, 07 Jan 2026 10:00:00 GMT Has work felt “different” to you? You show up, do your work, fix what needs fixing, and get the job done, but the excitement isn’t quite the same anymore. Maybe the work has become too routine, or maybe you’ve grown in a way your role hasn’t kept up with. You catch yourself thinking, “I’ve been doing this for years, but where do I go from here?”

It’s not always about burnout or frustration. Sometimes it’s just curiosity. You’ve learned a lot, built things, solved problems, and now a small part of you wants to see what else you can do. Maybe the rise of AI is making you look at your job differently, or maybe you feel ready for a new kind of challenge that does not look like your current day-to-day.

I have seen many people across different fields go through this. Developers moving into product work, designers shifting to UX research, engineers getting into teaching, or support folks building communities. Everyone reaches that point where they want their work to feel meaningful again.

The good thing is you are not starting from zero. The experience you already have, like solving problems, making decisions, and working and communicating with people, adds up to real, valuable skills that carry over anywhere. Most of the time, the next step is not about leaving tech behind. It’s about finding where your skills make the most sense next.

This article is about that: How to rethink your path when things start to feel a bit stale, and how to move toward something new without losing everything you’ve built so far.

Redefining Your Toolkit

When people start thinking about changing careers, the first thing they usually do is focus on what they do not have. The missing skills, the new tools they need to learn, or how far behind they feel. It is a normal reaction, but it is not always the best place to begin.

Instead, try looking at what is already there. You have probably built more useful skills than you realize. Many of us get used to describing ourselves by our job titles, such as developer, designer, or analyst, but those titles do not fully explain what we actually do. They just tell us where we sit on a team. The real story is the work behind the title.

Think of a developer, for example. On paper, the job is to write code, but in reality, a developer spends most of their time solving problems, making decisions, and building systems that make sense to other people. The same goes for designers. They do not just make things look good; they pay attention to how people think, how they move through a screen, and how to make something feel clear and simple.

Your skills don’t disappear when your title changes. They just find new ways to show up.

These are what people call transferable skills, but you do not need the fancy term to get the idea. These are abilities that stay useful no matter where you go. Problem-solving, curiosity, clear communication, empathy, and learning fast — these are the things that make you good at what you do, even if the tools or roles change.

You already use them more than you think. When you fix a bug, you are learning how to track a problem back to its roots. When you explain a technical idea to someone non-technical, you are practicing clarity. When you deal with tight deadlines, you are learning how to manage priorities. None of these disappear if you switch fields. You apply it somewhere else.

So, before you worry about what you do not know, take a moment to see what you already do well. Write it down if you have to. Not just the tasks, but the thinking behind them. That is where your real value is.

Four Real-World Paths to Explore

Once you start seeing your skills beyond your job title, you may realize how many directions you can actually take. The tech world keeps changing fast: tools change, teams change, new roles show up every year, and people move in ways they never planned.

Here are four real paths that many people in tech are taking today.

  • Developer → Product Manager. What changes: you move from building the product to shaping what gets built and why. Why it works: developers already understand tradeoffs, user needs, and how features come together. That is product thinking in action.
  • Engineer → Developer Advocate. What changes: you focus less on code delivery and more on helping others succeed with your product. Why it works: you already know the technology inside out, so turning that knowledge into clear communication makes you a natural teacher.
  • Back-end Engineer → Solutions Engineer. What changes: you bring your problem-solving mindset to real client challenges. Why it works: it is not about selling, it is about understanding problems deeply and building trust through technical skill.
  • Designer → UX Researcher or Service Designer. What changes: you shift from visuals to understanding how people think, feel, and interact. Why it works: good design starts with empathy, and that same skill fits perfectly in research and experience design.

What many people discover when they take one of these steps is that their daily work changes, not their identity. The tools and routines might be different, but the core way they think and solve problems stays the same.

The biggest change is usually perspective. Instead of focusing on how something gets built, you begin to care more about why it matters, who it helps, and what impact it has. For many people, that shift often brings back the excitement they might have lost somewhere along the way.

Your First Steps Towards A New Path

When you find a direction that feels interesting, the next step is figuring out how to move toward it without losing your footing where you are. This is where curiosity turns into a plan.

1. Take A Look At What You Bring

Start by checking your strengths. It does not have to be anything complex. Write down what you do well, what feels natural to you, and what people usually ask you for help with.

If you want a simple guide, Learning People has a good breakdown for auditing your personal skills, including a template for identifying and evaluating your skills. Try filling it out; it’s well worth the few minutes it takes to complete.

After listing your strengths, try matching them with roles you’re curious about. For example, if you’re a developer who enjoys explaining things, that could connect well with mentoring, writing tutorials, or developer advocacy.

2. Learn By Getting Close To It

Job descriptions aren’t a perfect reflection of the realities of working a specific job, but talking with people who actually do that job will give you one. So, reach out to people who already do what you’re interested in and ask them what their day-to-day looks like, what parts they enjoy, and what surprised them when they started.

And if possible, shadow someone or volunteer to help on a project. You don’t need a job change to explore something new. Short, hands-on experiences often teach you far more than any course, and many people are more than willing to take you under their wing, especially if you are offering your time and help in exchange for experience.

3. Build Proof Through Small Experiments

Do something small that points in the direction you want to go. Maybe build a simple tool, write a short piece about what you’re learning, or help a local startup or open-source team. These don’t need to be perfect; they just need to exist. They show direction, not completion.

Blogging has always been a perfect way to share your learning path and demonstrate your excitement about it. Plus, it establishes a track record of the knowledge you acquire.

4. Shape Your Story As You Grow

Instead of going with the idea of “I’m switching careers,” try thinking of it as “I’m building on what I already do.” That simple shift makes your journey clearer. It shows that you’re not starting from zero — you’re simply moving forward with more intention.

Navigating The Mental Hurdles

Every career shift, even when it feels exciting, comes with doubts. You might ask yourself, “What if I’m not ready?” or “What if I can’t keep up?” These thoughts are more common than people admit.

Imposter Syndrome

One fear that shows up a lot is imposter syndrome, that feeling you do not belong or that others are “better” or “smarter” at something than you. A recent piece from Nordcloud shared that more than half (58%) of IT professionals have felt this at some point in their career.

Comparison is a silent thief of confidence. Seeing others move faster can make you feel late. But everyone has different opportunities and different timing. What matters is the direction you are moving in, not how fast you go.

Here’s a thought worth remembering:

People who have successfully changed their careers did not wait until they felt brave. Most of them still had doubts, but they moved anyway, one small step at a time.

Starting Again

Another worry is the idea of starting over. You may feel that you’ve spent too many years in one space to move into another. But you are not returning to the beginning. You are moving with experience. Your habits, discipline, and problem-solving stay with you. They just show up in a different way.

It’s hard — and self-defeating — to imagine the work it takes to start all over again, especially when you have invested many years into what you do. But remember, it’s not always too late. Even Kurt Vonnegut was 47 when he wrote his seminal book, Slaughterhouse-Five. You can still enjoy a very long and fruitful career, even in middle age.

Finances

Money and stability also weigh a lot. The fear of losing income or looking uncertain can hold you back. And everyone’s money situation can be wildly different. You may have family to support, big loans to pay back, a lack of reserves, or any number of completely valid reasons for not wanting to give up a steady paycheck when you’re already receiving one.

A simple way to reduce that pressure is to start with small steps. Take a small side gig, try part-time work, or help on a short project in the area you’re curious about. These small tests give you clarity without shaking your foundation.

Conversations With Industry Experts

Below are short interviews with a handful of tech professionals serving in different roles. I wanted to talk with real people who have recently switched careers or are in the process of doing so because it helps illustrate the wide range of situations, challenges, and opportunities you might expect to encounter in a career change.

Thomas Dodoo: Graphic Designer, 5 Years Of Experience

Background: Thomas has an IT background. He first got interested in tech through game development in school, but later discovered that design was what he enjoyed more. Over time, he moved fully into graphic design and branding.

Question: When you were starting, what confused you the most about choosing your path?

Thomas: I wasn’t sure if I should stay with game development or follow design. I liked both, but design came more naturally, so I just kept learning little by little.

Question: Was there a moment that made you take your design work more seriously?

Thomas: Yes, the first time someone trusted me with their full brand. It made me realise this could be more than a hobby.

Question: What skills did you carry over from development into your design work?

Thomas: My background in development helped me think more logically about design. I break things down, think in steps, and focus on how things work, not just how they look.

Adwoa Mensah: Product Manager, 4 Years Of Experience

Background: Adwoa moved from software testing to product management.

Question: When did you realize it was time to change careers?

Adwoa: I realised it when I started caring more about why things were being built, not just checking if they worked. I enjoyed asking questions, giving input, and thinking about the bigger picture, and testing alone started to feel limiting.

Question: What new skills did you need to learn to move into your new field?

Adwoa: I had to learn how to communicate better, especially with designers, developers, and stakeholders. I also worked on planning, prioritising work, and understanding users more deeply. I learned most of this by watching product managers I worked with, asking questions, reading, and slowly taking on more responsibility on real projects.

Konstantinos Tournas: AI Engineer

Background: Konstantinos started programming with zero experience. He had no technical background at first, but he developed a strong interest in artificial intelligence and worked his way into the field.

Question: What moments in your journey made you question yourself, and how did you move past them?

Konstantinos: There were many moments in my career journey when I doubted myself, mainly because I started completely from zero, with no programming background and no connections in the field. What helped me push through was the motivation I had to learn and my genuine love for artificial intelligence. Every time I questioned myself, I reminded myself where I started and how far I had come in such a short amount of time.

Question: When you feel pressure or doubt in your work, what helps you stay grounded?

Konstantinos: When I feel pressure or self-doubt, I usually take a walk in nature. It helps me clear my mind and think creatively about how I can improve my work. In programming, the work rarely stops when your shift ends; problems in the code follow you throughout the day, and overcoming them requires creativity. Walking helps me reset and return with better ideas.

Question: How do you deal with comparing yourself to others in your field?

Konstantinos: Even though I’m competitive by nature, I constantly try to learn from others in my field. I don’t like showing off; I prefer listening. I know I can become great at what I do, but that doesn’t happen overnight. Comparison can be healthy, as long as it pushes you to grow rather than discourages you.

Question: What would you say to someone who feels like they are not good enough to pursue the path they want?

Konstantinos: I started programming without a university degree and with an entirely different background. Patience and persistence truly are the keys to success; it might sound cliché, but they were precisely what helped me. In less than six months, with long hours of focused work, consistency, and determination, I managed to get hired for my dream job simply because I believed in myself and wanted it badly enough.

Yinjian Huang: Product Designer (AI, SaaS), 5 Years Of Experience

Background: Yinjian works in product design across AI, SaaS, and B2B products. Her work focuses on building early-stage products, shaping user experience, and working closely with engineering and product teams on AI-driven features.

Question: Looking back, what is one decision you made that you think others in your field could learn from?

Yinjian: Keep learning across disciplines: design, PM, AI, and engineering. The broader your fluency, the better you can design and reason holistically. Cross‑functional knowledge compounds and unlocks better product judgment.

Question: What do you wish you had known about handling stress, workload, or expectations earlier in your career?

Yinjian: Communicate early if the workload is too heavy or a deadline is at risk. Flag constraints, renegotiate scope, and make trade‑offs explicit. Early clarity beats late surprises.

Question: How do you evaluate whether a new opportunity or challenge is worth taking on?

Yinjian: I evaluate opportunities on three axes: the learning delta (skills I’ll gain), the people I’ll work with, and alignment with my interests.

Question: What advice would you give to someone who wants to grow in your field but feels stuck or unsure of where to start?

Yinjian: Growth can feel overwhelming at first because there’s so much to learn. Build a simple roadmap: start by making your craft solid, then expand adjacent skills. Find the best resources, practice relentlessly, and seek feedback on tight cycles. Momentum comes from small, consistent wins.

The Bottom Line

This whole piece is just a reminder that it’s fine to question where you are and want something different. Everyone hits that moment when things stop feeling exciting, and you start wondering what’s next. It doesn’t mean you’ve failed. It usually means you’re growing.

I wrote this because I’ve been in that space too, still figuring out what direction makes the most sense for me. So if you’re feeling stuck or unsure, I hope this gave you something useful. You don’t need to have everything sorted out right now. Just keep learning, stay curious, and take one small step at a time.

]]>
hello@smashingmagazine.com (Joas Pambou)
<![CDATA[Countdown To New Adventures (January 2026 Wallpapers Edition)]]> https://smashingmagazine.com/2025/12/desktop-wallpaper-calendars-january-2026/ https://smashingmagazine.com/2025/12/desktop-wallpaper-calendars-january-2026/ Wed, 31 Dec 2025 09:00:00 GMT A new year is the perfect opportunity to break free from routines, reset habits, and refine how you do things. And while you may have made plenty of New Year’s resolutions, sometimes it’s the small changes that work wonders — a tidy desktop and a new wallpaper, for example, that give you a little motivation boost when you need it.

In this post, you’ll find desktop wallpapers to accompany you through your first adventures of 2026, to make you smile, and to bring some happy pops of color to a cold and dark winter day. As has been the case every month since we started our monthly wallpapers series more than 14 years ago, all of the wallpapers were created with love by artists and designers from across the globe and can be downloaded for free.

A huge thank-you to everyone who shared their designs with us this month — you are truly smashing! Have a happy and healthy new year, everyone!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. It is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 🎨
    We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬
Moonwalker

Designed by Ricardo Gimenes from Spain.

Home Office

Designed by Ricardo Gimenes from Spain.

Winter Magic At Home

“Snow drifted quietly as a child and their loyal dog paused beside the glowing Christmas tree, each ornament holding a small piece of winter wonder. In that still moment, the cold faded away, replaced by warmth, curiosity, and the simple joy of being together.” — Designed by PopArt Studio from Serbia.

Enjoy The Little Things

“Inspired by the little things in life! Stop for a moment and look around — find the little things that can bring true joy to you. This illustration is hand-drawn on a piece of paper, then scanned and adjusted with Photoshop (for the calendar UI). No AI tools have been used! Enjoy!” — Designed by Martin Nikolchev from Bulgaria.

A Message Of Peace And Hope

“In the city where I live, the weather in January could be unpredictable. Sometimes it is freezing cold, and sometimes there is rain and no snow at all. I like it when there is an open sky and the sun is shining. The sky is cold and has a wonderful blue color, especially in the evenings. You look at it and think about what heart desires the most.” — Designed by Wolfie from Russia.

Snowflakes

Designed by Eike Otto from Berlin, Germany.

Blue Monday

“Blue Monday may be a PR stunt, but your mental health isn’t. Do your best to protect it all year round.” — Designed by Ginger It Solutions from Serbia.

Open The Doors Of The New Year

“January is the first month of the year and usually the coldest winter month in the Northern hemisphere. The name of the month of January comes from ‘ianua’, the Latin word for door, so this month denotes the door to the new year and a new beginning. Let’s open the doors of the new year together and hope it will be the best so far!” — Designed by PopArt Studio from Serbia.

Winter Leaves

Designed by Nathalie Ouederni from France.

Bird Bird Bird Bird

“Just four birds, ready for winter.” — Designed by Vlad Gerasimov from Georgia.

Start Somewhere

“If we wait until we’re ready, we’ll be waiting for the rest of our lives. Start today — somewhere, anywhere.” — Designed by Shawna Armstrong from the United States.

Squirrel Appreciation Day

“Join us in honoring our furry little forest friends this Squirrel Appreciation Day! Whether they’re gathering nuts, building cozy homes, or brightening up winter days with their playful antics, squirrels remind us to treasure nature’s small wonders. Let’s show them some love today!” — Designed by PopArt Studio from Serbia.

Cold… Penguins!

“The new year is here! We waited for it like penguins. We look at the snow and enjoy it!” — Designed by Veronica Valenzuela from Spain.

Cheerful Chimes City

Designed by Design Studio from India.

Boom!

Designed by Elise Vanoorbeek from Belgium.

Peaceful Mountains

“When all the festivities are over, all we want is some peace and rest. That’s why I made this simple flat art wallpaper with peaceful colors.” — Designed by Jens Gilis from Belgium.

Yogabear

Designed by Ricardo Gimenes from Spain.

Be Awesome Today

“A little daily motivation to keep your cool during the month of January.” — Designed by Amalia Van Bloom from the United States.

A Fresh Start

Designed by Ricardo Gimenes from Spain.

Winter Getaway

“What could be better than a change of scene for a week? Even if you are too busy, just think about it.” — Designed by Igor Izhik from Canada.

Happy New Year ’86

Designed by Ricardo Gimenes from Spain.

Angel In Snow

Designed by Brainer from Ukraine.

January Fish

“My fish tank at home inspired me to make a wallpaper with a fish.” — Designed by Arno De Decker from Belgium.

Dare To Be You

“The new year brings new opportunities for each of us to become our true selves. I think that no matter what you are — like this little monster — you should dare to be the true you without caring what others may think. Happy New Year!” — Designed by Maria Keller from Mexico.

Oaken January

“In our country, Christmas is celebrated in January when oak branches and leaves are burnt to symbolize the beginning of the new year and new life. It’s the time when we gather with our families and celebrate the arrival of the new year in a warm and cuddly atmosphere.” — Designed by PopArt Studio from Serbia.

The Little Paradox

Designed by Ricardo Gimenes from Spain.

New Year’s Resolution

Designed by Elise Vanoorbeek from Belgium.

Don Quijote, Here We Go!

“This year we are going to travel through books, and you couldn’t start with a better one than Don Quijote de la Mancha!” — Designed by Veronica Valenzuela Jimenez from Spain.

Rubber Ducky Day

“Winter can be such a gloomy time of the year. The sun sets earlier, the wind feels colder, and our heating bills skyrocket. I hope to brighten up your month with my wallpaper for Rubber Ducky Day!” — Designed by Ilya Plyusnin from Belgium.

A New Beginning

“I wanted to do a lettering-based wallpaper because I love lettering. I chose January because for a lot of people the new year is perceived as a new beginning and I wish to make them feel as positive about it as possible! The idea is to make them feel like the new year is (just) the start of something really great.” — Designed by Carolina Sequeira from Portugal.

Japanese New Year

Designed by Evacomics from Singapore.

Happy Hot Tea Month

“You wake me up to a beautiful day; lift my spirit when I’m feeling blue. When I’m home you relieve me of the long day’s stress. You help me have a good time with my loved ones; give me company when I’m all alone. You’re none other than my favourite cup of hot tea.” — Designed by Acodez IT Solutions from India.

Don’t Forget Your Vitamins

“Discover the seasonal fruits and vegetables. In January: apple and banana enjoying the snow!” — Designed by Vitaminas Design from Spain.

Wolf Month

“Wolf-month (in Dutch ‘wolfsmaand’) is another name for January.” — Designed by Chiara Faes from Belgium.

A New Start

“The new year brings hope, festivity, lots and lots of resolutions, and many more goals that need to be achieved.” — Designed by Damn Perfect from India.

Get Featured Next Month

Feeling inspired? We’ll publish the February wallpapers on January 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[How To Design For (And With) Deaf People]]> https://smashingmagazine.com/2025/12/how-design-for-with-deaf-people/ https://smashingmagazine.com/2025/12/how-design-for-with-deaf-people/ Tue, 30 Dec 2025 10:00:00 GMT When we think about people who are deaf, we often assume stereotypes, such as “disabled” older adults with hearing aids. However, this perception is far from the truth and often leads to poor decisions and broken products.

Let’s look at when and how deafness emerges, and how to design better experiences for people with hearing loss.

Deafness Is A Spectrum

Deafness spans a broad continuum, from minor to profound hearing loss. Around 90–95% of deaf people come from hearing families, and deafness often isn’t merely a condition that people are born with. It frequently occurs due to exposure to loud noises, and it also emerges with age, disease, and accidents.

The loudness of sound is measured in units called decibels (dB). Everybody is on the spectrum of deafness, from normal hearing (up to 15 dB) to profound hearing loss (91+ dB):

  • Slight Hearing Loss, 16–25 dB
    At 16 dB hearing loss, a person can miss up to 10% of speech when a speaker is at a distance greater than 3 feet.
  • Mild Hearing Loss, 26–40 dB
    Soft sounds are hard to hear, including whispering, which is around 40 dB in volume. It’s more difficult to hear soft speech sounds spoken at a normal volume. At 40 dB hearing loss, a person may miss 50% of meeting discussions.
  • Moderate Hearing Loss, 41–55 dB
    A person may hear almost no speech when another person is talking at normal volume. At 50 dB hearing loss, a person may miss up to 80% of speech.
  • Moderately Severe Hearing Loss, 56–70 dB
    A person may have problems hearing the sounds of a dishwasher (60 dB). At 70 dB, they might miss almost all speech.
  • Severe Hearing Loss, 71–90 dB
    A person will hear no speech when someone is talking at a normal level. They may hear only some very loud noises: a vacuum (70 dB), a blender (78 dB), or a hair dryer (90 dB).
  • Profound Hearing Loss, 91+ dB
    A person will hear no speech and at most very loud sounds, such as a music player at full volume (100 dB), which would be damaging for people with normal hearing, or a car horn (110 dB).

It’s worth mentioning that loss of hearing can also be situational and temporary, as people with “normal” hearing (0 to 25 dB hearing loss) will always encounter situations where they can’t hear, e.g., due to noisy environments.

Useful Things To Know About Deafness

Assumptions are always dangerous, and in the case of deafness, there are quite a few that aren’t accurate. For example, most deaf people actually do not know a sign language — in the US, only around 1% do.

Also, despite our expectations, there is actually no universal sign language that everybody uses. For example, British signers often cannot understand American signers. Around 300 different sign languages are actively used around the world.

“We never question making content available in different written or spoken languages, and the same should apply to signed languages.”

Johanna Steiner

Sign languages are not just gestures or pantomime. They are 4D spatial languages with their own grammar and syntax, separate from spoken languages, and they don’t have a written form. They rely heavily on facial expression to convey meaning and emphasis. And they are also not universal — every country has its own sign language and dialects.

  • You can only understand 30% of words via lip-reading.
  • Most deaf people do not know any sign language.
  • Many sign languages have local dialects that can be hard to interpret.
  • Not all deaf people are fluent signers and often rely on visual clues.
  • For many deaf people, a spoken language is their second language.
  • Sign language is 4-dimensional, incorporating 3D space, time and also facial expressions.

How To Communicate Respectfully

Keep in mind that many deaf people use the spoken language of their country as their second language. So to communicate with a deaf person, it’s best to ask in writing. Don’t ask how much a person can understand, or if they can lip-read you.

However, as Rachel Edwards noted, don’t assume someone is comfortable with written language because they are deaf. Sometimes their literacy may be low, and so providing information as text and assuming that covers your deaf users might not be the answer.

Also, don’t assume that every deaf person can lip-read. You can see only about 30% of words on someone’s mouth. That’s why many deaf people need additional visual cues, like text or cued speech.

It’s also crucial to use respectful language. Deaf people do not always see themselves as disabled, but rather as a cultural linguistic minority with a unique identity. Others, as Meryl Evans has noted, don’t identify as deaf or hard of hearing, but rather as “hearing impaired”. So, it’s mostly up to the individual how they want to identify.

  • Deaf (Capital ‘D’)
    Culturally Deaf people who have been deaf since birth or before learning to speak. Sign language is often the first language, and written language is the second.
  • deaf (Lowercase ‘d’)
    People who developed hearing loss later in life. Used by people who feel closer to the hearing/hard-of-hearing world and prefer to communicate written and/or oral.
  • Hard of Hearing
    People with mild to moderate hearing loss who typically communicate orally and use hearing aids.

In general, avoid “hearing impairment” if you can, and use Deaf (for those deaf for most of their lives), deaf (for those who became deaf later), or hard of hearing (HoH) for partial hearing loss. But either way, ask politely first and then respect the person’s preferences.

Practical UX Guidelines

When designing UIs and content, consider these key accessibility guidelines for deaf and hard-of-hearing users:

  1. Don’t make the phone required or the only method of contact.
  2. Provide text alternatives for all audible alerts or notices.
  3. Add haptic feedback on mobile (e.g., vibration patterns).
  4. Ensure good lighting to help people see facial expressions.
  5. Circular seating usually works better, so everyone can see each other’s faces.
  6. Always include descriptions of non-spoken sounds (e.g., rain, laughter) in your content.
  7. Add a transcript and closed captions for audio and video.
  8. Clearly identify each speaker in all audio and video content.
  9. Design multiple ways to communicate in every instance (online + in-person).
  10. Invite video participants to keep the camera on to facilitate lip-reading and the viewing of facial expressions, which convey tone.
  11. Always test products with the actual community, instead of making assumptions for them.
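To make guidelines 6–8 concrete for the web, here is a minimal sketch of generating caption cues in WebVTT, the format used by the HTML `<track>` element. The helper functions below are hypothetical (not from any library), but the `<v Speaker>` voice tag and bracketed sound descriptions follow standard WebVTT conventions:

```javascript
// Sketch: building WebVTT caption cues that identify each speaker
// (guideline 8) and describe non-spoken sounds (guideline 6).
// formatTimestamp and buildCue are illustrative helpers, not a library API.

function formatTimestamp(seconds) {
  // WebVTT timestamps look like 00:00:05.000
  const h = String(Math.floor(seconds / 3600)).padStart(2, "0");
  const m = String(Math.floor((seconds % 3600) / 60)).padStart(2, "0");
  const s = (seconds % 60).toFixed(3).padStart(6, "0");
  return `${h}:${m}:${s}`;
}

function buildCue(start, end, text, speaker) {
  // A <v Name> voice span marks who is speaking; sound descriptions
  // are passed in brackets with no speaker, e.g. "[rain falling]".
  const body = speaker ? `<v ${speaker}>${text}</v>` : text;
  return `${formatTimestamp(start)} --> ${formatTimestamp(end)}\n${body}`;
}

const vtt = [
  "WEBVTT",
  "",
  buildCue(0, 2.5, "Welcome back, everyone.", "Maria"),
  "",
  buildCue(2.5, 4, "[rain falling]"),
].join("\n");

console.log(vtt);
```

Saved as a `.vtt` file and referenced from a `<track kind="captions">` element, cues like these let deaf and hard-of-hearing viewers follow both the dialogue and the non-spoken sounds that carry meaning.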

Wrapping Up

I keep repeating myself like a broken record, but better accessibility always benefits everyone. When we improve experiences for some groups of people, it often improves experiences for entirely different groups as well.

As Marie Van Driessche rightfully noted, to design a great experience for accessibility, we must design with people, rather than for them. And that means always including people with lived experience of exclusion in the design process — as they are the true experts.

Accessibility never happens by accident — it’s a deliberate decision and a commitment.

No digital product is neutral. There must be a deliberate effort to make products and services more accessible. Not only does it benefit everyone, but it also shows what a company stands for and values.

And once you do have a commitment, it will be so much easier to retain accessibility rather than adding it last minute as a crutch — when it’s already too late to do it right and way too expensive to do it well.

Meet “Smart Interface Design Patterns”

You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15h video course with 100s of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

Meet Smart Interface Design Patterns, our video course on interface design & UX.

Video + UX Training

$ 495.00 $ 699.00 Get Video + UX Training

25 video lessons (15h) + Live UX Training.
100-day money-back guarantee.

Video only

$ 300.00 $ 395.00
Get the video course

40 video lessons (15h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

Useful Resources

Useful Books

  • Sound Is Not Enough, by Svetlana Kouznetsova
  • Mismatch: How Inclusion Shapes Design, by Kat Holmes
  • Building for Everyone: Extend Your Product's Reach Through Inclusive Design (+ free excerpt), by Annie Jean-Baptiste
]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Giving Users A Voice Through Virtual Personas]]> https://smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/ https://smashingmagazine.com/2025/12/giving-users-voice-virtual-personas/ Tue, 23 Dec 2025 10:00:00 GMT In my previous article, I explored how AI can help us create functional personas more efficiently. We looked at building personas that focus on what users are trying to accomplish rather than demographic profiles that look good on posters but rarely change design decisions.

But creating personas is only half the battle. The bigger challenge is getting those insights into the hands of people who need them, at the moment they need them.

Every day, people across your organization make decisions that affect user experience. Product teams decide which features to prioritize. Marketing teams craft campaigns. Finance teams design invoicing processes. Customer support teams write response templates. All of these decisions shape how users experience your product or service.

And most of them happen without any input from actual users.

The Problem With How We Share User Research

You do the research. You create the personas. You write the reports. You give the presentations. You even make fancy infographics. And then what happens?

The research sits in a shared drive somewhere, slowly gathering digital dust. The personas get referenced in kickoff meetings and then forgotten. The reports get skimmed once and never opened again.

When a product manager is deciding whether to add a new feature, they probably do not dig through last year’s research repository. When the finance team is redesigning the invoice email, they almost certainly do not consult the user personas. They make their best guess and move on.

This is not a criticism of those teams. They are busy. They have deadlines. And honestly, even if they wanted to consult the research, they probably would not know where to find it or how to interpret it for their specific question.

The knowledge stays locked inside the heads of the UX team, who cannot possibly be present for every decision being made across the organization.

What If Users Could Actually Speak?

What if, instead of creating static documents that people need to find and interpret, we could give stakeholders a way to consult all of your user personas at once?

Imagine a marketing manager working on a new campaign. Instead of trying to remember what the personas said about messaging preferences, they could simply ask: “I’m thinking about leading with a discount offer in this email. What would our users think?”

And the AI, drawing on all your research data and personas, could respond with a consolidated view: how each persona would likely react, where they agree, where they differ, and a set of recommendations based on their collective perspectives. One question, synthesized insight across your entire user base.

This is not science fiction. With AI, we can build exactly this kind of system. We can take all of that scattered research (the surveys, the interviews, the support tickets, the analytics, the personas themselves) and turn it into an interactive resource that anyone can query for multi-perspective feedback.

Building the User Research Repository

The foundation of this approach is a centralized repository of everything you know about your users. Think of it as a single source of truth that AI can access and draw from.

If you have been doing user research for any length of time, you probably have more data than you realize. It is just scattered across different tools and formats:

  • Survey results sitting in your survey platform,
  • Interview transcripts in Google Docs,
  • Customer support tickets in your helpdesk system,
  • Analytics data in various dashboards,
  • Social media mentions and reviews,
  • Old personas from previous projects,
  • Usability test recordings and notes.

The first step is gathering all of this into one place. It does not need to be perfectly organized. AI is remarkably good at making sense of messy inputs.

If you are starting from scratch and do not have much existing research, you can use AI deep research tools to establish a baseline.

These tools can scan the web for discussions about your product category, competitor reviews, and common questions people ask. This gives you something to work with while you build out your primary research.

Creating Interactive Personas

Once you have your repository, the next step is creating personas that the AI can consult on behalf of stakeholders. This builds directly on the functional persona approach I outlined in my previous article, with one key difference: these personas become lenses through which the AI analyzes questions, not just reference documents.

The process works like this:

  1. Feed your research repository to an AI tool.
  2. Ask it to identify distinct user segments based on goals, tasks, and friction points.
  3. Have it generate detailed personas for each segment.
  4. Configure the AI to consult all personas when stakeholders ask questions, providing consolidated feedback.
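
As a loose illustration of step 4, here is how the "consult all personas" instruction might be assembled programmatically before being sent to an AI tool. The persona data, function name, and prompt wording are all hypothetical, not taken from any real project or API:

```javascript
// Hypothetical sketch only: persona data, names, and prompt wording are
// illustrative, not drawn from any real research repository.
const personas = [
  { name: "Weekend Warrior", summary: "Shops on mobile, price-sensitive, values speed over depth." },
  { name: "Power Planner", summary: "Researches thoroughly, distrusts discounts, reads reviews first." },
];

// Assemble the "consult all personas" instruction plus the stakeholder's
// question into a single prompt that could be passed to an LLM.
function buildStakeholderPrompt(personas, question) {
  const personaSection = personas
    .map((p) => `## ${p.name}\n${p.summary}`)
    .join("\n\n");
  return [
    "You are helping stakeholders understand our users.",
    "Consult every persona below and provide:",
    "1. How each persona would likely respond.",
    "2. Where the personas agree and where they differ.",
    "3. Recommendations based on their collective perspectives.",
    "",
    personaSection,
    "",
    `Stakeholder question: ${question}`,
  ].join("\n");
}

const prompt = buildStakeholderPrompt(
  personas,
  "I'm thinking about leading with a discount offer in this email. What would our users think?"
);
console.log(prompt);
```

The point is simply that the personas become structured inputs the AI consults on every question, rather than documents a stakeholder has to find and read.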

Here is where this approach diverges significantly from traditional personas. Because the AI is the primary consumer of these persona documents, they do not need to be scannable or fit on a single page. Traditional personas are constrained by human readability: you have to distill everything down to bullet points and key quotes that someone can absorb at a glance. But AI has no such limitation.

This means your personas can be considerably more detailed. You can include lengthy behavioral observations, contradictory data points, and nuanced context that would never survive the editing process for a traditional persona poster. The AI can hold all of this complexity and draw on it when answering questions.

You can also create different lenses or perspectives within each persona, tailored to specific business functions. Your “Weekend Warrior” persona might have a marketing lens (messaging preferences, channel habits, campaign responses), a product lens (feature priorities, usability patterns, upgrade triggers), and a support lens (common questions, frustration points, resolution preferences). When a marketing manager asks a question, the AI draws on the marketing-relevant information. When a product manager asks, it pulls from the product lens. Same persona, different depth depending on who is asking.

The personas should still include all the functional elements we discussed before: goals and tasks, questions and objections, pain points, touchpoints, and service gaps. But now these elements become the basis for how the AI evaluates questions from each persona’s perspective, synthesizing their views into actionable recommendations.

Implementation Options

You can set this up with varying levels of sophistication depending on your resources and needs.

The Simple Approach

Most AI platforms now offer project or workspace features that let you upload reference documents. In ChatGPT, these are called Projects. Claude has a similar feature. Copilot and Gemini call them Spaces or Gems.

To get started, create a dedicated project and upload your key research documents and personas. Then write clear instructions telling the AI to consult all personas when responding to questions. Something like:

You are helping stakeholders understand our users. When asked questions, consult all of the user personas in this project and provide: (1) a brief summary of how each persona would likely respond, (2) an overview highlighting where they agree and where they differ, and (3) recommendations based on their collective perspectives. Draw on all the research documents to inform your analysis. If the research does not fully cover a topic, search social platforms like Reddit, Twitter, and relevant forums to see how people matching these personas discuss similar issues. If you are still unsure about something, say so honestly and suggest what additional research might help.

This approach has some limitations. There are caps on how many files you can upload, so you might need to prioritize your most important research or consolidate your personas into a single comprehensive document.

The More Sophisticated Approach

For larger organizations or more ongoing use, a tool like Notion offers advantages because it can hold your entire research repository and has AI capabilities built in. You can create databases for different types of research, link them together, and then use the AI to query across everything.

The benefit here is that the AI has access to much more context. When a stakeholder asks a question, it can draw on surveys, support tickets, interview transcripts, and analytics data all at once. This makes for richer, more nuanced responses.

What This Does Not Replace

I should be clear about the limitations.

Virtual personas are not a substitute for talking to real users. They are a way to make existing research more accessible and actionable.

There are several scenarios where you still need primary research:

  • When launching something genuinely new that your existing research does not cover;
  • When you need to validate specific designs or prototypes;
  • When your repository data is getting stale;
  • When stakeholders need to hear directly from real humans to build empathy.

In fact, you can configure the AI to recognize these situations. When someone asks a question that goes beyond what the research can answer, the AI can respond with something like: “I do not have enough information to answer that confidently. This might be a good question for a quick user interview or survey.”

And when you do conduct new research, that data feeds back into the repository. The personas evolve over time as your understanding deepens. This is much better than the traditional approach, where personas get created once and then slowly drift out of date.

The Organizational Shift

If this approach catches on in your organization, something interesting happens.

The UX team’s role shifts from being the gatekeepers of user knowledge to being the curators and maintainers of the repository.

Instead of spending time creating reports that may or may not get read, you spend time ensuring the repository stays current and that the AI is configured to give helpful responses.

Research communication changes from push (presentations, reports, emails) to pull (stakeholders asking questions when they need answers). User-centered thinking becomes distributed across the organization rather than concentrated in one team.

This does not make UX researchers less valuable. If anything, it makes them more valuable because their work now has a wider reach and greater impact. But it does change the nature of the work.

Getting Started

If you want to try this approach, start small. If you need a primer on functional personas before diving in, I have written a detailed guide to creating them. Pick one project or team and set up a simple implementation using ChatGPT Projects or a similar tool. Gather whatever research you have (even if it feels incomplete), create one or two personas, and see how stakeholders respond.

Pay attention to what questions they ask. These will tell you where your research has gaps and what additional data would be most valuable.

As you refine the approach, you can expand to more teams and more sophisticated tooling. But the core principle stays the same: take all that scattered user knowledge and give it a voice that anyone in your organization can hear.

In my previous article, I argued that we should move from demographic personas to functional personas that focus on what users are trying to do. Now I am suggesting we take the next step: from static personas to interactive ones that can actually participate in the conversations where decisions get made.

Because every day, across your organization, people are making decisions that affect your users. And your users deserve a seat at the table, even if it is a virtual one.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[How To Measure The Impact Of Features]]> https://smashingmagazine.com/2025/12/how-measure-impact-features-tars/ https://smashingmagazine.com/2025/12/how-measure-impact-features-tars/ Fri, 19 Dec 2025 10:00:00 GMT So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. In it, Adrian highlighted how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 quadrants matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. Target Audience (%)

We start by quantifying the target audience by exploring what percentage of a product’s users have the specific problem that a feature aims to solve. We can study existing or similar features that try to solve similar problems, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with the feature: anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the usage of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”
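
To make the calculation above concrete, here is a small sketch with purely illustrative numbers. The key detail is that adoption is measured against the target audience (users who actually have the problem), not against all users:

```javascript
// Illustrative numbers only. Adoption is measured against the target
// audience, i.e. the share of users who actually have the problem.
function adoptionRate({ activeUsers, targetAudiencePct, meaningfulUsers }) {
  const targetUsers = activeUsers * (targetAudiencePct / 100);
  return (meaningfulUsers / targetUsers) * 100;
}

// A niche problem: 10% of 10,000 active users have it; 600 of them
// engage meaningfully with the feature.
const rate = adoptionRate({
  activeUsers: 10000,
  targetAudiencePct: 10,
  meaningfulUsers: 600,
});
console.log(rate); // 60: a success within the niche, despite only 6% overall usage
```
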

3. Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or specifically, how many users who engaged with the feature actually keep using it over time. Typically, it’s a strong signal for meaningful impact.

If a feature has >50% retention rate (avg.), we can be quite confident that it has a high strategic importance. A 25–35% retention rate signals medium strategic significance, and retention of 10–20% is then low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”

4. Satisfaction Score (CES)

Finally, we measure the level of satisfaction that users have with that feature that we’ve shipped. We don’t ask everyone — we ask only “retained” users. It helps us spot hidden troubles that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with it — on a scale from “much more difficult” to “much easier than expected”. This gives us a Customer Effort Score (CES) for the feature.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the percentage of Satisfied Users ÷ Target Users. It gives us a sense of how well a feature is performing for our intended target audience. Once we do that for every feature, we can map all features across 4 quadrants in a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. It might simply be features that users don’t have to use frequently, but when they do, it’s extremely effective.

Liability features have high retention but low satisfaction, so perhaps we need to work on them to improve them. And then we can also identify core features and project features — and have a conversation with designers, PMs, and engineers on what we should work on next.
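
As a rough sketch of this scoring and quadrant mapping, here is one way it could be coded. The 50% cut-offs and the example numbers are my own illustrative assumptions, loosely based on the bands above, not exact values from the framework:

```javascript
// Sketch of the 2x2 mapping. The 50% cut-offs are illustrative
// assumptions, not canonical thresholds from the TARS framework.
function classifyFeature({ retention, satisfaction }) {
  const highRetention = retention >= 50;
  const highSatisfaction = satisfaction >= 50;
  if (highRetention && highSatisfaction) return "core";
  if (!highRetention && highSatisfaction) return "overperforming";
  if (highRetention && !highSatisfaction) return "liability";
  return "project";
}

// S ÷ T: satisfied users as a percentage of the intended target audience.
function satisfiedOverTarget({ satisfiedUsers, targetUsers }) {
  return (satisfiedUsers / targetUsers) * 100;
}

console.log(classifyFeature({ retention: 70, satisfaction: 30 })); // "liability"
console.log(satisfiedOverTarget({ satisfiedUsers: 300, targetUsers: 1200 })); // 25
```
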

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it’s always very difficult to present a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives — from sales and marketing to web performance boost to seasonal effects to UX initiatives.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, a high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Aggressive urgency tactics work effectively,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Historical loyalty keeps customers coming back,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is poor or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors get in the way (price, timing, competition).

An improved conversion is the positive outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we could use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend you follow him for practical and useful guides all around just that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 3 video courses.

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Smashing Animations Part 7: Recreating Toon Text With CSS And SVG]]> https://smashingmagazine.com/2025/12/smashing-animations-part-7-recreating-toon-text-css-svg/ https://smashingmagazine.com/2025/12/smashing-animations-part-7-recreating-toon-text-css-svg/ Wed, 17 Dec 2025 10:00:00 GMT After finishing a project that required me to learn everything I could about CSS and SVG animations, I started writing this series about Smashing Animations and “How Classic Cartoons Inspire Modern CSS.” To round off this year, I want to show you how to use modern CSS to create that element that makes Toon Titles so impactful: their typography.

Title Artwork Design

In the silent era of the 1920s and early ’30s, the typography of a film’s title card created a mood, set the scene, and reminded an audience of the type of film they’d paid to see.

Cartoon title cards were also branding, mood, and scene-setting, all rolled into one. In the early years, when major studio budgets were bigger, these title cards were often illustrative and painterly.

But when television boomed during the 1950s, budgets dropped, and cards designed by artists like Lawrence “Art” Goble adopted a new visual language, becoming more graphic, stylised, and less intricate.

Note: Lawrence “Art” Goble is one of the often overlooked heroes of mid-century American animation. He primarily worked for Hanna-Barbera during its most influential years of the 1950s and 1960s.

Goble wasn’t a character animator. His role was to create atmosphere, so he designed environments for The Flintstones, Huckleberry Hound, Quick Draw McGraw, and Yogi Bear, as well as the opening title cards that set the tone. His title cards, featuring paintings with a logo overlaid, helped define the iconic look of Hanna-Barbera.

Goble’s artwork for characters such as Quick Draw McGraw and Yogi Bear was effective on smaller TV screens. Rather than reproducing a still from the cartoon, he focused on presenting a single, strong idea — often in silhouette — that captured its essence. In “The Buzzin’ Bear,” Yogi buzzes by in a helicopter. He bounces away, pic-a-nic basket in hand, in “Bear on a Picnic,” and for his “Prize Fight Fright,” Yogi boxes the title text.

With little or no motion to rely on, Goble’s single frames had to create a mood, set the scene, and describe a story. They did this using flat colours, graphic shapes, and typography that was frequently integrated into the artwork.

As designers who work on the web, toon titles can teach us plenty about how to convey a brand’s personality, make a first impression, and set expectations for someone’s experience using a product or website. We can learn from the artists’ techniques to create effective banners, landing-page headers, and even good ol’ fashioned splash screens.

Toon Title Typography

Cartoon title cards show how merging type with imagery delivers the punch a header or hero needs. With a handful of text-shadow, text-stroke, and transform tricks, modern CSS lets you tap into that same energy.

The Toon Text Title Generator

Partway through writing this article, I realised it would be useful to have a tool for generating text styled like the cartoon titles I love so much. So I made one.

My Toon Text Title Generator lets you experiment with colours, strokes, and multiple text shadows. You can adjust paint order, apply letter spacing, preview your text in a selection of sample fonts, and then copy the generated CSS straight to your clipboard to use in a project.

Toon Title CSS

You can simply copy-paste the CSS that the Toon Text Title Generator provides you. But let’s look closer at what it does.

Text shadow

Look at the type in this title from Augie Doggie’s episode “Yuk-Yuk Duck,” with its pale yellow letters and dark, hard, offset shadow that lifts it off the background and creates the illusion of depth.

You probably already know that text-shadow accepts four values: (1) horizontal and (2) vertical offsets, (3) blur, and (4) a colour which can be solid or semi-transparent. Those offset values can be positive or negative, so I can replicate “Yuk-Yuk Duck” using a hard shadow pulled down and to the right:

color: #f7f76d;
text-shadow: 5px 5px 0 #1e1904;

On the other hand, this “Pint Giant” title has a different feel with its negative semi-soft shadow:

color: #c2a872;
text-shadow:
  -7px 5px 0 #1b100e,
  0 -5px 10px #546c6f;

To add extra depth and create more interesting effects, I can layer multiple shadows. For “Let’s Duck Out,” I combine four shadows: the first a solid shadow with a negative horizontal offset to lift the text off the background, followed by progressively softer shadows to create a blur around it:

color: #6F4D80;
text-shadow:
  -5px 5px 0 #260e1e, /* Shadow 1 */
  0 0 15px #e9ce96,   /* Shadow 2 */
  0 0 30px #e9ce96,   /* Shadow 3 */
  0 0 30px #e9ce96;   /* Shadow 4 */

These shadows show that using text-shadow isn’t just about creating lighting effects, as they can also be decorative and add personality.

Text Stroke

Many cartoon title cards feature letters with a bold outline that makes them stand out from the background. I can recreate this effect using text-stroke. For a long time, this property was only available via a -webkit- prefix, but that prefixed version is supported across all modern browsers today.

text-stroke is a shorthand for two properties. The first, text-stroke-width, draws a contour around individual letters, while the second, text-stroke-color, controls its colour. For “Whatever Goes Pup,” I added a 4px blue stroke to the yellow text:

color: #eff0cd;
-webkit-text-stroke: 4px #7890b5;
text-stroke: 4px #7890b5;

Strokes can be especially useful when they’re combined with shadows, so for “Growing, Growing, Gone,” I added a thin 3px stroke to a barely blurred 1px shadow to create this three-dimensional text effect:

color: #fbb999;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke: 3px #984336;
text-stroke: 3px #984336;

Paint Order

Using text-stroke doesn’t always produce the expected result, especially with thinner letters and thicker strokes, because by default the browser draws a stroke over the fill. Sadly, CSS still does not permit me to adjust stroke placement as I often do in Sketch. However, the paint-order property has values that allow me to place the stroke behind, rather than in front of, the fill.

paint-order: stroke paints the stroke first, then the fill, whereas paint-order: fill does the opposite:

color: #fbb999;
paint-order: stroke;
text-shadow: 3px 5px 1px #5160b1;
-webkit-text-stroke-color: #984336;
-webkit-text-stroke-width: 3px;
text-stroke-color: #984336;
text-stroke-width: 3px;

An effective stroke keeps letters readable, adds weight, and — when combined with shadows and paint order — gives flat text real presence.

Backgrounds Inside Text

Many cartoon title cards go beyond flat colour by adding texture, gradients, or illustrated detail to the lettering. Sometimes that’s a texture, other times it might be a gradient with a subtle tonal shift. On the web, I can recreate this effect by using a background image or gradient behind the text, and then clipping it to the shape of the letters. This relies on two properties working together: background-clip: text and text-fill-color: transparent.

First, I apply a background behind the text. This can be a bitmap or vector image or a CSS gradient. For this example from the Quick Draw McGraw episode “Baba Bait,” the title text includes a subtle top–bottom gradient from dark to light:

background: linear-gradient(0deg, #667b6a, #1d271a);

Next, I clip that background to the glyphs and make the text transparent so the background shows through:

-webkit-background-clip: text;
-webkit-text-fill-color: transparent;

With just those two lines, the background is no longer painted behind the text; instead, it’s painted within it. This technique works especially well when combined with strokes and shadows. A clipped gradient provides the lettering with colour and texture, a stroke keeps its edges sharp, and a shadow elevates it from the background. Together, they recreate the layered look of hand-painted title cards using nothing more than a little CSS. As always, test clipped text carefully, as browser quirks can sometimes affect shadows and rendering.
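
As a rough, combined sketch of that layered look (the selector name and colours are placeholders, and the result should be tested carefully across browsers):

```css
/* Illustrative only: gradient fill clipped to the glyphs, a stroke painted
   behind the fill, and a hard offset shadow lifting it off the background. */
.toon-title {
  background: linear-gradient(0deg, #667b6a, #1d271a);
  -webkit-background-clip: text;
  background-clip: text;
  -webkit-text-fill-color: transparent;
  -webkit-text-stroke: 3px #1d271a;
  paint-order: stroke;
  text-shadow: 5px 5px 0 rgb(30 25 4 / 0.5);
}
```
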

Splitting Text Into Individual Characters

Sometimes I don’t want to style a whole word or heading. I want to style individual letters — to nudge a character into place, give one glyph extra weight, or animate a few letters independently.

In plain HTML and CSS, there’s only one reliable way to do that: wrap each character in its own span element. I could do that manually, but that would be fragile, hard to maintain, and would quickly fall apart when copy changes. Instead, when I need per-letter control, I use a text-splitting library like splt.js (although other solutions are available). This takes a text node and automatically wraps words or characters, giving me extra hooks to animate and style without messing up my markup.

It’s an approach that keeps my HTML readable and semantic, while giving me the fine-grained control I need to recreate the uneven, characterful typography you see in classic cartoon title cards. However, this approach comes with accessibility caveats, as most screen readers read text nodes in order. So this:

<h2>Hum Sweet Hum</h2>

…reads as you’d expect:

Hum Sweet Hum

But this:

<h2>
<span>H</span>
<span>u</span>
<span>m</span>
<!-- etc. -->
</h2>

…can be interpreted differently depending on the browser and screen reader. Some will concatenate the letters and read the words correctly. Others may pause between letters, which in a worst-case scenario might sound like:

“H…” “U…” “M…”

Sadly, some splitting solutions don’t deliver an always accessible result, so I’ve written my own text splitter, splinter.js, which is currently in beta.

Transforming Individual Letters

To activate my Toon Text Splitter, I add a data- attribute to the element I want to split:

<h2 data-split="toon">Hum Sweet Hum</h2>

First, my script separates each word into individual letters and wraps them in a span element with class and ARIA attributes applied:

<span class="toon-char" aria-hidden="true">H</span>
<span class="toon-char" aria-hidden="true">u</span>
<span class="toon-char" aria-hidden="true">m</span>

The script then takes the initial content of the split element and adds it as an aria-label attribute to help maintain accessibility:

<h2 data-split="toon" aria-label="Hum Sweet Hum">
  <span class="toon-char" aria-hidden="true">H</span>
  <span class="toon-char" aria-hidden="true">u</span>
  <span class="toon-char" aria-hidden="true">m</span>
</h2>
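
For illustration only, here is a string-based sketch of this kind of splitting. splitToonText() is a hypothetical helper I made up for this example, not the actual splinter.js source:

```javascript
// Simplified, string-based sketch of the splitting described above.
// splitToonText() is a hypothetical helper, not the actual splinter.js source.
function splitToonText(text) {
  const chars = [...text]
    .map((ch) =>
      ch === " "
        ? " "
        : `<span class="toon-char" aria-hidden="true">${ch}</span>`
    )
    .join("");
  // Keep the original text available to assistive technology via aria-label,
  // while the per-character spans are hidden from it.
  return `<h2 data-split="toon" aria-label="${text}">${chars}</h2>`;
}

console.log(splitToonText("Hum"));
```
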

With those class attributes applied, I can then style individual characters as I choose.

For example, for “Hum Sweet Hum,” I want to replicate how its letters shift away from the baseline. After using my Toon Text Splitter, I applied four different translate values using several :nth-child selectors to create a semi-random look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { translate: 0 -8px; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { translate: 0 -4px; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { translate: 0 4px; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { translate: 0 8px; }

But translate is only one property I can use to transform my toon text.

I could also rotate those individual characters for an even more chaotic look:

/* 4th, 8th, 12th... */
.toon-char:nth-child(4n) { rotate: -4deg; }
/* 1st, 5th, 9th... */
.toon-char:nth-child(4n+1) { rotate: -8deg; }
/* 2nd, 6th, 10th... */
.toon-char:nth-child(4n+2) { rotate: 4deg; }
/* 3rd, 7th, 11th... */
.toon-char:nth-child(4n+3) { rotate: 8deg; }


And, of course, I could add animations to jiggle those characters and bring my toon text style titles to life. First, I created a keyframe animation that rotates the characters:

@keyframes jiggle {
  0%, 100% { transform: rotate(var(--base-rotate, 0deg)); }
  25% { transform: rotate(calc(var(--base-rotate, 0deg) + 3deg)); }
  50% { transform: rotate(calc(var(--base-rotate, 0deg) - 2deg)); }
  75% { transform: rotate(calc(var(--base-rotate, 0deg) + 1deg)); }
}

Before applying it to the span elements created by my Toon Text Splitter:

.toon-char {
  animation: jiggle 3s infinite ease-in-out;
  transform-origin: center bottom;
}

And finally, setting the rotation amount and a delay before each character begins to jiggle:

.toon-char:nth-child(4n) { --base-rotate: -2deg; }
.toon-char:nth-child(4n+1) { --base-rotate: -4deg; }
.toon-char:nth-child(4n+2) { --base-rotate: 2deg; }
.toon-char:nth-child(4n+3) { --base-rotate: 4deg; }

.toon-char:nth-child(4n) { animation-delay: 0.1s; }
.toon-char:nth-child(4n+1) { animation-delay: 0.3s; }
.toon-char:nth-child(4n+2) { animation-delay: 0.5s; }
.toon-char:nth-child(4n+3) { animation-delay: 0.7s; }

One Frame To Make An Impression

Cartoon title artists had one frame to make an impression, and their typography was as important as the artwork they painted. The same is true on the web.

A well-designed header or hero area needs clarity, character, and confidence — not simply a faded full-width background image.

With a few carefully chosen CSS properties — shadows, strokes, clipped backgrounds, and some restrained animation — we can recreate that same impact. I love toon text not because I’m nostalgic, but because its design is intentional. Make deliberate choices, and let a little toon text typography add punch to your designs.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[Accessible UX Research, eBook Now Available For Download]]> https://smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/ https://smashingmagazine.com/2025/12/accessible-ux-research-ebook-release/ Tue, 09 Dec 2025 16:00:00 GMT This article is sponsored by Accessible UX Research

Smashing Library expands again! We’re so happy to announce our newest book, Accessible UX Research, is now available for download in eBook formats. Michele A. Williams takes us for a deep dive into the real world of UX testing, and provides a road map for including users with different abilities and needs in every phase of testing.

But the truth is, you don’t need to be conducting UX testing or even be a UX professional to get a lot out of this book. Michele gives in-depth descriptions of the assistive technology we should all be familiar with, in addition to disability etiquette, common pitfalls when creating accessible prototypes, and so much more. You’ll refer to this book again and again in your daily work.

This is also your last chance to get your printed copy at our discounted presale price. We expect printed copies to start shipping in early 2026. We know you’ll love this book, but don’t just take our word for it — we asked a few industry experts to check out Accessible UX Research too:

“Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.

This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”

Eric Bailey, Accessibility Advocate
“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”

Devon Pershing, Author of The Accessibility Operations Guidebook
“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”

Manuel Matuzović, Author of the Web Accessibility Cookbook
“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”

Anna E. Cook, Accessibility and Inclusive Design Specialist
About The Book

The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.

Inside, you’ll learn how to:

  • Plan research that includes disabled participants from the start,
  • Recruit participants with disabilities,
  • Facilitate sessions that work for a range of access needs,
  • Ask better questions and avoid unintentionally biased research methods,
  • Build trust and confidence in your team around accessibility and inclusion.

The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.

High-quality hardcover, 320 pages. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print edition shipping early 2026. eBook now available for download. Download a free sample (PDF, 2.3MB) and reserve your print copy at the presale price.

“Accessible UX Research” shares successful strategies that’ll help you recruit the participants you need for the study you’re designing.

Contents
  1. Disability mindset: For inclusive research to succeed, we must first confront our mindset about disability, typically influenced by ableism.
  2. Diversity of disability: Accessibility is not solely about blind screen reader users; disability categories help us unpack and process the diversity of disabled users.
  3. Disability in the stages of UX research: Disabled participants can and should be part of every research phase — formative, prototype, and summative.
  4. Recruiting disabled participants: Recruiting disabled participants is not always easy, but that simply means we need to learn strategies on where to look.
  5. Designing your research: While our goal is to influence accessible products, our research execution must also be accessible.
  6. Facilitating an accessible study: Preparation and communication with your participants can ensure your study logistics run smoothly.
  7. Analyzing and reporting with accuracy and impact: How you communicate your findings is just as important as gathering them in the first place — so prepare to be a storyteller, educator, and advocate.
  8. Disability in the UX research field: Inclusion isn’t just for research participants; it’s important for our colleagues as well, as explained by blind UX researcher Dr. Cynthia Bennett.
The book will challenge your disability mindset and what it means to be truly inclusive in your work.

Who This Book Is For

Whether you’re a UX professional conducting research in industry or academia, or part of a broader engineering, product, or design function, you’ll want to read this book if…

  1. You have been tasked with improving the accessibility of your product, but need to know where to start to facilitate this successfully.
  2. You want to establish a culture of accessibility in your company, but aren’t sure how to make it work.
  3. You want to move beyond WCAG/EAA compliance to established accessibility practices and inclusion in research practices and beyond.
  4. You want to improve your overall accessibility knowledge and be viewed as an Accessibility Specialist for your organization.
About the Author

Dr. Michele A. Williams is owner of M.A.W. Consulting, LLC - Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.

Community Matters ❤️

Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! ;-)

More Smashing Books & Goodies

Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Trine, Heather, and Steven are three of these people. Have you checked out their books already?

The Ethical Design Handbook

A practical guide on ethical design for digital products.


Understanding Privacy

Everything you need to know to put your users first and make a better web.


Touch Design for Mobile Interfaces

Learn how touchscreen devices really work — and how people really use them.


]]>
hello@smashingmagazine.com (Ari Stiles)
<![CDATA[State, Logic, And Native Power: CSS Wrapped 2025]]> https://smashingmagazine.com/2025/12/state-logic-native-power-css-wrapped-2025/ https://smashingmagazine.com/2025/12/state-logic-native-power-css-wrapped-2025/ Tue, 09 Dec 2025 10:00:00 GMT We have moved far beyond the days when simply asking for border-radius made us feel like we were living in the future. We are currently living in a moment where the platform is handing us tools that don’t just tweak the visual layer, but fundamentally redefine how we architect interfaces. I thought the number of features announced in 2024 couldn’t be topped. I’ve never been so happily wrong.

The Chrome team’s “CSS Wrapped 2025” is not just a list of features; it is a manifesto for a dynamic, native web. As someone who has spent a couple of years documenting these evolutions — from defining “CSS5” eras to the intricacies of modern layout utilities — I find myself looking at this year’s wrap-up with a huge sense of excitement. We are seeing a shift towards “Optimized Ergonomics” and “Next-gen interactions” that allow us to stop fighting the code and start sculpting interfaces in their natural state.

In this article, you can find a comprehensive look at the standout features from Chrome’s report, viewed through the lens of my recent experiments and hopes for the future of the platform.

The Component Revolution: Finally, A Native Customizable Select

For years, we have relied on heavy JavaScript libraries to style dropdowns, a “decades-old problem” that the platform has finally solved. As I detailed in my deep dive into the history of the customizable select (and related articles), this has been a long road involving Open UI, bikeshedding names like <selectmenu> and <selectlist>, and finally landing on a solution that re-uses the existing <select> element.

The introduction of appearance: base-select is a strong foundation. It allows us to fully customize the <select> element — including the button and the dropdown list (via ::picker(select)) — using standard CSS. Crucially, this is built with progressive enhancement in mind. By wrapping our styles in a feature query, we ensure a seamless experience across all browsers.

We can opt in to this new behavior without breaking older browsers:

select {
  /* Opt-in for the new customizable select */
  @supports (appearance: base-select) {
    &, &::picker(select) {
      appearance: base-select;
    }
  }
}

Allowing rich content inside options, such as images or flags, is a fantastic addition and a lot of fun. We can create all sorts of selects nowadays:

  • Demo: I created a Poké-adventure demo showing how the new <selectedcontent> element can clone rich content (like a Pokéball icon) from an option directly into the button.

See the Pen A customizable select with images inside of the options and the selectedcontent [forked] by utilitybend.

See the Pen A customizable select with only pseudo-elements [forked] by utilitybend.

See the Pen An actual Select Menu with optgroups [forked] by utilitybend.

This feature alone signals a massive shift in how we will build forms, reducing dependencies and technical debt.

Scroll Markers And The Death Of The JavaScript Carousel

Creating carousels has historically been a friction point between developers and clients. Clients love them; developers dread the JavaScript required to make them accessible and performant. The arrival of the ::scroll-marker and ::scroll-button() pseudo-elements changes this dynamic entirely.

These features allow us to create navigation dots and scroll buttons purely with CSS, linked natively to the scroll container. As I wrote on my blog, this was Love at first slide. The ability to create a fully functional, accessible slider without a single line of JavaScript is not just convenient; it is a triumph for performance. There are valid accessibility concerns around this feature, but there may well be ways for us developers to address them. The good news is that these native UI primitives are much easier to work with than custom DOM manipulation and hand-managed ARIA attributes, but I digress…

We can now group markers automatically using scroll-marker-group and style the buttons using anchor positioning to place them exactly where we want.

.carousel {
  overflow-x: auto;
  anchor-name: --carousel; /* Referenced by position-anchor below */
  scroll-marker-group: after; /* Creates the container for dots */

  /* Create the buttons */
  &::scroll-button(inline-end),
  &::scroll-button(inline-start) {
    content: " ";
    position: absolute;
    /* Use anchor positioning to center them */
    position-anchor: --carousel;
    top: anchor(center);
  }

  /* Create the markers on the children */
  div {
    &::scroll-marker {
      content: " ";
      width: 24px;
      height: 24px;
      border-radius: 50%;
      cursor: pointer;
    }
    /* Highlight the active marker */
    &::scroll-marker:target-current {
      background: white;
    }
  }
}

See the Pen Carousel Pure HTML and CSS [forked] by utilitybend.

See the Pen Webshop slick slider remake in CSS [forked] by utilitybend.

State Queries: Sticky Thing Stuck? Snappy Thing Snapped?

For a long time, we have lacked the ability to know if a “sticky thing is stuck” or if a “snappy item is snapped” without relying on IntersectionObserver hacks. Chrome 133 introduced scroll-state queries, allowing us to query these states declaratively.

By setting container-type: scroll-state, we can now style children based on whether they are stuck, snapped, or overflowing. This is a massive quality-of-life improvement that I have been eagerly awaiting since CSS Day 2023. It has even evolved since then: we can now also query the direction of the scroll. Lovely!

For a simple example: we can finally apply a shadow to a header only when it is actually sticking to the top of the viewport:

.header-container {
  container-type: scroll-state;
  position: sticky;
  top: 0;

  header {
    transition: box-shadow 0.5s ease-out;
    /* The query checks the state of the container */
    @container scroll-state(stuck: top) {
      box-shadow: rgba(0, 0, 0, 0.6) 0px 12px 28px 0px;
    }
  }
}
  • Demo: A sticky header that only applies a shadow when it is actually stuck.

See the Pen Sticky headers with scroll-state query, checking if the sticky element is stuck [forked] by utilitybend.

  • Demo: A Pokémon-themed list that uses scroll-state queries combined with anchor positioning to move a frame over the currently snapped character.

See the Pen Scroll-state query to check which item is snapped with CSS, Pokemon version [forked] by utilitybend.
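
Querying a snapped item works much like the sticky example, except the snap target itself becomes the scroll-state container. A minimal sketch, assuming a horizontal snap scroller (the class names are illustrative):

```css
.scroller {
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}

.item {
  scroll-snap-align: center;
  /* The snap target is the container whose state we query */
  container-type: scroll-state;

  img {
    transition: scale 0.3s ease-out;
    /* Scale up the image while this item is snapped on the x-axis */
    @container scroll-state(snapped: x) {
      scale: 1.1;
    }
  }
}
```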

Optimized Ergonomics: Logic In CSS

The “Optimized Ergonomics” section of CSS Wrapped highlights features that make our workflows more intuitive. Three features stand out as transformative for how we write logic:

  1. if() Statements
    We are finally getting conditionals in CSS. The if() function acts like a ternary operator for stylesheets, allowing us to apply values based on media, support, or style queries inline. This reduces the need for verbose @media blocks for single property changes.
  2. @function
    Custom functions finally let us move repeated calculation logic into one reusable place, resulting in cleaner files. A real quality-of-life feature.
  3. sibling-index() and sibling-count()
    These tree-counting functions solve the issue of staggering animations or styling items based on list size. As I explored in Styling siblings with CSS has never been easier, this eliminates the need to hard-code custom properties (like --index: 1) in our HTML.
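
Both if() and @function are still experimental and Chrome-first, so treat the snippet below as a hedged sketch of the syntax rather than settled best practice; the function name --fluid and the 600px breakpoint are purely illustrative:

```css
/* An illustrative custom function: clamp a size between two bounds */
@function --fluid(--min, --max) {
  result: clamp(var(--min), 4vw, var(--max));
}

.card {
  /* Inline conditional: a single property change, no @media block */
  padding: if(media(width > 600px): 2rem; else: 1rem);
  font-size: --fluid(1rem, 1.5rem);
}
```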

Example: Calculating Layouts

We can now write concise mathematical formulas. For example, staggering an animation for cards entering the screen becomes trivial:

.card-container > * {
  animation: reveal 0.6s ease-out forwards;
  /* No more manual --index variables! */
  animation-delay: calc(sibling-index() * 0.1s);
}

I even experimented with using these functions along with trigonometry to place items in a perfect circle without any JavaScript.

See the Pen Stagger cards using sibling-index() [forked] by utilitybend.

  • Demo: Placing items in a perfect circle using sibling-index, sibling-count, and the new CSS @function feature.

See the Pen The circle using sibling-index, sibling-count and functions [forked] by utilitybend.
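
The core of that circle experiment can be sketched in a few lines; the 120px radius is an arbitrary illustrative value:

```css
ul {
  position: relative;
}

li {
  /* Each sibling gets an equal slice of the full circle */
  --angle: calc((sibling-index() - 1) * (360deg / sibling-count()));
  position: absolute;
  top: 50%;
  left: 50%;
  /* Convert the angle to x/y offsets with CSS trigonometry */
  translate: calc(cos(var(--angle)) * 120px - 50%)
             calc(sin(var(--angle)) * 120px - 50%);
}
```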

My CSS To-Do List: Features I Can’t Wait To Try

While I have been busy sculpting selects and transitions, the “CSS Wrapped 2025” report is packed with other goodies that I haven’t had the chance to fire up in CodePen yet. These are high on my list for my next experiments:

Anchored Container Queries

I used CSS Anchor Positioning for the buttons in my carousel demo, but “CSS Wrapped” highlights an evolution of this: Anchored Container Queries. This solves a problem we’ve all had with tooltips: if the browser flips the tooltip from top to bottom because of space constraints, the “arrow” often stays pointing the wrong way. With anchored container queries (@container anchored(fallback: flip-block)), we can style the element based on which fallback position the browser actually chose.
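
Since I haven’t tried this one yet, the following is a speculative sketch based only on the query syntax above; the anchor name --trigger is illustrative, and the exact opt-in for establishing the query container may differ once the feature stabilizes:

```css
.tooltip {
  position: absolute;
  position-anchor: --trigger; /* Illustrative anchor name */
  position-area: block-start; /* Prefer showing above the trigger */
  position-try-fallbacks: flip-block;
}

.tooltip .arrow {
  /* Assumption: the anchored tooltip acts as the query container */
  @container anchored(fallback: flip-block) {
    rotate: 180deg; /* Keep the arrow pointing at the trigger */
  }
}
```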

Nested View Transition Groups

View Transitions have been a revolution, but they came with a specific trade-off: they flattened the element tree, which often broke 3D transforms or overflow: clip. I always had a feeling that it was missing something, and this might just be the answer. By using view-transition-group: nearest, we can finally nest transition groups within each other.

This allows us to maintain clipping effects or 3D rotations during a transition — something that was previously impossible because the elements were hoisted up to the top level.

.card img {
  view-transition-name: photo;
  view-transition-group: nearest; /* Keep it nested! */
}

Typography and Shapes

Finally, the ergonomist in me is itching to try Text Box Trim, which promises to remove that annoying extra whitespace above and below text content (the leading) to finally achieve perfect vertical alignment. And for the creative side, corner-shape and the shape() function are opening up non-rectangular layouts, allowing for squircles and complex paths that respond to CSS variables. I can’t wait to build a design full of squircles!
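
Both are easy to play with behind a flag; a hedged sketch, assuming the current experimental syntax:

```css
h1 {
  /* Trim the extra leading above cap height and below the baseline */
  text-box: trim-both cap alphabetic;
}

.card {
  /* Turn rounded corners into superellipse "squircle" corners */
  border-radius: 24px;
  corner-shape: squircle;
}
```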

A Hopeful Future

We are witnessing a world where CSS is becoming capable of handling logic, state, and complex interactions that previously belonged to JavaScript. Features like moveBefore (preserving DOM state for iframes/videos) and attr() (using types beyond strings for colors and grids) further cement this reality.

While some of these features are currently experimental or specific to Chrome, the momentum is undeniable. We must hope for continued support across all browsers through initiatives like Interop to ensure these capabilities become the baseline. That being said, a healthy diversity of browser engines is just as important as getting all these awesome features “Chrome-first”. These new features need to be discussed, tinkered with, and tested before ever landing in every browser.

It is a fantastic moment to get into CSS. We are no longer just styling documents; we are crafting dynamic, ergonomic, and robust applications with a native toolkit that is more powerful than ever.

Let’s get going with this new era and spread the word.

This is CSS Wrapped!

]]>
hello@smashingmagazine.com (Brecht De Ruyte)
<![CDATA[How UX Professionals Can Lead AI Strategy]]> https://smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/ https://smashingmagazine.com/2025/12/how-ux-professionals-can-lead-ai-strategy/ Mon, 08 Dec 2025 08:00:00 GMT Your senior management is excited about AI. They’ve read the articles, attended the webinars, and seen the demos. They’re convinced that AI will transform your organization, boost productivity, and give you a competitive edge.

Meanwhile, you’re sitting in your UX role wondering what this means for your team, your workflow, and your users. You might even be worried about your job security.

The problem is that the conversation about how AI gets implemented is happening right now, and if you’re not part of it, someone else will decide how it affects your work. That someone probably doesn’t understand user experience, research practices, or the subtle ways poor implementation can damage the very outcomes management hopes to achieve.

You have a choice. You can wait for directives to come down from above, or you can take control of the conversation and lead the AI strategy for your practice.

Why UX Professionals Must Own the AI Conversation

Management sees AI as efficiency gains, cost savings, competitive advantage, and innovation all wrapped up in one buzzword-friendly package. They’re not wrong to be excited. The technology is genuinely impressive and can deliver real value.

But without UX input, AI implementations often fail users in predictable ways:

  • They automate tasks without understanding the judgment calls those tasks require.
  • They optimize for speed while destroying the quality that made your work valuable.

Your expertise positions you perfectly to guide implementation. You understand users, workflows, quality standards, and the gap between what looks impressive in a demo and what actually works in practice.

Use AI Momentum to Advance Your Priorities

Management’s enthusiasm for AI creates an opportunity to advance priorities you’ve been fighting for unsuccessfully. When management is willing to invest in AI, you can connect those long-standing needs to the AI initiative. Position user research as essential for training AI systems on real user needs. Frame usability testing as the validation method that ensures AI-generated solutions actually work.

How AI gets implemented will shape your team’s roles, your users’ experiences, and your organization’s capability to deliver quality digital products.

Your Role Isn’t Disappearing (It’s Evolving)

Yes, AI will automate some of the tasks you currently do. But someone needs to decide which tasks get automated, how they get automated, what guardrails to put in place, and how automated processes fit around real humans doing complex work.

That someone should be you.

Think about what you already do. When you conduct user research, AI might help you transcribe interviews or identify themes. But you’re the one who knows which participant hesitated before answering, which feedback contradicts what you observed in their behavior, and which insights matter most for your specific product and users.

When you design interfaces, AI might generate layout variations or suggest components from your design system. But you’re the one who understands the constraints of your technical platform, the political realities of getting designs approved, and the edge cases that will break a clever solution.

Your future value comes from the work you’re already doing:

  • Seeing the full picture.
    You understand how this feature connects to that workflow, how this user segment differs from that one, and why the technically correct solution won’t work in your organization’s reality.
  • Making judgment calls.
    You decide when to follow the design system and when to break it, when user feedback reflects a real problem versus a feature request from one vocal user, and when to push back on stakeholders versus find a compromise.
  • Connecting the dots.
    You translate between technical constraints and user needs, between business goals and design principles, between what stakeholders ask for and what will actually solve their problem.

AI will keep getting better at individual tasks. But you’re the person who decides which solution actually works for your specific context. The people who will struggle are those doing simple, repeatable work without understanding why. Your value is in understanding context, making judgment calls, and connecting solutions to real problems.

Step 1: Understand Management’s AI Motivations

Before you can lead the conversation, you need to understand what’s driving it. Management is responding to real pressures: cost reduction, competitive pressure, productivity gains, and board expectations.

Speak their language.
When you talk to management about AI, frame everything in terms of ROI, risk mitigation, and competitive advantage. “This approach will protect our quality standards” is less compelling than “This approach reduces the risk of damaging our conversion rate while we test AI capabilities.”

Separate hype from reality.
Take time to research what AI capabilities actually exist versus what’s hype. Read case studies, try tools yourself, and talk to peers about what’s actually working.

Identify real pain points.
Look for problems AI might legitimately address in your organization. Maybe your team spends hours formatting research findings, or accessibility testing creates bottlenecks. These are the problems worth solving.

Step 2: Audit Your Current State and Opportunities

Map your team’s work. Where does time actually go? Look at the past quarter and categorize how your team spent their hours.

Identify high-volume, repeatable tasks versus high-judgment work.
Repeatable tasks are candidates for automation. High-judgment work is where you add irreplaceable value.

Also, identify what you’ve wanted to do but couldn’t get approved.
This is your opportunity list. Maybe you’ve wanted quarterly usability tests, but only get budget annually. Write these down separately. You’ll connect them to your AI strategy in the next step.

Spot opportunities where AI could genuinely help:

  • Research synthesis:
    AI can help organize and categorize findings.
  • Analyzing user behavior data:
    AI can process analytics and session recordings to surface patterns you might miss.
  • Rapid prototyping:
    AI can quickly generate testable prototypes, speeding up your test cycles.

Step 3: Define AI Principles for Your UX Practice

Before you start forming your strategy, establish principles that will guide every decision.

Set non-negotiables.
User privacy, accessibility, and human oversight of significant decisions. Write these down and get agreement from leadership before you pilot anything.

Define criteria for AI use.
AI is good at pattern recognition, summarization, and generating variations. AI is poor at understanding context, making ethical judgments, and knowing when rules should be broken.

Define success metrics beyond efficiency.
Yes, you want to save time. But you also need to measure quality, user satisfaction, and team capability. Build a balanced scorecard that captures what actually matters.

Create guardrails.
Maybe every AI-generated interface needs human review before it ships. These guardrails prevent the obvious disasters and give you space to learn safely.

Step 4: Build Your AI-in-UX Strategy

Now you’re ready to build the actual strategy you’ll pitch to leadership. Start small with pilot projects that have a clear scope and evaluation criteria.

Connect to business outcomes management cares about.
Don’t pitch “using AI for research synthesis.” Pitch “reducing time from research to insights by 40%, enabling faster product decisions.”

Piggyback your existing priorities on AI momentum.
Remember that opportunity list from Step 2? Now you connect those long-standing needs to your AI strategy. If you’ve wanted more frequent usability testing, explain that AI implementations need continuous validation to catch problems before they scale. AI implementations genuinely benefit from good research practices. You’re simply using management’s enthusiasm for AI as the vehicle to finally get resources for practices that should have been funded all along.

Define roles clearly.
Where do humans lead? Where does AI assist? Where won’t you automate? Management needs to understand that some work requires human judgment and should never be fully automated.

Plan for capability building.
Your team will need training and new skills. Budget time and resources for this.

Address risks honestly.
AI could generate biased recommendations, miss important context, or produce work that looks good but doesn’t actually function. For each risk, explain how you’ll detect it and what you’ll do to mitigate it.

Step 5: Pitch the Strategy to Leadership

Frame your strategy as de-risking management’s AI ambitions, not blocking them. You’re showing them how to implement AI successfully while avoiding the obvious pitfalls.

Lead with outcomes and ROI they care about.
Put the business case up front.

Bundle your wish list into the AI strategy.
When you present your strategy, include those capabilities you’ve wanted but couldn’t get approved before. Don’t present them as separate requests. Integrate them as essential components. “To validate AI-generated designs, we’ll need to increase our testing frequency from annual to quarterly” sounds much more reasonable than “Can we please do more testing?” You’re explaining what’s required for their AI investment to succeed.

Show quick wins alongside a longer-term vision.
Identify one or two pilots that can show value within 30-60 days. Then show them how those pilots build toward bigger changes over the next year.

Ask for what you need.
Be specific. You need a budget for tools, time for pilots, access to data, and support for team training.

Step 6: Implement and Demonstrate Value

Run your pilots with clear before-and-after metrics. Measure everything: time saved, quality maintained, user satisfaction, team confidence.

Document wins and learning.
Failures are useful too. If a pilot doesn’t work out, document why and what you learned.

Share progress in management’s language. Monthly updates should focus on business outcomes, not technical details. “We’ve reduced research synthesis time by 35% while maintaining quality scores” is the right level of detail.

Build internal advocates by solving real problems.
When your AI pilots make someone’s job easier, you create advocates who will support broader adoption.

Iterate based on what works in your specific context. Not every AI application will fit your organization. Pay attention to what’s actually working and double down on that.

Taking Initiative Beats Waiting

AI adoption is happening. The question isn’t whether your organization will use AI, but whether you’ll shape how it gets implemented.

Your UX expertise is exactly what’s needed to implement AI successfully. You understand users, quality, and the gap between impressive demos and useful reality.

Take one practical first step this week.
Schedule 30 minutes to map one AI opportunity in your practice. Pick one area where AI might help, think through how you’d pilot it safely, and sketch out what success would look like.

Then start the conversation with your manager. You might be surprised how receptive they are to someone stepping up to lead this.

You know how to understand user needs, test solutions, measure outcomes, and iterate based on evidence. Those skills don’t change just because AI is involved. You’re applying your existing expertise to a new tool.

Your role isn’t disappearing. It’s evolving into something more strategic, more valuable, and more secure. But only if you take the initiative to shape that evolution yourself.

Further Reading On SmashingMag

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[Beyond The Black Box: Practical XAI For UX Practitioners]]> https://smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/ https://smashingmagazine.com/2025/12/beyond-black-box-practical-xai-ux-practitioners/ Fri, 05 Dec 2025 15:00:00 GMT In my last piece, we established a foundational truth: for users to adopt and rely on AI, they must trust it. We talked about trust being a multifaceted construct, built on perceptions of an AI’s Ability, Benevolence, Integrity, and Predictability. But what happens when an AI, in its silent, algorithmic wisdom, makes a decision that leaves a user confused, frustrated, or even hurt? A mortgage application is denied, a favorite song is suddenly absent from a playlist, and a qualified resume is rejected before a human ever sees it. In these moments, ability and predictability are shattered, and benevolence feels a world away.

Our conversation now must evolve from the why of trust to the how of transparency. The field of Explainable AI (XAI), which focuses on developing methods to make AI outputs understandable to humans, has emerged to address this, but it’s often framed as a purely technical challenge for data scientists. I argue it’s a critical design challenge for products relying on AI. It’s our job as UX professionals to bridge the gap between algorithmic decision-making and human understanding.

This article provides practical, actionable guidance on how to research and design for explainability. We’ll move beyond the buzzwords and into the mockups, translating complex XAI concepts into concrete design patterns you can start using today.

De-mystifying XAI: Core Concepts For UX Practitioners

XAI is about answering the user’s question: “Why?” Why was I shown this ad? Why is this movie recommended to me? Why was my request denied? Think of it as the AI showing its work on a math problem. Without it, you just have an answer, and you’re forced to take it on faith. In showing the steps, you build comprehension and trust. You also allow for your work to be double-checked and verified by the very humans it impacts.

Feature Importance And Counterfactuals

There are a number of techniques we can use to clarify or explain what is happening with AI. While methods range from providing the entire logic of a decision tree to generating natural language summaries of an output, two of the most practical and impactful types of information UX practitioners can introduce into an experience are feature importance (Figure 1) and counterfactuals. These are often the most straightforward for users to understand and the most actionable for designers to implement.

Feature Importance

This explainability method answers, “What were the most important factors the AI considered?” It’s about identifying the top 2-3 variables that had the biggest impact on the outcome. It’s the headline, not the whole story.

Example: Imagine an AI that predicts whether a customer will churn (cancel their service). Feature importance might reveal that “number of support calls in the last month” and “recent price increases” were the two most important factors in determining if a customer was likely to churn.
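On the front end, surfacing feature importance often amounts to sorting the scores the model team exposes and keeping the top few. A minimal JavaScript sketch, with the churn factors and weights invented purely for illustration:

```javascript
// Hypothetical importance scores returned by a churn model's API.
const importances = {
  "Support calls in the last month": 0.42,
  "Recent price increases": 0.31,
  "Account age": 0.12,
  "Region": 0.06,
};

// Keep only the top factors: the headline, not the whole story.
function topFactors(scores, n = 2) {
  return Object.entries(scores)
    .sort(([, a], [, b]) => b - a)
    .slice(0, n)
    .map(([name]) => name);
}

console.log(topFactors(importances));
// ["Support calls in the last month", "Recent price increases"]
```

The UI then renders just those two names as the explanation, with the rest available behind a "details" affordance if needed.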

Counterfactuals

This powerful method answers, “What would I need to change to get a different outcome?” This is crucial because it gives users a sense of agency. It transforms a frustrating “no” into an actionable “not yet.”

Example: Imagine a loan application system that uses AI. A user is denied a loan. Instead of just seeing “Application Denied,” a counterfactual explanation would also share, “If your credit score were 50 points higher, or if your debt-to-income ratio were 10% lower, your loan would have been approved.” This gives the applicant clear, actionable steps to take toward potentially getting a loan in the future.
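Counterfactuals like these can be generated by probing the decision rule for the smallest changes that would flip the outcome. A toy JavaScript sketch — the thresholds below stand in for a real underwriting model and are purely illustrative:

```javascript
// Toy decision rule standing in for a real underwriting model.
const approves = ({ creditScore, dtiRatio }) =>
  creditScore >= 700 && dtiRatio <= 0.35;

// For a denied applicant, report the single-variable changes
// that would flip the outcome to "approved".
function counterfactuals(applicant) {
  if (approves(applicant)) return [];
  const suggestions = [];
  if (applicant.creditScore < 700) {
    suggestions.push(
      `Raise your credit score by ${700 - applicant.creditScore} points`
    );
  }
  if (applicant.dtiRatio > 0.35) {
    const pct = Math.round((applicant.dtiRatio - 0.35) * 100);
    suggestions.push(`Lower your debt-to-income ratio by ${pct}%`);
  }
  return suggestions;
}

console.log(counterfactuals({ creditScore: 650, dtiRatio: 0.45 }));
// ["Raise your credit score by 50 points",
//  "Lower your debt-to-income ratio by 10%"]
```

A production system would get these deltas from the model pipeline rather than hard-coded thresholds, but the user-facing output is the same: an actionable "not yet" instead of a dead-end "no".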

Using Model Data To Enhance The Explanation

Although the technical specifics are often handled by data scientists, it’s helpful for UX practitioners to know the tools commonly used to extract these “why” insights from complex models: LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions by approximating the model locally, and SHAP (SHapley Additive exPlanations), which uses a game-theoretic approach to explain the output of any machine learning model. These libraries essentially break down an AI’s decision to show which inputs were most influential for a given outcome.

When done properly, the data underlying an AI tool’s decision can be used to tell a powerful story. Let’s walk through feature importance and counterfactuals and show how the data science behind the decision can be utilized to enhance the user’s experience.

Now, let’s cover feature importance with the assistance of local explanation data (e.g., from LIME): This approach answers, “Why did the AI make this specific recommendation for me, right now?” Instead of a general explanation of how the model works, it provides a focused reason for a single, specific instance. It’s personal and contextual.

Example: Imagine an AI-powered music recommendation system like Spotify. A local explanation would answer, “Why did the system recommend this specific song by Adele to you right now?” The explanation might be: “Because you recently listened to several other emotional ballads and songs by female vocalists.”

Finally, let’s cover adding value-based explanation data (e.g., SHAP, or SHapley Additive exPlanations) to the explanation of a decision: This is a more nuanced version of feature importance that answers, “How did each factor push the decision one way or the other?” It helps visualize what mattered, and whether its influence was positive or negative.

Example: Imagine a bank uses an AI model to decide whether to approve a loan application.

Feature Importance: The model output might show that the applicant’s credit score, income, and debt-to-income ratio were the most important factors in its decision. This answers what mattered.

Feature Importance with Value-Based Explanations (SHAP): SHAP values take feature importance further by showing how strongly each factor pushed the decision, and in which direction.

  • For an approved loan, SHAP might show that a high credit score significantly pushed the decision towards approval (positive influence), while a slightly higher-than-average debt-to-income ratio pulled it slightly away (negative influence), but not enough to deny the loan.
  • For a denied loan, SHAP could reveal that a low income and a high number of recent credit inquiries strongly pushed the decision towards denial, even if the credit score was decent.

This helps the loan officer explain to the applicant not just what was considered, but how each factor contributed to the final “yes” or “no” decision.
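For a simple linear scoring model, these signed pushes can be illustrated directly: each factor contributes its weight times its deviation from a baseline. (Real SHAP values come from the data science pipeline; the weights and baselines below are invented for illustration only.)

```javascript
// Illustrative linear loan score: each feature's contribution is
// weight * (value - baseline), so the sign shows the push direction.
const weights  = { creditScore: 0.5, income: 0.3, inquiries: -8 };
const baseline = { creditScore: 650, income: 50, inquiries: 2 };

function contributions(applicant) {
  return Object.keys(weights).map((feature) => ({
    feature,
    push: weights[feature] * (applicant[feature] - baseline[feature]),
  }));
}

const result = contributions({ creditScore: 700, income: 40, inquiries: 6 });
// creditScore: +25 (pushes toward approval)
// income:       -3 (pulls slightly away)
// inquiries:   -32 (pushes strongly toward denial)
```

The positive and negative `push` values are exactly what a "push-and-pull" bar chart would plot for the applicant.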

It’s crucial to recognize that the ability to provide good explanations often starts much earlier in the development cycle. Data scientists and engineers play a pivotal role by intentionally structuring models and data pipelines in ways that inherently support explainability, rather than trying to bolt it on as an afterthought.

Research and design teams can foster this by initiating early conversations with data scientists and engineers about user needs for understanding, contributing to the development of explainability metrics, and collaboratively prototyping explanations to ensure they are both accurate and user-friendly.

XAI And Ethical AI: Unpacking Bias And Responsibility

Beyond building trust, XAI plays a critical role in addressing the profound ethical implications of AI, particularly concerning algorithmic bias. Explainability techniques, such as analyzing SHAP values, can reveal if a model’s decisions are disproportionately influenced by sensitive attributes like race, gender, or socioeconomic status, even if these factors were not explicitly used as direct inputs.

For instance, if a loan approval model consistently assigns negative SHAP values to applicants from a certain demographic, it signals a potential bias that needs investigation, empowering teams to surface and mitigate such unfair outcomes.

The power of XAI also comes with the potential for “explainability washing.” Just as “greenwashing” misleads consumers about environmental practices, explainability washing can occur when explanations are designed to obscure, rather than illuminate, problematic algorithmic behavior or inherent biases. This could manifest as overly simplistic explanations that omit critical influencing factors, or explanations that strategically frame results to appear more neutral or fair than they truly are. It underscores the ethical responsibility of UX practitioners to design explanations that are genuinely transparent and verifiable.

UX professionals, in collaboration with data scientists and ethicists, hold a crucial responsibility in communicating the why of a decision, and also the limitations and potential biases of the underlying AI model. This involves setting realistic user expectations about AI accuracy, identifying where the model might be less reliable, and providing clear channels for recourse or feedback when users perceive unfair or incorrect outcomes. Proactively addressing these ethical dimensions will allow us to build AI systems that are truly just and trustworthy.

From Methods To Mockups: Practical XAI Design Patterns

Knowing the concepts is one thing; designing them is another. Here’s how we can translate these XAI methods into intuitive design patterns.

Pattern 1: The "Because" Statement (for Feature Importance)

This is the simplest and often most effective pattern. It’s a direct, plain-language statement that surfaces the primary reason for an AI’s action.

  • Heuristic: Be direct and concise. Lead with the single most impactful reason. Avoid jargon at all costs.
  • Example: Imagine a music streaming service. Instead of just presenting a “Discover Weekly” playlist, you add a small line of microcopy.

Song Recommendation: “Velvet Morning”
Because you listen to “The Fuzz” and other psychedelic rock.

Pattern 2: The "What-If" Interactive (for Counterfactuals)

Counterfactuals are inherently about empowerment. The best way to represent them is by giving users interactive tools to explore possibilities themselves. This is perfect for financial, health, or other goal-oriented applications.

  • Heuristic: Make explanations interactive and empowering. Let users see the cause and effect of their choices.
  • Example: A loan application interface. After a denial, instead of a dead end, the user gets a tool to determine how various scenarios (what-ifs) might play out (See Figure 1).
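Under the hood, a “what-if” tool simply re-runs the decision function on hypothetical inputs each time a slider moves. A JavaScript sketch, reusing an invented toy approval rule (not a real underwriting model):

```javascript
// The same kind of toy decision rule the denial was based on.
const approves = ({ creditScore, dtiRatio }) =>
  creditScore >= 700 && dtiRatio <= 0.35;

// Re-evaluate the outcome for a hypothetical scenario:
// the user's real data plus whatever sliders they have moved.
function whatIf(actual, overrides) {
  const scenario = { ...actual, ...overrides };
  return approves(scenario) ? "Would be approved" : "Still denied";
}

const actual = { creditScore: 650, dtiRatio: 0.45 };
whatIf(actual, { creditScore: 720 });                // "Still denied"
whatIf(actual, { creditScore: 720, dtiRatio: 0.3 }); // "Would be approved"
```

In a real interface, each slider’s `input` event would call `whatIf` and update the result label, giving the user immediate cause-and-effect feedback.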

Pattern 3: The Highlight Reel (For Local Explanations)

When an AI performs an action on a user’s content (like summarizing a document or identifying faces in photos), the explanation should be visually linked to the source.

  • Heuristic: Use visual cues like highlighting, outlines, or annotations to connect the explanation directly to the interface element it’s explaining.
  • Example: An AI tool that summarizes long articles.

AI-Generated Summary Point:
Initial research showed a market gap for sustainable products.

Source in Document:
“...Our Q2 analysis of market trends conclusively demonstrated that no major competitor was effectively serving the eco-conscious consumer, revealing a significant market gap for sustainable products...”
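To draw that visual link, the UI has to locate the supporting passage inside the source document. Assuming the summarizer returns a verbatim quote alongside each point (a hypothetical API shape, not from any specific library), the lookup can be sketched like this:

```javascript
// Find the character range of a supporting quote inside the source
// document, so the UI can wrap it in a <mark> or draw an outline.
function findSourceRange(documentText, quote) {
  const start = documentText.indexOf(quote);
  if (start === -1) return null; // quote not found verbatim
  return { start, end: start + quote.length };
}

const doc =
  "Intro. Our Q2 analysis revealed a significant market gap for " +
  "sustainable products. Conclusion.";

const range = findSourceRange(doc, "a significant market gap");
// range.start/range.end delimit the span to highlight
```

Real summarizers may paraphrase rather than quote, in which case a fuzzier match (or source offsets returned by the API) is needed; the principle of anchoring the explanation to a concrete span stays the same.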

Pattern 4: The Push-and-Pull Visual (for Value-based Explanations)

For more complex decisions, users might need to understand the interplay of factors. Simple data visualizations can make this clear without being overwhelming.

  • Heuristic: Use simple, color-coded data visualizations (like bar charts) to show the factors that positively and negatively influenced a decision.
  • Example: An AI screening a candidate’s profile for a job.

Why this candidate is a 75% match:

Factors pushing the score up:
  • 5+ Years UX Research Experience
  • Proficient in Python

Factors pushing the score down:
  • No experience with B2B SaaS

Learning and using these design patterns in the UX of your AI product will help increase its explainability. You can also use additional techniques that I’m not covering in depth here, including the following:

  • Natural language explanations: Translating an AI’s technical output into simple, conversational human language that non-experts can easily understand.
  • Contextual explanations: Providing a rationale for an AI’s output at the specific moment and location it is most relevant to the user’s task.
  • Relevant visualizations: Using charts, graphs, or heatmaps to visually represent an AI’s decision-making process, making complex data intuitive and easier for users to grasp.

A Note For the Front End: Translating these explainability outputs into seamless user experiences also presents its own set of technical considerations. Front-end developers often grapple with API design to efficiently retrieve explanation data, and performance implications (like the real-time generation of explanations for every user interaction) need careful planning to avoid latency.
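One common mitigation for that latency concern is to fetch each explanation once and cache it per decision. A sketch of a memoized loader in JavaScript — the `createExplanationLoader` name and the `/explanations` endpoint are hypothetical, not from any particular framework:

```javascript
// Cache explanation payloads per decision ID so the "Learn more"
// panel never triggers the same expensive request twice.
function createExplanationLoader(fetchExplanation) {
  const cache = new Map();
  return async function load(decisionId) {
    if (!cache.has(decisionId)) {
      // Store the promise itself, so concurrent calls share one request.
      cache.set(decisionId, fetchExplanation(decisionId));
    }
    return cache.get(decisionId);
  };
}

// Usage with a hypothetical API endpoint:
// const load = createExplanationLoader((id) =>
//   fetch(`/explanations/${id}`).then((r) => r.json())
// );
```

Because the cache stores promises rather than resolved values, two components asking for the same explanation at the same moment still produce a single network request.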

Some Real-world Examples

UPS Capital’s DeliveryDefense

UPS uses AI to assign a “delivery confidence score” to addresses to predict the likelihood of a package being stolen. Their DeliveryDefense software analyzes historical data on location, loss frequency, and other factors. If an address has a low score, the system can proactively reroute the package to a secure UPS Access Point, providing an explanation for the decision (e.g., “Package rerouted to a secure location due to a history of theft”). This system demonstrates how XAI can be used for risk mitigation and building customer trust through transparency.

Autonomous Vehicles

Vehicles of the future will need to use XAI effectively to make safe, explainable decisions. When a self-driving car brakes suddenly, the system can provide a real-time explanation for its action, for example, by identifying a pedestrian stepping into the road. This is not only crucial for passenger comfort and trust but is a regulatory requirement to prove the safety and accountability of the AI system.

IBM Watson Health (and its challenges)

While often cited as a general example of AI in healthcare, it’s also a valuable case study for the importance of XAI. The failure of its Watson for Oncology project highlights what can go wrong when explanations are not clear, or when the underlying data is biased or not localized. The system’s recommendations were sometimes inconsistent with local clinical practices because they were based on U.S.-centric guidelines. This serves as a cautionary tale on the need for robust, context-aware explainability.

The UX Researcher’s Role: Pinpointing And Validating Explanations

Our design solutions are only effective if they address the right user questions at the right time. An explanation that answers a question the user doesn’t have is just noise. This is where UX research becomes the critical connective tissue in an XAI strategy, ensuring that we explain the what and how that actually matters to our users. The researcher’s role is twofold: first, to inform the strategy by identifying where explanations are needed, and second, to validate the designs that deliver those explanations.

Informing the XAI Strategy (What to Explain)

Before we can design a single explanation, we must understand the user’s mental model of the AI system. What do they believe it’s doing? Where are the gaps between their understanding and the system’s reality? This is the foundational work of a UX researcher.

Mental Model Interviews: Unpacking User Perceptions Of AI Systems

Through deep, semi-structured interviews, UX practitioners can gain invaluable insights into how users perceive and understand AI systems. These sessions are designed to encourage users to literally draw or describe their internal “mental model” of how they believe the AI works. This often involves asking open-ended questions that prompt users to explain the system’s logic, its inputs, and its outputs, as well as the relationships between these elements.

These interviews are powerful because they frequently reveal profound misconceptions and assumptions that users hold about AI. For example, a user interacting with a recommendation engine might confidently assert that the system is based purely on their past viewing history. They might not realize that the algorithm also incorporates a multitude of other factors, such as the time of day they are browsing, the current trending items across the platform, or even the viewing habits of similar users.

Uncovering this gap between a user’s mental model and the actual underlying AI logic is critically important. It tells us precisely what specific information we need to communicate to users to help them build a more accurate and robust mental model of the system. This, in turn, is a fundamental step in fostering trust. When users understand, even at a high level, how an AI arrives at its conclusions or recommendations, they are more likely to trust its outputs and rely on its functionality.

AI Journey Mapping: A Deep Dive Into User Trust And Explainability

By meticulously mapping the user’s journey with an AI-powered feature, we gain invaluable insights into the precise moments where confusion, frustration, or even profound distrust emerge. This uncovers critical junctures where the user’s mental model of how the AI operates clashes with its actual behavior.

Consider a music streaming service: Does the user’s trust plummet when a playlist recommendation feels “random,” lacking any discernible connection to their past listening habits or stated preferences? This perceived randomness is a direct challenge to the user’s expectation of intelligent curation and a breach of the implicit promise that the AI understands their taste. Similarly, in a photo management application, do users experience significant frustration when an AI photo-tagging feature consistently misidentifies a cherished family member? This error is more than a technical glitch; it strikes at the heart of accuracy, personalization, and even emotional connection.

These pain points are vivid signals indicating precisely where a well-placed, clear, and concise explanation is necessary. Such explanations serve as crucial repair mechanisms, mending a breach of trust that, if left unaddressed, can lead to user abandonment.

The power of AI journey mapping lies in its ability to move us beyond simply explaining the final output of an AI system. While understanding what the AI produced is important, it’s often insufficient. Instead, this process compels us to focus on explaining the process at critical moments. This means addressing:

  • Why a particular output was generated: Was it due to specific input data? A particular model architecture?
  • What factors influenced the AI’s decision: Were certain features weighted more heavily?
  • How the AI arrived at its conclusion: Can we offer a simplified, analogous explanation of its internal workings?
  • What assumptions the AI made: Were there implicit understandings of the user’s intent or data that need to be surfaced?
  • What the limitations of the AI are: Clearly communicating what the AI cannot do, or where its accuracy might waver, builds realistic expectations.

AI journey mapping transforms the abstract concept of XAI into a practical, actionable framework for UX practitioners. It enables us to move beyond theoretical discussions of explainability and instead pinpoint the exact moments where user trust is at stake, providing the necessary insights to build AI experiences that are powerful, transparent, understandable, and trustworthy.

Ultimately, research is how we uncover the unknowns. Your team might be debating how to explain why a loan was denied, but research might reveal that users are far more concerned with understanding how their data was used in the first place. Without research, we are simply guessing what our users are wondering.

Collaborating On The Design (How to Explain Your AI)

Once research has identified what to explain, the collaborative loop with design begins. Designers can prototype the patterns we discussed earlier—the “Because” statement, the interactive sliders—and researchers can put those designs in front of users to see if they hold up.

Targeted Usability & Comprehension Testing: We can design research studies that specifically test the XAI components. We don’t just ask, “Is this easy to use?” We ask, “After seeing this, can you tell me in your own words why the system recommended this product?” or “Show me what you would do to see if you could get a different result.” The goal here is to measure comprehension and actionability, alongside usability.

Measuring Trust Itself: We can use simple surveys and rating scales before and after an explanation is shown. For instance, we can ask a user on a 5-point scale, “How much do you trust this recommendation?” before they see the “Because” statement, and then ask them again afterward. This provides quantitative data on whether our explanations are actually moving the needle on trust.
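Those before/after ratings form a simple paired comparison. A sketch of the arithmetic, with invented sample ratings from five participants:

```javascript
// Mean change in 5-point trust ratings, before vs. after
// participants saw the "Because" statement.
function meanTrustDelta(before, after) {
  const deltas = before.map((b, i) => after[i] - b);
  const mean = deltas.reduce((sum, d) => sum + d, 0) / deltas.length;
  return Number(mean.toFixed(2));
}

meanTrustDelta([2, 3, 3, 4, 2], [4, 4, 3, 5, 3]);
// → 1: on average the explanation raised trust by one point
```

With a larger sample, the same paired data supports a significance test, but even the raw delta is a useful directional signal during iterative design.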

This process creates a powerful, iterative loop. Research findings inform the initial design. That design is then tested, and the new findings are fed back to the design team for refinement. Maybe the “Because” statement was too jargony, or the “What-If” slider was more confusing than empowering. Through this collaborative validation, we ensure that the final explanations are technically accurate, genuinely understandable, useful, and trust-building for the people using the product.

The Goldilocks Zone Of Explanation

A critical word of caution: it is possible to over-explain. As in the fairy tale, where Goldilocks sought the porridge that was ‘just right’, the goal of a good explanation is to provide the right amount of detail—not too much and not too little. Bombarding a user with every variable in a model will lead to cognitive overload and can actually decrease trust. The goal is not to make the user a data scientist.

One solution is progressive disclosure.

  1. Start with the simple. Lead with a concise “Because” statement. For most users, this will be enough.
  2. Offer a path to detail. Provide a clear, low-friction link like “Learn More” or “See how this was determined.”
  3. Reveal the complexity. Behind that link, you can offer the interactive sliders, the visualizations, or a more detailed list of contributing factors.

This layered approach respects user attention and expertise, providing just the right amount of information for their needs. Let’s imagine you’re using a smart home device that recommends optimal heating based on various factors.

Start with the simple: “Your home is currently heated to 72 degrees, which is the optimal temperature for energy savings and comfort.”

Offer a path to detail: Below that, a small link or button: “Why is 72 degrees optimal?”

Reveal the complexity: Clicking that link could open a new screen showing:

  • Interactive sliders for outside temperature, humidity, and your preferred comfort level, demonstrating how these adjust the recommended temperature.
  • A visualization of energy consumption at different temperatures.
  • A list of contributing factors like “Time of day,” “Current outside temperature,” “Historical energy usage,” and “Occupancy sensors.”
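In code, this layered reveal is just a small piece of UI state. A minimal JavaScript sketch — the layer contents and the `createDisclosure` helper are illustrative, not from any library:

```javascript
// Progressive disclosure: reveal one layer of explanation at a time.
const layers = [
  "Heated to 72°F — optimal for energy savings and comfort.", // simple
  "Main factors: time of day and outside temperature.",       // more detail
  "Full breakdown: historical usage, occupancy, humidity…",   // complexity
];

function createDisclosure(layers) {
  let visible = 1; // start with the simple "Because" layer only
  return {
    shown: () => layers.slice(0, visible),
    reveal: () => { visible = Math.min(visible + 1, layers.length); },
  };
}

const panel = createDisclosure(layers);
panel.shown().length; // 1 — most users stop here
panel.reveal();       // user clicks "Why is 72 degrees optimal?"
panel.shown().length; // 2
```

Each “Learn more” click calls `reveal()`, and the view never shows more than the user has asked for.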

It’s effective to combine multiple XAI methods, and the Goldilocks Zone of Explanation pattern, with its emphasis on progressive disclosure, implicitly encourages this. You might start with a simple “Because” statement (Pattern 1) for immediate comprehension, and then offer a “Learn More” link that reveals a “What-If” Interactive (Pattern 2) or a “Push-and-Pull Visual” (Pattern 4) for deeper exploration.

For instance, a loan application system could initially state the primary reason for denial (feature importance), then allow the user to interact with a “What-If” tool to see how changes to their income or debt would alter the outcome (counterfactuals), and finally, provide a detailed “Push-and-Pull” chart (value-based explanation) to illustrate the positive and negative contributions of all factors. This layered approach allows users to access the level of detail they need, when they need it, preventing cognitive overload while still providing comprehensive transparency.

Determining which XAI tools and methods to use is primarily a function of thorough UX research. Mental model interviews and AI journey mapping are crucial for pinpointing user needs and pain points related to AI understanding and trust. Mental model interviews help uncover user misconceptions about how the AI works, indicating areas where fundamental explanations (like feature importance or local explanations) are needed. AI journey mapping, on the other hand, identifies critical moments of confusion or distrust in the user’s interaction with the AI, signaling where more granular or interactive explanations (like counterfactuals or value-based explanations) would be most beneficial to rebuild trust and provide agency.

Ultimately, the best way to choose a technique is to let user research guide your decisions, ensuring that the explanations you design directly address actual user questions and concerns, rather than simply offering technical details for their own sake.

XAI for Deep Reasoning Agents

Some of the newest AI systems, known as deep reasoning agents, produce an explicit “chain of thought” for every complex task. They do not merely cite sources; they show the logical, step-by-step path they took to arrive at a conclusion. While this transparency provides valuable context, a play-by-play that spans several paragraphs can feel overwhelming to a user simply trying to complete a task.

The principles of XAI, especially the Goldilocks Zone of Explanation, apply directly here. We can curate the journey, using progressive disclosure to show only the final conclusion and the most salient step in the thought process first. Users can then opt in to see the full, detailed, multi-step reasoning when they need to double-check the logic or find a specific fact. This approach respects user attention while preserving the agent’s full transparency.

Next Steps: Empowering Your XAI Journey

Explainability is a fundamental pillar for building trustworthy and effective AI products. For the advanced practitioner looking to drive this change within their organization, the journey extends beyond design patterns into advocacy and continuous learning.

To deepen your understanding and practical application, consider exploring resources like the AI Explainability 360 (AIX360) toolkit from IBM Research or Google’s What-If Tool, which offer interactive ways to explore model behavior and explanations. Engaging with communities like the Responsible AI Forum or specific research groups focused on human-centered AI can provide invaluable insights and collaboration opportunities.

Finally, be an advocate for XAI within your own organization. Frame explainability as a strategic investment. Consider a brief pitch to your leadership or cross-functional teams:

“By investing in XAI, we’ll go beyond building trust; we’ll accelerate user adoption, reduce support costs by empowering users with understanding, and mitigate significant ethical and regulatory risks by exposing potential biases. This is good design and smart business.”

Your voice, grounded in practical understanding, is crucial in bringing AI out of the black box and into a collaborative partnership with users.

]]>
hello@smashingmagazine.com (Victor Yocco)
<![CDATA[Masonry: Things You Won’t Need A Library For Anymore]]> https://smashingmagazine.com/2025/12/masonry-things-you-wont-need-library-anymore/ https://smashingmagazine.com/2025/12/masonry-things-you-wont-need-library-anymore/ Tue, 02 Dec 2025 10:00:00 GMT About 15 years ago, I was working at a company where we built apps for travel agents, airport workers, and airline companies. We also built our own in-house framework for UI components and single-page app capabilities.

We had components for everything: fields, buttons, tabs, ranges, datatables, menus, datepickers, selects, and multiselects. We even had a div component. Our div component was great, by the way: it allowed us to do rounded corners on all browsers, which, believe it or not, wasn’t an easy thing to do at the time.

Our work took place at a point in our history when JS, Ajax, and dynamic HTML were seen as a revolution that brought us into the future. Suddenly, we could update a page dynamically, get data from a server, and avoid having to navigate to other pages, which was seen as slow and flashed a big white rectangle on the screen between the two pages.

There was a phrase, made popular by Jeff Atwood (the founder of StackOverflow), which read:

“Any application that can be written in JavaScript will eventually be written in JavaScript.”

Jeff Atwood

To us at the time, this felt like a dare to actually go and create those apps. It felt like a blanket approval to do everything with JS.

So we did everything with JS, and we didn’t really take the time to research other ways of doing things. We didn’t really feel the incentive to properly learn what HTML and CSS could do. We didn’t really perceive the web as an evolving app platform in its entirety. We mostly saw it as something we needed to work around, especially when it came to browser support. We could just throw more JS at it to get things done.

Would taking the time to learn more about how the web worked and what was available on the platform have helped me? Sure, I could probably have shaved a bunch of code that wasn’t truly needed. But, at the time, maybe not that much.

You see, browser differences were pretty significant back then. This was a time when Internet Explorer was still the dominant browser, with Firefox a close second but starting to lose market share as Chrome rapidly gained popularity. Although Chrome and Firefox were quite good at agreeing on web standards, the environments in which our apps were running meant that we had to support IE6 for a long time. Even when we were allowed to support IE8, we still had to deal with a lot of differences between browsers. Not only that, but the web of the time just didn’t have that many capabilities built right into the platform.

Fast forward to today. Things have changed tremendously. Not only do we have more of these capabilities than ever before, but the rate at which they become available has increased as well.

Let me ask the question again, then: Would taking the time to learn more about how the web works and what is available on the platform help you today? Absolutely yes. Learning to understand and use the web platform today puts you at a huge advantage over other developers.

Whether you work on performance, accessibility, responsiveness, all of them together, or just shipping UI features, if you want to do it as a responsible engineer, knowing the tools that are available to you helps you reach your goals faster and better.

Some Things You Might Not Need A Library For Anymore

Knowing what browsers support today, the question, then, is: What can we ditch? Do we need a div component to do rounded corners in 2025? Of course, we don’t. The border-radius property has been supported by all currently used browsers for more than 15 years at this point. And corner-shape is also coming soon, for even fancier corners.

Let’s take a look at relatively recent features that are now available in all major browsers, and which you can use to replace existing dependencies in your codebase.

The point isn’t to immediately ditch all your beloved libraries and rewrite your codebase. As with everything else, you’ll need to take browser support into account first and decide based on other factors specific to your project. The following features are implemented in the three main browser engines (Chromium, WebKit, and Gecko), but you might have different browser support requirements that prevent you from using them right away. Now is still a good time to learn about these features, though, and perhaps plan to use them at some point.

Popovers And Dialogs

The Popover API, the <dialog> HTML element, and the ::backdrop pseudo-element can help you get rid of dependencies on popup, tooltip, and dialog libraries, such as Floating UI, Tippy.js, Tether, or React Tooltip.

They handle accessibility and focus management for you, out of the box, are highly customizable by using CSS, and can easily be animated.
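For reference, a minimal popover needs no JavaScript at all. The markup below is a sketch using the standard attributes (the ID and copy are invented):

```html
<button popovertarget="score-help">What does this score mean?</button>

<div id="score-help" popover>
  The score estimates delivery confidence from historical data.
  <!-- Light dismiss and Esc handling are built in; style the
       overlay with the ::backdrop pseudo-element in CSS. -->
</div>
```

The `popovertarget` attribute wires the button to the popover by ID; the default `popover` behavior ("auto") handles stacking, dismissal, and keyboard interaction for you.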

Accordions

The <details> element, its name attribute for mutually exclusive elements, and the ::details-content pseudo-element remove the need for accordion components like the Bootstrap Accordion or the React Accordion component.

Just using the platform here means it’s easier for folks who know HTML/CSS to understand your code without having to first learn to use a specific library. It also means you’re immune to breaking changes in the library or the discontinuation of that library. And, of course, it means less code to download and run. Mutually exclusive details elements don’t need JS to open, close, or animate.
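For reference, an exclusive accordion is now just shared markup: giving each `<details>` the same `name` makes the browser close the others automatically. A sketch (content invented):

```html
<details name="faq" open>
  <summary>How do refunds work?</summary>
  <p>Refunds are issued to the original payment method.</p>
</details>

<details name="faq">
  <summary>Can I change my booking?</summary>
  <p>Yes, up to 24 hours before departure.</p>
</details>
<!-- Opening one closes the other; no JavaScript involved. -->
```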

CSS Syntax

Cascade layers (for a more organized CSS codebase), CSS nesting (for more compact CSS), new color features such as relative colors and color-mix(), and new math functions like abs(), sign(), and pow() all help reduce dependencies on CSS pre-processors, utility libraries like Bootstrap and Tailwind, or even runtime CSS-in-JS libraries.

The game-changing :has() selector, one of the most requested CSS features for years, removes the need for more complicated JS-based solutions.
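
For instance, styling a parent based on what it contains, something that previously required JavaScript (the class names below are illustrative):

```css
/* Highlight a whole field wrapper when the input inside it is invalid. */
.field:has(input:invalid) {
  border-color: crimson;
}

/* Switch a card to a two-column layout only when it contains an image. */
.card:has(img) {
  display: grid;
  grid-template-columns: 150px 1fr;
}
```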

JS Utilities

Modern Array methods like findLast(), or at(), as well as Set methods like difference(), intersection(), union() and others can reduce dependencies on libraries like Lodash.
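
A quick sketch of a few of these. Note that the Set methods need a recent runtime (Node 22+, modern browsers), so this sketch feature-detects them and falls back to a plain filter:

```javascript
// Array.prototype.findLast: search from the end without copying/reversing.
const readings = [2, 7, 4, 9, 3];
const lastAboveFive = readings.findLast((n) => n > 5); // 9

// Array.prototype.at: negative indices count from the end.
const lastReading = readings.at(-1); // 3

// Set methods (intersection, union, difference) are newer,
// so feature-detect before relying on them.
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);
const common =
  typeof a.intersection === "function"
    ? [...a.intersection(b)]
    : [...a].filter((x) => b.has(x)); // [2, 3] either way
```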

Container Queries

Container queries make UI components respond to things other than the viewport size, and therefore make them more reusable across different contexts.

No need to use a JS-heavy UI library for this anymore, and no need to use a polyfill either.
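
A minimal sketch (class names are illustrative): the card switches to a horizontal layout when its container, not the viewport, is wide enough.

```css
/* Make the wrapper a query container. */
.card-wrapper {
  container-type: inline-size;
}

.card {
  display: grid;
  gap: 1rem;
}

/* Respond to the container's width, wherever the card is placed. */
@container (min-width: 400px) {
  .card {
    grid-template-columns: 150px 1fr;
  }
}
```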

Layout

Grid, subgrid, flexbox, or multi-column have been around for a long time now, but looking at the results of the State of CSS surveys, it’s clear that developers tend to be very cautious with adopting new things, and wait for a very long time before they do.

These features have been Baseline for a long time, and you could use them to get rid of dependencies on things like Bootstrap’s grid system, Foundation Framework’s flexbox utilities, Bulma fixed grid, Materialize grid, or Tailwind columns.
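
For example, the responsive card grids that frameworks provide row/column class systems for can be a few lines of plain CSS (the class name is illustrative):

```css
/* Auto-fitting columns: no .row, .col-md-4, or breakpoint classes.
   Each column is at least 15rem wide; the browser fits as many as possible. */
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(15rem, 1fr));
  gap: 1rem;
}
```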

I’m not saying you should drop your framework. Your team adopted it for a reason, and removing it might be a big project. But looking at what the web platform can offer without a third-party wrapper on top comes with a lot of benefits.

Things You Might Not Need Anymore In The Near Future

Now, let’s take a quick look at some of the things you will not need a library for in the near future. That is to say, the things below are not quite ready for mass adoption, but being aware of them and planning for potential later use can be helpful.

Anchor Positioning

CSS anchor positioning handles the positioning of popovers and tooltips relative to other elements, and takes care of keeping them in view, even when moving, scrolling, or resizing the page.

This is a great complement to the Popover API mentioned before, which will make it even easier to migrate away from more performance-intensive JS solutions.
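
A hedged sketch of the core pattern, using the current property names (the anchor name and classes are illustrative):

```css
/* The button declares an anchor name… */
.menu-button {
  anchor-name: --menu-anchor;
}

/* …and the popover positions itself relative to it.
   position-area picks the side; position-try-fallbacks lets the
   browser flip it when it would overflow the viewport. */
.menu {
  position: fixed;
  position-anchor: --menu-anchor;
  position-area: block-end;
  position-try-fallbacks: flip-block;
}
```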

Navigation API

The Navigation API can be used to handle navigation in single-page apps and might be a great complement, or even a replacement, to React Router, Next.js routing, or Angular routing tasks.
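
A browser-only sketch of the core pattern (the route check and renderRoute function are hypothetical):

```js
// Intercept same-origin navigations and handle them in-page,
// while the browser still updates the URL, history, and focus.
navigation.addEventListener("navigate", (event) => {
  const url = new URL(event.destination.url);
  if (!event.canIntercept || url.pathname.startsWith("/external")) {
    return; // let the browser handle it normally
  }
  event.intercept({
    async handler() {
      await renderRoute(url.pathname); // hypothetical render function
    },
  });
});
```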

View Transitions API

The View Transitions API can animate between the different states of a page. On a single-page application, this makes smooth transitions between states very easy, and can help you get rid of animation libraries such as Anime.js, GSAP, or Motion.dev.

Even better, the API can also be used with multiple-page applications.
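
In a single-page app, the core pattern is a sketch like this (the DOM update shown is a deliberately crude placeholder):

```js
// Wrap a DOM update in a view transition; the browser snapshots the
// old and new states and cross-fades between them by default.
function showPage(html) {
  if (!document.startViewTransition) {
    document.body.innerHTML = html; // fallback: update without animation
    return;
  }
  document.startViewTransition(() => {
    document.body.innerHTML = html;
  });
}
```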

Remember earlier, when I said that the reason we built single-page apps at the company where I worked 15 years ago was to avoid the white flash of page reloads when navigating? Had that API been available at the time, we would have been able to achieve beautiful page transition effects without a single-page framework and without a huge initial download of the entire app.

Scroll-driven Animations

Scroll-driven animations run on the user’s scroll position, rather than over time, making them a great solution for storytelling and product tours.

Some people have gone a bit over the top with it, but when used well, this can be a very effective design tool, and can help get rid of libraries like: ScrollReveal, GSAP Scroll, or WOW.js.
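
As a small sketch (the class name is illustrative), a reading-progress bar that fills as the user scrolls the document needs no scroll event listeners at all:

```css
@keyframes grow {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}

.progress {
  position: fixed;
  inset-block-start: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: 0 50%;
  animation: grow linear;
  /* Drive the animation with the document's scroll position,
     not with time. */
  animation-timeline: scroll(root block);
}
```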

Customizable Selects

A customizable select is a normal <select> element that lets you fully customize its appearance and content, while ensuring accessibility and performance benefits.

This has been a long time coming, and a highly requested feature, and it’s amazing to see it come to the web platform soon. With a built-in customizable select, you can finally ditch all this hard-to-maintain JS code for your custom select components.
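
As a sketch of the opt-in, per the current draft of the feature (the syntax may still change before it ships broadly):

```css
/* Opt the select into the customizable rendering path. */
select,
::picker(select) {
  appearance: base-select;
}

/* The dropdown picker can then be styled like any other box. */
::picker(select) {
  border-radius: 0.5rem;
  box-shadow: 0 2px 8px rgb(0 0 0 / 0.2);
}
```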

CSS Masonry

CSS Masonry is another upcoming web platform feature that I want to spend more time on.

With CSS Masonry, you can achieve layouts that are very hard, or even impossible, with flex, grid, or other built-in CSS layout primitives. Developers often resort to using third-party libraries to achieve Masonry layouts, such as the Masonry JS library.

But, more on that later. Let’s wrap this point up before moving on to Masonry.

Why You Should Care

The job market is full of web developers with experience in JavaScript and the latest frameworks of the day. So, really, what’s the point in learning to use the web platform primitives more, if you can do the same things with the libraries, utilities, and frameworks you already know today?

When an entire industry relies on these frameworks, and you can just pull in the right library, shouldn’t browser vendors just work with these libraries to make them load and run faster, rather than trying to convince developers to use the platform instead?

First of all, we do work with library authors, and we do make frameworks better by learning about what they use and improving those areas.

But secondly, “just using the platform” can bring pretty significant benefits.

Sending Less Code To Devices

The main benefit is that you end up sending far less code to your clients’ devices.

According to the 2024 Web Almanac, the average number of HTTP requests is around 70 per site, and JavaScript accounts for the largest share of them, with a median of 23 requests per page, up 8% since 2022. In 2024, JS also overtook images as the most requested file type.

And page size continues to grow year over year. The median page weight is around 2MB now, which is 1.8MB more than it was 10 years ago.

Sure, your internet connection speed has probably increased, too, but that’s not the case for everyone. And not everyone has the same device capabilities either.

Pulling in third-party code for things you could do with the platform instead most likely means you ship more code, and therefore reach fewer customers than you otherwise would. On the web, bad loading performance leads to high abandonment rates and hurts brand reputation.

Running Less Code On Devices

Furthermore, the code you do ship on your customers’ devices likely runs faster if it uses fewer JavaScript abstractions on top of the platform. It’s also probably more responsive and more accessible by default. All of this leads to more and happier customers.

Check my colleague Alex Russell’s yearly performance inequality gap blog, which shows that premium devices are largely absent from markets with billions of users due to wealth inequality. And this gap is only growing over time.

Built-in Masonry Layout

One web platform feature that’s coming soon and which I’m very excited about is CSS Masonry.

Let me start by explaining what Masonry is.

What Is Masonry

Masonry is a type of layout that was made popular by Pinterest years ago. It creates independent tracks of content within which items pack themselves up as close to the start of the track as they can.

Many people see Masonry as a great option for portfolios and photo galleries, which it certainly can do. But Masonry is more flexible than what you see on Pinterest, and it’s not limited to just waterfall-like layouts.

In a Masonry layout:

  • Tracks can be columns or rows.
  • Tracks of content don’t all have to be the same size.
  • Items can span multiple tracks.
  • Items can be placed on specific tracks; they don’t always have to follow the automatic placement algorithm.

Demos

Here are a few simple demos I made by using the upcoming implementation of CSS Masonry in Chromium.

A photo gallery demo, showing how items (the title in this case) can span multiple tracks:

Another photo gallery showing tracks of different sizes:

A news site layout with some tracks wider than others, and some items spanning the entire width of the layout:

A kanban board showing that items can be placed onto specific tracks:

Note: The previous demos were made with a version of Chromium that’s not yet available to most web users, because CSS Masonry is only just starting to be implemented in browsers.

However, web developers have been happily using libraries to create Masonry layouts for years already.

Sites Using Masonry Today

Indeed, Masonry is pretty common on the web today. Here are a few examples I found besides Pinterest:

And a few more, less obvious, examples:

So, how were these layouts created?

Workarounds

One trick I’ve seen is to use a Flexbox layout instead, changing its direction to column and setting it to wrap.

This way, you can place items of different heights in multiple, independent columns, giving the impression of a Masonry layout:
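
The trick looks something like this (class name illustrative, three columns assumed):

```css
/* Column direction + wrapping fakes independent vertical tracks. */
.fake-masonry {
  display: flex;
  flex-direction: column;
  flex-wrap: wrap;
  gap: 1rem;
  /* The wrap only happens because the height is fixed. */
  height: 80vh;
}

.fake-masonry > * {
  /* Three columns, accounting for the two 1rem gaps between them. */
  width: calc((100% - 2rem) / 3);
}
```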

There are, however, two limitations with this workaround:

  1. The order of items is different from what it would be with a real Masonry layout. With Flexbox, items fill the first column first and, when it’s full, then go to the next column. With Masonry, items would stack in whichever track (or column in this case) has the most space available.
  2. Perhaps more importantly, this workaround requires that you set a fixed height on the Flexbox container; otherwise, no wrapping would occur.

Third-party Masonry Libraries

For more advanced cases, developers have been using libraries.

The most well-known and popular library for this is simply called Masonry, and it gets downloaded about 200,000 times per week according to NPM.

Squarespace also provides a layout component that renders a Masonry layout, for a no-code alternative, and many sites use it.

Both of these options use JavaScript code to place items in the layout.

Built-in Masonry

I’m really excited that Masonry is now starting to appear in browsers as a built-in CSS feature. Over time, you will be able to use Masonry just like you do Grid or Flexbox, that is, without needing any workarounds or third-party code.

My team at Microsoft has been implementing built-in Masonry support in the Chromium open source project, which Edge, Chrome, and many other browsers are based on. Mozilla was actually the first browser vendor to propose an experimental implementation of Masonry back in 2020. And Apple has also been very interested in making this new web layout primitive happen.

The work to standardize the feature is also moving ahead, with agreement within the CSS working group about the general direction and even a new display type display: grid-lanes.

If you want to learn more about Masonry and track progress, check out my CSS Masonry resources page.

In time, when Masonry becomes a Baseline feature, just like Grid or Flexbox, we’ll be able to simply use it and benefit from:

  • Better performance,
  • Better responsiveness,
  • Ease of use and simpler code.

Let’s take a closer look at these.

Better Performance

Making your own Masonry-like layout system, or using a third-party library instead, means you’ll have to run JavaScript code to place items on the screen. This also means that this code will be render blocking. Indeed, either nothing will appear, or things won’t be in the right places or of the right sizes, until that JavaScript code has run.

Masonry layout is often used for the main part of a web page, which means this code makes your main content appear later than it otherwise could, degrading your LCP, or Largest Contentful Paint, metric, which plays a big role in perceived performance and search engine optimization.

I tested the Masonry JS library with a simple layout and by simulating a slow 4G connection in DevTools. The library is not very big (24KB, 7.8KB gzipped), but it took 600ms to load under my test conditions.

Here is a performance recording showing that long 600ms load time for the Masonry library, and that no other rendering activity happened while that was happening:

In addition, after the initial load, the downloaded script still needed to be parsed, compiled, and run, all of which, as mentioned before, blocked the rendering of the page.

With a built-in Masonry implementation in the browser, we won’t have a script to load and run. The browser engine will just do its thing during the initial page rendering step.

Better Responsiveness

Similar to when a page first loads, resizing the browser window causes the page’s layout to be rendered again. At this point, though, if the page is using the Masonry JS library, there’s no need to load the script again, because it’s already there. However, the code that moves items into the right places still needs to run.

Now this particular library seems to be pretty fast at doing this when the page loads. However, it animates the items when they need to move to a different place on window resize, and this makes a big difference.

Of course, users don’t spend time resizing their browser windows as much as we developers do. But this animated resizing experience can be pretty jarring and adds to the perceived time it takes for the page to adapt to its new size.

Ease Of Use And Simpler Code

How easy it is to use a web feature and how simple the code looks are important factors that can make a big difference for your team. They can’t ever be as important as the final user experience, of course, but developer experience impacts maintainability. Using a built-in web feature comes with important benefits on that front:

  • Developers who already know HTML, CSS, and JS will most likely be able to use that feature easily because it’s been designed to integrate well and be consistent with the rest of the web platform.
  • There’s no risk of breaking changes being introduced in how the feature is used.
  • There’s almost zero risk of that feature becoming deprecated or unmaintained.

In the case of built-in Masonry, because it’s a layout primitive, you use it from CSS, just like Grid or Flexbox, no JS involved. Also, other layout-related CSS properties, such as gap, work as you’d expect them to. There are no tricks or workarounds to know about, and the things you do learn are documented on MDN.

For the Masonry JS library, initialization is a bit complex: it requires a data attribute with a specific syntax, along with hidden HTML elements to set the column and gap sizes.

Plus, if you want to span columns, you need to include the gap size yourself to avoid problems:

<script src="https://unpkg.com/masonry-layout@4.2.2/dist/masonry.pkgd.min.js"></script>
<style>
  .track-sizer,
  .item {
    width: 20%;
  }
  .gutter-sizer {
    width: 1rem;
  }
  .item {
    height: 100px;
    margin-block-end: 1rem;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    width: calc(40% + 1rem);
  }
</style>

<div class="container"
  data-masonry='{ "itemSelector": ".item", "columnWidth": ".track-sizer", "percentPosition": true, "gutter": ".gutter-sizer" }'>
  <div class="track-sizer"></div>
  <div class="gutter-sizer"></div>
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

Let’s compare this to what a built-in Masonry implementation would look like:

<style>
  .container {
    display: grid-lanes;
    grid-lanes: repeat(4, 20%);
    gap: 1rem;
  }
  .item {
    height: 100px;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    grid-column: span 2;
  }
</style>

<div class="container">
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

The result is simpler, more compact code: it can use standard properties like gap, spanning tracks is done with span 2, just like in Grid, and you don’t have to calculate widths that account for the gap size.

How To Know What’s Available And When It’s Available?

Overall, the question isn’t really if you should use built-in Masonry over a JS library, but rather when. The Masonry JS library is amazing and has been filling a gap in the web platform for many years, and for many happy developers and users. It has a few drawbacks if you compare it to a built-in Masonry implementation, of course, but those are not important if that implementation isn’t ready.

It’s easy for me to list these cool new web platform features because I work at a browser vendor, and I therefore tend to know what’s coming. But developers often share, survey after survey, that keeping track of new things is hard. Staying informed is difficult, and companies don’t always prioritize learning anyway.

To help with this, here are a few resources that provide updates in simple and compact ways so you can get the information you need quickly:

If you have a bit more time, you might also be interested in browser vendors’ release notes:

For even more resources, check out my Navigating the Web Platform Cheatsheet.

My Thing Is Still Not Implemented

That’s the other side of the problem. Even if you do find the time, energy, and ways to keep track, there’s still frustration with getting your voice heard and your favorite features implemented.

Maybe you’ve been waiting for years for a specific bug to be resolved, or a specific feature to ship in a browser where it’s still missing.

What I’ll say is browser vendors do listen. I’m part of several cross-organization teams where we discuss developer signals and feedback all the time. We look at many different sources of feedback, both internal at each browser vendor and external/public on forums, open source projects, blogs, and surveys. And, we’re always trying to create better ways for developers to share their specific needs and use cases.

So, if you can, please demand more from browser vendors and pressure us to implement the features you need. I get that it takes time, and can also be intimidating (not to mention a high barrier to entry), but it also works.

Here are a few ways you can get your (or your company’s) voice heard: Take the annual State of JS, State of CSS, and State of HTML surveys. They play a big role in how browser vendors prioritize their work.

If you need a specific standard-based API to be implemented consistently across browsers, consider submitting a proposal at the next Interop project iteration. It requires more time, but consider how Shopify and RUMvision shared their wish lists for Interop 2026. Detailed information like this can be very useful for browser vendors to prioritize.

For more useful links to influence browser vendors, check out my Navigating the Web Platform Cheatsheet.

Conclusion

To close, I hope this article has left you with a few things to think about:

  • Excitement for Masonry and other upcoming web features.
  • A few web features you might want to start using.
  • A few pieces of custom or third-party code you might be able to remove in favor of built-in features.
  • A few ways to keep track of what’s coming and influence browser vendors.

More importantly, I hope I’ve convinced you of the benefits of using the web platform to its full potential.

]]>
hello@smashingmagazine.com (Patrick Brosset)
<![CDATA[A Sparkle Of December Magic (2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/11/desktop-wallpaper-calendars-december-2025/ https://smashingmagazine.com/2025/11/desktop-wallpaper-calendars-december-2025/ Sun, 30 Nov 2025 09:00:00 GMT As the year winds down, many of us are busy wrapping up projects, meeting deadlines, or getting ready for the holiday season. Why not take a moment amid the end-of-year hustle to set the mood for December with some wintery desktop wallpapers? They might just bring a sparkle of inspiration to your workspace in these busy weeks.

To provide you with unique and inspiring wallpaper designs each month anew, we started our monthly wallpapers series more than 14 years ago. It’s the perfect opportunity both to put your creative skills to the test and to find just the right wallpaper to accompany you through the new month. This December is no exception, of course, so following our cozy little tradition, we have a new collection of wallpapers waiting for you below. Each design has been created with love by artists and designers from across the globe and comes in a variety of screen resolutions.

A huge thank-you to everyone who tickled their creativity and shared their wallpapers with us this time around! This post wouldn’t exist without your kind support. ❤️ Happy December!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
  • Submit your wallpaper design! 🎨
    We are always looking for creative talent and would love to feature your desktop wallpaper in one of our upcoming posts. Join in ↬
Zero-Gravity December

“Floating in space, decorating the Christmas tree, unbothered by the familiar weight of New Year’s resolutions waiting back on Earth every December.” — Designed by Ginger It Solutions from Serbia.

A Quiet December Walk

“In the stillness of a snowy forest, a man and his loyal dog share a peaceful winter walk. The world is hushed beneath a blanket of white, with only soft flakes falling and the crunch of footsteps breaking the silence. It’s a simple, serene moment that captures the calm beauty of December and the quiet joy of companionship in nature’s winter glow.” — Designed by PopArt Studio from Serbia.

Quoted Rudolph

Designed by Ricardo Gimenes from Spain.

Learning Is An Art

“The year is coming to an end. A year full of adventures, projects, unforgettable moments, and others that will fade into oblivion. And it’s this month that we start preparing for next year, organizing it and hoping it will be at least as good as the last, and that it will give us 365 days to savor from the first to the last. This month we share Katherine Johnson and some wise words that we shouldn’t forget: ‘I like to learn. It’s an art and a science.’” — Designed by Veronica Valenzuela Jimenez from Spain.

Chilly Dog, Warm Troubles

Designed by Ricardo Gimenes from Spain.

Modern Christmas Magic

“A fusion of modern Christmas aesthetics and a user-centric mobile app development company, crafting delightful holiday-inspired digital experiences.” — Designed by the Zco Corporation Design Team from the United States.

Dear Moon, Merry Christmas

Designed by Vlad Gerasimov from Georgia.

It’s In The Little Things

Designed by Thaïs Lenglez from Belgium.

The House On The River Drina

“Since we often yearn for a peaceful and quiet place to work, we have found inspiration in the famous house on the River Drina in Bajina Bašta, Serbia. Wouldn’t it be great being in nature, away from civilization, swaying in the wind and listening to the waves of the river smashing your house, having no neighbors to bother you? Not sure about the Internet, though…” — Designed by PopArt Studio from Serbia.

Christmas Cookies

“Christmas is coming and a great way to share our love is by baking cookies.” — Designed by Maria Keller from Mexico.

Sweet Snowy Tenderness

“You know that warm feeling when you get to spend cold winter days in a snug, homey, relaxed atmosphere? Oh, yes, we love it, too! It is the sentiment we set our hearts on for the holiday season, and this sweet snowy tenderness is for all of us who adore watching the snowfall from our windows. Isn’t it romantic?” — Designed by PopArt Studio from Serbia.

Anonymoose

Designed by Ricardo Gimenes from Spain.

Cardinals In Snowfall

“During Christmas season, in the cold, colorless days of winter, Cardinal birds are seen as symbols of faith and warmth. In the part of America I live in, there is snowfall every December. While the snow is falling, I can see gorgeous Cardinals flying in and out of my patio. The intriguing color palette of the bright red of the Cardinals, the white of the flurries, and the brown/black of dry twigs and fallen leaves on the snow-laden ground fascinates me a lot, and inspired me to create this quaint and sweet, hand-illustrated surface pattern design as I wait for the snowfall in my town!” — Designed by Gyaneshwari Dave from the United States.

Getting Hygge

“There’s no more special time for a fire than in the winter. Cozy blankets, warm beverages, and good company can make all the difference when the sun goes down. We’re all looking forward to generating some hygge this winter, so snuggle up and make some memories.” — Designed by The Hannon Group from Washington D.C.

Christmas Woodland

Designed by Mel Armstrong from Australia.

Joy To The World

“Joy to the world, all the boys and girls now, joy to the fishes in the deep blue sea, joy to you and me.” — Designed by Morgan Newnham from Boulder, Colorado.

Gifts Lover

Designed by Elise Vanoorbeek from Belgium.

King Of Pop

Designed by Ricardo Gimenes from Spain.

The Matterhorn

“Christmas is always such a magical time of year, so we created this wallpaper to blend the majesty of the mountains with a little bit of magic.” — Designed by Dominic Leonard from the United Kingdom.

Ninja Santa

Designed by Elise Vanoorbeek from Belgium.

Ice Flowers

“I took some photos during a very frosty and cold week before Christmas.” Designed by Anca Varsandan from Romania.

Christmas Selfie

Designed by Emanuela Carta from Italy.

Winter Wonderland

“‘Winter is the time for comfort, for good food and warmth, for the touch of a friendly hand and for a talk beside the fire: it is the time for home.’ (Edith Sitwell)” — Designed by Dipanjan Karmakar from India.

Winter Coziness At Home

“Winter coziness that we all feel when we come home after spending some time outside or when we come to our parental home to celebrate Christmas inspired our designers. Home is the place where we can feel safe and sound, so we couldn’t help ourselves but create this calendar.” — Designed by MasterBundles from Ukraine.

Enchanted Blizzard

“A seemingly forgotten world under the shade of winter glaze hides a moment where architecture meets fashion and change encounters steadiness.” — Designed by Ana Masnikosa from Belgrade, Serbia.

All That Belongs To The Past

“Sometimes new beginnings make us revisit our favorite places or people from the past. We don’t visit them often because they remind us of the past but enjoy the brief reunion. Cheers to new beginnings in the new year!” Designed by Dorvan Davoudi from Canada.

December Through Different Eyes

“As a Belgian, December reminds me of snow, coziness, winter, lights, and so on. However, in the Southern hemisphere, it is summer at this time. With my illustration I wanted to show the different perspectives on December. I wish you all a Merry Christmas and Happy New Year!” — Designed by Jo Smets from Belgium.

Silver Winter

Designed by Violeta Dabija from Moldova.

Cozy

“December is all about coziness and warmth. Days are getting darker, shorter, and colder. So a nice cup of hot cocoa just warms me up.” — Designed by Hazuki Sato from Belgium.

Tongue Stuck On Lamppost

Designed by Josh Cleland from the United States.

On To The Next One

“Endings intertwined with new beginnings, challenges we rose to and the ones we weren’t up to, dreams fulfilled and opportunities missed. The year we say goodbye to leaves a bitter-sweet taste, but we’re thankful for the lessons, friendships, and experiences it gave us. We look forward to seeing what the new year has in store, but, whatever comes, we will welcome it with a smile, vigor, and zeal.” — Designed by PopArt Studio from Serbia.

Christmas Owl

“Christmas waves a magic wand over this world, and behold, everything is softer and more beautiful.” — Designed by Suman Sil from India.

Catch Your Perfect Snowflake

“This time of year, people tend to dream big and expect miracles. Let your dreams come true!” Designed by Igor Izhik from Canada.

Winter Garphee

“Garphee’s fluffiness glowing in the snow.” Designed by Razvan Garofeanu from Romania.

Trailer Santa

“A mid-century modern Christmas scene outside the norm of snowflakes and winter landscapes.” Designed by Houndstooth from the United States.

Winter Solstice

“In December there’s a winter solstice; which means that the longest night of the year falls in December. I wanted to create the feeling of solitude of the long night into this wallpaper.” — Designed by Alex Hermans from Belgium.

Christmas Time

Designed by Sofie Keirsmaekers from Belgium.

Happy Holidays

Designed by Ricardo Gimenes from Spain.

Get Featured Next Month

Feeling inspired? We’ll publish the January wallpapers on December 31, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[The Accessibility Problem With Authentication Methods Like CAPTCHA]]> https://smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/ https://smashingmagazine.com/2025/11/accessibility-problem-authentication-methods-captcha/ Thu, 27 Nov 2025 10:00:00 GMT The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) has become ingrained in internet browsing since personal computers gained momentum in the consumer electronics market. For nearly as long as people have been going online, web developers have sought ways to block spam bots.

The CAPTCHA service distinguishes between human and bot activity to keep bots out. Unfortunately, its methods are less than precise. In trying to protect humans, developers have made much of the web inaccessible to people with disabilities.

Authentication methods, such as CAPTCHA, typically use image classification, puzzles, audio samples, or click-based tests to determine whether the user is human. While the types of challenges are well-documented, their logic is not public knowledge. People can only guess what it takes to “prove” they are human.

What Is CAPTCHA?

A CAPTCHA is a reverse Turing test that takes the form of a challenge-response test. For example, if it instructs users to “select all images with stairs,” they must pick the stairs out from railings, driveways, and crosswalks. Alternatively, they may be asked to enter the text they see, add the sum of dice faces, or complete a sliding puzzle.

Image-based CAPTCHAs are responsible for the most frustrating shared experiences internet users have — deciding whether to select a square when only a small sliver of the object in question is in it.

Regardless of the method, a computer or algorithm ultimately determines whether the test-taker is human or machine. This authentication service has spawned many offshoots, including reCAPTCHA and hCAPTCHA. It has even led to the creation of entire companies, such as GeeTest and Arkose Labs. The Google-owned automated system reCAPTCHA requires users to click a checkbox labeled “I’m not a robot” for authentication. It runs an adaptive analysis in the background to assign a risk score. hCAPTCHA is an image-classification-based alternative.

Other authentication methods include multi-factor authentication (MFA), QR codes, temporary personal identification numbers (PINs), and biometrics. They do not follow the challenge-response formula, but serve fundamentally similar purposes.

These offshoots are intended to be better than the original, but they often fail to meet modern accessibility standards. Take hCAPTCHA, for instance, which uses a cookie to let you bypass the challenge-response test entirely. It’s a great idea in theory, but it doesn’t work in practice.

You’re supposed to receive a one-time code via email that you send to a specific number over SMS. Users report receiving endless error messages, forcing them to complete the standard text CAPTCHA. This is only available if the site explicitly enabled it during configuration. If it is not set up, you must complete an image challenge that does not recognize screen readers.

Even when the initial process works, subsequent authentication relies on a third-party cross-site cookie, which most browsers block automatically. Also, the code expires after a short period, so you have to redo the entire process if it takes you too long to move on to the next step.

Why Do Teams Use CAPTCHA And Similar Authentication Methods?

CAPTCHA is common because it is easy to set up. Developers can program it to appear, and it conducts the test automatically. This way, they can focus on more important matters while still preventing spam, fraud, and abuse. These tools are supposed to make it easier for humans to use the internet safely, but they often keep real people from logging in.

These tests result in a poor user experience overall. One study found users wasted over 819 million hours on over 512 billion reCAPTCHA v2 sessions as of 2023. Despite it all, bots prevail. Machine learning models can solve text-based CAPTCHA within fractions of a second with over 97% accuracy.

A 2024 study on Google’s reCAPTCHA v2 — which is still widely used despite the rollout of reCAPTCHA v3 — found bots can solve image classification CAPTCHA with up to 100% accuracy, depending on the object they are tasked with identifying. The researchers used a free, open-source model, which means that bad actors could easily replicate their work.

Why Should Web Developers Stop Using CAPTCHA?

Authentication methods like CAPTCHA have an accessibility problem. Machine learning advances have forced these services to grow increasingly complex. Even so, they are not foolproof. Bots get it right more often than people do. Research shows bots can complete reCAPTCHA within 17.5 seconds, achieving 85% accuracy. Humans take longer and are less accurate.

Many people fail CAPTCHA tests and have no idea what they did wrong. For example, a prompt instructing users to “select all squares with traffic lights” seems simple enough, but it gets complicated if a sliver of the pole is in another square. Should they select that box, or is that what an algorithm would do?

Although bot capabilities have grown by orders of magnitude, humans have remained the same. As tests get progressively more difficult, people feel less inclined to attempt them. One survey shows nearly 59% of people will stop using a product after several bad experiences. If authentication is too cumbersome or complex, they might stop using the website entirely.

People can fail these tests for various reasons, including technical ones. If they block third-party cookies, have a local proxy running, or have not updated their browser in a while, they may keep failing, regardless of how many times they try.

Authentication Issues With CAPTCHA

Due to the reasons mentioned above, most types of CAPTCHA are inherently inaccessible. This is especially true for people with disabilities, as these challenge-response tests were not designed with their needs in mind. Some of the common issues include the following:

Issues Related To Visuals And Screen Reader Use

Screen readers cannot read standard visual CAPTCHAs, such as the distorted text test, since the jumbled, contorted words are not machine-readable. The image classification and sliding puzzle methods are similarly inaccessible.

In one WebAIM survey conducted from 2023 to 2024, screen reader users agreed CAPTCHA was the most problematic item, ranking it above ambiguous links, unexpected screen changes, missing alt text, inaccessible search, and lack of keyboard accessibility. Its spot at the top has remained largely unchanged for over a decade, illustrating its history of inaccessibility.

Issues Related To Hearing and Audio Processing

Audio CAPTCHAs are relatively uncommon because web development best practices advise against autoplay audio and emphasize the importance of user controls. However, audio CAPTCHAs still exist. People who are hard of hearing or deaf may encounter a barrier when attempting these tests. Even with assistive technology, the intentional audio distortion and background noise make these samples challenging for individuals with auditory processing disorders to comprehend.

Issues Related To Motor And Dexterity

Tests requiring motor and dexterity skills can be challenging for those with motor impairments or physical disabilities. For example, someone with a hand tremor may find the sliding puzzles difficult. Image classification tests that keep loading new images until none matching the criteria remain can also pose a challenge.

Issues Related To Cognition And Language

As CAPTCHAs become increasingly complex, some developers are turning to tests that require a combination of creative and critical thinking. Those that require users to solve a math problem or complete a puzzle can be challenging for people with dyslexia, dyscalculia, visual processing disorders, or cognitive impairments.

Why Assistive Technology Won’t Bridge The Gap

CAPTCHAs are intentionally designed for humans to interpret and solve, so assistive technology like screen readers and hands-free controls may be of little help. ReCAPTCHA in particular poses an issue because it analyzes background activity. If it flags the accessibility devices as bots, it will serve a potentially inaccessible CAPTCHA.

Even if this technology could bridge the gap, web developers shouldn’t expect it to. Industry standards dictate that they should follow universal design principles to make their websites as accessible and functional as possible.

CAPTCHA’s accessibility issues could be forgiven if it were an effective security tool, but it is far from foolproof since bots get it right more than humans do. Why keep using a method that is ineffective and creates barriers for people with disabilities?

There are better alternatives.

Principles For Accessible Authentication

The idea that humans should consistently outperform algorithms is outdated. Better authentication methods exist, such as multi-factor authentication (MFA). The two-factor authentication market will be worth an estimated $26.7 billion by 2027, underscoring its popularity. This tool is more effective than a CAPTCHA because it prevents unauthorized access, even with legitimate credentials.

Ensure your MFA technique is accessible. Instead of having website visitors transcribe complex codes, send push notifications or SMS messages and rely on verification-code autofill to capture and enter the code automatically. Alternatively, you can introduce a “remember this device” feature to skip authentication on trusted devices.
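On the web, autofill support mostly comes down to one standard attribute. Below is a minimal, hedged sketch of an accessible code field; the `autocomplete="one-time-code"` hint is part of the HTML standard, while the field names and labels are illustrative:

```html
<!-- The autocomplete="one-time-code" hint lets supporting browsers
     and mobile keyboards offer the received SMS code for autofill,
     so users never have to transcribe it by hand. -->
<label for="otp">Verification code</label>
<input id="otp"
       name="otp"
       type="text"
       inputmode="numeric"
       pattern="[0-9]*"
       autocomplete="one-time-code">
```

The `inputmode="numeric"` hint additionally brings up a numeric keyboard on touch devices, which helps users with reduced fine motor control.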

Apple’s two-factor authentication approach is designed this way. A trusted device automatically displays a six-digit verification code, so users do not have to search for it. When prompted, iPhone users can tap the suggestion that appears above their mobile keyboard for autofill.

Single sign-on is another option. This session and user authentication service allows people to log in to multiple websites or applications with a single set of login credentials, minimizing the need for repeated identity verification.

One-time-use “magic links” are an excellent alternative to reCAPTCHA and temporary PINs. Rather than remembering a code or solving a puzzle, the user clicks on a button. Avoid imposing deadlines because, according to WCAG Success Criterion 2.2.3, users should not face time limits since those with disabilities may need more time to complete specific actions.

Alternatively, you could use Cloudflare Turnstile. It authenticates without showing a CAPTCHA, and most people never even have to check a box or hit a button. The software works by issuing a small JavaScript challenge behind the scenes to automatically differentiate between bots and humans. Cloudflare Turnstile can be embedded into any website, making it an excellent alternative to standard classification tasks.
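The Turnstile embed itself is compact. The sketch below follows Cloudflare’s documented widget markup; `YOUR_SITE_KEY` is a placeholder issued by the Cloudflare dashboard, and the `/login` form action is hypothetical:

```html
<!-- Load the Turnstile widget script once per page. -->
<script src="https://challenges.cloudflare.com/turnstile/v0/api.js" async defer></script>

<form action="/login" method="POST">
  <!-- Turnstile renders its (usually invisible) challenge here and
       injects a hidden cf-turnstile-response field, which your
       server then verifies via Cloudflare's siteverify endpoint. -->
  <div class="cf-turnstile" data-sitekey="YOUR_SITE_KEY"></div>
  <button type="submit">Log in</button>
</form>
```

Because most visitors never see a challenge at all, the interaction cost for screen reader and keyboard users stays close to zero.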

Testing And Evaluation Of Accessible Authentication Designs

Testing and evaluating your accessible alternative authentication methods is essential. Many designs look good on paper but do not work in practice. If possible, gather feedback from actual users. An open beta may be an effective way to maximize visibility.

Remember, general accessibility considerations do not only apply to people with disabilities. They also include those who are neurodivergent, lack access to a mobile device, or use assistive technology. Ensure your alternative designs consider these individuals.

Realistically, you cannot create a perfect system since everyone is unique. Many people struggle to follow multistep processes, solve equations, process complex instructions, or remember passcodes. While universal web design principles can improve flexibility, no single solution can meet everyone’s needs.

Regardless of the authentication technique you use, you should present users with multiple authentication options upfront. They know their capabilities best, so let them decide what to use instead of trying to over-engineer a solution that works for every edge case.

Address The Accessibility Problem With Design Changes

A person with hand tremors may be unable to complete a sliding puzzle, while someone with an audio processing disorder may have trouble with distorted audio samples. However, you cannot simply replace CAPTCHAs with alternatives because they are often equally inaccessible.

QR codes, for example, may be difficult to scan for those with reduced fine motor control, and people who are visually impaired may struggle to locate them on the screen. Similarly, biometrics can pose an issue for people with facial differences or a limited range of motion. Addressing the accessibility problem requires creative thinking.

You can start by visiting the Web Accessibility Initiative’s accessibility tutorials for developers to better understand universal design. Although these tutorials focus more on content than authentication, you can still use them to your advantage. The W3C Group Draft Note on the Inaccessibility of CAPTCHA provides more relevant guidance.

Getting started is as easy as researching best practices. Understanding the basics is essential because there is no universal solution for accessible web design. If you want to optimize accessibility, consider sourcing feedback from the people who actually visit your website.

Further Reading

]]>
hello@smashingmagazine.com (Eleanor Hecks)
<![CDATA[Design System Culture: What It Is And Why It Matters (Excerpt)]]> https://smashingmagazine.com/2025/11/design-system-culture/ https://smashingmagazine.com/2025/11/design-system-culture/ Tue, 25 Nov 2025 18:00:00 GMT Subscribe to our Smashing newsletter to be notified when orders are open.]]> This article is a sponsored by Maturing Design Systems

Design systems have become an integral part of our everyday work, so much so that the successful growth and maturation of a design system can make or break a product or project. Great tokens, components, and organization aren’t enough — it is most often the culture and curation that create a sustainable, widely adopted system. It can be hard to determine where to invest our time and attention. How do we build and maintain design systems that support our teams, enhance our work, and grow along with us?

Excerpt: Design System Culture

Culture is a funny thing. We all have some intuition about how important it is—at least we know we want to work in a great culture and avoid the toxic ones. But culture is notoriously difficult to define, and changing it can feel more like magic than reality. One company culture can be inspiring for some and boring for others, a place of growth for some and stifling for others.

Adding to the nuance, not only does your company have a culture as a whole, but it has many subcultures. That’s because culture is not created by any individual. Culture is something that happens when the same group of people gather together repeatedly over time. So, as a company grows, adding hierarchy and structure, the teams formed around specific goals, products, features, disciplines, and so on, all develop their own subcultures.

You probably have a design subculture. You probably have a product ownership subculture. You probably even have a subculture forming around those folks who get on a Zoom call every Tuesday at lunch to knit and chat. There are hundreds or more subcultures at most good-sized organizations. It’s complicated, nuanced, and immensely important.

When an individual is struggling with the way they are managed, one culture enables them to offer authentic feedback to their boss, while another leads them to look for a new job. When a company provides free lunch on Fridays, one culture creates a sense of gratitude for this benefit; another makes you feel like this free lunch comes with the expectation that you can’t ever leave work. One culture prioritizes financial results over respectful interactions. One culture encourages competition between teams, while another emphasizes collaboration with coworkers.

Why Culture?

At the beginning of 2021, my company was asked to help a large organization plan, design, and build a design system alongside the minimum viable product of a new product idea. This is the kind of work we truly love, so the team was excited to jump in.

As an author of a book about design systems, I want nothing more than to tell you how amazingly this engagement went. Instead, it was a tremendous struggle. Despite this being the perfect kind of work for my team and me on paper, we had to make the hard decision to walk away from our client at the end of that year. Not because we couldn’t do the work. Not because of any technical challenges or budget concerns. The reason we gave was “cultural incompatibility.” In almost twenty years of running my own businesses, this had never happened to me. After all, our clients don’t come to us because they have everything figured out — they come because they know they need help. If we couldn’t guide them through a difficult season, why did we even exist!?

Needless to say, it didn’t sit well with me. So, after following a few useless threads of fear that we just couldn’t cut it, I spent the next year diving down a rabbit hole of research on organizational culture. This next section is a summary of what I learned in that year and how I’ve been putting that to use since. To start, let’s find a common understanding of what culture is.

What Is Culture?

Over the last few decades, a lot has been said about workplace culture. From understanding why it matters and how it impacts the ways we lead, to offering methodologies for changing it. I’ve found tremendous value in the research and writings of Edgar Schein, a business theorist and psychologist. Schein offers a simple model to explain what culture is, breaking it down into three levels:

Artifacts

Artifacts are the top level of Schein’s model. These are the things people think of when you say “culture” — the visible perks a company offers. I once worked at a place where we could expense bringing in donuts for the team. Another job I had provided a foosball table. One company encouraged us to cook lunch together each week. These kinds of things, along with the company swag, the channel in Slack where you get to brag about your peers, and the company retreat are all “artifacts” of your company culture.

Espoused Values And Beliefs

The next layer down is called “espoused values and beliefs.” This is what people inside the culture say they believe. It’s the list of values, the mission statement, the vision. It’s the content on the website and plastered on the walls. It’s the stuff you expect to get when you accept the job because it’s how people answered all your questions throughout the interview process.

Basic Underlying Assumptions

The deepest layer is called “basic underlying assumptions.” This is what people inside the organization actually believe. It’s the way the leadership and employees behave, most notably in the face of a difficult decision. This layer is the root of your culture. And no matter what you show (artifacts), no matter what you say (espoused beliefs), the things you believe (underlying assumptions) will come out eventually.

It Starts At The Bottom

As an employee, you will experience these things from the top down. On your first day, you observe what’s happening around you — you see the artifacts of the culture. Eventually, you get to know a few folks. As you have more and more conversations with them, you’ll begin to hear how they talk about the culture — their espoused beliefs. At some point, people inside your culture will be faced with some tough situations. This is where the rubber meets the road and when you’ll learn what those individuals’ basic underlying assumptions are.

Unhealthy organizations don’t have a process for surfacing and valuing those underlying assumptions. Healthy organizations know that culture starts with the basic underlying assumptions of every individual at the company.

Unhealthy organizations try to create culture with perks and mission statements. Healthy organizations allow the top two layers to emerge naturally from the bottom layer.

When the basic underlying assumptions don’t line up with the espoused beliefs and artifacts, the disconnect is strong. It’s often hard to articulate the problem, but people will feel it. This is the company with a core value of “family first” that requires you to travel all the time with no recognition of the impact it has on your actual family. The espoused belief to prioritize family is not actively supported in the decisions being made.

Strength And Weakness

We all subconsciously know these things, and that is reflected in the language we use as we talk about the culture of an organization. We tend to use the words “strong” and “weak” to describe culture. You might say, “That company has a strong culture.” This statement is an indication that the layers are aligned, and that means the culture itself serves as a way of guiding decisions. If we all have shared values, we can trust one another’s ability to make decisions that will align with those values.

Conversely, an organization with a weak culture is missing the alignment between the things they say and the decisions they make. These cultures often continually add policies and procedures in order to police the behavior of individuals. In this scenario, the culture is weak because it doesn’t offer the organic guidance a stronger culture does — the misalignment means the things we choose to do differ from the things we say.

That is not to say policies and procedures are bad. As companies grow, there is a need to document the expectations for people. The proactive nature of a strong culture means these documents are often a formalization of what has emerged organically, whereas a weak culture reacts to negative situations in hopes of preventing the bad from happening again.

Editor’s Note

Do you like what you’ve read so far? This is just an excerpt of Ben’s upcoming book, Maturing Design Systems, in which he explores the anatomy of a design system, explains how culture shapes outcomes, and shares practical guidance for the challenges at each stage — from building v1 and growing healthy adoption to navigating “the teenage years” and ultimately running a stable, influential system.

Table of Contents

  • Context
    An introduction to the context of design systems, understanding where they live in your organization, what feeds them, and whether you should build one.
  • Design System Culture
    A deep dive into what culture is, why it’s important for design system teams to understand, and how it unlocks the ability for you to deliver real value.
  • The Anatomy of a Design System
    An exploration of the layers and parts that make up a design system based on the evaluation of hundreds of design systems over many years.
  • Maturity
An overview of the design system maturity model, including the four stages of maturity, origin stories, a framework for maturing in a healthy way, and a framework for creating design system stability.
  • Stage 1, Building Version One
A dive into what it means to be in stage 1 of the design system maturity model and a few mental models to keep you focused on the right things in this early stage.
  • Stage 2, Growing Adoption
    Unpacking stage 2 of the design system maturity model and a deep dive into adoption: broadening your perspective on adoption, the adoption curve, and how to create sustainable adoption.
  • Stage 3, Surviving the Teenage Years
    Understanding the relevant concerns for stage 3 of the design system maturity model and how to address the more nuanced challenges that come with this level of maturity.
  • Stage 4, Evolving a Healthy Program
    Exploring what it means to be in stage 4 of the design system maturity model, when you’ve become an influential leader in the eyes of the rest of your organization.

About The Author

Ben Callahan is an author, design system researcher, coach, and speaker. He founded Redwoods, a design system community, and The Question, a weekly forum for collaborative learning. As a founding partner at Sparkbox, he helps organizations embed human-centered culture into their design systems. His work bridges people and systems, emphasizing sustainable growth, team alignment, and meaningful impact in technology. He believes every interaction is an opportunity to learn.

Reviewers’ Testimonials

“This book is a clear and insightful blueprint for maturing design systems at scale. For well-supported teams, it offers strategy and clarity grounded in real examples. For smaller teams like mine, it serves as a North Star that helps you advocate for the work and find solutions that fit your team's maturity. I highly recommend it to anyone building a design system.”

Lenora Porter, Product Designer
“Ben draws connections between process, collaboration, and identity in ways that feel both intuitive and revelatory. Many design system books live comfortably in the tactical and technical, but this one moves beyond the how and into the why — inviting readers to reflect on their roles not just as product owners, designers or engineers, but as stewards of shared understanding within complex organisations. This book doesn’t prescribe rigid solutions. Instead, it encourages self-inquiry and alignment, asking readers to consider how they can bring intentionality, empathy, and resilience into the systems they touch.”

Tarunya Varma, Product Design Manager, Tide
“Ben Callahan’s “Maturing Design Systems” puts language to the struggles many of us feel but can’t quite explain. It unpacks the hidden influence of culture, setup, and leadership, providing you with the clarity, tools, and frameworks to course-correct and move your system work forward, whether you’re navigating a growing startup or a scaling enterprise.”

Ness Grixti, Design Lead, Wise, and Author of “A Practical Guide to Design System Components”
Don’t Miss Out!

Through years of interviews, coaching, and consulting, Ben has discovered a model for how design systems mature. Understanding how systems tend to mature allows you to create a sustainable program around your design system — one that acknowledges the human and change-management side of this work, not just the technical and creative.

This book will be a valuable resource for anyone working with design systems!

Spread The Word

Sign up to our Smashing newsletter and be one of the first to know when Maturing Design Systems is available for preorder. We can’t wait to share this book with you!

]]>
hello@smashingmagazine.com (Ari Stiles)
<![CDATA[Designing For Stress And Emergency]]> https://smashingmagazine.com/2025/11/designing-for-stress-emergency/ https://smashingmagazine.com/2025/11/designing-for-stress-emergency/ Mon, 24 Nov 2025 13:00:00 GMT Measure UX & Design Impact (use the code 🎟 IMPACT to save 20% off today). With a live UX training starting next week.]]> No design exists in isolation. As designers, we often imagine specific situations in which people will use our product. It might be indeed quite common — but there will also be other — urgent, frustrating, stressful situations. And they are the ones that we rarely account for.

So how do we account for such situations? How can we help people use our products while coping with stress — without adding to their cognitive load? Let’s take a closer look.

Study Where Your Product Fits Into People’s Lives

When designing digital products, sometimes we get a bit too attached to our shiny new features and flows — often forgetting the messy reality in which these features and flows have to neatly fit. And often that reality means tens of other products, hundreds of other tabs, and thousands of other emails.

If your customers have to use a slightly older machine, with a smallish 22" screen and a lot of background noise, they might use your product differently than you might have imagined, e.g., splitting the screen into halves to see both views at the same time (as displayed above).

Chances are high that our customers will use our product while doing something else, often with very little motivation, very little patience, plenty of urgent (and way more important) problems, and an unhealthy dose of stress. And that’s where our product must do its job well.

What Is Stress?

What exactly do we mean when we talk about “stress”? As H Locke noted, stress is the body’s response to a situation it cannot handle. There is a mismatch between what people can control, their own skills, and the challenge in front of them.

If the situation seems unmanageable and the goal they want to achieve moves further away, it creates an enormous sense of failing. It can be extremely frustrating and demotivating.

Some failures have a local scope, but many have a far-reaching impact. Many people can’t choose the products they have to use for work, so when a tool fails repeatedly, causes frustration, or is unreliable, it affects the worker, the work, the colleagues, and processes within the organization. Fragility has a high cost — and so does frustration.

How Stress Influences User Interactions

It’s not a big surprise: stress disrupts attention, memory, cognition, and decision-making. It makes it difficult to prioritize and draw logical conclusions. In times of stress, we rely on fast, intuitive judgments, not reasoning. Typically, it leads to instinctive responses based on established habits.

When users are in an emergency, they experience cognitive tunneling: a state in which peripheral vision narrows, reading comprehension drops, fine motor skills deteriorate, and patience drops sharply. Under pressure, some people make decisions hastily, while others become entirely paralyzed. Either way is a likely path to mistakes — often irreversible ones, and often without time for extensive deliberation.

Ideally, these decisions would be made way ahead of time — and then suggested when needed. But in practice, it’s not always possible. As it turns out, a good way to help people deal with stress is by providing order around how they manage it.

Single-Tasking Instead Of Multi-Tasking

People can’t really multi-task, especially in very stressful situations or emergencies. With a big chunk of work in front of them, people need some order to make progress reliably. That’s why several simpler pages usually work better than one big complex page.

Order means giving users a clear plan of action to complete a task. No distractions, no unnecessary navigation. We ask simple questions and prompt simple actions, one after another, one thing at a time.

An example of the plan is the Task List Pattern, invented by fine folks at Gov.uk. We break a task into a sequence of sub-tasks, describe them with actionable labels, assign statuses, and track progress.
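As a simplified, hypothetical sketch (not the exact GOV.UK markup), the pattern can be as plain as an ordered list of linked sub-tasks, each with an actionable label and a status, plus a summary of overall progress:

```html
<!-- Simplified task list sketch: the heading tracks progress,
     each item links to one sub-task and carries a status. -->
<h2>Apply for a permit (1 of 3 sections complete)</h2>
<ol>
  <li>
    <a href="/personal-details">Check your personal details</a>
    <strong>Completed</strong>
  </li>
  <li>
    <a href="/upload-documents">Upload supporting documents</a>
    <strong>In progress</strong>
  </li>
  <li>
    <a href="/review">Review and submit</a>
    <strong>Not started</strong>
  </li>
</ol>
```

Because each sub-task lives on its own simple page, users under stress only ever face one question at a time and can always see how much remains.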

To support accuracy, we revise default settings, values, presets, and actions. Also, the order of actions and buttons matters, so we put high-priority things first to make them easier to find. Then we add built-in safeguards (e.g., Undo feature) to prevent irreversible errors.

Supporting In Emergencies

The most effective help during emergencies is to help people deal with the situation in a well-defined and effective way. That means being prepared for and designing an emergency mode, e.g., to activate instant alerts on emergency contacts, distribute pre-assigned tasks, and establish a line of communication.

Rediplan App by Australian Red Cross is an emergency plan companion that encourages citizens to prepare their documents and belongings with a few checklists and actions — including key contacts, meeting places, and medical information, all in one place.

Just Enough Friction

Not all stress is equally harmful, though. As Krystal Higgins points out, if there is not enough friction when onboarding new users — if the experience is too passive, or users are hand-held even through the most basic tasks — you risk that they won’t realize the personal value they gain from the experience and will ultimately lose interest.

Design And Test For Stress Cases

Stress cases aren’t edge cases. We can’t predict the emotional state in which a user comes to our site or uses our product. A person looking for specific information on a hospital website or visiting a debt management website, for example, is most likely already stressed. Now, if the interface is overwhelming, it will only add to their cognitive load.

Stress-testing your product is critical to prevent this from happening. It’s useful to set up an annual day to stress test your product and refine emergency responses. It could be as simple as running content testing, or running tests in a real, noisy, busy environment where users actually work — at peak times.

And in case of emergencies, we need to check if fallbacks work as expected and if the current UX of the product helps people manage failures and exceptional situations well enough.

Wrapping Up

Emergencies will happen eventually — it’s just a matter of time. With good design, we can help mitigate risk and control damage, and make it hard to make irreversible mistakes. At its heart, that’s what good UX is exceptionally good at.

Key Takeaways

People can’t multitask, especially in very stressful situations.

  • Stress disrupts attention, memory, cognition, decision-making.
  • Also, it’s difficult to prioritize and draw logical conclusions.
  • Under stress, we rely on fast, intuitive judgments — not reasoning.
  • It leads to instinctive responses based on established habits.

Goal: Design flows that support focus and high accuracy.

  • Start with better default settings, values, presets, and actions.
  • High-priority first: order of actions and buttons matters.
  • Break complex tasks down into a series of simple steps (10s–30s each).
  • Add built-in safeguards to prevent irreversible errors (Undo).

Shift users to single-tasking: ask for one thing at a time.

  • Simpler pages might work better than one complex page.
  • Suggest a step-by-step plan of action to follow along.
  • Consider, design, and test flows for emergency responses ahead of time.
  • Add emergency mode for instant alerts and task assignments.
Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.

Video + UX Training

$ 495.00 $ 799.00 Get Video + UX Training

25 video lessons (8h) + Live UX Training.
100 days money-back-guarantee.

Video only

$ 250.00 $ 395.00
Get the video course

25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 2 video courses.

Useful Resources

Further Reading

]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[Keyframes Tokens: Standardizing Animation Across Projects]]> https://smashingmagazine.com/2025/11/keyframes-tokens-standardizing-animation-across-projects/ https://smashingmagazine.com/2025/11/keyframes-tokens-standardizing-animation-across-projects/ Fri, 21 Nov 2025 08:00:00 GMT Picture this: you join a new project, dive into the codebase, and within the first few hours, you discover something frustratingly familiar. Scattered throughout the stylesheets, you find multiple @keyframes definitions for the same basic animations. Three different fade-in effects, two or three slide variations, a handful of zoom animations, and at least two different spin animations because, well, why not?

@keyframes pulse {
  from {
    scale: 1;
  }
  to {
    scale: 1.1;
  }
}

@keyframes bigger-pulse {
  0%, 20%, 100% { 
    scale: 1; 
  }
  10%, 40% { 
    scale: 1.2; 
  }
}

If this scenario sounds familiar, you’re not alone. In my experience across various projects, one of the most consistent quick wins I can deliver is consolidating and standardizing keyframes. It’s become such a reliable pattern that I now look forward to this cleanup as one of my first tasks on any new codebase.

The Logic Behind The Chaos

This redundancy makes perfect sense when you think about it. We all use the same fundamental animations in our day-to-day work: fades, slides, zooms, spins, and other common effects. These animations are pretty straightforward, and it's easy to whip up a quick @keyframes definition to get the job done.

Without a centralized animation system, developers naturally write these keyframes from scratch, unaware that similar animations already exist elsewhere in the codebase. This is especially common when working in component-based architectures (which most of us do these days), as teams often work in parallel across different parts of the application.

The result? Animation chaos.

The Small Problem

The most obvious issues with keyframes duplication are wasted development time and unnecessary code bloat. Multiple keyframe definitions mean multiple places to update when requirements change. Need to adjust the timing of your fade animation? You’ll need to hunt down every instance across your codebase. Want to standardize easing functions? Good luck finding all the variations. This multiplication of maintenance points makes even simple animation updates a time-consuming task.

The Bigger Problem

This keyframes duplication creates a much more insidious problem lurking beneath the surface: the global scope trap. Even when working with component-based architectures, CSS keyframes are always defined in the global scope. This means all keyframes apply to all components. Always. Your animation doesn't necessarily use the keyframes you defined in your component; it uses whichever @keyframes rule with that name was loaded into the global scope last.

As long as all your keyframes are identical, this might seem like a minor issue. But the moment you want to customize an animation for a specific use case, you're in trouble, or worse, you'll be the one causing it.

Either your animation won’t work because another component loaded after yours, overwriting your keyframes, or your component loads last and accidentally changes the animation behavior for every other component using that keyframe's name, and you may not even realize it.

Here’s a simple example that demonstrates the problem:

.component-one {
  /* component styles */
  animation: pulse 1s ease-in-out infinite alternate;
}

/* this @keyframes definition will not work */
@keyframes pulse {
  from {
    scale: 1;
  }
  to {
    scale: 1.1;
  }
} 

/* later in the code... */

.component-two {
  /* component styles */
  animation: pulse 1s ease-in-out infinite;
}

/* these keyframes will apply to both components */
@keyframes pulse {
  0%, 20%, 100% { 
    scale: 1; 
  }
  10%, 40% { 
    scale: 1.2; 
  }
}

Both components use the same animation name, but the second @keyframes definition overwrites the first one. Now both component-one and component-two will use the second keyframes, regardless of which component defined which keyframes.

See the Pen Keyframes Tokens - Demo 1 [forked] by Amit Sheen.

The worst part? This often works perfectly in local development but breaks mysteriously in production when build processes change the loading order of your stylesheets. You end up with animations that behave differently depending on which components are loaded and in what sequence.

The Solution: Unified Keyframes

The answer to this chaos is surprisingly simple: predefined dynamic keyframes stored in a shared stylesheet. Instead of letting every component define its own animations, we create centralized keyframes that are well-documented, easy to use, maintainable, and tailored to the specific needs of your project.

Think of it as keyframes tokens. Just as we use tokens for colors and spacing, and many of us already use tokens for animation properties, like duration and easing functions, why not use tokens for keyframes as well?

This approach can integrate naturally with any current design token workflow you’re using, while solving both the small problem (code duplication) and the bigger problem (global scope conflicts) in one go.
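
As a sketch of how the two kinds of tokens might sit side by side, imagine pairing a keyframes token with conventional animation-property tokens (the --duration-fast and --ease-standard names below are hypothetical, not part of the article’s token set):

:root {
  /* Hypothetical property tokens for duration and easing */
  --duration-fast: 0.2s;
  --ease-standard: cubic-bezier(0.4, 0, 0.2, 1);
}

.toast {
  /* A keyframes token combined with property tokens from the same system */
  animation: kf-fade-in var(--duration-fast) var(--ease-standard);
}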

The idea is straightforward: create a single source of truth for all our common animations. This shared stylesheet contains carefully crafted keyframes that cover the animation patterns our project actually uses. No more guessing whether a fade animation already exists somewhere in our codebase. No more accidentally overwriting animations from other components.

But here’s the key: these aren’t just static copy-paste animations. They’re designed to be dynamic and customizable through CSS custom properties, allowing us to maintain consistency while still having the flexibility to adapt animations to specific use cases, like if you need a slightly bigger “pulse” animation in one place.

Building The First Keyframes Token

One of the first low-hanging fruits we should tackle is the “fade-in” animation. In one of my recent projects, I found over a dozen separate fade-in definitions, and yes, they all simply animated the opacity from 0 to 1.

So, let’s create a new stylesheet, call it keyframes-tokens.css, import it into our project, and place our keyframes with proper comments inside of it.

/* keyframes-tokens.css */

/*
 * Fade In - fade entrance animation
 * Usage: animation: kf-fade-in 0.3s ease-out;
 */
@keyframes kf-fade-in {
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

This single @keyframes declaration replaces all those scattered fade-in animations across our codebase. Clean, simple, and globally applicable. And now that we have this token defined, we can use it from any component throughout our project:

.modal {
  animation: kf-fade-in 0.3s ease-out;
}

.tooltip {
  animation: kf-fade-in 0.2s ease-in-out;
}

.notification {
  animation: kf-fade-in 0.5s ease-out;
}

See the Pen Keyframes Tokens - Demo 2 [forked] by Amit Sheen.

Note: We’re using a kf- prefix in all our @keyframes names. This prefix serves as a namespace that prevents naming conflicts with existing animations in the project and makes it immediately clear that these keyframes come from our keyframes tokens file.

Making A Dynamic Slide

The kf-fade-in keyframes work well because they're simple and there's little room to mess things up. Other animations, however, need to be much more dynamic, and here we can leverage the enormous power of CSS custom properties. This is where keyframes tokens really shine compared to scattered static animations.

Let’s take a common scenario: “slide-in” animations. But slide in from where? 100px from the right? 50% from the left? Should it enter from the top of the screen? Or maybe float in from the bottom? So many possibilities, but instead of creating separate keyframes for each direction and each variation, we can build one flexible token that adapts to all scenarios:

/*
 * Slide In - directional slide animation
 * Use --kf-slide-from to control direction
 * Default: slides in from left (-100%)
 * Usage: 
 *   animation: kf-slide-in 0.3s ease-out;
 *   --kf-slide-from: -100px 0; // slide from left
 *   --kf-slide-from: 100px 0;  // slide from right
 *   --kf-slide-from: 0 -50px;  // slide from top
 */

@keyframes kf-slide-in {
  from {
    translate: var(--kf-slide-from, -100% 0);
  }
  to {
    translate: 0 0;
  }
}

Now we can use this single @keyframes token for any slide direction simply by changing the --kf-slide-from custom property:

.sidebar {
  animation: kf-slide-in 0.3s ease-out;
  /* Uses default value: slides from left */
}

.notification {
  animation: kf-slide-in 0.4s ease-out;
  --kf-slide-from: 0 -50px; /* slide from top */
}

.modal {
  animation:
    kf-fade-in 0.5s,
    kf-slide-in 0.5s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 50px 50px; /* slide from bottom-right */
}

This approach gives us incredible flexibility while maintaining consistency. One keyframe declaration, infinite possibilities.

See the Pen Keyframes Tokens - Demo 3 [forked] by Amit Sheen.

And if we want to make our animations even more flexible, allowing for “slide-out” effects as well, we can simply add a --kf-slide-to custom property, similar to what we’ll see in the next section.
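
One possible shape for that extension, sketched here as a generalized kf-slide that exposes both endpoints, following the same from/to pattern the zoom keyframes use (the article only names the --kf-slide-to property; the rest is an assumption):

/* Sketch: a generalized slide with both endpoints customizable */
@keyframes kf-slide {
  from {
    translate: var(--kf-slide-from, -100% 0);
  }
  to {
    translate: var(--kf-slide-to, 0 0);
  }
}

.dismissed-toast {
  /* Slide out: start in place, exit to the right */
  animation: kf-slide 0.3s ease-in forwards;
  --kf-slide-from: 0 0;
  --kf-slide-to: 100% 0;
}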

Bidirectional Zoom Keyframes

Another common animation that gets duplicated across projects is “zoom” effects. Whether it’s a subtle scale-up for toast messages, a dramatic zoom-in for modals, or a gentle scale-down effect for headings, zoom animations are everywhere.

Instead of creating separate keyframes for each scale value, let’s build one flexible set of kf-zoom keyframes:

/*
 * Zoom - scale animation
 * Use --kf-zoom-from and --kf-zoom-to to control scale values
 * Default: zooms from 80% to 100% (0.8 to 1)
 * Usage:
 *   animation: kf-zoom 0.2s ease-out;
 *   --kf-zoom-from: 0.5; --kf-zoom-to: 1;   // zoom from 50% to 100%
 *   --kf-zoom-from: 1; --kf-zoom-to: 0;     // zoom from 100% to 0%
 *   --kf-zoom-from: 1; --kf-zoom-to: 1.1;   // zoom from 100% to 110%
 */

@keyframes kf-zoom {
  from {
    scale: var(--kf-zoom-from, 0.8);
  }
  to {
    scale: var(--kf-zoom-to, 1);
  }
}

With one definition, we can achieve any zoom variation we need:

.toast {
  animation:
    kf-slide-in 0.2s,
    kf-zoom 0.4s ease-out;
  --kf-slide-from: 0 100%; /* slide up from the bottom */
  /* Uses default zoom: scales from 80% to 100% */
}

.modal {
  animation: kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0; /* dramatic zoom from 0% to 100% */
}

.heading {
  animation:
    kf-fade-in 2s,
    kf-zoom 2s ease-in;
  --kf-zoom-from: 1.2; 
  --kf-zoom-to: 0.8; /* gentle scale down */
}

The default of 0.8 (80%) works perfectly for most UI elements, like toast messages and cards, while still being easy to customize for special cases.

See the Pen Keyframes Tokens - Demo 4 [forked] by Amit Sheen.

You might have noticed something interesting in the recent examples: we've been combining animations. One of the key advantages of working with @keyframes tokens is that they’re designed to integrate seamlessly with each other. This smooth composition is intentional, not accidental.

We’ll discuss animation composition in more detail later, including where they can become problematic, but most combinations are straightforward and easy to implement.

Note: While writing this article, and maybe because of writing it, I found myself rethinking the whole idea of entrance animations. With all the recent advances in CSS, do we still need them at all? Luckily, Adam Argyle explored the same questions and expressed them brilliantly in his blog. This doesn’t contradict what’s written here, but it does present an approach worth considering, especially if your projects rely heavily on entrance animations.

Continuous Animations

While entrance animations, like “fade”, “slide”, and “zoom” happen once and then stop, continuous animations loop indefinitely to draw attention or indicate ongoing activity. The two most common continuous animations I encounter are “spin” (for loading indicators) and “pulse” (for highlighting important elements).

These animations present unique challenges when it comes to creating keyframes tokens. Unlike entrance animations that typically go from one state to another, continuous animations need to be highly customizable in their behavior patterns.

The Spin Doctor

Every project seems to use multiple spin animations. Some spin clockwise, others counterclockwise. Some do a single 360-degree rotation, others do multiple turns for a faster effect. Instead of creating separate keyframes for each variation, let’s build one flexible spin that handles all scenarios:

/*
 * Spin - rotation animation
 * Use --kf-spin-from and --kf-spin-to to control rotation range
 * Use --kf-spin-turns to control rotation amount
 * Default: rotates from 0deg to 360deg (1 full rotation)
 * Usage:
 *   animation: kf-spin 1s linear infinite;
 *   --kf-spin-turns: 2;   // 2 full rotations
 *   --kf-spin-from: 0deg; --kf-spin-to: 180deg;  // half rotation
 *   --kf-spin-from: 0deg; --kf-spin-to: -360deg; // counterclockwise
 */

@keyframes kf-spin {
  from {
    rotate: var(--kf-spin-from, 0deg);
  }
  to {
    rotate: calc(var(--kf-spin-from, 0deg) + var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1));
  }
}

Now we can create any spin variation we like:

.loading-spinner {
  animation: kf-spin 1s linear infinite;
  /* Uses default: rotates from 0deg to 360deg */
} 

.fast-loader {
  animation: kf-spin 1.2s ease-in-out infinite alternate;
  --kf-spin-turns: 3; /* 3 full rotations for each direction per cycle */
}

.stepped-reverse {
  animation: kf-spin 1.5s steps(8) infinite;
  --kf-spin-to: -360deg; /* counterclockwise */
}

.subtle-wiggle {
  animation: kf-spin 2s ease-in-out infinite alternate;
  --kf-spin-from: -16deg;
  --kf-spin-to: 32deg; /* wiggle 32deg: between -16deg and +16deg */
}

See the Pen Keyframes Tokens - Demo 5 [forked] by Amit Sheen.

The beauty of this approach is that the same keyframes work for loading spinners, rotating icons, wiggle effects, and even complex multi-turn animations.

The Pulse Paradox

Pulse animations are trickier because they can “pulse” different properties. Some pulse the scale, others pulse the opacity, and some pulse color properties like brightness or saturation. Rather than creating separate keyframes for each property, we can create keyframes that work with any CSS property.

Here's an example of a pulse keyframe with scale and opacity options:

/* 
 * Pulse - pulsing animation
 * Use --kf-pulse-scale-from and --kf-pulse-scale-to to control scale range
 * Use --kf-pulse-opacity-from and --kf-pulse-opacity-to to control opacity range
 * Default: no pulse (all values 1)
 * Usage:
 *   animation: kf-pulse 2s ease-in-out infinite alternate;
 *   --kf-pulse-scale-from: 0.95; --kf-pulse-scale-to: 1.05; // scale pulse
 *   --kf-pulse-opacity-from: 0.7; --kf-pulse-opacity-to: 1; // opacity pulse
 */

@keyframes kf-pulse {
  from {
    scale: var(--kf-pulse-scale-from, 1);
    opacity: var(--kf-pulse-opacity-from, 1);
  }
  to {
    scale: var(--kf-pulse-scale-to, 1);
    opacity: var(--kf-pulse-opacity-to, 1);
  }
}

This creates a flexible pulse that can animate multiple properties:

.call-to-action { 
  animation: kf-pulse 0.6s infinite alternate;
  --kf-pulse-opacity-from: 0.5; /* opacity pulse */
}

.notification-dot {
  animation: kf-pulse 0.6s ease-in-out infinite alternate;
  --kf-pulse-scale-from: 0.9; 
  --kf-pulse-scale-to: 1.1; /* scale pulse */
}

.text-highlight {
  animation: kf-pulse 1.5s ease-out infinite;
  --kf-pulse-scale-from: 0.8;
  --kf-pulse-opacity-from: 0.2;
  /* scale and opacity pulse */
}

See the Pen Keyframes Tokens - Demo 6 [forked] by Amit Sheen.

This single kf-pulse keyframe can handle everything from subtle attention grabs to dramatic highlights, all while being easy to customize.

Advanced Easing

One of the great things about using keyframes tokens is how easy it is to expand our animation library and provide effects that most developers would not bother to write from scratch, like elastic or bounce.

Here is an example of a simple “bounce” keyframes token that uses a --kf-bounce-from custom property to control the jump height.

/*
 * Bounce - bouncing entrance animation
 * Use --kf-bounce-from to control jump height
 * Default: jumps from 100vh (off screen)
 * Usage:
 *   animation: kf-bounce 3s ease-in;
 *   --kf-bounce-from: 200px; // jump from 200px height
 */

@keyframes kf-bounce {
  0% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -1);
  }

  34% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.4);
  }

  55% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.2);
  }

  72% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.1);
  }

  85% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.05);
  }

  94% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.025);
  }

  99% {
    translate: 0 calc(var(--kf-bounce-from, 100vh) * -0.0125);
  }

  22%, 45%, 64%, 79%, 90%, 97%, 100% {
    translate: 0 0;
    animation-timing-function: ease-out;
  }
}

Animations like “elastic” are a bit trickier because of the calculations inside the keyframes. We need to define --kf-elastic-from-X and --kf-elastic-from-Y separately (both are optional), and together they let us create an elastic entrance from any point on the screen.

/*
 * Elastic In - elastic entrance animation
 * Use --kf-elastic-from-X and --kf-elastic-from-Y to control start position
 * Default: enters from the left (-50vw, 0)
 * Usage:
 *   animation: kf-elastic-in 2s ease-in-out both;
 *   --kf-elastic-from-X: -50px;
 *   --kf-elastic-from-Y: -200px; // enter from (-50px, -200px)
 */

@keyframes kf-elastic-in {
  0% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 1) calc(var(--kf-elastic-from-Y, 0px) * 1);
  }

  16% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.3227) calc(var(--kf-elastic-from-Y, 0px) * -0.3227);
  }

  28% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.1312) calc(var(--kf-elastic-from-Y, 0px) * 0.1312);
  }

  44% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0463) calc(var(--kf-elastic-from-Y, 0px) * -0.0463);
  }

  59% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0164) calc(var(--kf-elastic-from-Y, 0px) * 0.0164);
  }

  73% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * -0.0058) calc(var(--kf-elastic-from-Y, 0px) * -0.0058);
  }

  88% {
    translate: calc(var(--kf-elastic-from-X, -50vw) * 0.0020) calc(var(--kf-elastic-from-Y, 0px) * 0.0020);
  }

  100% {
    translate: 0 0;
  }
}

This approach makes it easy to reuse and customize advanced keyframes across our project, just by changing a single custom property.

.bounce-and-zoom {
  animation:
    kf-bounce 3s ease-in,
    kf-zoom 3s linear;
  --kf-zoom-from: 0;
}

.bounce-and-slide {
  animation-composition: add; /* Both animations use translate */
  animation:
    kf-bounce 3s ease-in,
    kf-slide-in 3s ease-out;
  --kf-slide-from: -200px;
}

.elastic-in {
  animation: kf-elastic-in 2s ease-in-out both;
}

See the Pen Keyframes Tokens - Demo 7 [forked] by Amit Sheen.

Up to this point, we’ve seen how we can consolidate keyframes in a smart and efficient way. Of course, you might want to tweak things to better fit your project’s needs, but we’ve covered examples of several common animations and everyday use cases. And with these keyframes tokens in place, we now have powerful building blocks for creating consistent, maintainable animations across the entire project. No more duplicated keyframes, no more global scope conflicts. Just a clean, convenient way to handle all our animation needs.

But the real question is: How do we compose these building blocks together?

Putting It All Together

We’ve seen that combining basic keyframes tokens is simple. We don’t need anything special: define the first animation, define the second one, set the variables as needed, and that’s it.

/* Fade in + slide in */
.toast {
  animation:
    kf-fade-in 0.4s,
    kf-slide-in 0.4s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-slide-from: 0 40px;
}

/* Zoom in + fade in */
.modal {
  animation:
    kf-fade-in 0.3s,
    kf-zoom 0.3s cubic-bezier(0.34, 1.56, 0.64, 1);
  --kf-zoom-from: 0.7;
  --kf-zoom-to: 1;
}

/* Slide in + pulse */
.notification {
  animation:
    kf-slide-in 0.5s,
    kf-pulse 1.2s ease-in-out infinite alternate;
  --kf-slide-from: -100px 0;
  --kf-pulse-scale-from: 0.95;
  --kf-pulse-scale-to: 1.05;
}

These combinations work beautifully because each animation targets a different property: opacity, transform (translate/scale), etc. But sometimes there are conflicts, and we need to know why and how to deal with them.

When two animations try to animate the same property — for example, both animating scale or both animating opacity — the result will not be what you expect. By default, only one of the animations is actually applied to that property, which is the last one in the animation list. This is a limitation of how CSS handles multiple animations on the same property.

For example, this will not work as intended because only the kf-pulse animation will apply.

.bad-combo {
  animation:
    kf-zoom 0.5s forwards,
    kf-pulse 1.2s infinite alternate;
  --kf-zoom-from: 0.5;
  --kf-zoom-to: 1.2;
  --kf-pulse-scale-from: 0.8;
  --kf-pulse-scale-to: 1.1;
}

Animation Addition

The simplest and most direct way to handle multiple animations that affect the same property is the animation-composition property. In the last example above, the kf-pulse animation replaces the kf-zoom animation, so we will not see the initial zoom and will not get the expected final scale of 1.2.

By setting the animation-composition to add, we tell the browser to combine both animations. This gives us the result we want.

.bad-combo {
  animation-composition: add;
}

See the Pen Keyframes Tokens - Demo 8 [forked] by Amit Sheen.

This approach works well for most cases where we want to combine effects on the same property. It is also useful when we need to combine animations with static property values.

For example, if we have an element that uses the translate property to position it exactly where we want, and then we want to animate it in with the kf-slide-in keyframes, we get a nasty visible jump without animation-composition.

See the Pen Keyframes Tokens - Demo 9 [forked] by Amit Sheen.

With animation-composition set to add, the animation is smoothly combined with the existing transform, so the element stays in place and animates as expected.
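
A minimal sketch of that scenario (the class name is illustrative):

.positioned-card {
  /* Static positioning via the individual translate property */
  translate: 40px 120px;

  /* Without animation-composition: add, kf-slide-in would replace this
     translate while animating, causing a visible jump */
  animation: kf-slide-in 0.4s ease-out;
  animation-composition: add; /* animation offset is added to 40px 120px */
  --kf-slide-from: -100px 0;
}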

Animation Stagger

Another way of handling multiple animations is to “stagger” them — that is, start the second animation slightly after the first one finishes. It is not a solution that works for every case, but it is useful when we have an entrance animation followed by a continuous animation.

/* fade in + opacity pulse */
.notification {
  animation:
    kf-fade-in 2s ease-out,
    kf-pulse 0.5s 2s ease-in-out infinite alternate;
  --kf-pulse-opacity-to: 0.5;
}

See the Pen Keyframes Tokens - Demo 10 [forked] by Amit Sheen.

Order Matters

A large part of the animations we work with use the transform property. In most cases, this is simply more convenient. It also has a performance advantage as transform animations can be GPU-accelerated. But if we use transforms, we need to accept that the order in which we perform our transformations matters. A lot.

In our keyframes so far, we’ve used individual transforms. According to the specs, these are always applied in a fixed order: first, the element gets translate, then rotate, then scale. This makes sense and is what most of us expect.

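A quick way to see this: with individual properties, declaration order has no effect, because the browser always applies them in the spec-defined sequence.

/* Both rules render identically: translate is applied first, then rotate,
   regardless of the order in which the properties are written */
.one-order {
  translate: 100px 0;
  rotate: 45deg;
}

.other-order {
  rotate: 45deg;
  translate: 100px 0;
}
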
However, if we use the transform property, the order in which the functions are written is the order in which they are applied. In this case, if we move something 100 pixels on the X-axis and then rotate it by 45 degrees, it is not the same as first rotating it by 45 degrees and then moving it 100 pixels.

/* Pink square: First translate, then rotate */ 
.example-one {
  transform: translateX(100px) rotate(45deg);
}

/* Green square: First rotate, then translate */
.example-two { 
  transform: rotate(45deg) translateX(100px);
}

See the Pen Keyframes Tokens - Demo 11 [forked] by Amit Sheen.

But according to the spec's transform order, all individual transforms (everything we've used for the keyframes tokens) happen before any transform functions. That means anything you set in the transform property will be applied after the animations. But if you set, for example, a static translate together with the kf-spin keyframes, that translate will happen before the animation. Confused yet?!

This leads to situations where static values can cause different results for the same animation, like in the following case:

/* Common animation for both spinners */ 
.spinner {
  animation: kf-spin 1s linear infinite;
}

/* Pink spinner: translate before rotate (individual transform) */
.spinner-pink {
  translate: 100% 50%;
}

/* Green spinner: rotate then translate (function order) */
.spinner-green {
  transform: translate(100%, 50%);
}

See the Pen Keyframes Tokens - Demo 12 [forked] by Amit Sheen.

You can see that the first spinner (pink) gets a translate that happens before the rotate of kf-spin, so it first moves to its place and then spins. The second spinner (green) gets a translate() function that happens after the individual transform, so the element first spins, then moves relative to its current angle, and we get that wide orbit effect.

No, this is not a bug. It is just one of those things we need to know about CSS and keep in mind when working with multiple animations or multiple transforms. If needed, you can also create an additional set of kf-spin-alt keyframes that rotate elements using the rotate() function.
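
A minimal sketch of what that alternative might look like (the article names kf-spin-alt but doesn’t define it; this version mirrors the kf-spin custom properties):

/* Sketch: spin via the transform property instead of the individual rotate.
   Note that animating transform will override any static transform value. */
@keyframes kf-spin-alt {
  from {
    transform: rotate(var(--kf-spin-from, 0deg));
  }
  to {
    transform: rotate(calc(var(--kf-spin-from, 0deg) + var(--kf-spin-to, 360deg) * var(--kf-spin-turns, 1)));
  }
}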

Reduced Motion

And while we’re talking about alternative keyframes, we cannot ignore the “no animation” option. One of the biggest advantages of using keyframes tokens is that accessibility can be baked in, and it is actually quite easy to do. By designing our keyframes with accessibility in mind, we can ensure that users who prefer reduced motion get a smoother, less distracting experience, without extra work or code duplication.

The exact meaning of “Reduced Motion” can change a bit from one animation to another, and from project to project, but here are a few important points to keep in mind:

Muting Keyframes

While some animations can be softened or slowed down, there are others that should disappear completely when reduced motion is requested. Pulse animations are a good example. To make sure these animations do not run in reduced motion mode, we can simply wrap them in the appropriate media query.


@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-pulse {
    from {
      scale: var(--kf-pulse-scale-from, 1);
      opacity: var(--kf-pulse-opacity-from, 1);
    }
    to {
      scale: var(--kf-pulse-scale-to, 1);
      opacity: var(--kf-pulse-opacity-to, 1);
    }
  }
}

This ensures that users who have set prefers-reduced-motion to reduce will not see the animation and will get an experience that matches their preference.

Instant In

There are some keyframes we cannot simply remove, such as entrance animations. The value still has to change; otherwise, the element won't end up with its correct final values. But with reduced motion, this transition from the initial value should be instant.

To achieve this, we’ll define an extra set of keyframes where the value jumps immediately to the end state. These become our default keyframes. Then, we’ll add the regular keyframes inside a media query for prefers-reduced-motion set to no-preference, just like in the previous example.

/* pop in instantly for reduced motion */
@keyframes kf-zoom {
  from, to {
    scale: var(--kf-zoom-to, 1);
  }
}

@media (prefers-reduced-motion: no-preference) {
  /* Original zoom keyframes */
  @keyframes kf-zoom {
    from {
      scale: var(--kf-zoom-from, 0.8);
    }
    to {
      scale: var(--kf-zoom-to, 1);
    }
  }
}

This way, users who prefer reduced motion will see the element appear instantly in its final state, while everyone else gets the animated transition.

The Soft Approach

There are cases where we do want to keep some movement, but much softer and calmer than the original animation. For example, we can replace a bounce entrance with a gentle fade-in.


@keyframes kf-bounce {
  /* Soft fade-in for reduced motion (a sketch: opacity instead of movement) */
  from {
    opacity: 0;
  }
  to {
    opacity: 1;
  }
}

@media (prefers-reduced-motion: no-preference) {
  @keyframes kf-bounce {
    /* Original bounce keyframes, as defined earlier */
  }
}

Now, users with reduced motion enabled still get a sense of appearance, but without the intense movement of a bounce or elastic animation.

With the building blocks in place, the next question is how to make them part of the actual workflow. Writing flexible keyframes is one thing, but making them reliable across a large project requires a few strategies that I had to learn the hard way.

Implementation Strategies & Best Practices

Once we have a solid library of keyframes tokens, the real challenge is how to bring them into everyday work.

  • The temptation is to drop all keyframes in at once and declare the problem solved, but in practice I have found that the best results come from gradual adoption. Start with the most common animations, such as fade or slide. These are easy wins that show immediate value without requiring big rewrites.
  • Naming is another point that deserves attention. A consistent prefix or namespace makes it obvious which animations are tokens and which are local one-offs. It also prevents accidental collisions and helps new team members recognize the shared system at a glance.
  • Documentation is just as important as the code itself. Even a short comment above each keyframes token can save hours of guessing later. A developer should be able to open the tokens file, scan for the effect they need, and copy the usage pattern straight into their component.
  • Flexibility is what makes this approach worth the effort. By exposing sensible custom properties, we give teams room to adapt the animation without breaking the system. At the same time, try not to overcomplicate. Provide the knobs that matter and keep the rest opinionated.
  • Finally, remember accessibility. Not every animation needs a reduced motion alternative, but many do. Baking in these adjustments early means we never have to retrofit them later, and it shows a level of care that our users will notice even if they never mention it.

In my experience, treating keyframes tokens as part of our design tokens workflow is what makes them stick. Once they are in place, they stop feeling like special effects and become part of the design language, a natural extension of how the product moves and responds.

Wrapping Up

Animations can be one of the most joyful parts of building interfaces, but without structure, they can also become one of the biggest sources of frustration. By treating keyframes as tokens, you take something that is usually messy and hard to manage and turn it into a clear, predictable system.

The real value is not just in saving a few lines of code. It is in the confidence that when you use a fade, slide, zoom, or spin, you know exactly how it will behave across the project. It is in the flexibility that comes from custom properties without the chaos of endless variations. And it is in the accessibility built into the foundation rather than added as an afterthought.

I have seen these ideas work in different teams and different codebases, and the pattern is always the same.

Once the tokens are in place, keyframes stop being a scattered collection of tricks and become part of the design language. They make the product feel more intentional, more consistent, and more alive.

If you take one thing from this article, let it be this: animations deserve the same care and structure we already give to colors, typography, and spacing. A small investment in keyframes tokens pays off every time your interface moves.

]]>
hello@smashingmagazine.com (Amit Sheen)
<![CDATA[From Chaos To Clarity: Simplifying Server Management With AI And Automation]]> https://smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/ https://smashingmagazine.com/2025/11/simplifying-server-management-ai-automation/ Tue, 18 Nov 2025 10:00:00 GMT This article is a sponsored by Cloudways

If you build or manage websites for a living, you know the feeling. Your day is a constant juggle; one moment you’re fine-tuning a design, the next you’re troubleshooting a slow server or a mysterious error. Daily management of a complex web of plugins, integrations, and performance tools often feels like you’re just reacting to problems—putting out fires instead of building something new.

This reactive cycle is exhausting, and it pulls your focus away from meaningful work and into the technical weeds. A recent industry event, Cloudways Prepathon 2025, put a sharp focus on this very challenge. The discussions made it clear: the future of web work demands a better way. It requires an infrastructure that’s ready for AI; one that can actively help you turn this daily chaos into clarity.

The stakes for performance are higher than ever.

Suhaib Zaheer, SVP of Managed Hosting at DigitalOcean, and Ali Ahmed Khan, Sr. Director of Product Management, shared a telling statistic during their panel: 53% of mobile visitors will leave a site if it takes more than three seconds to load.

Think about that for a second: in just three seconds, more than half of your potential traffic is gone. This isn’t just about a slow website, but about lost trust, abandoned carts, and missed opportunities. Performance is no longer just a feature; it’s the foundation of user experience. And in today’s landscape, automation is the key to maintaining it consistently.

So how do we stop reacting and start preventing?

The Old Way: A Constant State Of Alert

For too long, server management has worked like this: something breaks, you receive an alert (or worse, a client complaint), and you start digging. You log into your server, check logs, try to correlate different metrics, and eventually (hopefully) find the root cause. Then you manually apply a fix.

This process is fragile and relies on your constant attention while eating up hours that could be spent on development, strategy, or client work. For freelancers and small teams, this time is your most valuable asset. Every minute spent manually diagnosing a disk space issue or a web stack failure is a minute not spent on growing your business.

The problem isn't a lack of tools. It's that most tools just show you the data; they don't help you understand it or act on it. They add to the noise instead of providing clarity.

A New Approach: From Diagnosis To Automatic Resolution

This is where a shift towards intelligent automation changes the game. Tools like Cloudways Copilot, which became generally available earlier this year, are built specifically to simplify this workflow. The goal is straightforward: combine AI-driven diagnostics with automated fixes to predict and resolve performance issues before they affect your users.

Here’s a practical look at how it works.

Imagine your site starts running slowly. In the past, you'd begin the tedious investigation.

1. The AI Insights

Instead of a generic "high CPU" alert, you get a detailed insight. It tells you what happened (e.g., "MySQL process is consuming excessive resources"), why it happened (e.g., "caused by a poorly optimized query from a recent plugin update"), and provides a step-by-step guide to fix it manually. This alone cuts diagnosis time from 30-40 minutes down to about five. You understand the problem, not just the diagnosis.

2. The SmartFix

This is where it moves from helpful to transformative. For common issues, you don’t just get a manual guide. You get a one-click SmartFix button. After reviewing the actions Copilot will take, you can let it automatically resolve the issue. It applies the necessary steps safely and without you needing to touch a command line. This is the clarity we’re talking about. The system doesn’t just tell you about the problem; it solves it for you.

For developers managing multiple sites, this is a fundamental change. It means you can handle routine server issues at scale. A disk cleanup that would have required logging into ten different servers can now be handled with a few clicks. It frees your brain from repetitive troubleshooting and lets you focus on the work that actually requires your expertise.

Building An AI-Ready Foundation

The principles discussed at Prepathon go beyond any single tool. The theme was about building a resilient foundation. Meeky Hwang, CEO at Ndevr, introduced the "3E Framework," which perfectly applies here. A strong platform must balance:

  • Audience Experience
    What your visitors see and feel—blazing speed and seamless operation.
  • Creator Experience
    The workflow for you and your team—managing content and marketing without technical friction.
  • Developer Experience
    The backend foundation—server management that is secure, stable, and efficient.

AI-driven server management directly strengthens all three. A faster, more stable server improves the Audience Experience. Fewer emergencies and simpler workflows improve the Creator and Developer Experience. When these are aligned, you can scale with confidence.

This Isn’t About Replacing You

It’s important to be clear. This isn’t about replacing the developer but about augmenting your capabilities. As Vito Peleg, Co-founder & CEO at Atarim, noted during Prepathon:

“We're all becoming prompt engineers in the modern world. Our job is no longer to do the task, but to orchestrate the fleet of AI agents that can do it at a scale we never could alone.”

— Vito Peleg, Co-founder & CEO at Atarim

Think of Cloudways Copilot as an expert sysadmin on your team. It handles the routine, often tedious, work. It alerts you to what’s important and provides clear, actionable context. This gives you back the mental space and time to focus on architecture, innovation, and client strategy.

“The challenge isn’t managing servers anymore — it’s managing focus,”

Suhaib Zaheer noted.

“AI-driven infrastructure should help developers spend less time reacting to issues and more time creating better digital experiences.”

A Practical Path Forward

For freelancers, WordPress experts, and small agency developers, this shift offers a tangible way to:

  • Drastically reduce the hours spent manually troubleshooting infrastructure issues.
  • Implement predictive monitoring that catches slowdowns and bottlenecks early.
  • Manage your entire stack through clear, plain-English AI insights instead of raw data.
  • Balance speed, security, and uptime without needing an enterprise-scale budget or team.

The goal is to make powerful infrastructure simple, while also giving you back control and your time so you can focus on what you do best: creating exceptional web experiences.

You can use promo code BFCM5050 to get 50% off for 3 months plus 50 Free Migrations using Cloudways. This offer is valid from November 18th to December 4th, 2025.

]]>
hello@smashingmagazine.com (Mansoor Ahmed Khan)
<![CDATA[CSS Gamepad API Visual Debugging With CSS Layers]]> https://smashingmagazine.com/2025/11/css-gamepad-api-visual-debugging-css-layers/ https://smashingmagazine.com/2025/11/css-gamepad-api-visual-debugging-css-layers/ Fri, 14 Nov 2025 13:00:00 GMT When you plug in a controller, you mash buttons, move the sticks, pull the triggers… and as a developer, you see none of it. The browser’s picking it up, sure, but unless you’re logging numbers in the console, it’s invisible. That’s the headache with the Gamepad API.

It’s been around for years, and it’s actually pretty powerful. You can read buttons, sticks, triggers, the works. But most people don’t touch it. Why? Because there’s no feedback. No panel in developer tools. No clear way to know if the controller’s even doing what you think. It feels like flying blind.

That bugged me enough to build a little tool: Gamepad Cascade Debugger. Instead of staring at console output, you get a live, interactive view of the controller. Press something and it reacts on the screen. And with CSS Cascade Layers, the styles stay organized, so it’s cleaner to debug.

In this post, I’ll show you why debugging controllers is such a pain, how CSS helps clean it up, and how you can build a reusable visual debugger for your own projects.

Even if you are able to log every input, you’ll quickly end up with unreadable console spam. For example:

[0,0,1,0,0,0.5,0,...]
[0,0,0,0,1,0,0,...]
[0,0,1,0,0,0,0,...]

Can you tell what button was pressed? Maybe, but only after straining your eyes and missing a few inputs. So, no, debugging doesn’t come easily when it comes to reading inputs.

Problem 3: Lack Of Structure

Even if you throw together a quick visualizer, styles can quickly get messy. Default, active, and debug states can overlap, and without a clear structure, your CSS becomes brittle and hard to extend.

CSS Cascade Layers can help. They group styles into “layers” that are ordered by priority, so you stop fighting specificity and guessing, “Why isn’t my debug style showing?” Instead, you maintain separate concerns:

  • Base: The controller’s standard, initial appearance.
  • Active: Highlights for pressed buttons and moved sticks.
  • Debug: Overlays for developers (e.g., numeric readouts, guides, and so on).

If we were to define layers in CSS according to this, we’d have:

/* lowest to highest priority */
@layer base, active, debug;

@layer base {
  /* ... */
}

@layer active {
  /* ... */
}

@layer debug {
  /* ... */
}

Because each layer stacks predictably, you always know which rules win. That predictability makes debugging not just easier, but actually manageable.

We’ve covered the problem (invisible, messy input) and the approach (a visual debugger built with Cascade Layers). Now we’ll walk through the step-by-step process to build the debugger.

The Debugger Concept

The easiest way to make hidden input visible is to just draw it on the screen. That’s what this debugger does. Buttons, triggers, and joysticks all get a visual.

  • Press A: A circle lights up.
  • Nudge the stick: The circle slides around.
  • Pull a trigger halfway: A bar fills halfway.

Now you’re not staring at 0s and 1s, but actually watching the controller react live.

Of course, once you start piling on states like default, pressed, debug info, maybe even a recording mode, the CSS starts getting larger and more complex. That’s where cascade layers come in handy. Here’s a stripped-down example:

@layer base {
  .button {
    background: #222;
    border-radius: 50%;
    width: 40px;
    height: 40px;
  }
}

@layer active {
  .button.pressed {
    background: #0f0; /* bright green */
  }
}

@layer debug {
  .button::after {
    content: attr(data-value);
    font-size: 12px;
    color: #fff;
  }
}

The layer order matters: base → active → debug.

  • base draws the controller.
  • active handles pressed states.
  • debug throws on overlays.

Breaking it up like this means you’re not fighting weird specificity wars. Each layer has its place, and you always know what wins.

Building It Out

Let’s get something on screen first. It doesn’t need to look good — just needs to exist so we have something to work with.

<h1>Gamepad Cascade Debugger</h1>

<!-- Main controller container -->
<div id="controller">
  <!-- Action buttons -->
  <div id="btn-a" class="button">A</div>
  <div id="btn-b" class="button">B</div>
  <div id="btn-x" class="button">X</div>

  <!-- Pause/menu button (represented as two bars) -->
  <div>
    <div id="pause1" class="pause"></div>
    <div id="pause2" class="pause"></div>
  </div>
</div>

<!-- Toggle button to start/stop the debugger -->
<button id="toggle">Toggle Debug</button>

<!-- Status display for showing which buttons are pressed -->
<div id="status">Debugger inactive</div>

<script src="script.js"></script>

That’s literally just boxes. Not exciting yet, but it gives us handles to grab later with CSS and JavaScript.

Okay, I’m using cascade layers here because it keeps stuff organized once you add more states. Here’s a rough pass:

/* ===================================
   CASCADE LAYERS SETUP
   Order matters: base → active → debug
   =================================== */

/* Define layer order upfront */
@layer base, active, debug;

/* Layer 1: Base styles - default appearance */
@layer base {
  .button {
    background: #333;
    border-radius: 50%;
    width: 70px;
    height: 70px;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .pause {
    width: 20px;
    height: 70px;
    background: #333;
    display: inline-block;
  }
}

/* Layer 2: Active states - handles pressed buttons */
@layer active {
  .button.active {
    background: #0f0; /* Bright green when pressed */
    transform: scale(1.1); /* Slightly enlarges the button */
  }

  .pause.active {
    background: #0f0;
    transform: scaleY(1.1); /* Stretches vertically when pressed */
  }
}

/* Layer 3: Debug overlays - developer info */
@layer debug {
  .button::after {
    content: attr(data-value); /* Shows the numeric value */
    font-size: 12px;
    color: #fff;
  }
}

The beauty of this approach is that each layer has a clear purpose. The base layer can never override active, and active can never override debug, regardless of specificity. This eliminates the CSS specificity wars that usually plague debugging tools.

Now it looks like clusters of buttons sitting on a dark background. Honestly, not too bad.

Adding the JavaScript

JavaScript time. This is where the controller actually does something. We’ll build this step by step.

Step 1: Set Up State Management

First, we need variables to track the debugger’s state:

// ===================================
// STATE MANAGEMENT
// ===================================

let running = false; // Tracks whether the debugger is active
let rafId; // Stores the requestAnimationFrame ID for cancellation

These variables control the animation loop that continuously reads gamepad input.

Step 2: Grab DOM References

Next, we get references to all the HTML elements we’ll be updating:

// ===================================
// DOM ELEMENT REFERENCES
// ===================================

const btnA = document.getElementById("btn-a");
const btnB = document.getElementById("btn-b");
const btnX = document.getElementById("btn-x");
const pause1 = document.getElementById("pause1");
const pause2 = document.getElementById("pause2");
const status = document.getElementById("status");

Storing these references up front is more efficient than querying the DOM repeatedly.

Step 3: Add Keyboard Fallback

For testing without a physical controller, we’ll map keyboard keys to buttons:

// ===================================
// KEYBOARD FALLBACK (for testing without a controller)
// ===================================

const keyMap = {
  "a": btnA,
  "b": btnB,
  "x": btnX,
  "p": [pause1, pause2] // 'p' key controls both pause bars
};

This lets us test the UI by pressing keys on a keyboard.

Step 4: Create The Main Update Loop

Here’s where the magic happens. This function runs continuously and reads gamepad state:

// ===================================
// MAIN GAMEPAD UPDATE LOOP
// ===================================

function updateGamepad() {
  // Get all connected gamepads
  const gamepads = navigator.getGamepads();
  if (!gamepads) return;

  // Use the first connected gamepad
  const gp = gamepads[0];

  if (gp) {
    // Update button states by toggling the "active" class
    btnA.classList.toggle("active", gp.buttons[0].pressed);
    btnB.classList.toggle("active", gp.buttons[1].pressed);
    btnX.classList.toggle("active", gp.buttons[2].pressed);

    // Handle pause button (button index 9 on most controllers)
    const pausePressed = gp.buttons[9].pressed;
    pause1.classList.toggle("active", pausePressed);
    pause2.classList.toggle("active", pausePressed);

    // Build a list of currently pressed buttons for status display
    let pressed = [];
    gp.buttons.forEach((btn, i) => {
      if (btn.pressed) pressed.push("Button " + i);
    });

    // Update status text if any buttons are pressed
    if (pressed.length > 0) {
      status.textContent = "Pressed: " + pressed.join(", ");
    }
  }

  // Continue the loop if debugger is running
  if (running) {
    rafId = requestAnimationFrame(updateGamepad);
  }
}

The classList.toggle() method adds or removes the active class based on whether the button is pressed, which triggers our CSS layer styles.

Step 5: Handle Keyboard Events

These event listeners make the keyboard fallback work:

// ===================================
// KEYBOARD EVENT HANDLERS
// ===================================

document.addEventListener("keydown", (e) => {
  if (keyMap[e.key]) {
    // Handle single or multiple elements
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.add("active"));
    } else {
      keyMap[e.key].classList.add("active");
    }
    status.textContent = "Key pressed: " + e.key.toUpperCase();
  }
});

document.addEventListener("keyup", (e) => {
  if (keyMap[e.key]) {
    // Remove active state when key is released
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.remove("active"));
    } else {
      keyMap[e.key].classList.remove("active");
    }
    status.textContent = "Key released: " + e.key.toUpperCase();
  }
});

Step 6: Add Start/Stop Control

Finally, we need a way to toggle the debugger on and off:

// ===================================
// TOGGLE DEBUGGER ON/OFF
// ===================================

document.getElementById("toggle").addEventListener("click", () => {
  running = !running; // Flip the running state

  if (running) {
    status.textContent = "Debugger running...";
    updateGamepad(); // Start the update loop
  } else {
    status.textContent = "Debugger inactive";
    cancelAnimationFrame(rafId); // Stop the loop
  }
});

So yeah, press a button and it glows. Push the stick and it moves. That’s it.

One more thing: raw values. Sometimes you just want to see numbers, not lights.
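The debug layer from earlier reads attr(data-value), so something has to write that attribute. Here is a minimal sketch of how that wiring could look; the formatter name and the commented loop are illustrative, not the original project’s code.

```javascript
// Hedged sketch: write each button's analog value into data-value so
// the debug layer's attr(data-value) rule renders a live numeric
// readout next to the lights.
function formatButtonValue(value) {
  // Gamepad button values range from 0 to 1; two decimals keeps it readable
  return value.toFixed(2);
}

// Inside updateGamepad(), after the classList.toggle() calls:
// [btnA, btnB, btnX].forEach((el, i) => {
//   el.dataset.value = formatButtonValue(gp.buttons[i].value);
// });
```

Because the value lives in a data attribute, the CSS debug layer stays purely declarative and the JavaScript never has to know how the readout is styled.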

At this stage, you should see:

  • A simple on-screen controller,
  • Buttons that react as you interact with them, and
  • An optional debug readout showing pressed button indices.

To make this less abstract, here’s a quick demo of the on-screen controller reacting in real time:

Now, pressing Start Recording logs everything until you hit Stop Recording.
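The recording step is what fills the frames array that the export and replay code below relies on. Here is one minimal sketch of it, with the Start/Stop button wiring omitted; the names recordStart and captureFrame are assumptions for illustration, not the original code.

```javascript
// Hedged sketch of the recording step: sample the gamepad's state once
// per animation frame into a timestamped array.
let frames = [];
let recordStart = 0;

function captureFrame(gp, now) {
  // Store a timestamped copy of the pad's state, not a live reference
  frames.push({
    t: now - recordStart,
    buttons: gp.buttons.map(b => ({ pressed: b.pressed, value: b.value })),
    axes: [...gp.axes]
  });
}

// In the update loop, while recording is active:
// captureFrame(gp, performance.now());
```

Copying buttons and axes (rather than storing the gamepad object itself) matters: the browser mutates the gamepad state in place, so a live reference would make every frame look identical.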

2. Exporting Data to CSV/JSON

Once we have a log, we’ll want to save it.

<div class="controls">
  <button id="export-json" class="btn">Export JSON</button>
  <button id="export-csv" class="btn">Export CSV</button>
</div>

Step 1: Create The Download Helper

First, we need a helper function that handles file downloads in the browser:

// ===================================
// FILE DOWNLOAD HELPER
// ===================================

function downloadFile(filename, content, type = "text/plain") {
  // Create a blob from the content
  const blob = new Blob([content], { type });
  const url = URL.createObjectURL(blob);

  // Create a temporary download link and click it
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();

  // Clean up the object URL after download
  setTimeout(() => URL.revokeObjectURL(url), 100);
}

This function works by creating a Blob (binary large object) from your data, generating a temporary URL for it, and programmatically clicking a download link. The cleanup ensures we don’t leak memory.

Step 2: Handle JSON Export

JSON is perfect for preserving the complete data structure:

// ===================================
// EXPORT AS JSON
// ===================================

document.getElementById("export-json").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Create a payload with metadata and frames
  const payload = {
    createdAt: new Date().toISOString(),
    frames
  };

  // Download as formatted JSON
  downloadFile(
    "gamepad-log.json", 
    JSON.stringify(payload, null, 2), 
    "application/json"
  );
});

The JSON format keeps everything structured and easily parseable, making it ideal for loading back into dev tools or sharing with teammates.

Step 3: Handle CSV Export

For CSV exports, we need to flatten the hierarchical data into rows and columns:

// ===================================
// EXPORT AS CSV
// ===================================

document.getElementById("export-csv").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Build CSV header row (columns for timestamp, all buttons, all axes)
  const headerButtons = frames[0].buttons.map((_, i) => `btn${i}`);
  const headerAxes = frames[0].axes.map((_, i) => `axis${i}`);
  const header = ["t", ...headerButtons, ...headerAxes].join(",") + "\n";

  // Build CSV data rows
  const rows = frames.map(f => {
    const btnVals = f.buttons.map(b => b.value);
    return [f.t, ...btnVals, ...f.axes].join(",");
  }).join("\n");

  // Download as CSV
  downloadFile("gamepad-log.csv", header + rows, "text/csv");
});

CSV is brilliant for data analysis because it opens directly in Excel or Google Sheets, letting you create charts, filter data, or spot patterns visually.

Now that the export buttons are in, you’ll see two new options on the panel: Export JSON and Export CSV. JSON is nice if you want to throw the raw log back into your dev tools or poke around the structure. CSV, on the other hand, opens straight into Excel or Google Sheets so you can chart, filter, or compare inputs. The following figure shows what the panel looks like with those extra controls.

3. Snapshot System

Sometimes you don’t need a full recording, just a quick “screenshot” of input states. That’s where a Take Snapshot button helps.

<div class="controls">
  <button id="snapshot" class="btn">Take Snapshot</button>
</div>

And the JavaScript:

// ===================================
// TAKE SNAPSHOT
// ===================================

document.getElementById("snapshot").addEventListener("click", () => {
  // Get all connected gamepads
  const pads = navigator.getGamepads();
  const activePads = [];

  // Loop through and capture the state of each connected gamepad
  for (const gp of pads) {
    if (!gp) continue; // Skip empty slots

    activePads.push({
      id: gp.id, // Controller name/model
      timestamp: performance.now(),
      buttons: gp.buttons.map(b => ({ 
        pressed: b.pressed, 
        value: b.value 
      })),
      axes: [...gp.axes]
    });
  }

  // Check if any gamepads were found
  if (!activePads.length) {
    console.warn("No gamepads connected for snapshot.");
    alert("No controller detected!");
    return;
  }

  // Log and notify user
  console.log("Snapshot:", activePads);
  alert(`Snapshot taken! Captured ${activePads.length} controller(s).`);
});

Snapshots freeze the exact state of your controller at one moment in time.

4. Ghost Input Replay

Now for the fun one: ghost input replay. This takes a log and plays it back visually as if a phantom player was using the controller.

<div class="controls">
  <button id="replay" class="btn">Replay Last Recording</button>
</div>

JavaScript for replay:

// ===================================
// GHOST REPLAY
// ===================================

document.getElementById("replay").addEventListener("click", () => {
  // Ensure we have a recording to replay
  if (!frames.length) {
    alert("No recording to replay!");
    return;
  }

  console.log("Starting ghost replay...");

  // Track timing for synced playback
  let startTime = performance.now();
  let frameIndex = 0;

  // Replay animation loop
  function step() {
    const now = performance.now();
    const elapsed = now - startTime;

    // Process all frames that should have occurred by now
    while (frameIndex < frames.length && frames[frameIndex].t <= elapsed) {
      const frame = frames[frameIndex];

      // Update UI with the recorded button states
      btnA.classList.toggle("active", frame.buttons[0].pressed);
      btnB.classList.toggle("active", frame.buttons[1].pressed);
      btnX.classList.toggle("active", frame.buttons[2].pressed);

      // Update status display
      let pressed = [];
      frame.buttons.forEach((btn, i) => {
        if (btn.pressed) pressed.push("Button " + i);
      });
      if (pressed.length > 0) {
        status.textContent = "Ghost: " + pressed.join(", ");
      }

      frameIndex++;
    }

    // Continue loop if there are more frames
    if (frameIndex < frames.length) {
      requestAnimationFrame(step);
    } else {
      console.log("Replay finished.");
      status.textContent = "Replay complete";
    }
  }

  // Start the replay
  step();
});

To make debugging a bit more hands-on, I added a ghost replay. Once you’ve recorded a session, you can hit replay and watch the UI act it out, almost like a phantom player is running the pad. A new Replay Last Recording button shows up in the panel for this.

Hit Record, mess around with the controller a bit, stop, then replay. The UI just echoes everything you did, like a ghost following your inputs.

Why bother with these extras?

  • Recording/export makes it easy for testers to show exactly what happened.
  • Snapshots freeze a moment in time, super useful when you’re chasing odd bugs.
  • Ghost replay is great for tutorials, accessibility checks, or just comparing control setups side by side.

At this point, it’s not just a neat demo anymore, but something you could actually put to work.

Real-World Use Cases

Now we’ve got this debugger that can do a lot. It shows live input, records logs, exports them, and even replays stuff. But the real question is: who actually cares? Who’s this useful for?

Game Developers

Controllers are part of the job, but debugging them? Usually a pain. Imagine you’re testing a fighting game combo, like ↓ → + punch. Instead of praying you pressed it the same way twice, you record it once and replay it. Done. Or you swap JSON logs with a teammate to check if your multiplayer code reacts the same on their machine. That’s huge.

Accessibility Practitioners

This one’s close to my heart. Not everyone plays with a “standard” controller. Adaptive controllers throw out weird signals sometimes. With this tool, you can see exactly what’s happening. Teachers, researchers, whoever. They can grab logs, compare them, or replay inputs side-by-side. Suddenly, invisible stuff becomes obvious.

Quality Assurance Testing

Testers usually write notes like “I mashed buttons here and it broke.” Not very helpful. Now? They can capture the exact presses, export the log, and send it off. No guessing.

Educators

If you’re making tutorials or YouTube vids, ghost replay is gold. You can literally say, “Here’s what I did with the controller,” while the UI shows it happening. Makes explanations way clearer.

Beyond Games

And yeah, this isn’t just about games. People have used controllers for robots, art projects, and accessibility interfaces. Same issue every time: what is the browser actually seeing? With this, you don’t have to guess.

Conclusion

Debugging a controller input has always felt like flying blind. Unlike the DOM or CSS, there’s no built-in inspector for gamepads; it’s just raw numbers in the console, easily lost in the noise.

With a few hundred lines of HTML, CSS, and JavaScript, we built something different:

  • A visual debugger that makes invisible inputs visible.
  • A layered CSS system that keeps the UI clean and debuggable.
  • A set of enhancements (recording, exporting, snapshots, ghost replay) that elevate it from demo to developer tool.

This project shows how far you can go by mixing the Web Platform’s power with a little creativity in CSS Cascade Layers.

The tool I just explained in its entirety is open-source. You can clone the GitHub repo and try it for yourself.

But more importantly, you can make it your own. Add your own layers. Build your own replay logic. Integrate it with your game prototype. Or even use it in ways I haven’t imagined. For teaching, accessibility, or data analysis.

At the end of the day, this isn’t just about debugging gamepads. It’s about shining a light on hidden inputs, and giving developers the confidence to work with hardware that the web still doesn’t fully embrace.

So, plug in your controller, open up your editor, and start experimenting. You might be surprised at what your browser and your CSS can truly accomplish.

]]>
hello@smashingmagazine.com (Godstime Aburu)
<![CDATA[Older Tech In The Browser Stack]]> https://smashingmagazine.com/2025/11/older-tech-browser-stack/ https://smashingmagazine.com/2025/11/older-tech-browser-stack/ Thu, 13 Nov 2025 08:00:00 GMT I’ve been in front-end development long enough to see a trend over the years: younger developers working with a new paradigm of programming without understanding the historical context of it.

It is, of course, perfectly understandable to not know something. The web is a very big place with a diverse set of skills and specialties, and we don’t always know what we don’t know. Learning in this field is an ongoing journey rather than something that happens once and ends.

Case in point: Someone on my team asked if it was possible to tell if users navigate away from a particular tab in the UI. I pointed out JavaScript’s beforeunload event. But those who have tackled this before know this is possible because they have been hit with alerts about unsaved data on other sites, for which beforeunload is a typical use case. I also pointed out the pagehide and visibilitychange events to my colleague for good measure.
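For anyone who hasn’t met these events, here is a minimal sketch of all three. The handler bodies are illustrative, not from the project in question, and the guard simply lets the snippet load outside a browser.

```javascript
// A tiny helper, plus the three lifecycle events mentioned above.
function tabIsHidden(visibilityState) {
  return visibilityState === "hidden";
}

if (typeof window !== "undefined") {
  // Asks the browser to show its generic "unsaved changes" prompt
  window.addEventListener("beforeunload", (e) => {
    e.preventDefault();
  });

  // Fires when the tab is backgrounded or brought back to the foreground
  document.addEventListener("visibilitychange", () => {
    if (tabIsHidden(document.visibilityState)) {
      // Pause timers, flush pending state, etc.
    }
  });

  // Fires on unload and when the page enters the back/forward cache
  window.addEventListener("pagehide", () => {});
}
```

Note that visibilitychange fires on tab switches while the page stays alive, whereas pagehide and beforeunload are about the page going away; that distinction is exactly why all three came up in the conversation.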

How did I know about that? Because it came up in another project, not because I studied up on it when initially learning JavaScript.

The fact is that modern front-end frameworks are standing on the shoulders of the technology giants that preceded them. They abstract development practices, often for a better developer experience that reduces, or even eliminates, the need to know or touch what have traditionally been essential front-end concepts everyone probably ought to know.

Consider the CSS Object Model (CSSOM). You might expect that anyone working in CSS and JavaScript has a bunch of hands-on CSSOM experience, but that’s not always going to be the case.

There was a React project for an e-commerce site I worked on where we needed to load a stylesheet for the currently selected payment provider. The problem was that the stylesheet was loading on every page when it was only really needed on a specific page. The developer tasked with making this happen hadn’t ever loaded a stylesheet dynamically. Again, this is totally understandable when React abstracts away the traditional approach you might have reached for.
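The traditional approach that paragraph alludes to is only a few lines: create a link element and append it on the page that needs it. This is a hedged sketch; the URL-building helper and the "stripe" example are assumptions for illustration.

```javascript
// Build the stylesheet URL for a given payment provider (illustrative)
function stylesheetHref(provider) {
  return "/styles/" + provider + ".css";
}

// Append a <link rel="stylesheet"> so the browser fetches and applies it
function loadStylesheet(href) {
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = href;
  document.head.appendChild(link);
  return link;
}

// On the payment page only:
// loadStylesheet(stylesheetHref("stripe"));
```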

The CSSOM is likely not something you need in your everyday work. But it is likely you will need to interact with it at some point, even in a one-off instance.

These experiences inspired me to write this article. There are many existing web features and technologies in the wild that you may never touch directly in your day-to-day work. Perhaps you’re fairly new to web development and are simply unaware of them because you’re steeped in the abstraction of a specific framework that doesn’t require you to know it deeply, or even at all.

I’m speaking specifically about XML, which many of us know is an ancient language not totally dissimilar from HTML.

I’m bringing this up because of recent WHATWG discussions suggesting that XSLT, a significant chunk of the XML stack, should be removed from browsers. This is exactly the sort of older, existing technology we’ve had for years that could be used for something as practical as the CSSOM situation my team was in.

Have you worked with XSLT before? Let’s see if we lean heavily into this older technology and leverage its features outside the context of XML to tackle real-world problems today.

XPath: The Central API

The most important XML technology that is perhaps the most useful outside of a straight XML perspective is XPath, a query language that allows you to find any node or attribute in a markup tree with one root element. I have a personal affection for XSLT, but that also relies on XPath, and personal affection must be put aside in ranking importance.

The argument for removing XSLT does not make any mention of XPath, so I suppose it is still allowed. That’s good because XPath is the central and most important API in this suite of technologies, especially when trying to find something to use outside normal XML usage. It is important because, while CSS selectors can be used to find most of the elements in your page, they cannot find them all. Furthermore, CSS selectors cannot be used to find an element based on its current position in the DOM.

XPath can.

Now, some of you reading this might know XPath, and some might not. XPath is a pretty big area of technology, and I can’t really teach all the basics and also show you cool things to do with it in a single article like this. I actually tried writing that article, but the average Smashing Magazine publication doesn’t go over 5,000 words. I was already at more than 2,000 words while only halfway through the basics.

So, I’m going to start doing cool stuff with XPath and give you some links that you can use for the basics if you find this stuff interesting.

Combining XPath & CSS

XPath can do lots of things that CSS selectors can’t when querying elements. But CSS selectors can also do a few things that XPath can’t, namely, query elements by class name.

CSS: .myClass
XPath: //*[contains(@class, "myClass")]

In this example, the CSS selector queries elements that have .myClass in their class list. Meanwhile, the XPath expression queries elements whose class attribute contains the substring “myClass” anywhere. In other words, it matches elements with the .myClass classname, but it also matches elements whose classnames merely contain that string, such as .myClass2. XPath is broader in that sense.
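To make the difference concrete, here is a small plain-JavaScript sketch of my own (not part of either API) contrasting the substring test that contains(@class, "myClass") performs with the token test that a CSS class selector performs. Incidentally, the usual XPath idiom for an exact class match is contains(concat(' ', normalize-space(@class), ' '), ' myClass ').

```javascript
// Substring matching, like XPath's contains(@class, "myClass"):
// true for "myClass", but also for "myClass2".
const containsClass = (classAttr, name) => classAttr.includes(name);

// Token matching, which is what the CSS selector .myClass does:
// the class attribute is split on whitespace and compared token by token.
const hasClassToken = (classAttr, name) =>
  classAttr.trim().split(/\s+/).includes(name);

containsClass("myClass2 other", "myClass");  // true (a false positive)
hasClassToken("myClass2 other", "myClass");  // false (what CSS would do)
hasClassToken("foo myClass bar", "myClass"); // true
```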

So, no. I’m not suggesting that we ought to toss out CSS and start selecting all elements via XPath. That’s not the point.

The point is that XPath can do things that CSS cannot and could still be very useful, even though it is an older technology in the browser stack and may not seem obvious at first glance.

Let’s use the two technologies together not only because we can, but because we’ll learn something about XPath in the process, making it another tool in your stack — one you might not have known has been there all along!

The problem is that JavaScript’s document.evaluate method and the various query selector methods we use with the CSS APIs for JavaScript are incompatible.

I have made a compatible querying API to get us started, though admittedly, I have not put a lot of thought into it since it’s a departure from what we’re doing here. Here’s a fairly simple working example of a reusable query constructor:

See the Pen queryXPath [forked] by Bryan Rasmussen.

I’ve added two methods on the document object: queryCSSSelectors (which is essentially querySelectorAll) and queryXPaths. Both of these return a queryResults object:

{
  queryType: nodes | string | number | boolean,
  results: any[] // html elements, xml elements, strings, numbers, booleans,
  queryCSSSelectors: (query: string, amend: boolean) => queryResults,
  queryXpaths: (query: string, amend: boolean) => queryResults
}

The queryCSSSelectors and queryXpaths functions run the query you give them over the elements in the results array, as long as the results array is of type nodes, of course. Otherwise, it will return a queryResult with an empty array and a type of nodes. If the amend property is set to true, the functions will change their own queryResults.

Under no circumstances should this be used in a production environment. I am doing it this way purely to demonstrate the various effects of using the two query APIs together.

Example Queries

I want to show a few examples of different XPath queries that demonstrate some of the powerful things they can do and how they can be used in place of other approaches.

The first example is //li/text(). This queries all li elements and returns their text nodes. So, if we were to query the following HTML:

<ul>
  <li>one</li>
  <li>two</li>
  <li>three</li>
</ul>

…this is what is returned:

{"queryType":"xpathEvaluate","results":["one","two","three"],"resultType":"string"}

In other words, we get the following array: ["one","two","three"].

Normally, you would query for the li elements to get that, turn the result of that query into an array, map the array, and return the text node of each element. But we can do that more concisely with XPath:

document.queryXPaths("//li/text()").results

Notice that the way to get a text node is to use text(), which looks like a function signature — and it is. It returns the text node of an element. In our example, there are three li elements in the markup, each containing text ("one", "two", and "three").

Let’s look at one more example of a text() query. Assume this is our markup:

<a href="/login.html">Sign In</a>

Let’s write a query that returns the href attribute value:

document.queryXPaths("//a[text() = 'Sign In']/@href").results

This is an XPath query on the current document, just like the last example, but this time we return the href attribute of a link (a element) that contains the text “Sign In”. The actual returned result is ["/login.html"].

XPath Functions Overview

There are a number of XPath functions that you may be unfamiliar with. Several are worth knowing about, including the following:

  • starts-with
    Returns true if a string starts with another string. For example, starts-with(@href, 'http:') returns true if an href attribute starts with http:.
  • contains
    Returns true if a string contains another string. For example, contains(text(), "Smashing Magazine") returns true if a text node contains the words “Smashing Magazine” in it anywhere.
  • count
    Returns a count of how many matches there are to a query. For example, count(//*[starts-with(@href, 'http:')]) returns a count of how many elements in the context node have an href attribute beginning with http:.
  • substring
    Similar to JavaScript substring, except you pass the string as an argument, positions are 1-based, and the third argument is a length. For example, substring("my text", 2, 4) returns "y te".
  • substring-before
    Returns the part of a string before another string. For example, substring-before("my text", " ") returns "my". If the second string doesn’t occur in the first, as in substring-before("hi", "bye"), it returns an empty string.
  • substring-after
    Returns the part of a string after another string. For example, substring-after("my text", " ") returns "text". Similarly, substring-after("hi", "bye") returns an empty string.
  • normalize-space
    Returns the argument string with whitespace normalized by stripping leading and trailing whitespace and replacing sequences of whitespace characters with a single space.
  • not
    Returns boolean true if the argument is false; otherwise false.
  • true
    Returns boolean true.
  • false
    Returns boolean false.
  • concat
    The same as JavaScript concat, except you do not call it as a method on a string. Instead, you pass in all the strings you want to concatenate as arguments.
  • string-length
    Unlike JavaScript’s length property, this is a function that returns the length of the string it is given as an argument.
  • translate
    Replaces each character listed in the second argument with the corresponding character in the third argument. For example, translate("abcdef", "abc", "XYZ") outputs XYZdef.

Aside from these particular XPath functions, there are a number of other functions that work just the same as their JavaScript counterparts — or counterparts in basically any programming language — that you would probably also find useful, such as floor, ceiling, round, sum, and so on.

The following demo illustrates each of these functions:

See the Pen XPath Numerical functions [forked] by Bryan Rasmussen.

Note that, like most of the string manipulation functions, many of the numerical ones take a single input. This is, of course, because they are supposed to be used for querying, as in the last XPath example:

//li[floor(text()) > 250]/@val

If you use them, as most of the examples do, you will end up running it on the first node that matches the path.

There are also some type conversion functions you should probably avoid because JavaScript already has its own type conversion problems. But there can be times when you want to convert a string to a number in order to check it against some other number.

The type conversion functions are boolean, number, and string. Together with node-sets, these are the important XPath datatypes.

And as you might imagine, most of these functions can be used on datatypes that are not DOM nodes. For example, substring-after takes a string as we’ve already covered, but it could be the string from an href attribute. It can also just be a string:

const testSubstringAfter = document.queryXPaths("substring-after('hello world',' ')");

Obviously, this example will give us back the results array as ["world"]. To show this in action, I have made a demo page using functions against things that are not DOM nodes:

See the Pen queryXPath [forked] by Bryan Rasmussen.

You should note the surprising aspect of the translate function, which is that if you have a character in the second argument (i.e., the list of characters you want translated) and no matching character to translate to, that character gets removed from the output.

Thus, this:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','*')

…results in the string, including spaces:

[" * *  ** "]

This means that the letter “a” is being translated to an asterisk (*), but every other character that does not have a translation given the target string is completely removed. The whitespace is all we have left between the translated “a” characters.

Then again, this query:

translate('Hello, My Name is Inigo Montoya, you killed my father, prepare to die','abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ,','**************************************************')

…does not have the problem and outputs a result that looks like this:

"***** ** **** ** ***** ******* *** ****** ** ****** ******* ** ***"

It might strike you that there is no easy way in JavaScript to do exactly what the XPath translate function does, although for many use cases, replaceAll with regular expressions can handle it.
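For comparison, the core of XPath’s translate, including the character-removal behaviour discussed above, can be sketched in a few lines of plain JavaScript (my own sketch, not the demo’s code):

```javascript
// A plain-JavaScript take on XPath's translate(input, from, to): each character
// of `from` maps to the character at the same position in `to`; characters in
// `from` that have no counterpart in `to` are removed from the output entirely.
const translate = (input, from, to) =>
  [...input]
    .map((ch) => {
      const i = from.indexOf(ch);
      if (i === -1) return ch;           // not listed in `from`: keep as-is
      return i < to.length ? to[i] : ""; // mapped, or removed if `to` is short
    })
    .join("");

translate("abcdef", "abc", "XYZ"); // "XYZdef"
translate("abcdef", "abc", "*");   // "*def" ("b" and "c" are removed)
```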

You could use the same approach I have demonstrated, but that is suboptimal if all you want is to translate the strings. The following demo wraps XPath’s translate function to provide a JavaScript version:

See the Pen translate function [forked] by Bryan Rasmussen.

Where might you use something like this? Consider Caesar Cipher encryption with a three-place offset (e.g., top-of-the-line encryption from 48 B.C.):

translate("Caesar is planning to cross the Rubicon!", 
 "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz",
  "XYZABCDEFGHIJKLMNOPQRSTUVWxyzabcdefghijklmnopqrstuvw")

The input text “Caesar is planning to cross the Rubicon!” results in “Zxbpxo fp mixkkfkd ql zolpp qeb Oryfzlk!”

To give another quick example of different possibilities, I made a metal function that takes a string input and uses translate to add an umlaut to every character that can take one.

See the Pen metal function [forked] by Bryan Rasmussen.

const metal = (str) => {
  // translate() here is the JavaScript wrapper around XPath's translate
  // function from the earlier demo.
  return translate(str, "AOUaou", "ÄÖÜäöü");
};

And, if given the text “Motley Crue rules, rock on dudes!”, returns “Mötley Crüe rüles, röck ön düdes!”

Obviously, one might have all sorts of parody uses of this function. If that’s you, then this TVTropes article ought to provide you with plenty of inspiration.

Using CSS With XPath

Remember our main reason for using CSS selectors together with XPath: CSS pretty much understands what a class is, whereas the best you can do with XPath is string comparisons of the class attribute. That will work in most cases.

But if you were to ever run into a situation where, say, someone created classes named .primaryLinks and .primaryLinks2 and you were using XPath to get the .primaryLinks class, then you would likely run into problems. As long as there’s nothing silly like that, you would probably use XPath. But I am sad to report that I have worked at places where people do those types of silly things.

Here’s another demo using CSS and XPath together. It shows what happens when we use the code to run an XPath on a context node that is not the document’s node.

See the Pen css and xpath together [forked] by Bryan Rasmussen.

The CSS query is .relatedarticles a, which fetches the two a elements in a div assigned a .relatedarticles class.

After that are three “bad” queries, that is to say, queries that do not do what we want them to do when running with these elements as the context node.

I can explain why they are behaving differently than you might expect. The three bad queries in question are:

  • //text(): Returns all the text in the document.
  • //a/text(): Returns all the text inside of links in the document.
  • ./a/text(): Returns no results.

The reason for these results is that while your context is the a elements returned from the CSS query, // goes against the whole document. This is the strength of XPath; CSS cannot go from a node up to an ancestor and then to a sibling of that ancestor, and walk down to a descendant of that sibling. But XPath can.

Meanwhile, ./ queries the children of the current node, where the dot (.) represents the current node, and the forward slash (/) represents going to some child node — whether it is an attribute, element, or text is determined by the next part of the path. But there is no child a element selected by the CSS query, thus that query also returns nothing.

There are three good queries in that last demo:

  • .//text(),
  • ./text(),
  • normalize-space(./text()).

The normalize-space query demonstrates XPath function usage, but also fixes a problem included in the other queries. The HTML is structured like this:

<a href="https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/">
  Automating Your Feature Testing With Selenium WebDriver
</a>

The query returns a line feed at the beginning and end of the text node, and normalize-space removes this.

This pattern of passing an XPath expression as the input to a function is not limited to normalize-space; it works with other functions, too. The following demo shows a number of examples:

See the Pen xpath functions examples [forked] by Bryan Rasmussen.

The first example shows a problem you should watch out for. Specifically, the following code:

document.queryXPaths("substring-after(//a/@href,'https://')");

…returns only one string.

It makes sense, right? These functions do not return arrays but rather single strings or single numbers. Running the function anywhere with multiple results only returns the first result.

The second result shows what we really want:

document.queryCSSSelectors("a").queryXPaths("substring-after(./@href,'https://')");

…which returns an array of two strings.

XPath functions can be nested just like functions in JavaScript. So, if we know the Smashing Magazine URL structure, we could do the following (using template literals is recommended):

`translate(
  substring(
    substring-after(./@href, 'www.smashingmagazine.com/'),
    9),
  '/', '')`

This is getting complex enough that it needs a comment describing what it does: take everything in the URL’s href attribute after www.smashingmagazine.com/, keep the substring starting at the ninth character (XPath positions are 1-based, so this drops the eight-character date prefix), then translate the forward slash (/) character to nothing so as to get rid of the trailing forward slash.

The resulting array:

["feature-testing-selenium-webdriver","automated-test-results-improve-accessibility"]
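For comparison, the same decomposition can be mirrored step by step in plain JavaScript (a sketch using the URL from the earlier example):

```javascript
// Mirror of the nested XPath expression, one step per line:
const href =
  "https://www.smashingmagazine.com/2018/04/feature-testing-selenium-webdriver/";

const afterDomain = href.split("www.smashingmagazine.com/")[1]; // "2018/04/feature-testing-selenium-webdriver/"
const afterDate = afterDomain.slice(8); // drop "2018/04/" (XPath's substring(..., 9))
const slug = afterDate.replaceAll("/", ""); // strip the trailing slash

console.log(slug); // "feature-testing-selenium-webdriver"
```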

More XPath Use Cases

XPath can really shine in testing. The reason is not difficult to see, as XPath can be used to get every element in the DOM, from any position in the DOM, whereas CSS cannot.

You cannot count on CSS classes remaining consistent in many modern build systems, but with XPath, we are able to make more robust matches as to what the text content of an element is, regardless of a changing DOM structure.

There has been research on techniques that allow you to make resilient XPath tests. Nothing is worse than having tests flake out and fail just because a CSS selector stopped matching after something was renamed or removed.

XPath is also really great at multiple locator extraction. There is more than one way to use XPath queries to match an element. The same is true with CSS. But XPath queries can drill into things in a more targeted way that limits what gets returned, allowing you to find a specific match where there may be several possible matches.

For example, we can use XPath to return a specific h2 element that is contained inside a div that immediately precedes the sibling div that, in turn, contains a child image element with a data-testID="leader" attribute on it:

<div>
  <div>
    <h1>don't get this headline</h1>
  </div>

  <div>
    <h2>Don't get this headline either</h2>
  </div>

  <div>
    <h2>The header for the leader image</h2>
  </div>

  <div>
    <img data-testID="leader" src="image.jpg"/>
  </div>
</div>

This is the query:

document.queryXPaths(`
  //div[
    following-sibling::div[1]
    /img[@data-testID='leader']
  ]
  /h2/
  text()
`);

Let’s drop in a demo to see how that all comes together:

See the Pen Complex H2 Query [forked] by Bryan Rasmussen.

So, yes. There are lots of possible paths to any element in a test using XPath.

XSLT 1.0 Deprecation

I mentioned early on that the Chrome team plans on removing XSLT 1.0 support from the browser. That’s important because XSLT 1.0 is an XML-based language for document transformation that, in turn, relies on XPath 1.0, which is the version found in most browsers.

When that happens, we’ll lose a key technology built on XPath. But given the fact that XPath is really great for writing tests, I find it unlikely that XPath as a whole will disappear anytime soon.

That said, I’ve noticed that people get interested in a feature when it’s taken away. And that’s certainly true in the case of XSLT 1.0 being deprecated. There’s an entire discussion happening over at Hacker News filled with arguments against the deprecation. The post itself is a great example of creating a blogging framework with XSLT. You can read the discussion for yourself, but it gets into how JavaScript might be used as a shim for XSLT to handle those sorts of cases.

I have also seen suggestions that browsers should use SaxonJS, a JavaScript port of Saxon’s XSLT, XQuery, and XPath engines. That’s an interesting idea, especially as SaxonJS implements the current versions of these specifications, whereas no browser implements any version of XPath or XSLT beyond 1.0, and none implements XQuery at all.

I reached out to Norm Tovey-Walsh at Saxonica, the company behind SaxonJS and other versions of the Saxon engine. He said:

“If any browser vendor was interested in taking SaxonJS as a starting point for integrating modern XML technologies into the browser, we’d be thrilled to discuss it with them.”

Norm Tovey-Walsh

But also added:

“I would be very surprised if anyone thought that taking SaxonJS in its current form and dropping it into the browser build unchanged would be the ideal approach. A browser vendor, by nature of the fact that they build the browser, could approach the integration at a much deeper level than we can ‘from the outside’.”

Norm Tovey-Walsh

It’s worth noting that Tovey-Walsh’s comments came about a week before the XSLT deprecation announcement.

Conclusion

I could go on and on. But I hope this has demonstrated the power of XPath and given you plenty of examples demonstrating how to use it for achieving great things. It’s a perfect example of older technology in the browser stack that still has plenty of utility today, even if you’ve never known it existed or never considered reaching for it.

Further Reading

  • “Enhancing the Resiliency of Automated Web Tests with Natural Language” (ACM Digital Library) by Maroun Ayli, Youssef Bakouny, Nader Jalloul, and Rima Kilany
    This article provides many XPath examples for writing resilient tests.
  • XPath (MDN)
    This is an excellent place to start if you want a technical explanation detailing how XPath works.
  • XPath Tutorial (ZVON)
    I’ve found this tutorial to be the most helpful in my own learning, thanks to a wealth of examples and clear explanations.
  • XPather
    This interactive tool lets you work directly with the code.
]]>
hello@smashingmagazine.com (Bryan Rasmussen)
<![CDATA[Effectively Monitoring Web Performance]]> https://smashingmagazine.com/2025/11/effectively-monitoring-web-performance/ https://smashingmagazine.com/2025/11/effectively-monitoring-web-performance/ Tue, 11 Nov 2025 10:00:00 GMT This article is sponsored by DebugBear

There’s no single way to measure website performance. That said, the Core Web Vitals metrics that Google uses as a ranking factor are a great starting point, as they cover different aspects of visitor experience:

  • Largest Contentful Paint (LCP): Measures the initial page load time.
  • Cumulative Layout Shift (CLS): Measures if content is stable after rendering.
  • Interaction to Next Paint (INP): Measures how quickly the page responds to user input.

There are also many other web performance metrics that you can use to track technical aspects, like page weight or server response time. While these often don’t matter directly to the end user, they provide you with insight into what’s slowing down your pages.

You can also use the User Timing API to track page load milestones that are important on your website specifically.

Synthetic And Real User Data

There are two different types of web performance data:

  • Synthetic tests are run in a controlled test environment.
  • Real user data is collected from actual website visitors.

Synthetic monitoring can provide super-detailed reports to help you identify page speed issues. You can configure exactly how you want to collect the data, picking a specific network speed, device size, or test location.

Get a hands-on feel for synthetic monitoring by using the free DebugBear website speed test to check on your website.

That said, your synthetic test settings might not match what’s typical for your real visitors, and you can’t script all of the possible ways that people might interact with your website.

That’s why you also need real user monitoring (RUM). Instead of looking at one experience, you see different load times and how specific visitor segments are impacted. You can review specific page views to identify what caused poor performance for a particular visitor.

At the same time, real user data isn’t quite as detailed as synthetic test reports, due to web API limitations and performance concerns.

DebugBear offers both synthetic monitoring and real user monitoring:

  • To set up synthetic tests, you just need to enter a website URL, and
  • To collect real user metrics, you need to install an analytics snippet on your website.
Three Steps To A Fast Website

Collecting data helps you throughout the lifecycle of your web performance optimizations. You can follow this three-step process:

  1. Identify: Collect data across your website and identify slow visitor experiences.
  2. Diagnose: Dive deep into technical analysis to find optimizations.
  3. Monitor: Check that optimizations are working and get alerted to performance regressions.

Let’s take a look at each step in detail.

Step 1: Identify Slow Visitor Experiences

What’s prompting you to look into website performance issues in the first place? You likely already have some specific issues in mind, whether that’s from customer reports or because of poor scores in the Core Web Vitals section of Google Search Console.

Real user data is the best place to check for slow pages. It tells you whether the technical issues on your site actually result in poor user experience. It’s easy to collect across your whole website (while synthetic tests need to be set up for each URL). And, you can often get a view count along with the performance metrics. A moderately slow page that gets two visitors a month isn’t as important as a moderately fast page that gets thousands of visits a day.

The Web Vitals dashboard in DebugBear’s RUM product checks your site’s performance health and surfaces the most-visited pages and URLs where many visitors have a poor experience.

You can also run a website scan to get a list of URLs from your sitemap and then check each of these pages against real user data from Google’s Chrome User Experience Report (CrUX). However, this will only work for pages that meet a minimum traffic threshold to be included in the CrUX dataset.

The scan result highlights pages with poor web vitals scores where you might want to investigate further.

If no real-user data is available, then there is a scanning tool called Unlighthouse, which is based on Google’s Lighthouse tool. It runs synthetic tests for each page, allowing you to filter through the results in order to identify pages that need to be optimized.

Step 2: Diagnose Web Performance Issues

Once you’ve identified slow pages on your website, you need to look at what’s actually happening on your page that is causing delays.

Debugging Page Load Time

If there are issues with page load time metrics — like the Largest Contentful Paint (LCP) — synthetic test results can provide a detailed analysis. You can also run page speed experiments to try out and measure the impact of certain optimizations.

Real user data can still be important when debugging page speed, as load time depends on many user- and device-specific factors. For example, depending on the size of the user’s device, the page element that’s responsible for the LCP can vary. RUM data can provide a breakdown of possible influencing factors, like CSS selectors and image URLs, across all visitors, helping you zero in on what exactly needs to be fixed.

Debugging Slow Interactions

RUM data is also generally needed to properly diagnose issues related to the Interaction to Next Paint (INP) metric. Specifically, real user data can provide insight into what causes slow interactions, which helps you answer questions like:

  • What page elements are responsible?
  • Is time spent processing already-active background tasks or handling the interaction itself?
  • What scripts contribute the most to overall CPU processing time?

You can view this data at a high level to identify trends, as well as review specific page views to see what impacted a specific visitor experience.

Step 3: Monitor Performance & Respond To Regressions

Continuous monitoring of your website performance lets you track whether the performance is improving after making a change, and alerts you when scores decline.

How you respond to performance regressions depends on whether you’re looking at lab-based synthetic tests or real user analytics.

Synthetic Data

Test settings for synthetic tests are standardized between runs. While infrastructure changes, like browser upgrades, occasionally affect results, performance is generally determined by the resources the website loads and the code it runs.

When a metric changes, DebugBear lets you view a before-and-after comparison between the two test results. For example, the next screenshot displays a regression in the First Contentful Paint (FCP) metric. The comparison reveals that new images were added to the page, competing for bandwidth with other page resources.

From the report, it’s clear that a CSS file that previously took 255 milliseconds to load now takes 915 milliseconds. Since stylesheets are required to render page content, this means the page now loads more slowly, giving you better insight into what needs optimization.

Real User Data

When you see a change in real user metrics, there can be two causes:

  1. A shift in visitor characteristics or behavior, or
  2. A technical change on your website.

Launching an ad campaign, for example, often increases redirects, reduces cache hits, and shifts visitor demographics. When you see a regression in RUM data, the first step is to find out if the change was on your website or in your visitor’s browser. Check for view count changes in ad campaigns, referrer domains, or network speed to get a clearer picture.

If those visits have different performance compared to your typical visitors, then that suggests the regression is not due to a change on your website. However, you may still need to make changes on your website to better serve these visitor cohorts and deliver a good experience for them.

To identify the cause of a technical change, take a look at component breakdown metrics, such as LCP subparts. This helps you narrow down the cause of a regression, whether it is due to changes in server response time, new render-blocking resources, or the LCP image.

You can also check for shifts in page view properties, like different LCP element selectors or specific scripts that cause poor performance.

Conclusion

One-off page speed tests are a great starting point for optimizing performance. However, a monitoring tool like DebugBear can form the basis for a more comprehensive web performance strategy that helps you stay fast for the long term.

Get a free DebugBear trial on our website!

]]>
hello@smashingmagazine.com (Matt Zeunert)
<![CDATA[Smashing Animations Part 6: Magnificent SVGs With `<use>` And CSS Custom Properties]]> https://smashingmagazine.com/2025/11/smashing-animations-part-6-svgs-css-custom-properties/ https://smashingmagazine.com/2025/11/smashing-animations-part-6-svgs-css-custom-properties/ Fri, 07 Nov 2025 15:00:00 GMT I explained recently how I use <symbol>, <use>, and CSS Media Queries to develop what I call adaptive SVGs. Symbols let us define an element once and then use it again and again, making SVG animations easier to maintain, more efficient, and lightweight.

Since I wrote that explanation, I’ve designed and implemented new Magnificent 7 animated graphics across my website. They play on the web design pioneer theme, featuring seven magnificent Old West characters.

<symbol> and <use> let me define a character design and reuse it across multiple SVGs and pages. First, I created my characters and put each into a <symbol> inside a hidden library SVG:

<!-- Symbols library -->
<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
 <symbol id="outlaw-1">[...]</symbol>
 <symbol id="outlaw-2">[...]</symbol>
 <symbol id="outlaw-3">[...]</symbol>
 <!-- etc. -->
</svg>

Then, I referenced those symbols in two other SVGs, one for large and the other for small screens:

<!-- Large screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-large">
 <use href="#outlaw-1" />
 <!-- ... -->
</svg>

<!-- Small screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-small">
 <use href="#outlaw-1" />
 <!-- ... -->
</svg>

Elegant. But then came the infuriating. I could reuse the characters, but couldn’t animate or style them. I added CSS rules targeting elements within the symbols referenced by a <use>, but nothing happened. Colours stayed the same, and things that should move stayed static. It felt like I’d run into an invisible barrier, and I had.

Understanding The Shadow DOM Barrier

When you reference the contents of a symbol with <use>, a browser creates a copy of it in the Shadow DOM. Each <use> instance becomes its own encapsulated copy of the referenced <symbol>, meaning that CSS from outside can’t break through the barrier to style any elements directly. For example, in normal circumstances, this tapping class triggers a CSS animation:

<g class="outlaw-1-foot tapping">
 <!-- ... -->
</g>

.tapping {
  animation: tapping 1s ease-in-out infinite;
}

But when the same animation is applied to a <use> instance of that same foot, nothing happens:

<symbol id="outlaw-1">
 <g class="outlaw-1-foot"><!-- ... --></g>
</symbol>

<use href="#outlaw-1" class="tapping" />
.tapping {
  animation: tapping 1s ease-in-out infinite;
}

That’s because the <g> inside the <symbol> element is in a protected shadow tree, and the CSS Cascade stops dead at the <use> boundary. This behaviour can be frustrating, but it’s intentional as it ensures that reused symbol content stays consistent and predictable.

While learning how to develop adaptive SVGs, I found all kinds of attempts to work around this behaviour, but most of them sacrificed the reusability that makes SVG so elegant. I didn’t want to duplicate my characters just to make them blink at different times. I wanted a single <symbol> with instances that have their own timings and expressions.

CSS Custom Properties To The Rescue

While working on my pioneer animations, I learned that regular CSS values can’t cross the boundary into the Shadow DOM, but CSS Custom Properties can. Even though you can’t directly style elements inside a <symbol>, you can pass custom property values to them. Because custom properties inherit, a value set on a <use> element cascades across the shadow boundary and becomes available to elements inside the referenced <symbol>.

I added rotate to an inline style applied to the <symbol> content:

<symbol id="outlaw-1">
  <g class="outlaw-1-foot" style="
    transform-origin: bottom right; 
    transform-box: fill-box; 
    transform: rotate(var(--foot-rotate));">
    <!-- ... -->
  </g>
</symbol>

Then, defined the foot tapping animation and applied it to the element:

@keyframes tapping {
  0%, 60%, 100% { --foot-rotate: 0deg; }
  20% { --foot-rotate: -5deg; }
  40% { --foot-rotate: 2deg; }
}

use[data-outlaw="1"] {
  --foot-rotate: 0deg;
  animation: tapping 1s ease-in-out infinite;
}
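One caveat worth flagging (my addition, not part of the original walkthrough): browsers animate unregistered custom properties discretely, so --foot-rotate would snap between the keyframe values above rather than easing. Registering the property with @property tells the engine it holds an <angle> and enables smooth interpolation:

```css
/* Suggested addition: register the custom property so it interpolates
   smoothly instead of flipping discretely between keyframe values. */
@property --foot-rotate {
  syntax: "<angle>";
  inherits: true;      /* must inherit to reach inside the <use> shadow tree */
  initial-value: 0deg;
}
```

With the registration in place, the tapping keyframes ease between rotations exactly as the animation shorthand describes.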
Passing Multiple Values To A Symbol

Once I’ve set up a symbol to use CSS Custom Properties, I can pass as many values as I want to any <use> instance. For example, I might define variables for fill, opacity, or transform. What’s elegant is that each <symbol> instance can then have its own set of values.

<g class="eyelids" style="
  fill: var(--eyelids-colour, #f7bea1);
  opacity: var(--eyelids-opacity, 1);
  transform: scale(var(--eyelids-scale, 1));"
>
  <!-- etc. -->
</g>
use[data-outlaw="1"] {
  --eyelids-colour: #f7bea1; 
  --eyelids-opacity: 1;
}

use[data-outlaw="2"] {
  --eyelids-colour: #ba7e5e; 
  --eyelids-opacity: 0;
}

Support for passing CSS Custom Properties like this is solid, and every contemporary browser handles this behaviour correctly. Let me show you a few ways I’ve been using this technique, starting with a multi-coloured icon system.

A Multi-Coloured Icon System

When I need to maintain a set of icons, I can define an icon once inside a <symbol> and then use custom properties to apply colours and effects. Instead of duplicating SVGs for every theme, each <use> can carry its own values.

For example, I applied an --icon-fill custom property for the default fill colour of the <path> in this Bluesky icon:

<symbol id="icon-bluesky">
  <path fill="var(--icon-fill, currentColor)" d="..." />
</symbol>

Then, whenever I need to vary how that icon looks — for example, in a <header> and <footer> — I can pass new fill colour values to each instance:

<header>
  <svg xmlns="http://www.w3.org/2000/svg">
    <use href="#icon-bluesky" style="--icon-fill: #2d373b;" />
  </svg>
</header>

<footer>
  <svg xmlns="http://www.w3.org/2000/svg">
    <use href="#icon-bluesky" style="--icon-fill: #590d1a;" />
  </svg>
</footer>

These icons are the same shape but look different thanks to their inline styles.
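As a variation on the inline styles above (my suggestion, not from the source), the same theming can live in a stylesheet, because custom properties set on an ancestor inherit down to the <use> element and across the shadow boundary:

```css
/* Hypothetical stylesheet-based theming: the custom property inherits
   down to the <use> instance, so no inline style attribute is needed. */
header svg { --icon-fill: #2d373b; }
footer svg { --icon-fill: #590d1a; }

/* Stylesheet rules also unlock states inline styles can't express
   (placeholder colour): */
footer svg:hover { --icon-fill: crimson; }
```

Which approach to pick mostly depends on whether the values are data (inline styles suit per-instance data) or theme (stylesheets suit context-wide rules).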

Data Visualisations With CSS Custom Properties

We can use <symbol> and <use> in plenty more practical ways. They’re also helpful for creating lightweight data visualisations, so imagine an infographic about three famous Wild West sheriffs: Wyatt Earp, Pat Garrett, and Bat Masterson.

Each sheriff’s profile uses the same set of three SVG symbols: one for a bar representing the length of a sheriff’s career, another to represent the number of arrests made, and one more for the number of kills. Passing custom property values to each <use> instance can vary the bar lengths, arrest badge scale, and kill icon colour without duplicating SVGs. I first created symbols for those items:

<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
  <symbol id="career-bar">
    <rect
      height="10"
      width="var(--career-length, 100)" 
      fill="var(--career-colour, #f7bea1)"
    />
  </symbol>

  <symbol id="arrests-badge">
    <path 
      d="..."
      fill="var(--arrest-colour, #d0985f)" 
      transform="scale(var(--arrest-scale, 1))"
    />
  </symbol>

  <symbol id="kills-icon">
    <path d="..." fill="var(--kill-colour, #769099)" />
  </symbol>
</svg>

Each symbol accepts one or more values:

  • --career-length adjusts the width of the career bar.
  • --career-colour changes the fill colour of that bar.
  • --arrest-scale controls the arrest badge size.
  • --kill-colour defines the fill colour of the kill icon.

I can use these to develop a profile of each sheriff using <use> elements with different inline styles, starting with Wyatt Earp.

<svg xmlns="http://www.w3.org/2000/svg">
  <g id="wyatt-earp">
    <use href="#career-bar" style="--career-length: 400; --career-colour: #769099;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-colour: #769099;" />
  </g>

  <g id="pat-garrett">
    <use href="#career-bar" style="--career-length: 300; --career-colour: #f7bea1;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-colour: #f7bea1;" />
  </g>

  <g id="bat-masterson">
    <use href="#career-bar" style="--career-length: 200; --career-colour: #c2d1d6;"/>
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <!-- ... -->
    <use href="#arrests-badge" style="--arrest-scale: 2;" />
    <use href="#arrests-badge" style="--arrest-scale: 1;" />
    <use href="#kills-icon" style="--kill-colour: #c2d1d6;" />
  </g>
</svg>

Each <use> shares the same symbol elements, but the inline variables change their colours and sizes. I can even animate those values to highlight their differences:

@keyframes pulse {
  0%, 100% { --arrest-scale: 1; }
  50% { --arrest-scale: 1.2; }
}

use[href="#arrests-badge"]:hover {
  animation: pulse 1s ease-in-out infinite;
}

CSS Custom Properties aren’t only helpful for styling; they can also channel data between HTML and SVG’s inner geometry, binding visual attributes like colour, length, and scale to semantics like arrest numbers, career length, and kills.
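To make the data-binding idea concrete, here is a hypothetical helper (plain JavaScript, not from the article, and not required by the technique) that turns a data record into the inline custom-property string used above:

```javascript
// Hypothetical helper: turn a data record into the inline style string
// that feeds custom properties to a <use> instance.
function sheriffStyle({ careerLength, arrestScale, colour }) {
  return [
    `--career-length: ${careerLength}`,
    `--arrest-scale: ${arrestScale}`,
    `--kill-colour: ${colour}`,
  ].join("; ") + ";";
}

// e.g. building the Wyatt Earp profile:
const style = sheriffStyle({ careerLength: 400, arrestScale: 2, colour: "#769099" });
console.log(style);
// "--career-length: 400; --arrest-scale: 2; --kill-colour: #769099;"
```

A helper like this keeps the data (career length, arrests, kills) in one place and treats the custom properties purely as the transport layer into the SVG.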

Ambient Animations

I started learning to animate elements within symbols while creating the animated graphics for my website’s Magnificent 7. To reduce complexity and make my code lighter and more maintainable, I needed to define each character once and reuse it across SVGs:

<!-- Symbols library -->
<svg xmlns="http://www.w3.org/2000/svg" style="display:none;">
  <symbol id="outlaw-1">[…]</symbol>
  <!-- ... -->
</svg>

<!-- Large screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-large">
  <use href="#outlaw-1" />
  <!-- ... -->
</svg>

<!-- Small screens -->
<svg xmlns="http://www.w3.org/2000/svg" id="svg-small">
  <use href="#outlaw-1" />
  <!-- ... -->
</svg>

But I didn’t want those characters to stay static; I needed subtle movements that would bring them to life. I wanted their eyes to blink, their feet to tap, and their moustache whiskers to twitch. So, to animate these details, I pass animation data to elements inside those symbols using CSS Custom Properties, starting with the blinking.

I implemented the blinking effect by placing an SVG group over the outlaws’ eyes and then changing its opacity.

To make this possible, I added an inline style with a CSS Custom Property to the group:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
 <g class="eyelids" style="opacity: var(--eyelids-opacity, 1);">
    <!-- ... -->
  </g>
</symbol>

Then, I defined the blinking animation by changing --eyelids-opacity:

@keyframes blink {
  0%, 92% { --eyelids-opacity: 0; }
  93%, 94% { --eyelids-opacity: 1; }
  95%, 97% { --eyelids-opacity: 0.1; }
  98%, 100% { --eyelids-opacity: 0; }
}

…and applied it to every character:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  animation: blink var(--blink-duration) infinite var(--blink-delay);
}

To stop every character blinking at the same moment, I set a different --blink-delay for each one by passing another Custom Property:

use[data-outlaw="1"] { --blink-delay: 1s; }
use[data-outlaw="2"] { --blink-delay: 2s; }
/* ... */
use[data-outlaw="7"] { --blink-delay: 3s; }

Some of the characters tap their feet, so I added an inline style with a CSS Custom Property to those groups, too:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-foot" style="
    transform-origin: bottom right; 
    transform-box: fill-box; 
    transform: rotate(var(--foot-rotate));">
  </g>
</symbol>

Defining the foot-tapping animation:

@keyframes tapping {
  0%, 60%, 100% { --foot-rotate: 0deg; }
  20% { --foot-rotate: -5deg; }
  40% { --foot-rotate: 2deg; }
}

And adding those extra Custom Properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  animation: 
    blink var(--blink-duration) infinite var(--blink-delay),
    tapping 1s ease-in-out infinite;
}

…before finally making the character’s whiskers jiggle via an inline style with a CSS Custom Property which describes how his moustache transforms:

<symbol id="outlaw-1" viewBox="0 0 712 2552">
  <g class="outlaw-1-tashe" style="
    transform: translateX(var(--jiggle-x, 0px));"
  >
    <!-- ... -->
  </g>
</symbol>

Defining the jiggle animation:

@keyframes jiggle {
  0%, 100% { --jiggle-x: 0px; }
  20% { --jiggle-x: -3px; }
  40% { --jiggle-x: 2px; }
  60% { --jiggle-x: -1px; }
  80% { --jiggle-x: 4px; }
}

And adding those properties to the characters’ declaration:

use[data-outlaw] {
  --blink-duration: 4s;
  --eyelids-opacity: 1;
  --foot-rotate: 0deg;
  --jiggle-x: 0px;
  animation: 
    blink var(--blink-duration) infinite var(--blink-delay),
    jiggle 1s ease-in-out infinite,
    tapping 1s ease-in-out infinite;
}

With these moving parts, the characters come to life, but my markup remains remarkably lean. By combining several animations into a single declaration, I can choreograph their movements without adding more elements to my SVG. Every outlaw shares the same base <symbol>, and their individuality comes entirely from CSS Custom Properties.
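One more declaration worth adding to a setup like this (my addition; the article doesn’t cover it): ambient movement should respect the user’s motion preferences, which takes a single media query:

```css
/* Suggested addition: switch the ambient animations off for users who
   have asked the operating system for reduced motion. */
@media (prefers-reduced-motion: reduce) {
  use[data-outlaw] {
    animation: none;
  }
}
```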

Pitfalls And Solutions

Even though this technique might seem bulletproof, there are a few traps it’s best to avoid:

  • CSS Custom Properties only work if they’re referenced with a var() inside a <symbol>. Forget that, and you’ll wonder why nothing updates. Also, properties that aren’t naturally inherited, like transform, need to use var() in their value to benefit from the cascade.
  • It’s always best to include a fallback value alongside a custom property, like opacity: var(--eyelids-opacity, 1); to ensure SVG elements render correctly even without custom property values applied.
  • Inline styles set via the style attribute take precedence, so if you mix inline and external CSS, remember that Custom Properties follow normal cascade rules.
  • You can always use DevTools to inspect custom property values. Select a <use> instance and check the Computed Styles panel to see which custom properties are active.
Conclusion

The <symbol> and <use> elements are among the most elegant but sometimes frustrating aspects of SVG. The Shadow DOM barrier makes animating them trickier, but CSS Custom Properties act as a bridge. They let you pass colour, motion, and personality across that invisible boundary, resulting in cleaner, lighter, and, best of all, fun animations.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[Six Key Components of UX Strategy]]> https://smashingmagazine.com/2025/11/practical-guide-ux-strategy/ https://smashingmagazine.com/2025/11/practical-guide-ux-strategy/ Wed, 05 Nov 2025 13:00:00 GMT For years, “UX strategy” felt like a confusing, ambiguous, and overloaded term to me: some sort of roadmap or “grand vision” with a few business decisions attached to it. Looking back now, I realize that I was wrong all along.

UX Strategy isn’t a goal; it’s a journey towards that goal. A journey connecting where UX is today with a desired future state of UX. And as such, it guides our actions and decisions, things we do and don’t do. And its goal is very simple: to maximize our chances of success while considering risks, bottlenecks and anything that might endanger the project.

Let’s explore the components of UX strategy, and how it works with product strategy and business strategy to deliver user value and meet business goals.

Strategy vs. Goals vs. Plans

When we speak about strategy, we often speak about planning and goals — but they are actually quite different. While strategy answers “what” we’re doing and “why”, planning is about “how” and “when” we’ll get it done. And the goal is merely a desired outcome of that entire journey.

  • Goals establish a desired future outcome,
  • That outcome typically represents a problem to solve,
  • Strategy shows a high-level solution for that problem,
  • A plan is a detailed set of low-level steps for implementing that solution.

A strong strategy requires making conscious, and oftentimes tough, decisions about what we will do — and just as importantly, what we will not do, and why.

Business Strategy

UX strategy doesn’t live in isolation. It must inform and support product strategy and be aligned with business strategy. All these terms are often slightly confusing and overloaded, so let’s clear them up.

At the highest level, business strategy is about the distinct choices executives make to set the company apart from its competitors. They shape the company’s positioning, objectives, and (most importantly!) competitive advantage.

Typically, this advantage is achieved in two ways: through lower prices (cost leadership) or through differentiation. The latter part isn't about being different, but rather being perceived differently by the target audience. And that’s exactly where UX impact steps in.

In short, business strategy is:

  • A top-line vision, basis for core offers,
  • Shapes positioning, goals, competitive advantage,
  • Must always adapt to the market to keep a competitive advantage.
Product Strategy

Product strategy is how a high-level business direction is translated into a unique positioning of a product. It defines what the product is, who its users are, and how it will contribute to the business’s goals. It’s also how we bring a product to market, drive growth, and achieve product-market fit.

In short, product strategy is:

  • Unique positioning and value of a product,
  • How to establish and keep a product in the marketplace,
  • How to keep competitive advantage of the product.
UX Strategy

UX strategy is about shaping and delivering product value through UX. Good UX strategy always stems from UX research and responds to business needs. It establishes what to focus on, what our high-value actions are, how we’ll measure success, and, quite importantly, what risks we need to mitigate.

Most importantly, it’s not a fixed plan or a set of deliverables; it’s a guide that informs our actions, but also must be prepared to change when things change.

In short, UX strategy is:

  • How we shape and deliver product value through UX,
  • Priorities, focus + why, actions, metrics, risks,
  • Isn’t a roadmap, intention or deliverables.
Six Key Components of UX Strategy

The impact of good UX typically lives in differentiation mentioned above. Again, it’s not about how “different” our experience is, but the unique perceived value that users associate with it. And that value is a matter of a clear, frictionless, accessible, fast, and reliable experience wrapped into the product.

I always try to include 6 key components in any strategic UX work so we don’t end up following a wrong assumption that won’t bring any impact:

  1. Target goal
    The desired, improved future state of UX.
  2. User segments
    Primary users that we are considering.
  3. Priorities
    What we will and, crucially, what we will not do, and why.
  4. High-value actions
    How we drive value and meet user and business needs.
  5. Feasibility
    Realistic assessment of people, processes, and resources.
  6. Risks
    Bottlenecks, blockers, legacy constraints, big unknowns.

It’s worth noting that it’s always dangerous to be designing a product with everybody in mind. As Jaime Levy noted, by being very broad too early, we often reduce the impact of our design and messaging. It’s typically better to start with a specific, well-defined user segment and then expand, rather than the other way around.

Practical Example (by Alin Buda)

UX strategy doesn’t have to be a big 40-page long PDF report or a Keynote presentation. A while back, Alin Buda kindly left a comment on one of my LinkedIn posts, giving a great example of what a concise UX strategy could look like:

UX Strategy (for Q4)

Our UX strategy is to focus on high-friction workflows for expert users, not casual usability improvements. Why? Because retention in this space is driven by power-user efficiency, and that aligns with our growth model.

To succeed, we’ll design workflow accelerators and decision-support tools that reduce time-on-task. As part of this, we’ll need to redesign legacy flows in the Crux system. We won’t prioritize UI refinements or onboarding tours, because they don’t move the needle in this context.

What I like most about this example is just how concise and clear it is. Getting to this level of clarity takes quite a bit of time, but it creates a very precise overview of what we do, what we don't do, what we focus on, and how we drive value.

Wrapping Up

The best path to make a strong case with senior leadership is to frame your UX work as a direct contributor to differentiation. This isn’t just about making things look different; it’s about enhancing the perceived value.

A good strategy ties UX improvements to measurable business outcomes. It doesn’t speak about design patterns, consistency, or neatly organized components. Instead, it speaks the language of product and business strategy: OKRs, costs, revenue, business metrics, and objectives.

Design can’t succeed without a strategy. In the wise words of Sun Tzu, strategy without tactics is the slowest route to victory. And tactics without strategy is the noise before defeat.

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.


]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[How To Leverage Component Variants In Penpot]]> https://smashingmagazine.com/2025/11/how-leverage-component-variants-penpot/ https://smashingmagazine.com/2025/11/how-leverage-component-variants-penpot/ Tue, 04 Nov 2025 10:00:00 GMT This article is sponsored by Penpot

Since Brad Frost popularized the use of design systems in digital design way back in 2013, they’ve become an invaluable resource for organizations — and even individuals — that want to craft reusable design patterns that look and feel consistent.

But Brad didn’t just popularize design systems; he also gave us a framework for structuring them, and while we don’t have to follow that framework exactly (most people adapt it to their needs), a particularly important part of most design systems is the variants, which are variations of components. Component variants let us design components that share a common foundation yet differ in deliberate ways, so users recognize them immediately while each variant still provides clarity for its unique context.

This makes component variants just as important as the components themselves. They ensure that we aren’t creating too many components that have to be individually managed, even if they’re only mildly different from other components, and since component variants are grouped together, they also ensure organization and visual consistency.

And now we can use them in Penpot, the web-based, open-source design tool where design is expressed as code. In this article, you’ll learn about variants, their place in design systems, and how to use them effectively in Penpot.

Step 1: Get Your Design Tokens In Order

For the most part, what separates one variant from another is the design tokens that it uses. But what is a design token exactly?

Imagine a brand color, let’s say a color value equal to hsl(270 100 42) in CSS. We save it as a “design token” called color.brand.default so that we can reuse it more easily without having to remember the more cumbersome hsl(270 100 42).

From there, we might also create a second design token called background.button.primary.default and set it to color.brand.default, thereby making them equal to the same color, but with different names to establish semantic separation between the two. Referencing the value of one token from another like this is often called “aliasing”, and the referencing token an “alias”.

This setup gives us the flexibility to change the value of the color document-wide, change the color used in the component (maybe by switching to a different token alias), or create a variant of the component that uses a different color. Ultimately, the goal is to be able to make changes in many places at once rather than one-by-one, mostly by editing the design token values rather than the design itself, at specific scopes rather than limiting ourselves to all-or-nothing changes. This also enables us to scale our design system without constraints.

With that in mind, here’s a rough idea of just a few color-related design tokens for a primary button with hover and disabled states:

Token name Token value
color.brand.default hsl(270 100 42)
color.brand.lighter hsl(270 100 52)
color.brand.lightest hsl(270 100 95)
color.brand.muted hsl(270 5 50)
background.button.primary.default {color.brand.default}
background.button.primary.hover {color.brand.lighter}
background.button.primary.disabled {color.brand.muted}
text.button.primary.default {color.brand.lightest}
text.button.primary.hover {color.brand.lightest}
text.button.primary.disabled {color.brand.lightest}
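To see what this buys us, here is roughly how a token table like this could compile down to CSS, with each alias becoming a var() reference (an illustrative sketch with hypothetical exported names, not Penpot’s actual export output):

```css
/* Sketch: the tokens above exported as CSS custom properties.
   Alias tokens reference base tokens via var(). */
:root {
  --color-brand-default: hsl(270 100 42);
  --color-brand-lighter: hsl(270 100 52);
  --color-brand-muted:   hsl(270 5 50);

  --background-button-primary-default:  var(--color-brand-default);
  --background-button-primary-hover:    var(--color-brand-lighter);
  --background-button-primary-disabled: var(--color-brand-muted);
}
```

Changing --color-brand-default updates every alias that references it, which is exactly the one-edit-many-changes behaviour the token layer exists to provide.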

To create a color token in Penpot, switch to the “Tokens” tab in the left panel, click on the plus (+) icon next to “Color”, then specify the name, value, and optional description.

For example:

  • Name: color.brand.default,
  • Value: hsl(270 100 42) (there’s a color picker if you need it).

It’s pretty much the same process for other types of design tokens.

Don’t worry, I’m not going to walk you through every design token, but I will show you how to create a design token alias. Simply repeat the steps above, but for the value, reference another color token as I’ve done here (make sure to include the curly braces):

  • Name: background.button.primary.default,
  • Value: {color.brand.default}

Now, if the value of the color changes, so will the background of the buttons. But also, if we want to decouple the color from the buttons, all we need to do is reference a different color token or value. Mikołaj Dobrucki goes into a lot more detail in another Smashing article, but it’s worth noting here that Penpot design tokens are platform-agnostic. They follow the standardized W3C DTCG format, which means that they’re compatible with other tools and easily export to all platforms, including web, iOS, and Android.
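For reference, in the W3C DTCG format mentioned above, the base token and its alias would look something like this (a minimal sketch of the format’s $type/$value and {path} alias syntax; the exact shape of a real Penpot export may differ):

```json
{
  "color": {
    "brand": {
      "default": {
        "$type": "color",
        "$value": "hsl(270 100 42)"
      }
    }
  },
  "background": {
    "button": {
      "primary": {
        "default": {
          "$type": "color",
          "$value": "{color.brand.default}"
        }
      }
    }
  }
}
```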

In the next couple of steps, we’ll create a button component and its variants while plugging different design tokens into different variants. You’ll see why doing this is so useful and how using design tokens in variants benefits design systems overall.

Step 2: Create The Component

You’ll need to create what’s called a “main” component, which is the one that you’ll update as needed going forward. Other components — the ones that you’ll actually insert into your designs — will be copies (or “instances”) of the main component, which is sort of the point, right? Update once, and the changes reflect everywhere.

Here’s one I made earlier, minus the colors:

To apply a design token, make sure that you’re on the “Tokens” tab and have the relevant layer selected, then select the design token that you want to apply to it:

It doesn’t matter which variant you create first, but you’ll probably want to go with the default one as a starting point, as I’ve done. Either way, to turn this button into a main component, select the button object via the canvas (or “Layers” tab), right-click on it, then choose the “Create component” option from the context menu (or just press Ctrl / ⌘ + K after selecting it).

Remember to name the component as well. You can do that by double-clicking on the name (also via the canvas or “Layers” tab).

Step 3: Create The Component Variants

To create a variant, select the main component and either hit the Ctrl / ⌘ + K keyboard shortcut, or click on the icon that reveals the “Create variant” tooltip (located in the “Design” tab in the right panel).

Next, while the variant is still selected, make the necessary design changes via the “Design” tab. Or, if you want to swap design tokens out for other design tokens, you can do that in the same way that you applied them to begin with, via the “Tokens” tab. Rinse and repeat until you have all of your variants on the canvas designed:

After that, as you might’ve guessed, you’ll want to name your variants. But avoid doing this via the “Layers” panel. Instead, select a variant and replace “Property 1” with a label that describes the differentiating property of each variant. Since my button variants in this example represent different states of the same button, I’ve named this “State”. This applies to all of the variants, so you only need to do this once.

Next to the property name, you’ll see “Value 1” or something similar. Edit that for each variant, for example, the name of the state. In my case, I’ve named them “Default”, “Hover”, and “Disabled”.

And yes, you can add more properties to a component. To do this, click on the nearby plus (+) icon. I’ll talk more about component variants at scale in a minute, though.

To see the component in action, switch to the “Assets” tab (located in the left panel) and drag the component onto the canvas to initialize one instance of it. Again, remember to choose the correct property value from the “Design” tab:

If you already have a Penpot design system, combining multiple components into one component with variants is not only easy and error-proof, but you might be good to go already if you’re using a robust property naming system that uses forward slashes (/). Penpot has put together a very straightforward guide, but the diagram below sums it up pretty well:

How Component Variants Work At Scale

Design tokens, components, and component variants — the triple-threat of design systems — work together, not just to create powerful yet flexible design systems, but sustainable design systems that scale. This is easier to accomplish when thinking ahead, starting with design tokens that separate the “what” from the “what for” using token aliases, despite how verbose that might seem at first.

For example, I used color.brand.lightest for the text color of every variant, but instead of plugging that color token in directly, I created aliases such as text.button.primary.default. This means that I can change the text color of any variant later without having to dive into the actual variant on the canvas, or force a change to color.brand.lightest that might impact a bunch of other components.

Because remember, while the component and its variants give us reusability of the button, the color tokens give us reusability of the colors, which might be used in dozens, if not hundreds, of other components. A design system is like a living, breathing ecosystem: some parts are connected, some aren’t, and some that aren’t connected yet might need to be later, and we need to be ready for that.

The good news is that Penpot makes all of this pretty easy to manage as long as you do a little planning beforehand.

Consider the following:

  • The design tokens that you’ll reuse (e.g., colors, font sizes, and so on),
  • Where design token aliases will be reused (e.g., buttons, headings, and so on),
  • Organizing the design tokens into sets,
  • Organizing the sets into themes,
  • Organizing the themes into groups,
  • The different components that you’ll need, and
  • The different variants and variant properties that you’ll need for each component.

Even the buttons that I designed here today can be scaled far beyond what I’ve already mocked up. Think of all the possible variants that might come up, such as a secondary button color, a tertiary color, a confirmation color, a warning color, a cancelled color, different colors for light and dark mode, not to mention more properties for more states, such as active and focus states. What if we want a whole matrix of variants, like where buttons in a disabled state can be hovered and where all buttons can be focused upon? Or where some buttons have icons instead of text labels, or both?

Designs can get very complicated, but once you’ve organized them into design tokens, components, and component variants in Penpot, they’ll actually feel quite simple, especially once you’re able to see them on the canvas, and even more so once you’ve made a significant change in just a few seconds without breaking anything.

Conclusion

This is how we make component variants work at scale. We get the benefits of reusability while keeping the flexibility to fork any aspect of our design system, big or small, without breaking out of it. And design tools like Penpot make it possible to not only establish a design system, but also express its design tokens and styles as code.

]]>
hello@smashingmagazine.com (Daniel Schwarz)
<![CDATA[Fading Light And Falling Leaves (November 2025 Wallpapers Edition)]]> https://smashingmagazine.com/2025/10/desktop-wallpaper-calendars-november-2025/ https://smashingmagazine.com/2025/10/desktop-wallpaper-calendars-november-2025/ Fri, 31 Oct 2025 12:00:00 GMT November can feel a bit gray in many parts of the world, so what better way to brighten the days than with a splash of colorful inspiration? For this month’s wallpapers edition, artists and designers from around the globe once again tickled their creativity and designed unique and inspiring wallpapers that are sure to bring some good vibes to your desktops and home screens.

As always, the wallpapers in this post come in a variety of screen resolutions and can be downloaded for free — just as it has been a monthly tradition here at Smashing Magazine for more than 14 years already. And since so many beautiful designs have seen the light of day since we first embarked on this monthly creativity adventure, we’ve also added a selection of oldies but goodies from our archives to the collection. Maybe one of your almost-forgotten favorites will catch your eye again this month?

A huge thank you to all the talented creatives who contributed their designs — this post wouldn’t be possible without your support! By the way, if you, too, would like to get featured in one of our upcoming wallpapers posts, please don’t hesitate to submit your design. We can’t wait to see what you’ll come up with! Happy November!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experiences through their works. This is also why the themes of the wallpapers weren’t influenced by us in any way but rather designed from scratch by the artists themselves.
Falling Into November

“Celebrate the heart of fall with cozy colors, crisp leaves, and the gentle warmth that only November brings.” — Designed by Libra Fire from Serbia.

Crown Me

Designed by Ricardo Gimenes from Spain.

Fireside Stories Under The Stars

“A cozy autumn evening comes alive as friends gather around a warm bonfire, sharing stories beneath a starry night sky. The glow of the fire contrasts beautifully with the cool, serene landscape, capturing the magic of friendship, warmth, and the quiet beauty of November nights.” — Designed by PopArt Studio from Serbia.

Lunchtime

Designed by Ricardo Gimenes from Spain.

Where Innovation Meets Design

“This artwork blends technology and creativity in a clean, modern aesthetic. Soft pastel tones and fluid shapes frame a central smartphone, symbolizing the fusion of innovation and human intelligence in mobile app development.” — Designed by Zco Corporation from the United States.

Colorful Autumn

“Autumn can be dreary, especially in November, when rain starts pouring every day. We wanted to summon better days, so that’s how this colourful November calendar was created. Open your umbrella and let’s roll!” — Designed by PopArt Studio from Serbia.

The Secret Cave

Designed by Ricardo Gimenes from Spain.

Sunset Or Sunrise

“November is autumn in all its splendor. Earthy colors, falling leaves, and afternoons in the warmth of the home. But it is also adventurous and exciting and why not, different. We sit in Bali contemplating Pura Ulun Danu Bratan. We don’t know if it’s sunset or dusk, but… does that really matter?” — Designed by Veronica Valenzuela Jimenez from Spain.

A Jelly November

“Been looking for a mysterious, gloomy, yet beautiful desktop wallpaper for this winter season? We’ve got you, as this month’s calendar marks Jellyfish Day. On November 3rd, we celebrate these unique, bewildering, and stunning marine animals. Besides adorning your screen, we’ve got you covered with some jellyfish fun facts: they aren’t really fish, they need very little oxygen, eat a broad diet, and shrink in size when food is scarce. Now that’s some tenacity to look up to.” — Designed by PopArt Studio from Serbia.

Winter Is Here

Designed by Ricardo Gimenes from Spain.

Moonlight Bats

“I designed some Halloween characters and then this idea came to my mind — a bat family hanging around in the moonlight. A cute and scary mood is just perfect for autumn.” — Designed by Carmen Eisendle from Germany.

Time To Give Thanks

Designed by Glynnis Owen from Australia.

Anbani

“Anbani means alphabet in Georgian. The letters that grow on that tree are the Georgian alphabet. It’s very unique!” — Designed by Vlad Gerasimov from Georgia.

Me And The Key Three

Designed by Bart Bonte from Belgium.

Outer Space

“We were inspired by nature around us and the universe above us, so we created an out-of-this-world calendar. Now, let us all stop for a second and contemplate on preserving our forests, let us send birds of passage off to warmer places, and let us think to ourselves — if not on Earth, could we find a home somewhere else in outer space?” — Designed by PopArt Studio from Serbia.

Captain’s Home

Designed by Elise Vanoorbeek from Belgium.

Deer Fall, I Love You

Designed by Maria Porter from the United States.

Holiday Season Is Approaching

Designed by ActiveCollab from the United States.

International Civil Aviation Day

“On December 7, we mark International Civil Aviation Day, celebrating those who prove day by day that the sky really is the limit. As the engine of global connectivity, civil aviation is now, more than ever, a symbol of social and economic progress and a vehicle of international understanding. This monthly calendar is our sign of gratitude to those who dedicate their lives to enabling everyone to reach their dreams.” — Designed by PopArt Studio from Serbia.

Peanut Butter Jelly Time

“November is the Peanut Butter Month so I decided to make a wallpaper around that. As everyone knows peanut butter goes really well with some jelly, so I made two sandwiches, one with peanut butter and one with jelly. Together they make the best combination.” — Designed by Senne Mommens from Belgium.

A Gentleman’s November

Designed by Cedric Bloem from Belgium.

Bug

Designed by Ricardo Gimenes from Spain.

Go To Japan

“November is the perfect month to go to Japan. Autumn is beautiful with its brown colors. Let’s enjoy it!” — Designed by Veronica Valenzuela from Spain.

The Kind Soul

“Kindness drives humanity. Be kind. Be humble. Be humane. Be the best of yourself!” — Designed by Color Mean Creative Studio from Dubai.

Mushroom Season

“It is autumn! It is raining and thus… it is mushroom season! It is the perfect moment to go to the forest and get the best mushrooms to do the best recipe.” — Designed by Verónica Valenzuela from Spain.

Tempestuous November

“By the end of autumn, ferocious Poseidon will part from tinted clouds and timid breeze. After this uneven clash, the sky once more becomes pellucid just in time for imminent luminous snow.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Cozy Autumn Cups And Cute Pumpkins

“Autumn coziness, which is created by fallen leaves, pumpkins, and cups of cocoa, inspired our designers for this wallpaper.” — Designed by MasterBundles from Ukraine.

November Nights On Mountains

“Those chill November nights when you see mountain tops covered with the first snow sparkling in the moonlight.” — Designed by Jovana Djokic from Serbia.

Coco Chanel

“Beauty begins the moment you decide to be yourself. (Coco Chanel)” — Designed by Tazi from Australia.

Stars

“I don’t know anyone who hasn’t enjoyed a cold night looking at the stars.” — Designed by Ema Rede from Portugal.

Welcome Home Dear Winter

“The smell of winter is lingering in the air. The time to be home! Winter reminds us of good food, of the warmth, the touch of a friendly hand, and a talk beside the fire. Keep calm and let us welcome winter.” — Designed by Acodez IT Solutions from India.

Happy Birthday C.S.Lewis!

“It’s C.S. Lewis’s birthday on November 29th, so I decided to create this ‘Chronicles of Narnia’ inspired wallpaper to honour this day.” — Designed by Safia Begum from the United Kingdom.

Autumn Choir

Designed by Hatchers from Ukraine / China.

Star Wars

Designed by Ricardo Gimenes from Spain.

Hello World, Happy November

“I often read messages at Smashing Magazine from the people in the southern hemisphere ‘it’s spring, not autumn!’ so I wanted to design a wallpaper for the northern and the southern hemispheres. Here it is, northerners and southerns, hope you like it!” — Designed by Agnes Swart from the Netherlands.

Get Featured Next Month

Feeling inspired? We’ll publish the December wallpapers on November 30, so if you’d like to be a part of the collection, please don’t hesitate to submit your design. We are already looking forward to it!

]]>
hello@smashingmagazine.com (Cosima Mielke)
<![CDATA[JavaScript For Everyone: Iterators]]> https://smashingmagazine.com/2025/10/javascript-for-everyone-iterators/ https://smashingmagazine.com/2025/10/javascript-for-everyone-iterators/ Mon, 27 Oct 2025 13:00:00 GMT Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at Piccalil.li’s JavaScript for Everyone course to teach you JavaScript. The following is an excerpt from the Iterables and Iterators module: the lesson on Iterators.

Iterators are one of JavaScript’s more linguistically confusing topics, sailing easily over what is already a pretty high bar. There are iterables — array, Set, Map, and string — all of which follow the iterable protocol. To follow said protocol, an object must implement the iterable interface. In practice, that means that the object needs to include a [Symbol.iterator]() method somewhere in its prototype chain. Iterable protocol is one of two iteration protocols. The other iteration protocol is the iterator protocol.

See what I mean about this being linguistically fraught? Iterables implement the iterable iteration interface, and iterators implement the iterator iteration interface! If you can say that five times fast, then you’ve pretty much got the gist of it; easy-peasy, right?

No, listen, by the time you reach the end of this lesson, I promise it won’t be half as confusing as it might sound, especially with the context you’ll have from the lessons that precede it.

An iterable object follows the iterable protocol, which just means that the object has a conventional method for making iterators. The elements that it contains can be looped over with for...of.

An iterator object follows the iterator protocol, and the elements it contains can be accessed sequentially, one at a time.

To reiterate — a play on words for which I do not forgive myself, nor expect you to forgive me — an iterator object follows iterator protocol, and the elements it contains can be accessed sequentially, one at a time. Iterator protocol defines a standard way to produce a sequence of values, and optionally return a value once all possible values have been generated.

In order to follow the iterator protocol, an object has to — you guessed it — implement the iterator interface. In practice, that once again means that a certain method has to be available somewhere on the object's prototype chain. In this case, it’s the next() method that advances through the elements it contains, one at a time, and returns an object each time that method is called.

In order to meet the iterator interface criteria, the returned object must contain two properties with specific keys: one with the key value, representing the value of the current element, and one with the key done, a Boolean value that tells us if the iterator has advanced beyond the final element in the data structure. That’s not an awkward phrasing the editorial team let slip through: the value of that done property is true only when a call to next() results in an attempt to access an element beyond the final element in the iterator, not upon accessing the final element in the iterator. Again, a lot in print, but it’ll make more sense when you see it in action.
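
To make that { value, done } shape concrete, here is a minimal hand-rolled object that satisfies the iterator protocol. This counting example is my own sketch for illustration, not part of the lesson:

```javascript
// A hand-rolled iterator: next() returns { value, done } objects,
// counting from 1 up to a limit.
function makeCounter( limit ) {
  let current = 0;
  return {
    next() {
      current += 1;
      return current <= limit
        ? { value: current, done: false }
        : { value: undefined, done: true };
    }
  };
}

const theCounter = makeCounter( 2 );

theCounter.next();
// Result: Object { value: 1, done: false }

theCounter.next();
// Result: Object { value: 2, done: false }

theCounter.next();
// Result: Object { value: undefined, done: true }
```

Notice that done only becomes true on the call after the final value, not when the final value is returned.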

You’ve seen an example of a built-in iterator before, albeit briefly:

const theMap = new Map([ [ "aKey", "A value." ] ]);

console.log( theMap.keys() );
// Result: Map Iterator { constructor: Iterator() }

That’s right: while a Map object itself is an iterable, Map’s built-in methods keys(), values(), and entries() all return Iterator objects. You’ll also remember that I looped through those using forEach (a relatively recent addition to the language). Used that way, an iterator is indistinguishable from an iterable:

const theMap = new Map([ [ "key", "value " ] ]);

theMap.keys().forEach( thing => {
  console.log( thing );
});
// Result: key

All iterators are iterable; they all implement the iterable interface:

const theMap = new Map([ [ "key", "value " ] ]);

theMap.keys()[ Symbol.iterator ];
// Result: function Symbol.iterator()

And if you’re angry about the increasing blurriness of the line between iterators and iterables, wait until you get a load of this “top ten anime betrayals” video candidate: I’m going to demonstrate how to interact with an iterator by using an array.

“BOO,” you surely cry, having been so betrayed by one of your oldest and most indexed friends. “Array is an iterable, not an iterator!” You are both right to yell at me in general, and right about array in specific — an array is an iterable, not an iterator. In fact, while all iterators are iterable, none of the built-in iterables are iterators.

However, when you call that [ Symbol.iterator ]() method — the one that defines an object as an iterable — it returns an iterator object created from an iterable data structure:

const theIterable = [ true, false ];
const theIterator = theIterable[ Symbol.iterator ]();

theIterable;
// Result: Array [ true, false ]

theIterator;
// Result: Array Iterator { constructor: Iterator() }

The same goes for Set, Map, and — yes — even strings:

const theIterable = "A string.";
const theIterator = theIterable[ Symbol.iterator ]();

theIterator;
// Result: String Iterator { constructor: Iterator() }

What we’re doing here manually — creating an iterator from an iterable using %Symbol.iterator% — is precisely how iterable objects work internally, and why they have to implement %Symbol.iterator% in order to be iterables. Any time you loop through an array, you’re actually looping through an iterator created from that Array. All built-in iterators are iterable. All built-in iterables can be used to create iterators.
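
As a rough sketch of my own (the real language machinery also handles early exits and thrown errors), a for...of loop over an array amounts to this manual walk through the iterator it creates:

```javascript
// Approximately what a for...of loop does behind the scenes: create an
// iterator from the iterable, then call next() until done is true.
const theArray = [ "a", "b" ];
const theIterator = theArray[ Symbol.iterator ]();
const collected = [];

let result = theIterator.next();
while ( !result.done ) {
  collected.push( result.value );
  result = theIterator.next();
}

collected;
// Result: Array [ "a", "b" ]
```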

Alternately — preferably, even, since it doesn’t require you to graze up against %Symbol.iterator% directly — you can use the built-in Iterator.from() method to create an iterator object from any iterable:

const theIterator = Iterator.from([ true, false ]);

theIterator;
// Result: Array Iterator { constructor: Iterator() }

You remember how I mentioned that an iterator has to provide a next() method (that returns a very specific Object)? Calling that next() method steps through the elements that the iterator contains one at a time, with each call returning an instance of that Object:

const theIterator = Iterator.from([ 1, 2, 3 ]);

theIterator.next();
// Result: Object { value: 1, done: false }

theIterator.next();
// Result: Object { value: 2, done: false }

theIterator.next();
// Result: Object { value: 3, done: false }

theIterator.next();
// Result: Object { value: undefined, done: true }

You can think of this as a more controlled form of traversal than the traditional “wind it up and watch it go” for loops you’re probably used to — a method of accessing elements one step at a time, as-needed. Granted, you don’t have to step through an iterator in this way, since they have their very own Iterator.forEach method, which works exactly like you would expect — to a point:

const theIterator = Iterator.from([ true, false ]);

theIterator.forEach( element => console.log( element ) );
/* Result:
true
false
*/

But there’s another big difference between iterables and iterators that we haven’t touched on yet, and for my money, it actually goes a long way toward making linguistic sense of the two. You might need to humor me for a little bit here, though.

See, an iterable object is an object that is iterable. No, listen, stay with me: you can iterate over an Array, and when you’re done doing so, you can still iterate over that Array. It is, by definition, an object that can be iterated over; it is the essential nature of an iterable to be iterable:

const theIterable = [ 1, 2 ];

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

theIterable.forEach( el => {
  console.log( el );
});
/* Result:
1
2
*/

In a way, an iterator object represents the singular act of iteration. Internal to an iterable, it is the mechanism by which the iterable is iterated over, each time that iteration is performed. As a stand-alone iterator object — whether you step through it using the next method or loop over its elements using forEach — once iterated over, that iterator is past tense; it is iterated. Because they maintain an internal state, the essential nature of an iterator is to be iterated over, singular:

const theIterator = Iterator.from([ 1, 2 ]);

theIterator.next();
// Result: Object { value: 1, done: false }

theIterator.next();
// Result: Object { value: 2, done: false }

theIterator.next();
// Result: Object { value: undefined, done: true }

theIterator.forEach( el => console.log( el ) );
// Result: undefined

That makes for neat work when you're using the Iterator constructor’s built-in methods to, say, filter or extract part of an Iterator object:

const theIterator = Iterator.from([ "First", "Second", "Third" ]);

// Take the first two values from theIterator:
theIterator.take( 2 ).forEach( el => {
  console.log( el );
});
/* Result:
"First"
"Second"
*/

// theIterator now only contains anything left over after the above operation is complete:
theIterator.next();
// Result: Object { value: "Third", done: false }

Once you reach the end of an iterator, the act of iterating over it is complete. Iterated. Past-tense.

And so too is your time in this lesson, you might be relieved to hear. I know this was kind of a rough one, but the good news is: this course is iterable, not an iterator. This step in your iteration through it — this lesson — may be over, but the essential nature of this course is that you can iterate through it again. Don’t worry about committing all of this to memory right now — you can come back and revisit this lesson anytime.

Conclusion

I stand by what I wrote there, unsurprising as that probably is: this lesson is a tricky one, but listen, you got this. JavaScript for Everyone is designed to take you inside JavaScript’s head. Once you’ve started seeing how the gears mesh — seen the fingerprints left behind by the people who built the language, and the good, bad, and sometimes baffling decisions that went into that — no itera-, whether -ble or -tor will be able to stand in your way.

My goal is to teach you the deep magic — the how and the why of JavaScript, using the syntaxes you’re most likely to encounter in your day-to-day work, at your pace and on your terms. If you’re new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error. If you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.

I hope to see you there.

]]>
hello@smashingmagazine.com (Mat Marquis)
<![CDATA[Ambient Animations In Web Design: Practical Applications (Part 2)]]> https://smashingmagazine.com/2025/10/ambient-animations-web-design-practical-applications-part2/ https://smashingmagazine.com/2025/10/ambient-animations-web-design-practical-applications-part2/ Wed, 22 Oct 2025 13:00:00 GMT First, a recap:

Ambient animations are the kind of passive movements you might not notice at first. However, they bring a design to life in subtle ways. Elements might subtly transition between colours, move slowly, or gradually shift position. Elements can appear and disappear, change size, or they could rotate slowly, adding depth to a brand’s personality.

In Part 1, I illustrated the concept of ambient animations by recreating the cover of a Quick Draw McGraw comic book as a CSS/SVG animation. But I know not everyone needs to animate cartoon characters, so in Part 2, I’ll share how ambient animation works in three very different projects: Reuven Herman, Mike Worth, and EPD. Each demonstrates how motion can enhance brand identity, personality, and storytelling without dominating a page.

Reuven Herman

Los Angeles-based composer Reuven Herman didn’t just want a website to showcase his work. He wanted it to convey his personality and the experience clients have when working with him. Working with musicians is always creatively stimulating: they’re critical, engaged, and full of ideas.

Reuven’s classical and jazz background reminded me of the work of album cover designer Alex Steinweiss.

I was inspired by the depth and texture that Alex brought to his designs for over 2,500 unique covers, and I wanted to incorporate his techniques into my illustrations for Reuven.

To bring Reuven’s illustrations to life, I followed a few core ambient animation principles:

  • Keep animations slow and smooth.
  • Loop seamlessly and avoid abrupt changes.
  • Use layering to build complexity.
  • Avoid distractions.
  • Consider accessibility and performance.

The stave lines were illustrated in two states: a wavy version, followed by their straight state.

The first step in my animation is to morph the stave lines between states. They’re made up of six paths with multi-coloured strokes. I started with the wavy lines:

<!-- Wavy state -->
<g fill="none" stroke-width="2" stroke-linecap="round">
<path id="p1" stroke="#D2AB99" d="[…]"/>
<path id="p2" stroke="#BDBEA9" d="[…]"/>
<path id="p3" stroke="#E0C852" d="[…]"/>
<path id="p4" stroke="#8DB38B" d="[…]"/>
<path id="p5" stroke="#43616F" d="[…]"/>
<path id="p6" stroke="#A13D63" d="[…]"/>
</g>

Although CSS now enables animation between path points, the number of points in each state needs to match. GSAP doesn’t have that limitation and can animate between states that have different numbers of points, making it ideal for this type of animation. I defined the new set of straight paths:

// Straight state
const Waves = {
  p1: "[…]",
  p2: "[…]",
  p3: "[…]",
  p4: "[…]",
  p5: "[…]",
  p6: "[…]"
};

Then, I created a GSAP timeline that repeats backwards and forwards over six seconds:

const waveTimeline = gsap.timeline({
  repeat: -1,
  yoyo: true,
  defaults: { duration: 6, ease: "sine.inOut" }
});

Object.entries(Waves).forEach(([id, d]) => {
  waveTimeline.to(`#${id}`, { morphSVG: d }, 0);
});

Another ambient animation principle is to use layering to build complexity. Think of it like building a sound mix. You want variation in rhythm, tone, and timing. In my animation, three rows of musical notes move at different speeds:

<path id="notes-row-1"/>
<path id="notes-row-2"/>
<path id="notes-row-3"/>

The duration of each row’s animation is also defined using GSAP, from 100 to 400 seconds to give the overall animation a parallax-style effect:

const noteRows = [
  { id: "#notes-row-1", duration: 300, y: 100 }, // slowest
  { id: "#notes-row-2", duration: 200, y: 250 }, // medium
  { id: "#notes-row-3", duration: 100, y: 400 }  // fastest
];

[…]

The next layer contains a shadow cast by the piano keys, which slowly rotates around its centre:

gsap.to("#shadow", {
  y: -10,
  rotation: -2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

And finally, the piano keys themselves, which rotate at the same time but in the opposite direction to the shadow:

gsap.to("#g3-keys", {
  y: 10,
  rotation: 2,
  transformOrigin: "50% 50%",
  duration: 3,
  ease: "sine.inOut",
  yoyo: true,
  repeat: -1
});

The complete animation can be viewed in my lab. By layering motion thoughtfully, the site feels alive without ever dominating the content, which is a perfect match for Reuven’s energy.

Mike Worth

As I mentioned earlier, not everyone needs to animate cartoon characters, but I do occasionally. Mike Worth is an Emmy award-winning film, video game, and TV composer who asked me to design his website. For the project, I created and illustrated the character of orangutan adventurer Orango Jones.

Orango proved to be the perfect subject for ambient animations and features on every page of Mike’s website. He takes the reader on an adventure, and along the way, they get to experience Mike’s music.

For Mike’s “About” page, I wanted to combine ambient animations with interactions. Orango is in a cave where he has found a stone tablet with faint markings that serve as a navigation aid to elsewhere on Mike’s website. The illustration contains a hidden feature, an easter egg, as when someone presses Orango’s magnifying glass, moving shafts of light stream into the cave and onto the tablet.

I also added an anchor around a hidden circle, which I positioned over Orango’s magnifying glass, as a large tap target to toggle the light shafts on and off by changing the data-lights value on the SVG:

<a href="javascript:void(0);" id="light-switch" title="Lights on/off">
  <circle cx="700" cy="1000" r="100" opacity="0" />
</a>

Then, I added two descendant selectors to my CSS, which adjust the opacity of the light shafts depending on the data-lights value:

[data-lights="lights-off"] .light-shaft {
  opacity: .05;
  transition: opacity .25s linear;
}

[data-lights="lights-on"] .light-shaft {
  opacity: .25;
  transition: opacity .25s linear;
}
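
The toggle script itself isn’t shown here, so this is a minimal sketch of how the data-lights switch might be wired up. It reuses the ids and attribute values from the snippets above, but treat the implementation as my assumption rather than the production code:

```javascript
// Pure helper: compute the next lights state from the current one.
function toggleLights( current ) {
  return current === "lights-on" ? "lights-off" : "lights-on";
}

// Browser-only wiring: flip the data-lights value on the SVG whenever
// the magnifying-glass target is pressed.
if ( typeof document !== "undefined" ) {
  const svg = document.querySelector( "svg[data-lights]" );
  const lightSwitch = document.querySelector( "#light-switch" );

  lightSwitch.addEventListener( "click", () => {
    svg.dataset.lights = toggleLights( svg.dataset.lights );
  });
}
```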

A slow and subtle rotation adds natural movement to the light shafts:

@keyframes shaft-rotate {
  0% { rotate: 2deg; }
  50% { rotate: -2deg; }
  100% { rotate: 2deg; }
}

Which is only visible when the light toggle is active:

[data-lights="lights-on"] .light-shaft {
  animation: shaft-rotate 20s infinite;
  transform-origin: 100% 0;
}

When developing any ambient animation, considering performance is crucial, as even though CSS animations are lightweight, features like blur filters and drop shadows can still strain lower-powered devices. It’s also critical to consider accessibility, so respect someone’s prefers-reduced-motion preferences:

@media screen and (prefers-reduced-motion: reduce) {
  html {
    scroll-behavior: auto;
  }

  *,
  *::before,
  *::after {
    animation-duration: 1ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 1ms !important;
  }
}
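
For JavaScript-driven animation, the same preference can be checked before any timeline starts. This guard is my own sketch, not code from the article:

```javascript
// Returns true when the user has asked for reduced motion. In
// non-browser environments (no window), it reports false.
function prefersReducedMotion() {
  return typeof window !== "undefined" &&
    window.matchMedia( "(prefers-reduced-motion: reduce)" ).matches;
}

if ( !prefersReducedMotion() ) {
  // Safe to start decorative timelines here.
}
```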

When an animation feature is purely decorative, consider adding aria-hidden="true" to keep it from cluttering up the accessibility tree:

<a href="javascript:void(0);" id="light-switch" aria-hidden="true">
  […]
</a>

With Mike’s Orango Jones, ambient animation shifts from subtle atmosphere to playful storytelling. Light shafts and soft interactions weave narrative into the design without stealing focus, proving that animation can support both brand identity and user experience. See this animation in my lab.

EPD

Moving away from composers, EPD is a property investment company. They commissioned me to design creative concepts for a new website. A quick search for property investment companies will usually leave you feeling underwhelmed by their interchangeable website designs. They include full-width banners with faded stock photos of generic city skylines or ethnically diverse people shaking hands.

For EPD, I wanted to develop a distinctive visual style that the company could own, so I proposed graphic, stylised skylines that reflect both EPD’s brand and its global portfolio. I made them using various-sized circles that recall the company’s logo mark.

The point of an ambient animation is that it doesn’t dominate. It’s a background element and not a call to action. If someone’s eyes are drawn to it, it’s probably too much, so I dial back the animation until it feels like something you’d only catch if you’re really looking. I created three skyline designs: one each for Dubai, London, and Manchester.

In each of these ambient animations, the wheels rotate and the large circles change colour at random intervals.

Next, I exported a layer containing the circle elements I wanted to change colour.

<g id="banner-dots">
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  <circle class="data-theme-fill" […]/>
  […]
</g>

Once again, I used GSAP to select groups of circles that flicker like lights across the skyline:

function animateRandomDots() {
  const circles = gsap.utils.toArray("#banner-dots circle");
  const numberToAnimate = gsap.utils.random(3, 6, 1);
  const selected = gsap.utils.shuffle(circles).slice(0, numberToAnimate);

Then, still inside animateRandomDots(), the fill colour of those circles flashes to the teal accent and, after a two-second delay, returns to the same off-white colour as the rest of my illustration:

  gsap.to(selected, {
    fill: "color(display-p3 .439 .761 .733)",
    duration: 0.3,
    stagger: 0.05,
    onComplete: () => {
      gsap.to(selected, {
        fill: "color(display-p3 .949 .949 .949)",
        duration: 0.5,
        delay: 2
      });
    }
  });

  // Schedule the next flicker after a random pause:
  gsap.delayedCall(gsap.utils.random(1, 3), animateRandomDots);
}

animateRandomDots();

The result is a skyline that gently flickers, as if the city itself is alive. Finally, I rotated the wheel. Here, there was no need to use GSAP as this is possible using CSS rotate alone:

<g id="banner-wheel">
  <path stroke="#F2F2F2" stroke-linecap="round" stroke-width="4" d="[…]"/>
  <path fill="#D8F76E" d="[…]"/>
</g>


#banner-wheel {
  transform-box: fill-box;
  transform-origin: 50% 50%;
  animation: rotateWheel 30s linear infinite;
}

@keyframes rotateWheel {
  to { transform: rotate(360deg); }
}

CSS animations are lightweight and ideal for simple, repetitive effects, like fades and rotations. They’re easy to implement and don’t require libraries. GSAP, on the other hand, offers far more control as it can handle path morphing and sequence timelines. The choice of which to use depends on whether I need the precision of GSAP or the simplicity of CSS.

By keeping the wheel turning and the circles glowing, the skyline animations stay in the background yet give the design a distinctive feel. They avoid stock photo clichés while reinforcing EPD’s brand identity and are proof that, even in a conservative sector like property investment, ambient animation can add atmosphere without detracting from the message.

Wrapping up

From Reuven’s musical textures to Mike’s narrative-driven Orango Jones and EPD’s glowing skylines, these projects show how ambient animation adapts to context. Sometimes it’s purely atmospheric, like drifting notes or rotating wheels; other times, it blends seamlessly with interaction, rewarding curiosity without getting in the way.

Whether it echoes a composer’s improvisation, serves as a playful narrative device, or adds subtle distinction to a conservative industry, the same principles hold true:

Keep motion slow, seamless, and purposeful so that it enhances, rather than distracts from, the design.

]]>
hello@smashingmagazine.com (Andy Clarke)
<![CDATA[AI In UX: Achieve More With Less]]> https://smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/ https://smashingmagazine.com/2025/10/ai-ux-achieve-more-with-less/ Fri, 17 Oct 2025 08:00:00 GMT I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.

But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.

Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.

How To Work With AI

Here is the mental model that has been most helpful for me. Treat AI like an intern with zero experience.

An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.

This is exactly how you should work with AI.

The Basics Of Prompting

I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.

  • Define the role.
    Start with something like “Act as a user researcher” or “Act as a copywriter.” This gives the AI context for how to respond.
  • Break it into steps.
    Do not just say “Analyze these interview transcripts.” Instead, say “I want you to complete the following steps. One, identify recurring themes. Two, look for questions users are trying to answer. Three, note any objections that come up. Four, output a summary of each.”
  • Define success.
    Tell it what good looks like. “I am looking for a report that gives a clear indication of recurring themes and questions in a format I can send to stakeholders. Do not use research terminology because they will not understand it.”
  • Make it think.
    Tell it to think deeply about its approach before responding. Get it to create a way to test for success (known as a rubric) and iterate on its work until it passes that test.
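
The four steps above can be rolled into a small template helper. This is just a sketch: the field names (`role`, `steps`, `success`) are invented for illustration and are not part of any AI tool's API.

```javascript
// Assemble a structured prompt from the four parts described above.
// All field names here are illustrative, not from any specific AI API.
function buildPrompt({ role, steps, success, makeItThink = true }) {
  const parts = [`Act as ${role}.`];
  if (steps && steps.length) {
    parts.push(
      'I want you to complete the following steps. ' +
        steps.map((s, i) => `${i + 1}. ${s}`).join(' ')
    );
  }
  if (success) parts.push(success);
  if (makeItThink) {
    // The "universal" second paragraph: think deeply, build a rubric, iterate.
    parts.push(
      'Think deeply about your approach before responding. ' +
        'Create a rubric for the output and iterate until it scores ' +
        'extremely high on the rubric. Only then, output the result.'
    );
  }
  return parts.join('\n\n');
}

const prompt = buildPrompt({
  role: 'a user researcher',
  steps: ['Identify recurring themes', 'Note any objections that come up'],
  success: 'Output a summary I can send to stakeholders.',
});
```

The point is less the code than the discipline: every prompt gets a role, explicit steps, a definition of success, and the rubric instruction.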

Here is a real prompt I use for online research:

Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.

Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.

That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.

Learn When To Trust It

You should never fully trust AI. Just like you would never fully trust an intern you have only just met.

To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.

But even after months of working with it daily, I still check its work. I still challenge it. I still make it cite sources and explain its reasoning.

The key is that even with all that checking, it is still faster than doing it yourself. Much faster.

Using AI For User Research

This is where AI has genuinely transformed my work. I use it constantly for five main things.

Online Research

I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.

This would have taken me days of trawling through social media and review sites. Now it takes minutes.

I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.

Analyzing Interviews And Surveys

I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.

For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.

The quality is good. Really good. As long as you give it clear instructions about what you want.

Making Sense Of Data

I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.

AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. “What patterns do you see?” “Can you reformat this?” “Show me this data in a different way.”

Microsoft Clarity now has Copilot built in, so you can ask it questions about your analytics data. Triple Whale does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.

Research Projects

This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.

When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.

Then I give it custom instructions. Here is one I use for my own business:

Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my UX consultant business and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.

I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.

Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. Challenge my thinking. It is like having a co-worker who never gets tired and has a perfect memory.

I do this for every client project now. It is invaluable.

Creating Personas

AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.

Now I can create what I call functional personas. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.

I upload all my research to a project and say:

Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.

The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.

Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a hypothetical user. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.

My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.

Using AI For Design And Development

Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.

Three reasons why:

  1. It is slow if you want something specific or complicated.
  2. It can be frustrating because it gets close but not quite there.
  3. The quality is often subpar: unpolished code, questionable design choices, that kind of thing.

But that does not mean it is not useful. It absolutely is. Just not for final production work.

Functional Prototypes

If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma, because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links, and you cannot test it.

Tools like Relume and Bolt can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.

But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.

Small Coding Tasks

I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.

AI lets me create the little tools I need. A calculator that calculates the ROI of my UX work. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.

Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.

If you are a developer, you should absolutely be using tools like Cursor by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.

Reviewing Existing Services

There are some great tools for getting quick feedback on existing websites when budget and time are tight.

If you need to conduct a UX audit, Wevo Pulse is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.

Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.

For e-commerce sites, Baymard has UX Ray, which analyzes flaws based on their massive database of user research.

Checking Your Designs

Attention Insight has taken thousands of hours of eye-tracking studies and trained AI on it to predict where people will look on a page. It has about 90 to 96 percent accuracy.

You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.

It is great for dealing with stakeholders who say, “People won’t see that.” You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.

I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.

I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.

Creating The Perfect Image

I use AI all the time for creating images that do a specific job. My preferred tools are Midjourney and Gemini.

I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.

So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. “Make the guy reach here” or “Add glasses to this person.” I can get pretty much exactly what I want.

The other thing I love about Midjourney is that you can upload a photograph and say, “Replicate this style.” This keeps consistency across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.

Using AI For Content

Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.

I have completely stopped asking clients for copy since AI came along. Here is my process.

Build Everything Around Questions

Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a top task analysis where people vote on which questions matter most.

I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.

Get Bullet Point Answers From Stakeholders

I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions.

Then I go to my clients and say: “I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”

That is their job done. Pretty much.

Let AI Draft The Copy

Now I take control. I feed ChatGPT the questions and bullet points and say:

Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.

I often upload a full style guide as well, with details about how I want it to be written.

The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.

Stakeholders Review And Provide Feedback

That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, “Rewrite using these comments.”

Job done.

The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.

It also makes things go more smoothly because you are not criticizing their content, which is when people get defensive. They are criticizing AI content instead.

Tools That Help

If your stakeholders are still giving you content, Hemingway Editor is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.

If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.

What This Means for You

Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.

But here is what I know from two years of using this stuff daily. It has made me faster. It has made me better. It has freed me up to do more strategic thinking and less grunt work.

A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.

Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.

Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.

AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.

That is where you come in. That is where you will always come in.

Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.

Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.

And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.

]]>
hello@smashingmagazine.com (Paul Boag)
<![CDATA[How To Make Your UX Research Hard To Ignore]]> https://smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/ https://smashingmagazine.com/2025/10/how-make-ux-research-hard-to-ignore/ Thu, 16 Oct 2025 13:00:00 GMT In the early days of my career, I believed that nothing wins an argument more effectively than strong and unbiased research. Surely facts speak for themselves, I thought.

If I just get enough data, just enough evidence, just enough clarity on where users struggle — well, once I have it all and I present it all, it alone will surely change people’s minds, hearts, and beliefs. And, most importantly, it will help everyone see, understand, and perhaps even appreciate and commit to what needs to be done.

Well, it’s not quite like that. In fact, the stronger and louder the data, the more likely it is to be questioned. And there is a good reason for that, which is often left between the lines.

Research Amplifies Internal Flaws

Throughout the years, I’ve often seen data speaking volumes about where the business is failing, where customers are struggling, where the team is faltering — and where an urgent turnaround is necessary. It was right there, in plain sight: clear, loud, and obvious.

But because it’s so clear, it reflects back, often amplifying all the sharp edges and all the cut corners in all the wrong places. It reflects internal flaws, wrong assumptions, and failing projects — some of them signed off years ago, with secured budgets, big promotions, and approved headcounts. Questioning them means questioning authority, and often it’s a tough path to take.

As it turns out, strong data is very, very good at raising uncomfortable truths that most companies don’t really want to acknowledge. That’s why, at times, research is deemed “unnecessary,” or why we don’t get access to users, or why loud voices always win big arguments.

So even if data is presented with a lot of eagerness, gravity, and passion in that big meeting, it will get questioned, doubted, and explained away. Not because of its flaws, but because of hope, reluctance to change, and layers of internal politics.

This shows up most vividly in situations when someone raises concerns about the validity and accuracy of research. Frankly, it’s not that somebody is wrong and somebody is right. Both parties just happen to be right in a different way.

What To Do When Data Disagrees

We’ve all heard that data always tells a story. However, it’s never just a single story. People are complex, and pointing out a specific truth about them just by looking at numbers is rarely enough.

When data disagrees, it doesn’t mean that either is wrong. It’s just that different perspectives reveal different parts of a whole story that isn’t completed yet.

In digital products, most stories have two sides:

  • Quantitative data ← What/When: behavior patterns at scale.
    ↳ Quant usually comes from analytics, surveys, and experiments.
  • Qualitative data ← Why/How: user needs and motivations.
    ↳ Qual comes from tests, observations, and open-ended surveys.

Risk-averse teams overestimate the weight of big numbers in quantitative research. Users exaggerate the frequency and severity of issues that are critical for them. As Archana Shah noted, designers get carried away by users’ confident responses and often overestimate what people say and do.

And so, eventually, data coming from different teams paints different pictures. When that happens, we need to reconcile and triangulate. With the former, we track what’s missing, omitted, or overlooked. With the latter, we cross-validate data — e.g., finding pairings of qual/quant streams of data, then clustering them together to see what’s there and what’s missing, and exploring from there.
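
One way to picture that cross-validation step is to cluster findings from both streams by theme and flag anything only one side reports. A toy sketch in JavaScript; the data shape (`theme`, `note`) is invented for illustration, not taken from any research tool.

```javascript
// Toy triangulation: cluster quant and qual findings by theme,
// then flag themes supported by only one stream of evidence.
function triangulate(quantFindings, qualFindings) {
  const themes = new Map();
  const add = (finding, stream) => {
    const entry = themes.get(finding.theme) || { quant: [], qual: [] };
    entry[stream].push(finding.note);
    themes.set(finding.theme, entry);
  };
  quantFindings.forEach((f) => add(f, 'quant'));
  qualFindings.forEach((f) => add(f, 'qual'));

  return [...themes].map(([theme, { quant, qual }]) => ({
    theme,
    quant,
    qual,
    // A theme backed by both streams is corroborated;
    // anything else needs follow-up in the missing stream.
    corroborated: quant.length > 0 && qual.length > 0,
  }));
}

const result = triangulate(
  [{ theme: 'checkout', note: '60% drop-off at payment step' }],
  [
    { theme: 'checkout', note: 'Users distrust the card form' },
    { theme: 'search', note: 'Filters are hard to find' },
  ]
);
```

Here "checkout" comes out corroborated by both streams, while "search" surfaces only in qual and would need quantitative follow-up.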

And even with all of it in place and data conflicts resolved, we still need to do one more thing to make a strong argument: we need to tell a damn good story.

Facts Don’t Win Arguments, Stories Do

Research isn’t everything. Facts don’t win arguments; powerful stories do. But a story that starts with a spreadsheet isn’t always inspiring or effective. Perhaps it brings a problem into the spotlight, but it doesn’t lead to a resolution.

The very first thing I try to do in that big boardroom meeting is to emphasize what unites us — shared goals, principles, and commitments that are relevant to the topic at hand. Then, I show how new data confirms or confronts our commitments, with specific problems we believe we need to address.

When a question about the quality of data comes in, I need to show that it has been reconciled and triangulated already and discussed with other teams as well.

A good story has a poignant ending. People need to see an alternative future to trust and accept the data — and a clear and safe path forward to commit to it. So I always try to present options and solutions that we believe will drive change and explain our decision-making behind that.

They also need to believe that this distant future is within reach, and that they can pull it off, albeit under a tough timeline or with limited resources.

And: a good story also presents a viable, compelling, shared goal that people can rally around and commit to. Ideally, it’s something that has a direct benefit for them and their teams.

These are the ingredients of the story that I always try to keep in my mind when working on that big presentation. And in fact, data is a starting point, but it does need a story wrapped around it to be effective.

Wrapping Up

There is nothing more disappointing than finding a real problem that real people struggle with and facing the harsh reality of research not being trusted or valued.

We’ve all been there before. The best thing you can do is to be prepared: have strong data to back you up, include both quantitative and qualitative research — preferably with video clips from real customers — and paint a viable future that seems within reach.

And sometimes nothing changes until something breaks. And at times, there isn’t much you can do about it unless you are prepared when it happens.

“Data doesn’t change minds, and facts don’t settle fights. Having answers isn’t the same as learning, and it for sure isn’t the same as making evidence-based decisions.”

— Erika Hall
Meet “How To Measure UX And Design Impact”

You can find more details on UX Research in Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.


Useful Books

  • Just Enough Research, by Erika Hall
  • Designing Surveys That Work, by Caroline Jarrett
  • Designing Quality Survey Questions, by Sheila B. Robinson
]]>
hello@smashingmagazine.com (Vitaly Friedman)
<![CDATA[The Grayscale Problem]]> https://smashingmagazine.com/2025/10/the-grayscale-problem/ https://smashingmagazine.com/2025/10/the-grayscale-problem/ Mon, 13 Oct 2025 10:00:00 GMT Last year, a study found that cars are steadily getting less colourful. In the US, around 80% of cars are now black, white, gray, or silver, up from 60% in 2004. This trend has been attributed to cost savings and consumer preferences. Whatever the reasons, the result is hard to deny: a big part of daily life isn’t as colourful as it used to be.

The colourfulness of mass consumer products is hardly the bellwether for how vibrant life is as a whole, but the study captures a trend a lot of us recognise — offline and on. From colour to design to public discourse, a lot of life is getting less varied, more grayscale.

The web is caught in the same current. There is plenty right with it — it retains plenty of its founding principles — but its state is not healthy. From AI slop to shoddy service providers to enshittification, the digital world faces its own grayscale problem.

This bears talking about. One of life’s great fallacies is that things get better over time on their own. They can, but it’s certainly not a given. I don’t think the moral arc of the universe bends towards justice on its own; I think it bends wherever it is dragged, kicking and screaming, by those with the will and the means to do so.

Much of the modern web, and the forces of optimisation and standardisation that drive it, bear an uncanny resemblance to the trend of car colours. Processes like market research and A/B testing — the process by which two options are compared to see which ‘performs’ better on clickthrough, engagement, etc. — have their value, but they don’t lend themselves to particularly stimulating design choices.

The spirit of free expression that made the formative years of the internet so exciting — think GeoCities, personal blogging, and so on — is on the slide.

The ongoing transition to a more decentralised, privacy-aware Web3 holds some promise. Two-thirds of the world’s population now has online access — though that still leaves plenty of work to do — with a wealth of platforms allowing billions of people to connect. The dream of a digital world that is open, connected, and flat endures, but is tainted.

Monopolies

One of the main sources of concern for me is that although more people are online than ever, their activity is concentrated on fewer and fewer sites. A study published in 2021 found as much: activity clusters around a handful of websites. Think Google, Amazon, Facebook, Instagram, and, more recently, ChatGPT:

“So, while there is still growth in the functions, features, and applications offered on the web, the number of entities providing these functions is shrinking. [...] The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.”

Monopolies by nature reduce variance, both through their domination of the market and (understandably in fairness) internal preferences for consistency. And, let’s be frank, they have a vested interest in crushing any potential upstarts.

Dominant websites often fall victim to what I like to call Internet Explorer Syndrome, where their dominance breeds a certain amount of complacency. Why improve your quality when you’re sitting on 90% market share? No wonder the likes of Google are getting worse.

The most immediate sign of this is obviously how sites are designed and how they look. A lot of the big players look an awful lot like each other. Even personal websites are built atop third-party website builders. Millions of people wind up using the same handful of templates, and that’s if they have their own website at all. On social media, we are little more than a profile picture and a pithy tagline. The rest is boilerplate.

Should there be sleek, minimalist, ‘grayscale’ design systems and websites? Absolutely. But there should be colourful, kooky ones too, and if anything, they’re fading away. Do we really want to spend our online lives in the digital equivalent of Levittowns? Even logos are contriving to be less eye-catching. It feels like a matter of time before every major logo is a circle in a pastel colour.

The arrival of Artificial Intelligence into our everyday lives (and a decent chunk of the digital services we use) has put all of this into overdrive. Amalgamating — and hallucinating from — content that was already trending towards a perfect average, it is grayscale in its purest form.

Mix all the colours together, and what do you get? A muddy gray gloop.

I’m not railing against best practice. A lot of conventions have become the standard for good reason. One could just as easily shake their fist at the sky and wonder why all newspapers look the same, or all books. I hope the difference here is clear, though.

The web is a flexible enough domain that I think it belongs in the realm of architecture. A city where all buildings look alike has a soul-crushing quality about it. The same is true, I think, of the web.

In the Oscar Wilde play Lady Windermere’s Fan, a character quips that a cynic “knows the price of everything and the value of nothing.” In fairness, another quips back that a sentimentalist “sees an absurd value in everything, and doesn’t know the market price of any single thing.”

The sweet spot is somewhere in between. Structure goes a long way, but life needs a bit of variety too.

So, how do we go about bringing that variety? We probably shouldn’t hold our breath on big players to lead the way. They have the most to lose, after all. Why risk being colourful or dynamic if it impacts the bottom line?

We, the citizens of the web, have more power than we realise. This is the web, remember, a place where if you can imagine it, odds are you can make it. And at zero cost. No materials to buy and ship, no shareholders to appease. A place as flexible — and limitless — as the web has no business being boring.

There are plenty of ways, big and small, of keeping this place colourful. Whether our digital footprints are on third-party websites or ones we build ourselves, we needn’t toe the line.

Colour seems an appropriate place to start. When given the choice, try something audacious rather than safe. The worst that can happen is that it doesn’t work. It’s not like the sunk cost of painting a room; if you don’t like the palette, you simply change the hex codes. The same is true of fonts, icons, and other building blocks of the web.

As an example, a couple of friends and I listen to and review albums occasionally as a hobby. On the website, the palette of each review page reflects the album artwork:

I couldn’t tell you if reviews ‘perform’ better or worse than if they had a grayscale palette, because I don’t care. I think it’s a lot nicer to look at. And for those wondering, yes, I have tried to make every page meet AA Web Accessibility standards. Vibrant and accessible aren’t mutually exclusive.
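
A lightweight way to wire up that kind of per-page palette is CSS custom properties generated from a small colour map. The sketch below assumes a hypothetical `palettes` object keyed by album slug; on a real site, the colours would come from the album artwork rather than being hard-coded.

```javascript
// Hypothetical per-review palettes, keyed by album slug.
const palettes = {
  'ok-computer': { bg: '#1b2a38', accent: '#9ad1d4', text: '#f2f2f2' },
  'rumours': { bg: '#3a2e25', accent: '#d9a441', text: '#f5efe6' },
};

// Build a CSS rule exposing a palette as custom properties,
// so the rest of the stylesheet can use var(--bg), var(--accent), var(--text).
function paletteCss(slug) {
  const p = palettes[slug];
  if (!p) {
    // Fall back to a neutral palette for pages without one.
    return ':root { --bg: #ffffff; --accent: #333333; --text: #111111; }';
  }
  return `:root { --bg: ${p.bg}; --accent: ${p.accent}; --text: ${p.text}; }`;
}
```

In the browser, you would inject the returned rule via a `<style>` element (or set the properties directly on `document.documentElement.style`); that wiring is omitted here to keep the sketch self-contained.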

Another great way of bringing vibrancy to the web is a degree of randomisation. Bruno Simon of Three.js Journey and awesome-portfolio fame weaves random generation into a lot of his projects, and the results are gorgeous. What’s more, they feel familiar and natural, because life is full of wildcards.

This needn’t be in fancy 3D models. You could lightly rotate images to create a more informal, photo album mood, or chuck in the occasional random link in a list of recommended articles, just to shake things up.
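
The light image rotation takes only a couple of lines. A sketch, with the browser wiring left as a comment; the `.snapshot` class is hypothetical.

```javascript
// Pick a small random tilt, e.g. between -3 and +3 degrees.
// Accepting the random source as a parameter keeps it testable.
function randomTilt(maxDegrees = 3, rand = Math.random) {
  return (rand() * 2 - 1) * maxDegrees;
}

// In the browser, you might apply it like this (hypothetical .snapshot class):
// document.querySelectorAll('.snapshot').forEach((img) => {
//   img.style.transform = `rotate(${randomTilt()}deg)`;
// });
```

Keeping the maximum tilt small is what gives the informal photo-album feel without making the layout look broken.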

In a lot of ways, it boils down to an attitude of just trying stuff out. Make your own font, give the site a sepia filter, and add that easter egg you keep thinking about. Just because someone, somewhere has already done it doesn’t mean you can’t do it your own way. And who knows, maybe your way stumbles onto someplace wholly new.

I’m wary of being too prescriptive. I don’t have the keys to a colourful web. No one person does. A vibrant community is the sum total of its people. What keeps things interesting is individuals trying wacky ideas and putting them out there. Expression for expression’s sake. Experimentation for experimentation’s sake. Tinkering for tinkering’s sake.

As users, there’s also plenty of room to be adventurous and try out open source alternatives to the software monopolies that shape so much of today’s Web. Being active in the communities that shape those tools helps to sustain a more open, collaborative digital world.

Although there are lessons to be taken from it, we won’t get a more colourful web by idealising the past or pining to get back to the ‘90s. Nor is there any point in resisting new technologies. AI is here; the choice is whether we use it or it uses us. We must have the courage to carry forward what still holds true, drop what doesn’t, and explore new ideas with a spirit of play.

Here are a few more Smashing articles in that spirit:

I do think there’s a broader discussion to be had about the extent to which A/B tests, bottom lines, and focus groups seem to dictate much of how the modern web looks and feels. With sites being squeezed tighter and tighter by dwindling advertising revenues, and AI answers muscling in on search traffic, the corporate entities behind larger websites can’t justify doing anything other than what is safe and proven, for fear of shrinking their slice of the pie.

Lest we forget, though, most of the web isn’t beholden to those types of pressure. From pet projects to wikis to forums to community news outlets to all manner of other things, there are countless reasons for websites to exist, and they needn’t take design cues from the handful of sites slugging it out at the top.

Connected with this is the dire need for digital literacy (PDF) — ‘the confident and critical use of a full range of digital technologies for information, communication and basic problem-solving in all aspects of life.’ For as long as using third-party platforms is a necessity rather than a choice, the needle’s only going to move so much.

There’s a reason why Minecraft is the world’s best-selling game. People are creative. When given the tools — and the opportunity — that creativity will manifest in weird and wonderful ways. That game is a lot of things, but gray ain’t one of them.

The web has all of that flexibility and more. It is a manifestation of imagination. Imagination trends towards colour, not grayness. It doesn’t always feel like it, but where the internet goes is decided by its citizens. The internet is ours. If we want to, we can make it technicolor.

]]>
hello@smashingmagazine.com (Frederick O’Brien)
<![CDATA[Smashing Animations Part 5: Building Adaptive SVGs With `<symbol>`, `<use>`, And CSS Media Queries]]> https://smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/ https://smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/ Mon, 06 Oct 2025 13:00:00 GMT I’ve written quite a lot recently about how I prepare and optimise SVG code to use as static graphics or in animations. I love working with SVG, but there’s always been something about them that bugs me.

To illustrate how I build adaptive SVGs, I’ve selected an episode of The Quick Draw McGraw Show called “Bow Wow Bandit,” first broadcast in 1959.

In it, Quick Draw McGraw enlists his bloodhound Snuffles to rescue his sidekick Baba Looey. Like most Hanna-Barbera title cards of the period, the artwork was made by Lawrence (Art) Goble.

Let’s say I’ve designed an SVG scene based on that Bow Wow Bandit title card, with a 16:9 aspect ratio and a viewBox of 1920×1080. This SVG scales up and down (the clue’s in the name), so it looks sharp whether it’s gigantic or minute.

But on small screens, the 16:9 aspect ratio (live demo) might not be the best format, and the image loses its impact. Sometimes, a portrait orientation, like 3:4, would suit the screen size better.

But, herein lies the problem, as it’s not easy to reposition internal elements for different screen sizes using just viewBox. That’s because in SVG, internal element positions are locked to the coordinate system from the original viewBox, so you can’t easily change their layout between, say, desktop and mobile. This is a problem because animations and interactivity often rely on element positions, which break when the viewBox changes.

My challenge was to serve a 1080×1440 version of Bow Wow Bandit to smaller screens and a different one to larger ones. I wanted the position and size of internal elements — like Quick Draw McGraw and his dawg Snuffles — to change to best fit these two layouts. To solve this, I experimented with several alternatives.

Note: Why not just use the <picture> element with external SVGs? The <picture> element is brilliant for responsive images, but it treats every source as an image — whether it’s a raster format (like JPEG or WebP) or an external SVG file. That means you can’t animate or style the SVG’s internal elements using CSS.

Showing And Hiding SVG

The most obvious choice was to include two different SVGs in my markup, one for small screens, the other for larger ones, then show or hide them using CSS and Media Queries:

<svg id="svg-small" viewBox="0 0 1080 1440">
  <!-- ... -->
</svg>

<svg id="svg-large" viewBox="0 0 1920 1080">
  <!--... -->
</svg>


#svg-small { display: block; }
#svg-large { display: none; }

@media (min-width: 64rem) {
  #svg-small { display: none; }
  #svg-large { display: block; }
}

But using this method, both SVG versions are loaded, which, when the graphics are complex, means downloading lots and lots and lots of unnecessary code.

Replacing SVGs Using JavaScript

I thought about using JavaScript to swap in the larger SVG at a specified breakpoint:

// Assumes svgContainer is a reference to a container element, and
// desktopSVG and mobileSVG are strings holding each version's markup.
if (window.matchMedia('(min-width: 64rem)').matches) {
  svgContainer.innerHTML = desktopSVG;
} else {
  svgContainer.innerHTML = mobileSVG;
}

Leaving aside the fact that JavaScript would now be critical to how the design is displayed, both SVGs would usually be loaded anyway, which adds DOM complexity and unnecessary weight. Plus, maintenance becomes a problem as there are now two versions of the artwork to maintain, doubling the time it would take to update something as small as the shape of Quick Draw’s tail.

The Solution: One SVG Symbol Library And Multiple Uses

Remember, my goal is to:

  • Serve one version of Bow Wow Bandit to smaller screens,
  • Serve a different version to larger screens,
  • Define my artwork just once (DRY), and
  • Be able to resize and reposition elements.

I don’t read about it enough, but the <symbol> element lets you define reusable SVG elements that stay hidden until they’re referenced, which improves maintainability and reduces code bloat. They’re like components for SVG: create once and use wherever you need them:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-body" viewBox="0 0 620 700">
    <g class="quick-draw-body">[…]</g>
  </symbol>
  <!-- ... -->
</svg>

<svg viewBox="0 0 620 700">
  <use href="#quick-draw-body" />
</svg>

A <symbol> is like storing a character in a library. I can reference it as many times as I need, to keep my code consistent and lightweight. Using <use> elements, I can insert the same symbol multiple times, at different positions or sizes, and even in different SVGs.

Each <symbol> must have its own viewBox, which defines its internal coordinate system. That means paying special attention to how SVG elements are exported from apps like Sketch.

Exporting For Individual Viewboxes

I wrote before about how I export elements in layers to make working with them easier. That process is a little different when creating symbols.

Ordinarily, I would export all my elements using the same viewBox size. But when I’m creating a symbol, I need it to have its own specific viewBox.

So I export each element as an individually sized SVG, which gives me the dimensions I need to convert its content into a symbol. Let’s take the SVG of Quick Draw McGraw’s hat, which has a viewBox size of 294×182:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 294 182">
  <!-- ... -->
</svg>

I swap the SVG tags for <symbol> and add its artwork to my SVG library:

<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat">[…]</g>
  </symbol>
</svg>

Then, I repeat the process for all the remaining elements in my artwork. Now, if I ever need to update any of my symbols, the changes will be automatically applied to every instance it’s used.
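Sketched out, the finished library might look something like this — the hat symbol’s id and viewBox come from the examples above, while the body symbol’s 620×700 viewBox is just a stand-in for whatever dimensions each exported element happens to have:

```html
<!-- Hidden symbol library: each symbol keeps the viewBox
     from its individually exported SVG. -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182">
    <g class="quick-draw-hat"><!-- hat paths --></g>
  </symbol>
  <symbol id="quick-draw-body" viewBox="0 0 620 700">
    <g class="quick-draw-body"><!-- body paths --></g>
  </symbol>
  <!-- …one symbol per exported element… -->
</svg>
```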

Using A <symbol> In Multiple SVGs

I wanted my elements to appear in both versions of Bow Wow Bandit, one arrangement for smaller screens and an alternative arrangement for larger ones. So, I create both SVGs:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <!-- ... -->
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <!-- ... -->
</svg>

…and insert links to my symbols in both:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" />
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" />
</svg>

Positioning Symbols

Once I’ve placed symbols into my layout using <use>, my next step is to position them, which is especially important if I want alternative layouts for different screen sizes. A referenced symbol behaves like a nested SVG, so I can scale and move each instance using attributes like width, height, and transform on its <use> element:

<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/>
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/>
</svg>

I can place each <use> element independently using transform. This is powerful because rather than repositioning elements inside my SVGs, I move the <use> references. My internal layout stays clean, and the file size remains small because I’m not duplicating artwork. A browser only loads it once, which reduces bandwidth and speeds up page rendering. And because I’m always referencing the same symbol, their appearance stays consistent, whatever the screen size.

Animating <use> Elements

Here’s where things got tricky. I wanted to animate parts of my characters — like Quick Draw’s hat tilting and his legs kicking. But when I added CSS animations targeting internal elements inside a <symbol>, nothing happened.

Tip: You can animate the <use> element itself, but not elements inside the <symbol>. If you want individual parts to move, make them their own symbols and animate each <use>.

Turns out, you can’t style or animate the contents of a <symbol> from the page’s stylesheet, because <use> renders them as shadow DOM clones that outside selectors can’t reach. So, I had to get sneaky. Inside each <symbol> in my library SVG, I added a <g> element around the part I wanted to animate:

<symbol id="quick-draw-hat" viewBox="0 0 294 182">
  <g class="quick-draw-hat">
    <!-- ... -->
  </g>
</symbol>

…and animated it using an attribute selector that targets the href attribute of the <use> element:

use[href="#quick-draw-hat"] {
  animation-delay: 0.5s;
  animation-direction: alternate;
  animation-duration: 1s;
  animation-iteration-count: infinite;
  animation-name: hat-rock;
  animation-timing-function: ease-in-out;
  transform-origin: center bottom;
}

@keyframes hat-rock {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}

Media Queries For Display Control

Once I’ve created my two visible SVGs — one for small screens and one for larger ones — the final step is deciding which version to show at which screen size. I use CSS Media Queries to hide one SVG and show the other. I start by showing the small-screen SVG by default:

.svg-small { display: block; }
.svg-large { display: none; }

Then I use a min-width media query to switch to the large-screen SVG at 64rem and above:

@media (min-width: 64rem) {
  .svg-small { display: none; }
  .svg-large { display: block; }
}

This ensures there’s only ever one SVG visible at a time, keeping my layout simple and the DOM free from unnecessary clutter. And because both visible SVGs reference the same hidden <symbol> library, the browser only downloads the artwork once, regardless of how many <use> elements appear across the two layouts.
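Putting the pieces together, here’s a minimal sketch of the whole pattern on one page — symbol contents are omitted, and it reuses the ids, transforms, and 64rem breakpoint from the examples above:

```html
<!-- Hidden library: artwork is defined once. -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="quick-draw-hat" viewBox="0 0 294 182"><!-- … --></symbol>
</svg>

<!-- Two visible layouts referencing the same symbol. -->
<svg class="svg-small" viewBox="0 0 1080 1440">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/>
</svg>

<svg class="svg-large" viewBox="0 0 1920 1080">
  <use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/>
</svg>

<style>
  /* Small-screen layout by default; swap at the breakpoint. */
  .svg-small { display: block; }
  .svg-large { display: none; }

  @media (min-width: 64rem) {
    .svg-small { display: none; }
    .svg-large { display: block; }
  }
</style>
```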

Wrapping Up

By combining <symbol>, <use>, CSS Media Queries, and specific transforms, I can build adaptive SVGs that reposition their elements without duplicating content, loading extra assets, or relying on JavaScript. I need to define each graphic only once in a hidden symbol library. Then I can reuse those graphics, as needed, inside several visible SVGs. With CSS doing the layout switching, the result is fast and flexible.

It’s a reminder that some of the most powerful techniques on the web don’t need big frameworks or complex tooling — just a bit of SVG know-how and a clever use of the basics.

]]>
hello@smashingmagazine.com (Andy Clarke)