May 09, 2022

Designing With AI: What Product Designers Need to Understand

This post synthesizes the key ideas, terms, and frameworks I now use when thinking about AI-powered products—and how they apply directly to UX design. I learned this material through the ELVTR course AI in Product Design with Robert Redmond.

AI isn’t a feature you sprinkle on top of a product. It’s not a magic recommendation engine. And it’s definitely not just a technical concern to “hand off to engineers.” AI changes how products decide, predict, adapt, and treat users differently over time. That makes it a design problem at its core.

First Principles: What AI Actually Does in Products

At a high level, machine learning systems tend to do one (or more) of the following:

  1. Predict something (future behavior, outcomes, risk, usage)

  2. Group things (users, behaviors, content, patterns)

  3. Recommend actions (what to do next, what to change)

  4. Automate decisions (sometimes with feedback loops)

For designers, the important question is not which model is used, but what kind of decision the system is making and how visible that decision is to the user.

Every AI-powered product implicitly answers:

  • Who gets treated similarly?

  • Who gets flagged as “different”?

  • What does the system think matters most?

  • How confident is it in those judgments?

Key Machine Learning Concepts (Supervised vs. Unsupervised Learning)

Supervised learning works with labeled data. The system learns from examples where the “correct answer” is known.

Common product implications:

  • Predicting churn

  • Estimating costs or usage

  • Classifying outcomes (yes/no, high/low risk)

UX impact:

  • These systems feel authoritative

  • Users often assume they are “right”

  • Designers must handle uncertainty and confidence carefully
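One way to handle that uncertainty is to never let the interface speak with more confidence than the model has. A minimal sketch, assuming a hypothetical churn model that outputs a probability (the thresholds and copy here are invented placeholders, not calibrated values):

```python
def churn_message(probability: float) -> str:
    """Translate a raw churn probability into hedged UI copy.

    Hypothetical thresholds -- real cutoffs would come from model
    calibration and user research, not guesswork.
    """
    if probability >= 0.8:
        return "This account is very likely to churn."
    if probability >= 0.5:
        return "This account may be at risk of churning."
    return "No strong churn signal for this account."

# The interface never presents the prediction as certain fact.
print(churn_message(0.92))  # This account is very likely to churn.
print(churn_message(0.35))  # No strong churn signal for this account.
```

The design decision lives in the copy, not the model: the same probability can read as an authoritative verdict or a hedged signal depending on the words around it.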

Unsupervised learning, on the other hand, finds patterns without predefined labels.

This is often used for:

  • User segmentation

  • Behavioral clustering

  • Discovering emergent patterns

UX impact:

  • Users may not understand why they’re grouped

  • Labels like “people like you” need careful framing

  • Explainability becomes a design responsibility
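That responsibility can be made concrete by pairing every cluster label with the behavior that produced it, so users can question the grouping. A sketch with invented segment names and traits (a real system would derive these from the clustering itself):

```python
# Hypothetical mapping from opaque cluster IDs to user-facing copy.
# Names and traits are illustrative placeholders.
SEGMENT_COPY = {
    0: ("Evening streamers", "mostly watch after 8pm"),
    1: ("Weekend browsers", "are mostly active on weekends"),
}

def explain_segment(cluster_id: int) -> str:
    """Surface the *reason* alongside the label so users can contest it."""
    if cluster_id not in SEGMENT_COPY:
        return "You're seeing general picks; we don't have a pattern for you yet."
    name, trait = SEGMENT_COPY[cluster_id]
    return f"You're seeing these picks because you {trait} ({name})."

print(explain_segment(0))
# You're seeing these picks because you mostly watch after 8pm (Evening streamers).
```

Note the fallback: an honest “we don’t have a pattern for you yet” is better UX than forcing an outlier into the nearest cluster’s framing.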

Prediction vs. Explanation

Many models are excellent at prediction but terrible at explanation.

This matters because:

  • Users don’t trust outcomes they don’t understand

  • Designers are often responsible for interpreting model outputs

  • “Why am I seeing this?” becomes a core UX question

This is where feature importance and model transparency matter—not as technical artifacts, but as design inputs. A system that can say: “Your AC usage matters more than your lighting” is fundamentally more usable than one that simply says: “Your bill will increase.”
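The energy-bill example above can be sketched as a small translation layer: take whatever feature importances the model exposes and turn the top two into a comparative sentence. The importance values and feature names here are made up for illustration:

```python
def top_driver(importances: dict[str, float]) -> str:
    """Turn model feature importances into a comparative explanation."""
    ranked = sorted(importances, key=importances.get, reverse=True)
    leader, runner_up = ranked[0], ranked[1]
    return f"Your {leader} affects your bill more than your {runner_up}."

print(top_driver({"AC usage": 0.62, "lighting": 0.21, "appliances": 0.17}))
# Your AC usage affects your bill more than your lighting.
```

The point is that feature importance is an input to copywriting, not a chart to dump on the user.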

Clustering, Segmentation, and the Myth of the Average User

Clustering algorithms group users based on similarity—usage patterns, behaviors, preferences.

This sounds harmless, but it has major design implications:

  • Segments become identities

  • Recommendations reinforce behavior

  • Outliers risk being underserved or misclassified

For UX designers, this raises questions like:

  • Should users know what group they’re in?

  • Can users move between clusters?

  • What happens when the system is wrong?

Good AI UX designs allow escape hatches:

  • Manual overrides

  • Editable preferences

  • Transparent comparisons

Hybrid Systems: When Multiple Models Work Together

Most real products don’t rely on a single model.

Instead, they combine:

  • Prediction models (what will happen)

  • Clustering models (who is similar)

  • Rules engines (what’s allowed)

  • Knowledge bases (what experts say)

From a design perspective, this means:

  • The system’s “voice” must be consistent

  • Conflicting recommendations need resolution

  • Users need clarity on what’s automated vs. advisory

This is where AI stops being “smart” and starts being opinionated—and opinionated systems must be designed intentionally.
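One common resolution pattern is to keep the model advisory and let the rules engine act as a guardrail, with the interface saying plainly which one spoke. A minimal sketch; the action names and policy set are illustrative assumptions:

```python
def resolve(model_suggestion: str, allowed_actions: set[str]) -> tuple[str, str]:
    """Resolve a conflict between a model suggestion and a rules engine.

    Rules win; the model stays advisory, and the reason is surfaced
    so the system keeps one consistent voice.
    """
    if model_suggestion in allowed_actions:
        return model_suggestion, "suggested by the model"
    return "no_action", "model suggestion blocked by policy"

action, reason = resolve("raise_limit", {"send_reminder", "no_action"})
print(action, "-", reason)  # no_action - model suggestion blocked by policy
```

Returning the reason alongside the action is the design choice: it is what lets the UI explain a blocked recommendation instead of silently swallowing it.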

Automation vs. Assistance: A Critical UX Boundary

One of the most important distinctions I learned is the difference between:

  • Decision support (suggesting actions)

  • Decision automation (taking actions)

Automation introduces:

  • Loss of user agency

  • Risk amplification

  • Trust erosion if outcomes are surprising

Strong AI product design often starts with:

  • Suggestions before automation

  • Clear previews of impact

  • Reversible actions

Trust is built when users feel in control, not impressed.

Privacy, Fairness, and Design Accountability

AI systems inevitably treat users differently.

That makes questions of privacy and fairness design responsibilities, not just legal ones.

Key principles I now design around:

  • Data minimization: collect only what’s necessary

  • Purpose clarity: explain why data is used

  • Inclusive defaults: avoid designing only for “ideal” users

  • Human-in-the-loop: allow expert oversight where stakes are high

Fairness is not just about outcomes; it’s about access, interpretation, and representation.

Generative AI: Creativity Still Needs Direction

Generative models (like large language models) introduce a different challenge:

  • Outputs are flexible, creative, and unpredictable

  • UX constraints become even more important

  • Prompt design is a form of interaction design

For designers, this means:

  • Designing inputs as carefully as outputs

  • Handling ambiguity gracefully

  • Setting boundaries around tone, length, and intent

AI can generate content, but designers generate meaningful experiences.
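Treating prompt design as interaction design can mean encoding tone, length, and intent as explicit, inspectable constraints rather than ad-hoc strings. A sketch with an invented template (not any particular model’s API):

```python
def build_prompt(user_request: str, tone: str = "neutral", max_words: int = 120) -> str:
    """Assemble a prompt with explicit tone, length, and intent boundaries."""
    return (
        f"Respond to the request below.\n"
        f"Tone: {tone}. Length: at most {max_words} words.\n"
        f"Stay on the user's stated intent; do not add offers or upsells.\n\n"
        f"Request: {user_request}"
    )

print(build_prompt("Summarize my energy usage this month", tone="friendly"))
```

Because the constraints are parameters, they can be reviewed, tested, and changed like any other piece of interface copy.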

What This Changed About My Design Practice

After working through these concepts, I no longer ask:

“How can AI improve this product?”

Instead, I ask:

  • What decisions does this product make?

  • Who benefits from those decisions?

  • Who might be harmed or excluded?

  • How visible should intelligence be?

  • Where should humans stay in control?

AI doesn’t remove the need for design judgment; it amplifies it.

Closing Thought

The future of AI products won’t be defined by better models alone. It will be defined by designers who understand:

  • Systems, not just screens

  • Consequences, not just features

  • People, not just predictions

That’s the work I’m interested in doing.

Let's talk

Email:

remmysharma1107@gmail.com
