The Challenge

Why does trust matter in AI design?

AI systems increasingly act as decision-making partners — yet most operate as black boxes. This gap between capability and comprehension creates a core design problem.

Problem 01

The Black Box Problem

When AI makes decisions without justification, users rapidly lose trust — or worse, over-trust opaque outputs and stop questioning them entirely.

Problem 02

Calibration Failure

Users over-rely on AI for routine tasks while under-relying on it in the areas where it excels. This stems from poor interface-level communication of uncertainty.

Problem 03

The Control Paradox

As AI becomes more agentic, users want both automation and oversight. Current systems rarely offer granular checkpoints — forcing an all-or-nothing relationship.

Research Process

Five-stage mixed-method investigation

This semester builds the foundation. Next semester applies the framework to a specific AI tool to examine how these trust principles hold up in practice.

Theory → Framework

Trust-in-AI Design Framework

Synthesized key theories from trust research and UX for AI into 9 actionable interaction principles through close reading and thematic extraction.

Literature review · Thematic extraction · Framework design

Mapping the Space

Territory Map

Mapped the conceptual territory between AI systems and users — identifying key dimensions: transparency, trust calibration, over/under-trust, and multimodal gaps.

System mapping · Gap analysis

Quantitative Research

Dual Survey — Users & Experts

Two parallel surveys: 21 general user responses and 45 AI/design professional responses. Captured trust levels, interaction preferences, and reactions to AI behavior.

21 user responses · 45 expert responses · Google Forms

Qualitative Research

In-Depth Interviews

45-minute interviews exploring how trust forms, breaks, and rebuilds — capturing moments of hesitation and overreliance that surveys can’t reveal.

Zoom interviews · Thematic analysis · Expert + user voices

Design Artifact

Interactive Card Deck

Built “Step Outside the Screen” using Claude — 100 design provocations paired with real-world images and one-sentence insights. Closes semester one.

Claude / vibe coding · Deployed on Netlify · 100 questions

Next Semester

Apply the framework to a specific AI tool — testing how trust principles hold up in practice through focused design iteration.

Step 01 Outcome

9 Principles for Trustworthy AI Design

The framework translates trust theory into concrete interface behaviors. Each principle addresses a specific failure mode in current AI interactions.

Principle 01

Calibrated Accuracy

Accuracy is a dial, not a switch. Dynamically balance precision and recall based on stakes. A medical tool should be conservative; a creative tool, generative.
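
The "dial, not a switch" idea can be sketched as a stakes-aware acceptance threshold. This is a minimal illustration only; the stakes tiers and threshold values below are assumptions for demonstration, not part of the framework.

```python
# Illustrative sketch: tune the acceptance threshold to the stakes of the task.
# Tiers and values are hypothetical examples.

THRESHOLDS = {
    "high": 0.95,    # e.g. medical: only surface near-certain answers
    "medium": 0.75,  # e.g. factual Q&A: moderate confidence required
    "low": 0.40,     # e.g. creative brainstorming: favor recall over precision
}

def should_surface(confidence: float, stakes: str) -> bool:
    """Return True if a model output is confident enough for the given stakes."""
    return confidence >= THRESHOLDS[stakes]

# The same 0.8-confidence output is rejected by a conservative medical tool
# but accepted by a generative creative tool.
print(should_surface(0.8, "high"))  # prints False
print(should_surface(0.8, "low"))   # prints True
```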

Principle 02

Intent-Aware Understanding

Resolve what users mean, not just what they say. Success is whether the underlying need was met, not whether the literal question was answered.

Principle 03

Confirm Before Acting

For consequential actions — sending, deleting, publishing — surface a confirmation step. The higher the consequence, the more explicit.
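
One way to sketch this principle: map each action to a consequence tier, and let the tier decide how explicit the confirmation step must be. The action names and tiers here are hypothetical examples, not part of the framework.

```python
# Illustrative sketch: the higher the consequence, the more explicit the
# confirmation step. Actions and tiers are hypothetical examples.

CONSEQUENCE = {
    "draft": 0,    # reversible: no confirmation needed
    "send": 1,     # visible to others: one-click confirm
    "delete": 2,   # destructive: typed confirmation
}

def confirmation_required(action: str) -> str:
    """Map an action's consequence tier to a confirmation style for the UI."""
    tier = CONSEQUENCE.get(action, 1)  # unknown actions default to a confirm step
    return ["none", "confirm", "explicit"][tier]

print(confirmation_required("draft"))   # prints "none"
print(confirmation_required("delete"))  # prints "explicit"
```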

Principle 04

Editable Interpretation Layer

Surface AI assumptions as visible, tappable variables. Let users see what AI understood and correct it in one click.
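
A minimal sketch of the interpretation layer as a data structure: the AI's assumptions become named fields the interface can render as tappable chips, and a correction changes exactly one field. The field names are hypothetical examples.

```python
# Illustrative sketch: AI assumptions as named, editable variables.
# Field names are hypothetical examples.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Interpretation:
    audience: str
    tone: str
    length_words: int

# What the AI understood from the user's request.
parsed = Interpretation(audience="general", tone="formal", length_words=300)

# The user taps one chip to correct a single assumption; the rest is preserved.
corrected = replace(parsed, tone="casual")

print(corrected.tone)      # prints "casual"
print(corrected.audience)  # unchanged: prints "general"
```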

Principle 05

Talking Back

When AI can’t fulfill a request, explain what went wrong in plain language. Good failure states rebuild trust through honesty.

Principle 06

Zero-Click Discovery

Surface relevant suggestions before the user asks. Create the feeling AI anticipated the next need without requiring extra input.

Principle 07

Guided Next Steps

After answering, offer paths beyond the original question. Shift AI from question-answerer to thinking partner.

Principle 08

Regeneration with Variance

In creative work, each regeneration should introduce meaningful variation — different tone, structure, angle. Feel like a collaborator, not a copy machine.
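
The variance requirement can be sketched by rotating through named variation axes on each retry, rather than resampling the same request. The axes and prompt wording below are hypothetical examples.

```python
# Illustrative sketch: each regeneration explores a new named direction.
# Axes and wording are hypothetical examples.
import itertools

VARIATION_AXES = itertools.cycle([
    "a different tone",
    "a different structure",
    "a different angle",
])

def regeneration_prompt(base_prompt: str) -> str:
    """Augment the base prompt so each retry varies along a new axis."""
    return f"{base_prompt}. This time, try {next(VARIATION_AXES)}."

print(regeneration_prompt("Write a tagline"))  # first retry varies the tone
print(regeneration_prompt("Write a tagline"))  # second retry varies the structure
```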

Principle 09

Ethics as Design Constraint

Ethics embedded in every decision — not appended at the end. Transparency, no false confidence, respect for user autonomy throughout.

Step 02 Outcome

The Territory Map

A spatial representation mapping the relationship between AI systems and users — and the key forces in between: transparency, trust calibration, and the gap between conversational and multimodal interfaces.

Step 03 Outcome

What the surveys revealed

Two parallel surveys surfaced a striking paradox: trust in AI is high, and so is concern about overreliance. Users want AI reasoning to be visible, not hidden.

81% of users rate trust in AI at 4 or 5 out of 5
81% also believe AI encourages dangerous overreliance
62% say showing reasoning steps builds trust most
69% of experts link over-trust to multimodal interfaces
User survey — what builds trust? (n=21)
Explanation makes sense: 57%
Past experience with tool: 57%
Sources are provided: 52%
Answer sounds logical: 48%
Verify elsewhere: 24%
Confident vs. cautious tone: 14%
User survey — where AI feels most helpful (n=21)
Summarizing information: 71%
Writing or editing text: 67%
Brainstorming ideas: 57%
Quick factual questions: 57%
Learning new topics: 43%
Decision making: 29%
Expert survey — most important trust factors (n=45)
Explainability: 58%
Control over AI behavior: 56%
Accuracy and performance: 56%
Transparency: 56%
Consistency in performance: 49%
UI / Design quality: 31%
Expert survey — does showing uncertainty increase trust? (n=45)
Increases trust: 60%
Depends on context: 20%
Decreases trust: 20%
Key Insight

Trust is high, but so is concern about overreliance. The solution isn’t less AI — it’s more visible AI reasoning and process transparency.

Step 04 Outcome

Patterns & Opportunities

In-depth interviews surfaced the lived experience behind the numbers — revealing how trust is formed, lost, and negotiated from two perspectives.

Expert Practitioners

AI/UX professionals · design & engineering
Pattern

Trust requires more than technical accuracy

Users build trust when AI is predictable, transparent, and interactive — not just mathematically correct. Consistency and communication outperform raw performance.

Pattern

The “Black Box” destroys trust rapidly

When interfaces surface suggestions without explaining the “why,” users tend to ignore outputs entirely. Hidden logic creates hesitation and disengagement.

Opportunity

Shift from passive tools to active collaborators

Allowing users to view AI reasoning and explore alternatives transforms AI from a black box into a thinking partner.

Opportunity

Design transparent model updates

When AI models improve, interfaces should visually communicate these changes — preventing confusion when systems behave differently than users remember.

General Users

Everyday AI users across disciplines
Pattern

Task-dependent trust — high for coding, low for creativity

Users trust AI for operational tasks but distrust it for creative work. Perception shifted from “perfect search engine” to “probability machine that requires monitoring.”

Pattern

Heavy reliance on manual verification

Users review step-by-step logic, compare models, and cross-check via search. Trust is earned through process review, not output alone.

Opportunity

Program AI to admit when it doesn’t know

One of the fastest trust builders: stating plainly when AI lacks information. Honest failure builds more trust than confident misinformation.

Opportunity

Granular checkpoints for agentic tasks

Users want to monitor AI logic and intervene before final execution — reducing errors and financial waste in autonomous workflows.

Step 05 — Design Artifact

Step Outside the Screen

100 design provocations across 14 themes. Built with Claude (vibe coding) and deployed on Netlify as an open thinking tool for AI designers everywhere.

[Image: Step Outside The Screen card deck landing page — bold typography with floating design question cards featuring photography]

A thinking tool for AI designers

Each card pairs a design provocation with curated everyday images on the front, and a one-sentence design insight on the back. The front invites interpretation; the back clarifies the intended meaning.

Designed not just for this thesis, but as an open resource for any designer working on AI-human interactions.

Explore the Card Deck
Step 01

Pick a question card matching what you want to explore

Step 02

Spend a moment reading the everyday images

Step 03

Flip — what changed from front to back?

Step 04

Read the hint connecting image to concept

Step 05

Reflect and discuss with your team

Step 06

Apply the insight to guide design decisions

Research Synthesis

Key Insights Across All Stages

Six overarching design imperatives emerged — each pointing toward a richer understanding of what it means to build AI that users can genuinely trust.

Insight 01

Visibility is trust infrastructure

The strongest trust builders: reasoning steps (62%), source citations (57%), confidence scores (48%). Making the invisible process legible is the highest-leverage design intervention.

Insight 02

Trust is calibrated, not binary

Users don’t simply trust or distrust AI — they calibrate reliance to context. Design should support this nuanced relationship, not flatten it.

Insight 03

Control preserves agency

The anxiety around agentic AI isn’t about capability — it’s about oversight. Confirmation steps and granular checkpoints are not friction; they are features.

Insight 04

Failure states are opportunities

User reactions to AI errors skew curious (38%) and confused (43%) rather than negative. Well-designed failure states can strengthen trust more than a perfect but opaque success.

Insight 05

Richer interfaces mislead more

Experts most associate AI capability overestimation with multimodal systems (69%). Visual richness creates a halo effect — design should resist this illusion, not exploit it.

Insight 06

Ethics is a UX property

Ethics cannot be delegated to policy review. Transparency, autonomy, and surfacing harm must be embedded in interaction design from the very first wireframe.