Why does trust matter in AI design?
AI systems increasingly act as decision-making partners — yet most operate as black boxes. This gap between capability and comprehension creates a core design problem.
The Black Box Problem
When AI makes decisions without justification, users rapidly lose trust — or worse, over-trust opaque outputs and stop questioning them entirely.
Calibration Failure
Users over-rely on AI for routine tasks while under-relying on it where it excels. This stems from poor interface-level communication of uncertainty.
The Control Paradox
As AI becomes more agentic, users want both automation and oversight. Current systems rarely offer granular checkpoints — forcing an all-or-nothing relationship.
Five-stage mixed-methods investigation
This semester builds the foundation. Next semester applies the framework to a specific AI tool to examine how these trust principles hold up in practice.
Trust-in-AI Design Framework
Synthesized key theories from trust research and UX for AI into 9 actionable interaction criteria through close reading and thematic extraction.
Territory Map
Mapped the conceptual territory between AI systems and users — identifying key dimensions: transparency, trust calibration, over/under-trust, and multimodal gaps.
Dual Survey — Users & Experts
Two parallel surveys: 21 general user responses and 45 AI/design professional responses. Captured trust levels, interaction preferences, and reactions to AI behavior.
In-Depth Interviews
45-minute interviews exploring how trust forms, breaks, and rebuilds — capturing moments of hesitation and overreliance that surveys can’t reveal.
Interactive Card Deck
Built “Step Outside the Screen” using Claude — 100 design provocations paired with real-world images and one-sentence insights. Closes semester one.
9 Principles for Trustworthy AI Design
The framework translates trust theory into concrete interface behaviors. Each principle addresses a specific failure mode in current AI interactions.
Calibrated Accuracy
Accuracy is a dial, not a switch. Dynamically balance precision and recall based on stakes. A medical tool should be conservative; a creative tool, generative.
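To make the dial concrete, here is a minimal TypeScript sketch of stakes-aware gating. The Stakes type, the confidenceThreshold helper, and the cutoff values are hypothetical illustrations, not thresholds derived from the research.

```typescript
// Hypothetical sketch: tune how confident the AI must be before asserting
// an answer, based on the stakes of the context.
type Stakes = "creative" | "routine" | "medical";

function confidenceThreshold(stakes: Stakes): number {
  switch (stakes) {
    case "medical":  return 0.95; // conservative: favor precision over recall
    case "routine":  return 0.75;
    case "creative": return 0.40; // generative: favor variety over certainty
  }
}

function respond(answer: string, confidence: number, stakes: Stakes): string {
  // Below the threshold, hedge visibly instead of asserting.
  return confidence >= confidenceThreshold(stakes)
    ? answer
    : `I'm not certain about this (confidence ${confidence.toFixed(2)}): ${answer}`;
}
```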
Intent-Aware Understanding
Resolve what users mean, not just what they say. Success is whether the underlying need was met, not whether the literal question was answered.
Confirm Before Acting
For consequential actions (sending, deleting, publishing), surface a confirmation step. The higher the consequence, the more explicit the confirmation should be.
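A minimal sketch of consequence-scaled confirmation, assuming a browser context. The PendingAction shape, the three tiers, and the type-to-confirm pattern for the highest tier are illustrative assumptions, not a specification from the thesis.

```typescript
// Hypothetical sketch: the confirmation step grows more explicit as the
// consequence of the action grows.
type Consequence = "low" | "medium" | "high";

interface PendingAction {
  label: string; // e.g. "Publish post to all subscribers"
  consequence: Consequence;
}

async function confirmAction(action: PendingAction): Promise<boolean> {
  switch (action.consequence) {
    case "low":
      return true; // act immediately; rely on undo for recovery
    case "medium":
      return window.confirm(`Proceed: ${action.label}?`); // one explicit click
    case "high": {
      // Most explicit tier: the user must retype the action to confirm it.
      const typed = window.prompt(`Type "${action.label}" to confirm:`);
      return typed === action.label;
    }
  }
}
```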
Editable Interpretation Layer
Surface AI assumptions as visible, tappable variables. Let users see what the AI understood and correct it in one tap.
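One way this layer could be modeled in TypeScript. The Assumption and Interpretation shapes and the editAssumption helper are hypothetical names chosen for illustration.

```typescript
// Hypothetical sketch: every inference the AI made is a named, editable variable.
interface Assumption {
  key: string;            // e.g. "audience"
  value: string;          // what the AI inferred, e.g. "executives"
  alternatives: string[]; // one-tap corrections, e.g. ["engineers", "students"]
}

interface Interpretation {
  request: string;           // the user's literal input
  assumptions: Assumption[]; // surfaced alongside the output
}

// Correct a single assumption and leave the rest intact, ready for a re-run.
function editAssumption(
  interp: Interpretation,
  key: string,
  value: string
): Interpretation {
  return {
    ...interp,
    assumptions: interp.assumptions.map((a) =>
      a.key === key ? { ...a, value } : a
    ),
  };
}
```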
Talking Back
When AI can’t fulfill a request, explain what went wrong in plain language. Good failure states rebuild trust through honesty.
Zero-Click Discovery
Surface relevant suggestions before the user asks. Create the feeling AI anticipated the next need without requiring extra input.
Guided Next Steps
After answering, offer paths beyond the original question. Shift AI from question-answerer to thinking partner.
Regeneration with Variance
In creative work, each regeneration should introduce meaningful variation — different tone, structure, angle. Feel like a collaborator, not a copy machine.
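A sketch of one way to enforce that variance: cycle a different axis of variation into each regeneration prompt so consecutive attempts never repeat the same kind of change. The axis names and prompt wording are assumptions for illustration.

```typescript
// Hypothetical sketch: steer each regeneration down a different axis.
const axes = ["tone", "structure", "angle"] as const;
type Axis = (typeof axes)[number];

// Cycle through the axes so attempt n+1 varies differently than attempt n.
function nextAxis(regenCount: number): Axis {
  return axes[regenCount % axes.length];
}

// Fold the chosen axis into the prompt instead of resending it unchanged.
function regenerationPrompt(base: string, regenCount: number): string {
  return `${base}\n\nThis is attempt ${regenCount + 1}. ` +
    `Make the ${nextAxis(regenCount)} meaningfully different from earlier attempts.`;
}
```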
Ethics as Design Constraint
Ethics embedded in every decision — not appended at the end. Transparency, no false confidence, respect for user autonomy throughout.
The Territory Map
A spatial representation mapping the relationship between AI systems and users — and the key forces in between: transparency, trust calibration, and the gap between conversational and multimodal interfaces.
What the surveys revealed
Two parallel surveys surfaced a striking paradox: trust in AI is high, and so is concern about overreliance. The answer isn't less AI; it's more visible AI reasoning and process transparency.
Patterns & Opportunities
In-depth interviews surfaced the lived experience behind the numbers — revealing how trust is formed, lost, and negotiated from two perspectives.
Expert Practitioners
AI/UX professionals · design & engineering
Trust requires more than technical accuracy
Users build trust when AI is predictable, transparent, and interactive — not just mathematically correct. Consistency and communication outperform raw performance.
The “Black Box” destroys trust rapidly
When interfaces deliver suggestions without explaining the “why,” users tend to ignore the outputs entirely. Hidden logic creates hesitation and disengagement.
Shift from passive tools to active collaborators
Allowing users to view AI reasoning and explore alternatives turns AI into a thinking partner rather than an opaque black box.
Design transparent model updates
When AI models improve, interfaces should visually communicate these changes — preventing confusion when systems behave differently than users remember.
General Users
Everyday AI users across disciplines
Task-dependent trust — high for coding, low for creativity
Users trust AI for operational tasks but distrust it for creative work. Perception shifted from “perfect search engine” to “probability machine that requires monitoring.”
Heavy reliance on manual verification
Users review step-by-step logic, compare models, and cross-check via search. Trust is earned through process review, not output alone.
Program AI to admit when it doesn’t know
One of the fastest trust builders: stating plainly when AI lacks information. Honest failure builds more trust than confident misinformation.
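As a contract, this could be as simple as letting the response say "unknown" and name what's missing. The HonestAnswer shape and the 0.5 cutoff below are hypothetical, not values from the study.

```typescript
// Hypothetical sketch: an answer type that can honestly decline.
interface HonestAnswer {
  kind: "answer" | "unknown";
  text: string;
  missing?: string[]; // what information would be needed to answer
}

function answerOrAdmit(
  draft: string,
  confidence: number,
  gaps: string[]
): HonestAnswer {
  // Confident misinformation is the failure mode to avoid: decline instead.
  if (confidence < 0.5 || gaps.length > 0) {
    return {
      kind: "unknown",
      text: "I don't have enough information to answer this reliably.",
      missing: gaps,
    };
  }
  return { kind: "answer", text: draft };
}
```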
Granular checkpoints for agentic tasks
Users want to monitor AI logic and intervene before final execution — reducing errors and financial waste in autonomous workflows.
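A minimal sketch of what such a checkpointed loop could look like: each step is surfaced in plain language and runs only on explicit approval. Step, Approve, and runWithCheckpoints are illustrative names, not an existing API.

```typescript
// Hypothetical sketch: the user can veto each step before it executes,
// instead of approving or rejecting the whole run at once.
interface Step {
  description: string;          // plain-language plan for this step
  execute: () => Promise<void>; // the actual side effect
}

type Approve = (step: Step) => Promise<boolean>;

async function runWithCheckpoints(plan: Step[], approve: Approve): Promise<void> {
  for (const step of plan) {
    const ok = await approve(step); // surface logic first, act second
    if (!ok) {
      console.log(`Stopped before: ${step.description}`);
      return; // halt rather than plowing ahead autonomously
    }
    await step.execute();
  }
}
```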
Step Outside the Screen
100 design provocations across 14 themes. Built with Claude (vibe coding) and deployed on Netlify as an open thinking tool for AI designers everywhere.
A thinking tool for AI designers
Each card pairs a design provocation with curated everyday images on the front, and a one-sentence design insight on the back. The front invites interpretation; the back clarifies the intended meaning.
Designed not just for this thesis, but as an open resource for any designer working on AI-human interactions.
Explore the Card Deck
Pick a question card matching what you want to explore
Spend a moment reading the everyday images
Flip — what changed from front to back?
Read the hint connecting image to concept
Reflect and discuss with your team
Apply the insight to guide design decisions
Key Insights Across All Stages
Six overarching design imperatives emerged, each pointing toward a richer understanding of what it means to build AI that users can genuinely trust.
Visibility is trust infrastructure
The strongest trust builders: reasoning steps (62%), source citations (57%), confidence scores (48%). Making the invisible process legible is the highest-leverage design intervention.
Trust is calibrated, not binary
Users don’t simply trust or distrust AI — they calibrate reliance to context. Design should support this nuanced relationship, not flatten it.
Control preserves agency
The anxiety around agentic AI isn’t about capability — it’s about oversight. Confirmation steps and granular checkpoints are not friction; they are features.
Failure states are opportunities
User reactions to AI errors skew curious (38%) and confused (43%) rather than negative. Well-designed failure states can strengthen trust more than a perfect but opaque success.
Richer interfaces mislead more
Experts most associate AI capability overestimation with multimodal systems (69%). Visual richness creates a halo effect — design should resist this illusion, not exploit it.
Ethics is a UX property
Ethics cannot be delegated to policy review. Transparency, autonomy, and surfacing harm must be embedded in interaction design from the very first wireframe.
Want to Explore the Card Deck?
100 design provocations to challenge how you think about AI interfaces, trust, and human-machine collaboration.
Master’s thesis research · 2025–2026 · Semester 1 complete