Designing Human-AI Experiences
A full-day workshop built around the 6 HAX Principles — for shipping AI products that are trustworthy, engaging, and ethical.
For over twenty years, I have been designing at the intersection of human behavior and emerging technology. The hardest design problem of this decade is not how to use AI tools — it is how to ship AI products that people understand, trust, and choose to come back to. The field has been quietly converging on the answer for years; most teams just have not had the time to read it all.
That is what this workshop is for.
Across one full day in Berlin, you will learn the 6 HAX Principles — Empathy First, Automation vs. Augmentation, Transparency and Confidence, Real Control and Editability, Graceful Failure, and Mental Model Shaping — and apply each one to a real product you ship or use weekly. The principles are not invented out of thin air; they are researched, synthesized, and curated from the canonical work at Microsoft, Google, Apple, GitHub, the Shape of AI catalogue, AIverse.design, and the academic literature on human-AI interaction — pressure-tested through international fieldwork at Optimizer in Pharma, Healthcare, Finance, Logistics, and the public sector.
We use the 7 Deadly Sins of AI Product Design as the diagnostic lens, the 6 HAX Principles as the antidote, and the AX (Agentic Experience) Framework as the strategic scaffold for shipping AI behavior — not just AI interfaces. Hands-on. Group work. A working prototype by the end of the day.
You leave with three concrete artifacts: an audit of a real product, a HAX-principle redesign brief, and a functional prototype that takes the unhappy paths seriously — graceful failure, uncertainty signaling, real human agency.
For senior UX designers, product managers, design leads, and engineers shipping AI features. No machine-learning background required.
The 6 HAX Principles, applied. Empathy First. Automation vs. Augmentation. Transparency and Confidence. Real Control and Editability. Graceful Failure. Mental Model Shaping. The six load-bearing beams of every AI product worth coming back to — applied, in the room, to a product you actually ship.
HAX Design Patterns. The practical patterns that make the principles concrete — explainability cues, calibrated trust signals, feedback loops that compound, error handling that does not insult the user, and editability as a first-class design problem.
The AX Framework. A strategic lens for assessing decision boundaries, responsibility, and trust. We use it to align People, Machines, and Intelligence around a real product problem — and to surface what happens behind the interface, not just on top of it.
Human-Centered AI Integration. Translate user needs into data and decision strategies. Set automation limits deliberately. Mitigate bias before it ships. Define the AX Metrics you will actually monitor post-launch — because shipping an AI feature is not the end of the design work; it is the beginning of it.
We open with a Confessional Audit. Using the 7 Deadly Sins of AI Product Design as the frame, we shift from analyzing digital façades — the UI — to auditing system behavior. We apply the AX Human Layers lens to unpack intent, decisions, and the trust assumptions products quietly inherit.
Morning refreshments and networking.
We work through the balance between Automation and Augmentation, then design for explainability and calibrated trust using indicators over raw data. We apply the AX Autonomy & Control Check to "Gold Standard" products to assess what real user control and meaningful fallback limits look like in production.
Sponsored networking lunch at a nearby restaurant.
Hands-on group work with the AX Sprint Canvas. We align People, Machines, and Intelligence to solve a real problem. Then we shift to use-case validation and data strategy, using the 7 AX Pillars as guardrails for ethical, functional decisions.
Quick refreshments before the final sprint.
Build a functional prototype where the AI's behavior is the product. Using tools like Figma AI, V0, or Lovable, we focus on the Unhappy Paths — graceful failures, uncertainty signaling, and preserving human agency when the model gets it wrong.
Each group presents the underwater logic of their solution — not just screens, but the data sources, the bias mitigation plans, and the AX Metrics they would monitor post-launch. We close with what you take back to Monday morning.