AI Product Design as Risk Management: Workday Edition

When designing human-in-the-loop automations and AI workflows, you have to figure out how much human should be in the loop. It looks like Workday is finding this out the hard way.

Workday offers a set of powerful human-in-the-loop workflows that screen, evaluate, and accept or reject candidates--they save a ton of time in an otherwise labor-intensive process.

The product design of this human-in-the-loop feature needs to be tuned so that humans can shape the process and then participate in it effectively. For instance, have the AI match some criteria, but have a human make the final call.

It's a balance. The objectives are well-meaning, but the development of these products and the tactics built into the features need to be handled carefully. If the process leans too heavily on AI to make decisions, without proper checks and without human judgement in the workflow, it could expose the company (or customer company) to risks like, say, discrimination lawsuits.

It looks like we're going to find out if that may have happened. Workday is facing a class action lawsuit alleging that its algorithmic and AI-based applicant evaluation tools unfairly discriminated against applicants over 40.

In order to compile the dataset needed to determine if the charges are true, the judge in the case has ordered Workday to supply a list of ALL the companies that used these features, even those that used the features AFTER the lawsuit was filed.

At its core, this will probably play out like a 'product failure' case (I'm not a lawyer). Did the product cause harm? Did Workday know these kinds of risks existed? And did they decide to move forward anyway? If yes to all three, Workday is likely on the hook.

It's one of the ways that product design, and the product design process, has real risk management impact for companies. What kind of impact? Well, while the case works through the courts, investors aren't waiting. Since the case was allowed to move forward on May 22, $WDAY has been dropping and hasn't recovered. The company has lost 17.5%, or $12.2B, of its value. The AI lawsuit has been widely cited as a key reason investors have stopped backing Workday.

Read More

Delta cracks Dynamic Pricing

More questions than answers: it looks like a new day is dawning in e-commerce. Delta Air Lines has successfully executed dynamic, personalized pricing on 3% of its sales.

https://fortune.com/2025/07/16/delta-moves-toward-eliminating-set-prices-in-favor-of-ai-that-determines-how-much-you-personally-will-pay-for-a-ticket

It's, shall we say... concerning. Displaying different prices for the same goods to different customers carries an elevated risk of price discrimination. Amazon took the brunt of this back in 2000 when it spun up a test of dynamic pricing: the test showed different prices to demographically different people, sparking price discrimination claims.

https://abcnews.go.com/Technology/story?id=119399&page=1

Research on more recent attempts at dynamic pricing has shown the same pattern: "a study of broadband internet offers to 1.1 million residential addresses showed the worst deals given to the poorest people."

https://consumerwatchdog.org/wp-content/uploads/2024/12/Surveillance-Price-Gouging.pdf

So the risks from outside the org (demographics) and from inside the org (guardrails) are substantial. Reputational and regulatory consequences could both be severe, but would they be existential? Or could Delta test and iterate its way through?

Delta is happy to hype the new innovation for investors, but it hasn't tipped its hand on strategy. Is it showing someone the lowest possible price to entice them to buy, guaranteeing full planes and safe margins? Or is it showing people the highest prices to test price elasticity and capture maximal value, even if that means a couple of empty seats?

This is a compelling example of how AI will work its way into complex systems and consumer products. By integrating an AI-driven calculation into real-time price modeling, Delta has straddled the chasm between two kinds of companies: operational maximizers and profit drivers. Maximizers try to fill every seat, even if it means losing a bit of value, e.g., Broadway theaters that give away tickets at the last minute. Profit drivers are willing to leave some slack in the system but make it up on the back end by capturing greater profits from those who do buy tickets. Leveraging the real-time analysis that is now possible, Delta can stride between both types, presumably for maximal benefit.

So what do you think, if you have read this far? Will this lead to smarter competition? Or is it the latest way that airlines will be squeezing customers without improving services?

Read More

The impact of Generative AI on Generative Research

Attacking Weaknesses in the Research Process

In many ways, AI-driven tools are what researchers have been waiting for. Agents to find and schedule your participants? Amazing. Quick organization, tagging, and summarization of interview data? Fantastic.

Desk Research

Before starting, it's important to assemble some initial level of context. Compiling a range of sources is highly useful in establishing a foundation for the problem space.

The summarization and distillation abilities of ChatGPT can be useful for establishing that foundational context. This step is not without its risks, however. Hallucinations can sneak in, affecting your ability to orient yourself around the problem. I have overcome this by creating a custom GPT with strict guardrails for summarizing and synthesizing factual information. With these parameters in place, the information provided can be reliably verifiable.

Generative Research

In user research, there are two general areas of work. In this post, I am primarily focused on generative research; validation will be discussed in a later post. Generative research is the practice of learning about people, their circumstances, and a problem they face in order to develop a strategy for solving that problem. It is the process of defining the context in which a product will operate.

Take a research framework like Learn More Faster, a lightweight way to conduct research for product ideas from Michael Margolis and the team at Google Ventures, and consider ways to leverage AI-powered strategies for lightening the load and accelerating the process, particularly for researchers who find themselves working alone to lead it.

  1. Agree on goals: It cannot be stressed enough that these must come from the stakeholders.
  2. Define your bullseye customer: while sources of customer data can be wide-ranging, a simple way to start this process is prompting an LLM to summarize the findings and identify behavioral trends that could be characteristic of that customer.
  3. Recruit five bullseye customers: This is where I am perhaps most excited. If AI-enabled tools, especially agents, could locate, contact, and schedule potential customer interviews, it would be a major win.
  4. Choose your value props and three prototypes: producing value propositions can be a challenging writing exercise for anyone. Take a tip from professional writers: work in groups. At the Onion, individual writers pitch their hilarious headlines, but they refine them in groups. In Hollywood, screenwriters work in pairs (or more) to tag-team on scripts. You can develop value propositions by working in partnership with a GPT. Consider using a multi-shot prompt that provides significant customer context from the steps above, then provides examples of well-structured value props. Additionally, whether it's Figma's new tools or vibe coding platforms like Lovable, rapid prototyping is getting more rapid by the day. Different versions or variations of a given idea can be rendered in just a few minutes.
  5. Draft your interview guide: more paired writing, working with a GPT as a writing partner. Margolis gives some good examples of how to frame questions so they don't lead the participant, but provide open ended opportunities to dig into real issues.
  6. Learn more from every interview: interviews must be conducted by humans, with humans. Never develop synthetic personas from the existing customer data as anything more than a thought exercise. Do, however, use AI tools that record, transcribe, and summarize each interaction with participants. At Nooma, we have created 'recipes' that can run analysis on a given participant transcript and provide insights into the research questions nearly instantly. Final results? No. Faster preliminary findings? Yes.
  7. Plan a watch party: this is more a matter of coordination and participation than AI generation and analysis, but I'm interested to see where AI tools can be a 'watcher' down the road. For instance, could one of the tools used in corporate interview processes, which assess people against the stated skills of a job description, be used to assess an interview participant's account against the research questions while catching facial expressions, tone, and other signals? It still feels kind of gross. But assistance from a service like that could eventually be refined.
  8. Analysis: Breaking down what you hear into components is commonly referred to as tagging. Can AI help automate this process? Some platforms like Dovetail are already in the process of doing just that. Without a platform like that, you could use a prompt chain through a GPT to help accelerate the process. For instance, prompt the GPT to break down the transcript into topics, assemble passages into a table, identify common themes, then apply those themes in the table alongside the quotations. The GPT would need to be trained to do this effectively on the first 3-5 transcripts, but the time savings could ultimately be very significant.
  9. Synthesis: Typically, a research campaign will also include a final presentation to decision makers where the 'so what?' ideas are shared. To achieve this, the trove of interview and research data needs to be synthesized into concise, valuable points that can serve as a foundation for accelerating clear-headed decision making. While I have written prompts that can compile multiple transcripts to find trends, this work is still in development. My approach reduces hallucinations and erroneous compilations of unrelated data.
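The analysis chain in step 8 can be sketched in code. This is a minimal, hypothetical sketch: `call_llm` is a placeholder for whatever model API you actually use, and the prompt wording is illustrative rather than a tested recipe.

```python
# Sketch of a prompt chain for transcript tagging: topics -> passage
# table -> themes -> themed table. `call_llm` is a stub; swap in a
# real model client.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"[model response to: {prompt[:40]}...]"

def tag_transcript(transcript: str) -> list[str]:
    """Run the four chained prompts, feeding each output forward."""
    steps = [
        "Break this interview transcript into discrete topics:\n\n" + transcript,
        "Assemble the passages for each topic into a two-column table "
        "(topic, quotation):\n\n{prev}",
        "Identify the common themes across these passages:\n\n{prev}",
        "Apply those themes to the table, adding a 'theme' column "
        "alongside each quotation:\n\n{prev}",
    ]
    outputs = []
    prev = ""
    for step in steps:
        prompt = step.format(prev=prev) if "{prev}" in step else step
        prev = call_llm(prompt)
        outputs.append(prev)
    return outputs

results = tag_transcript("P1: I mostly use the app on my commute...")
print(len(results))  # one output per step in the chain
```

In practice you would review each intermediate output before feeding it forward; the chain accelerates tagging, it doesn't replace the researcher's judgement.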

Where does the Research get put to use?

Prompt Engineering: Clearer understanding of users, their context, and their challenges provides the foundation for developing a prompt that will deliver the intended outcome.

Context Engineering: Context Engineering is emerging as a powerful tool in building and improving Agents. For an Agent to efficiently maximize its capabilities, it must have not only a broad foundation of data but an effective contextual framework for understanding and implementing that data. That framework is then translated into the most efficient code implementation to make the Agent as accurate and nimble as possible. Before doing so, clear research provides the framing for how to inform the context.

Workshops: customer problems, articulated through Jobs to Be Done, personas, or other tools, can align teams. This gives the team the best ability to curate the workshop context and ensure the best input for its participants.

So what's the impact?

I have been running the analysis and synthesis steps on some internal Nooma projects. I'm finding time savings of about 50%-60% with a strong level of quality throughout. We're still working to refine the process and output quality.

More important in consulting is the impact this can have on accelerating client engagements and establishing a shared understanding of the client's business. Initial diagnostic research of any kind is the cornerstone of any consulting endeavor. We come in and make it clear that we don't know it all, no matter how many times we have worked in that client's industry, in their market, or with their same functions. Each consulting engagement relies on establishing a shared set of facts in order to move forward. Any way we can accelerate that path to understanding is critical.

Want to talk about it? Let's connect »

Read More

Design Process is (going to be) New All Over Again

It's the end of design as we know it, and I feel fine...

Design is changing fast. AI tools such as LLMs, Agents, and other automation tools perform some tasks remarkably well. As such, Designers can be empowered to execute parts of the workflow with greater speed, fluency, and accuracy.

Here's a summary of some areas where I'm seeing AI impact emerge, with more info in the coming weeks...

Research

User Research is critically important; however, it is often believed to be prohibitively labor intensive. AI tools are beginning to take on the heaviest of those lifts:

  • Crunching large datasets to find potential interview participants
  • Intelligent and automated scheduling of sessions
  • Capturing, transcribing, and summarizing interviews
  • Finding similarities and trends across interview data

...and so on. Not perfect. Not suitable for every occasion... but making inroads daily.

What is emerging in the development and usage of AI tools is the importance of Context Engineering. Effective user research is a critical input into defining the context of a given product and will likely play an outsized role in shaping Context Input (for organizations that can embrace it).

Ideation

Teams often need hours* to move beyond obvious ideas. AI is not replacing human brainstorms, but it can serve as a partner:

Research has shown that, in brainstorming situations, people benefit from spending a substantial amount of time* before they begin generating ideas that are truly effective.

I don't have the confidence (or naiveté) to believe that AI is replacing human ingenuity en masse.

Tools like a GPT can be put to use in the same way that a screenwriter will lean on a writing partner.

  1. Write an idea
  2. Ask the partner to come up with its take on the same idea
  3. Compare the two
  4. Refine the original

...and so on.

An LLM's ability to produce a seemingly infinite quantity of ideas is often mistaken for an ability to produce ideas at a level of quality that is both innovative and valuable.

Ideation sessions benefit from a kind of decomposition and reconstitution that AI does well. An LLM will examine a given input without the context (or baggage) that a team may have. It's a mechanical approach to the task, leaving the judgement and evaluation to the humans in the loop.

I spent time with the team at McKinsey working on this problem in a business innovation context and found it fascinating. Experts in a given field leveraged AI to generate new innovations and to mine weaker outputs for salvageable parts. The team weighed the options presented, leading to a secondary conversation determining whether any of those 'insufficient' ideas had something in them. This kind of fluency and malleability of ideas unlocks an invigorated innovation practice.

*I can't locate it at the moment, but I recall reading a paper that found teams had to spend something like three hours together before their ideas transcended mere repackaging of the present and really got good

Prototype Validation and Production

Vibe coding tools such as Lovable provide ways to communicate new ideas as more persuasive artifacts of the experience. This goes beyond the 'clickable' into the 'doable.'

Hours upon hours have been spent trying to make a Figma (or similar) prototype appear as if it were a coded, live app or site: copious states for an individual button or form field, so that the button could appear to respond to input from the user and changes in data from the system.

Now a vibe-coded prototype can easily project the experience with live data, created and iterated upon in near-instant intervals.

And with this in place, the transition to Production-level code purportedly gets smoother. An AI Copilot can anticipate the ways that prototype code needs to be hardened for the product at scale.

Don't hate, Operate

Tying this new process together is Design/Product/Dev operations. Still to be determined, perfected, and optimized, a new form of operations is taking shape. For example...

After an event (meeting, research call, scrum standup) an agent can transcribe, prioritize, and communicate the outputs of that call, while simultaneously creating Jira tickets for the parties that need to take action.

This is some of the automation we have started enabling at Nooma, with an agentic horizon not far behind. I'm looking forward to the oncoming changes as we all adapt to new tech.

Read More

AI Basics: Prompt Design

Like everyone else, I have been learning how to use and abuse AI. Also like everyone else, I have found it to be compelling in its strengths and weaknesses. Here are some notes that helped me get my head around what's possible and useful when working with AI.

These notes are rough, but I find myself referring to them from time to time.

Prompting Principles make inputs and outputs more consistent.

  1. An Instructional Example: Teach the GPT to do something: “I want you to perform this task. Performing this task looks like this [example input and output]”
    1. This must include a single task and example, but can be augmented with multiple examples to create a Multi-shot prompt

  2. Once the learned task is established, prompt the GPT to conduct the task on the new information:
    1. For direct replication, repetitive formatting, or structured responses: “Apply the method you just learned to this input: [insert information].”
    2. For generalized replication that flexes to new contexts: “Transfer the approach you used earlier to this new example: [insert information].”

See techniques below

Prompt Structure

  1. I want you to perform this task. Performing this task looks like this [example]
  2. (Optional) Add further examples to reinforce the pattern: [example]
  3. Apply the method you just learned to this input: [insert information]

Individual Prompt Development Process

The overall process of developing a prompt will be iterative:

  1. “I want you to perform this task. Performing this task looks like this [example]” = [Instructional example]
  2. <examine output>
  3. Refine the output: provide more context or further describe what you are looking for
  4. Request a prompt: “Write a prompt that will reliably produce the output of your previous reply. The prompt must be specific enough that the output will be reliably formatted”

Abstraction layers of prompting

System Prompt

  • ROLE This is who you are, CONTEXT here’s information to keep in mind
  • TASK here’s how to perform this action, EXAMPLE input, EXAMPLE process, EXAMPLE output, now create the OUTPUT with this CONTEXT / INFORMATION
  • Extracting skills

User prompt


🎯 Prompting Paradigms

| Technique | What It Does | Structure | Benefits |
| --- | --- | --- | --- |
| Zero-shot prompting | Asks for task execution with no prior examples | One simple instruction-only prompt | Fast, efficient, and useful when examples aren't available |
| Prompt chaining | Builds or refines output step-by-step across multiple interactions | Sequential turns between user and model | Enables complex task execution through manageable steps, improves reasoning |
| Multi-shot prompting | Provides multiple examples in one prompt to guide behavior | One large input block with several input-output pairs | Improves output consistency and helps the model learn the pattern through repetition |
| Decomposition (Prompt Chaining for Analysis) | Breaks down complex problems into smaller subproblems before solving | Starts with identifying subproblems, then tackles each via prompt chaining | Encourages structured thinking, increases accuracy on complex tasks |
| Self-Criticism (Iterative Prompting) | Uses the model to review and improve its own output | Prompt > Draft > Critique > Revision loop | Enhances quality through reflection and iteration, simulates peer review |
| Additional Information (Context) | Provides background or external context to improve output relevance | Context block before task instruction | Increases task alignment, helps with domain-specific or nuanced evaluations |
| Chain of Thought / Thought Generation | Forces the model to “show its work” step-by-step | Prompt includes instruction to output reasoning before final answer | Improves accuracy and transparency, exposes and corrects reasoning errors |

Zero-shot prompting

Example (asking for results with no examples)

  1. Simple exploration
  2. When neither output nor logical process is known

User: Think of A. Now take A, and come up with three ways it is like B. Explain why the first reason is the most important.
LLM: [Explanation]

Prompt Chaining

Example (multiple prompts)

Best used:

  1. Exploratory prompting when a logical process is known but the output is not

User: Create A
LLM: A

User: Now take A, and come up with three ways it is like B, in priority order
LLM: [Reason 1], [Reason 2], [Reason 3]

User: Explain why the first reason is the most important
LLM: [Explanation]
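A chained exchange like this can also be driven programmatically. Here is a minimal sketch, assuming a placeholder `ask` function in place of a real chat API; the important part is that each turn is appended to a shared history so the model can build on its prior answers.

```python
# Prompt chaining sketch: each turn is stored in `history` so a real
# chat API could see the full conversation. `ask` stubs the model call.

history: list[dict] = []

def ask(user_msg: str) -> str:
    """Send one user turn, record the assistant reply, return it."""
    history.append({"role": "user", "content": user_msg})
    # Placeholder reply; a real implementation would pass `history`
    # to the model here.
    reply = f"[assistant reply to turn {len(history) // 2 + 1}]"
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Create A")
ask("Now take A, and come up with three ways it is like B, in priority order")
ask("Explain why the first reason is the most important")

print(len(history))  # 6 messages: three user/assistant pairs
```

The design choice worth noting is the shared history: without it, each prompt arrives context-free and the chain falls apart.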

Multi-Shot Prompting

Example (single submission multi-prompt):

Simple prompt provides a self-contained iterative example that shapes the LLM’s logic.

Best used

  1. When context and output is known, but replication is needed
  2. Starting a task that will need specific outputs

Input: “Translate ‘Good morning’ to French.”
Output: “Bonjour”

Input: “Translate ‘Thank you’ to French.”
Output: “Merci”

Input: “Translate ‘Good night’ to French.”
Output:
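The few-shot block above can be assembled programmatically, so example pairs can be added or swapped without hand-editing one big string. A small sketch; `multi_shot_prompt` is a hypothetical helper, not part of any library.

```python
# Build a multi-shot prompt from (input, output) example pairs plus a
# final unanswered query for the model to complete.

def multi_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format example pairs, then the query with an open Output: line."""
    lines = []
    for inp, out in examples:
        lines.append(f'Input: "{inp}"')
        lines.append(f'Output: "{out}"')
    lines.append(f'Input: "{query}"')
    lines.append("Output:")  # left open for the model to complete
    return "\n".join(lines)

prompt = multi_shot_prompt(
    [("Translate 'Good morning' to French.", "Bonjour"),
     ("Translate 'Thank you' to French.", "Merci")],
    "Translate 'Good night' to French.",
)
print(prompt)
```

Feeding the resulting string to a model as a single prompt reproduces the pattern shown in the example transcript.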

Decomposition (Prompt Chaining for Analysis)

Leveraging the GPT to break down problems into smaller problems before prompting

Best used

  1. When the output is known but the process is unknown and assumed to be complex

User: "I need to determine the return policy for a customer who bought a car with specific issues. First, what are the subproblems that would need to be solved to figure this out?"

LLM: "To determine the exact return policy for this scenario, I would need to:
1. Verify the customer's identity and purchase records.
2. Confirm the specific car model, year, and its condition (e.g., the 'small ding').
3. Ascertain the exact purchase date to check against return windows.
4. Review the details of any applicable insurance or warranty.
5. Consult the dealership's official return policy for that specific vehicle type and condition."

User: "Great. Now, for subproblem 1, how would I verify the customer's identity and purchase records?"

LLM: "To verify the customer's identity and purchase records, you would typically need to perform a database lookup using their name, customer ID, or the VIN of the vehicle. This would confirm if they are a registered customer and provide their transaction history."

(The conversation would continue, solving each subproblem until all necessary information is gathered for the final return policy determination.)

Self-Criticism (Iterative Prompting)

Leveraging the GPT to iterate on a prompt output

Best used

  1. Output refinement
  2. Identifying ways to refine processes

User: "Write a short blog post about the benefits of remote work for small businesses. Keep it concise and engaging."

LLM: "Remote work offers small businesses increased flexibility and reduced overhead costs, leading to higher employee satisfaction. It can also expand the talent pool beyond geographical limits. [Initial Draft]"

User: "Can you go and check your response, confirm that it's correct, or offer yourself some criticism?"

LLM: "Upon review, my initial draft is concise but could be more engaging. I could add a call to action or a more compelling opening. I also didn't specifically mention how it boosts productivity, which is a key benefit."

User: "That's great feedback. Why don't you go ahead and implement that to improve the blog post?"

LLM: "Absolutely. Here's a revised version:

'Unlock Growth: The Power of Remote Work for Your Small Business

Tired of limited talent and high office rents? Remote work isn't just a trend; it's a game-changer for small businesses. By embracing remote teams, you gain incredible flexibility, slash overhead costs, and access a global talent pool. Plus, studies show it often boosts productivity! Ready to transform your business? Consider the remote advantage. [Revised Draft]'"

Additional Information (Context)

Providing overall background that is not directly pertinent to the task, but provides supporting context.

Best used

  1. when initiating conversation and establishing context of initial tasks
  2. when refining outputs from previous tasks with additional context

User:
COMPANY PROFILE:
Company Name: "InnovateTech Solutions"
Industry: Enterprise Software (SaaS)
Core Products: Cloud-based CRM and Project Management tools.
Key Values: Customer satisfaction, robust feature sets, intuitive UX.
Customer Feedback Context: We differentiate by our support, even if our initial product releases might have some early-stage bugs. Negative feedback on 'bugs' is expected but must be weighed against positive 'support' comments.
---
Please analyze the sentiment of the following customer review for InnovateTech Solutions, considering our company profile and customer feedback context. Classify it as "Positive," "Negative," or "Mixed" and provide a brief rationale.
Customer Review: "The new feature is buggy and frustrating, but the support team was fantastic and resolved my issue quickly."

LLM:
Sentiment: Mixed
Rationale: The review expresses negative sentiment regarding the "new feature" being "buggy and frustrating," which aligns with the expected early-stage bug feedback mentioned in the company profile. However, it strongly balances this with positive sentiment for the "support team" being "fantastic" and resolving the issue quickly. Given InnovateTech Solutions' value on customer satisfaction and differentiation through support, the positive support experience significantly mitigates the negative feature feedback, resulting in a mixed overall sentiment.

Chain of Thought / Thought Generation

Forces the LLM to provide its ‘thinking’ throughout the process, exposing any reasoning (and reasoning flaws)

Best used

  1. When the process is just as important as the output
  2. When the process will need to be explained to someone else

User: Calculate the total cost, including 8% sales tax, for the following items. Make sure to write out all your reasoning and intermediate steps before providing the final answer.
Items:
- Item A: $25
- Item B: $40

LLM:
Reasoning Steps:
1. Calculate the subtotal of items:
Item A price: $25
Item B price: $40
Subtotal = $25 + $40 = $65
2. Calculate the sales tax amount:
Sales tax rate: 8%
Tax amount = 8% of $65 = 0.08 * 65 = $5.20
3. Calculate the total cost:
Total cost = Subtotal + Tax amount
Total cost = $65 + $5.20 = $70.20

Final Answer: The total cost, including 8% sales tax, for Item A and Item B is $70.20.


Prompt Engineering: Lenny’s podcast

Read More

Design is a Risk Management Strategy

Design leaders need to advocate for their teams and their practice on a near-constant basis.

My perspective on accomplishing this has really shifted in the last couple of years. For a long time, I talked about how close we are to our customers, pushed for aspirational levels of design quality, and executed a robust process. I never felt like that landed with business stakeholders, and I was burning myself out trying to convince them.

More recently, in the last year especially, I’ve stopped framing it that way. I now argue that the business is facing significant risks:

  • Product failure in market: people can’t use the app/site the way they need to
  • Service failure in market: people can use the app/site, but it doesn’t deliver what people need
  • Product misuse in market: people use the app/site in unexpected and unfavorable ways
  • Failure to differentiate: the experience of the app/site/service fails to differentiate from competitors, driving the value of the company’s strategy down

A strong product design practice is essential to reducing these risks as part of the company’s overall risk management strategy. Any one of these risk areas can result in loss of market share, litigation, brand equity loss, and decline of company performance. Design methods for identifying customer needs, testing ideas, prototyping concepts, and validating experiences give the team ways to reduce these risks in a controlled environment before incurring them in the market.

Teams that launch products without establishing a strong design practice--and without the valuable feedback loops it provides for refining the product pre- and post-launch--are taking on risk that could tank the company. Reducing that risk is how the design team helps the whole company win. This kind of risk-taking is overly lauded in startup hustle culture, which saturates many corners of the business world to, I believe, its detriment.

When I explain this approach to business stakeholders, they get it. They can visualize the impact of reducing risk, and we can have more productive conversations about the trade-offs of moving forward with or without further refinement in the design process. It's also my job to understand the pressures to deliver and how I can modify the design process to achieve the speed to market we need. These adjustments are better understood by other stakeholders when the trade-offs in managing risk can, at a minimum, be discussed. Rarely does this risk need to be quantified, but it can be if needed.

On an individual level, I highlight when any individual contributes to this, embodies company values, or just does something rad. This has to be fine-tuned to accommodate the ways that individuals and team cultures respond to praise, attention, and individuation, but the positive feedback has to happen somehow.

Read More

Where would you begin with this team?

Let me know if you've worked on a project like this.

Think of a product team that's working on some kind of online platform; we'll call them TechSite. Someone on the TechSite team, a leader of some kind, decides that an initiative is important and declares it a priority to launch a new product.

The team creates a prototype of some kind and reviews it internally. They come to agreement on the direction, and the leader advocates for launching it. It doesn't do well. Not a total failure, but customers are ambivalent.

The leader believes the core of their idea is in place and wants to use testing as a means to improve the project. The team creates a variety of A/B tests, multivariate tests, and surveys. The tests come back, and they don't point to a conclusive direction; winners are not clearly statistically significant. The surveys come back, and the responses don't point to a conclusive direction either; sometimes responses target unrelated issues.

The leader shifts the team's priorities to a more urgent matter, but there's no intention to sunset the product, so it lives on in a state of suspended animation: launched and partially improved. The team is left with a sense of missed opportunity, and TechSite customers don't really get much at all.

This scenario is representative of so many projects that I have either heard about or been a part of that I am convinced that it has become a standard operating procedure for many product development teams. The team is trying to move quickly, make bold decisions, make big bets, and conduct testing on customers. These are the ubiquitous tenets of a nimble product team, aren't they? Why is the outcome so poor?

Why are teams of smart, hardworking people making such lame-ass products?

It's the leader's fault for pushing their agenda! It's the team's fault for weak execution! It's the team's fault for not designing a better solution or writing better code! It's the customer's fault for not understanding how to use it!

From my perspective, the team never stood a chance, not even the leader. The team thought they had what they needed to succeed, but one of the inputs into their process was insufficient and without it, the team could never make something that their customers would be willing to pay for.

What do you think? If you could fix one thing with this team, what would you address first?


Managing risks beyond the product

Without risk, there is no opportunity.

Any business, any organization--even any individual person--faces their own responsibility for identifying risks, assessing their probability and consequences, and managing them toward the best possible outcome. Some people are better at it than others, and some firms are better at it than others.

Designers can find themselves in a position to identify and assess risks that are incurred when the product meets the public, often before anyone else, but taking action to manage these risks requires added perspective.

Risk in the organization

The field of Risk Management has developed into a rigorous, quantitatively rich practice within many organizations, one that leads the charge in identifying potential risks, assessing their impact, and managing them through tools like internal policy changes, strategic activities, financial instruments, or insurance policies, to name a few.

There is much to be said on the types of risk affecting an organization; however, for a firm with a consumer-facing product, one of the risks with the highest consequence is Product Failure Risk. If the product fails to operate properly and this results in harm to the user, a court may examine whether the firm is legally responsible for that harm. To determine legal responsibility, the court would first establish whether the firm had identified a better design for the product at some point, and second, whether the firm made a decision not to implement it. It is worth noting that, in the legal context, a design decision is really anything impacting the product and service experience, not just visually impactful or brand-reinforcing design decisions.

Businesses typically assess product failure risk from the perspective of the firm: if the product fails in market, what's the likelihood that there will be consequences, and how severe could those consequences be? This insular perspective, evaluating risk only in the context of the firm itself, is reinforced by the concerns of multiple parties. Investors may be concerned that a product failure will jeopardize their likelihood of seeing a return. Managers will be concerned that it will affect the bottom line. Marketing and Sales leaders may be concerned that it will affect the brand and reputation.

This risk is typically managed by enforcing a feedback-rich and effective product development methodology with rigorous internal and external quality control. This ensures that the best possible product reaches the market.

MBA students around the world are steeped in this perspective and take this approach with them as they lead organizations.

In reality, there is further complexity to the landscape.

Designers see beyond the firm

Most effective design organizations have an intrinsic commitment to identifying, assessing, and managing risks. This is ingrained into the organization through the use of a feedback-rich product development process that helps identify potential risks, assess their severity, and develop strategies for managing them before the product is developed for the market.

In any scenario where a designer gathers feedback on a work in progress, this work can be evaluated against the risks facing the firm. More than most disciplines within an organization, designers are uniquely positioned to identify the probability and severity of harm that a product can inflict on users and other parties. The design process, done well, absorbs much of this need for risk management.

In launching a product, especially in the field of consumer-facing software that typically includes pilot launches, beta programs, user acceptance testing, and other observational methods, Designers are often faced with a conundrum: when the product is used successfully, another problem may be created.

In using AirBnB correctly, hosts contribute to shifting real estate market conditions by setting price points that make AirBnB rentals more attractive.

In using Facebook or YouTube correctly, individual users may be radicalized when algorithmic recommendations expose them to content that builds upon their outrage.

Neither of these examples is a product failure--AirBnB, Facebook, and YouTube all function properly. However, in operating properly, these products have, in turn, inflicted harm.

Handling the moment when the risk of harm is identified

A Designer's handling of this discovery is critical. In many cases, my own career included, the Designer is often cast as a Cassandra within the organization, warning whoever will listen, but rarely believed. A friend of mine put it well, "I feel like the scientist in the Godzilla movies who figures it out before everyone else, then runs around yelling and dropping his papers."

Many designers and design leaders have felt the same way. When designers talk amongst themselves, much emphasis is placed on working in “the right organization” or one that “listens to designers.” The question remains, are the designers speaking the language that is needed in order to be heard?

In order to communicate the risk of harm effectively within the organization, any discovery of harm calls for answering a few questions.

  • If harm is discovered, is there a venue for articulating identified risks back to business leaders? If not, then why not?
  • Is the risk of harm communicated through factual observations?
  • Is the impact of the harm understood? Is the probability and severity of the harm understood? What are they?
  • Is the harmed party a willing participant in the transaction? Or is the harm inflicted through externalities?
  • Does the impact of the harm translate back to the company’s strategic goals? Does it jeopardize activities central to the company’s strength? Does it jeopardize differentiation from competitors?
  • Are there recommendations for reducing harm? Is the origin of the circumstances understood? What are they?
  • Do you have a recommendation for the timeline of addressing this issue? What questions do you (designer) need answered to make a recommendation?

The question is often where the firm's responsibility ends, and whether the harm inflicted through the product or resulting externalities should be addressed. Framing an identified risk in this manner--as something that starts externally but comes back to the firm--can help designers be more effective at benefitting their users while providing opportunities for the firm to build lasting benefit and value for their customers.

I wouldn't call it a fair fight

Unfortunately, this is not the norm and has not been internalized by most organizations. Addressing potential harm is often dismissed. It's a matter of company culture and leadership principles: addressing harm tends to happen only when it can be framed as capitalizing on an opportunity, not as righting previous wrongs.

This perspective is endemic in academic and legal perspectives on business. I asked a business professor, "what happens if there is risk created by the proper use of the product?" and was met with an ambiguous answer: "in the event of externalities, regulators typically step in." This professor provided an excellent class that has greatly improved my understanding of risk; to be clear, I am not offering an indictment of the professor. Instead, consider that academics and business leaders alike view the firm's responsibility as having a tighter boundary--one that leaves companies like Facebook and AirBnB free to avoid addressing the harms that fall outside it.

To that end, I don't think there can be any realistic expectation that corporations will be held liable, legally or culturally, for the harms their products inflict on the economy and society. Oil firms will not be held responsible for pollution to the degree that it has impacted our climate. Firearms manufacturers will not be held responsible for the harm done in gun violence. Will tech companies be held responsible when their products are used effectively to nefarious ends? I don't expect it.

Nonetheless, Designers have an opportunity, perhaps a unique opportunity, to lead the charge within organizations in launching products that achieve objectives while eliminating or minimizing potential external harm.
