The Changing Scope
The emergence of AI is an existential threat and existential opportunity for designers and other product professionals.
This calls for a recalibration of professional skills.
This calls for a new rhythm of working.
This calls for solving problems with new capabilities.
You and me? We can thrive. Or we can fade away.
Recalibration
The way artificial intelligence is emerging in professional life is an attack on our hard skills. The impact is measurable and trackable. The time it takes to draw a box, a button, or an interface has been decimated. But the decision-making that goes into implementation remains human.
With rote tasks deprecated, the decision-making, collaboration, and communication, the 'soft skills,' become the primary skills. They always were; now it's just easier to see.
The tooling has allowed for faster completion, rapid iteration, and a reconsideration of individual workflows. With rote production handled more readily by computers, it's people who connect the dots. And you do that by connecting with each other: collaboration, communication, and so on.
The old scope: diligent execution of the research process; meticulous, time-consuming production of design systems; subjective assessment of work in progress; labor-intensive validation of products and experiences; separate contributions requiring significant alignment effort.
The new scope: operating faster, richer human-centered discovery at scale; rapid generation of design materials; critical evaluation of generated materials for errors and alignment; enriched evaluation of work in progress; redesigning the tests that validate potential ideas; elevating facilitation and collaboration; re-prioritizing the skills needed at each level of the practice.
It could happen. But there are risks. And we have to engage them head-on.
Rhythm
The design process is being accelerated, remixed, revamped. Research information can be cataloged, analyzed, and synthesized in new ways, faster than before. Capabilities are getting fractured and handed to other people in the product development team. New forms of pair programming and pair designing are emerging.
No tool is impacting this more visibly than vibe coding. Product, Design, and Engineering are all adjusting to a faster loop of development. What took three sprints now takes one. What took 12 points now takes 2.
The old scope: more time allocated to breaking down tasks; sizing and evaluating work; aggregating and assessing work in progress; unclear estimation and unpredictable workflows; a prioritization of technical execution that leaves the team and the user behind.
The new scope: setting up real-time discovery with ongoing communications to stakeholders; refining a team's understanding of user needs in an ongoing way; understanding how to apply findings to specifications; validating any generated or accelerated outputs; collaboration that bridges gaps across functions within a team and enhances the execution of meaningful work.
Solving Problems
AI itself lives in a black box. But it steps out into enterprises that calibrate it to a use case. Then it reaches out to you and me in the form of a product.
Sometimes, it's transformative and there is a genuine and rewarding delivery of value. It's our time to capitalize on that value.
Sometimes, we're left struggling, wondering why things are the way they are. We see systems getting worse. We see unease and destabilization in areas where it hasn't been before.
The old scope: long research processes; personalization driven by algorithmic thinking that remains resource-intensive; time-consuming cataloging and generation of data-driven tools with fragile solutions; labor-intensive, error-prone design against use cases.
The new scope: faster, clearer problem identification and framing; assisted, faster breakdowns of problems into testable solutions; Applied AI bending powerful neural networks to specific use cases, not just generalized tools; considering and developing multiple hypotheses for rapid testing; wrangling, synthesizing, and communicating insights as products and solutions are delivered at record pace; maintaining thoughtful alignment between the product, solution, execution, and downstream impact; identifying and considering the emergence of externalities previously left out of scope.
AI Impact at Different Levels
This is a collection of rough thoughts regarding how AI will impact different levels of designers.
We're going to talk about the impact on design teams at each level. I'm looking at this as two major categories of change: first, the impact on the individual designer; then, the impact on the design process. I'll have more on the design process at a later date, but for this breakdown by level, we are talking about the impact on the process from a technical standpoint: what the steps in the process are, how they are handled technically, what files get produced, and so on.
So, let's break down individual and process impact of AI at each level...
- Entry level is getting squeezed by task automation
- Mid level is facing rising expectations
- Leadership level needs to shape the conversation
- Some tips
Entry Level
Entry Level folks have a few key responsibilities.
- Learning the real-life process of designing and delivering. A significant part of entry-level work is learning the process of designing in a professional environment and, in doing so, connecting with collaborators from other professions: for instance, learning to work with a product manager or an engineer
- Producing first drafts, then refining them slowly, or having someone else step in to tune them up
- Taking instructions and being a team player. An entry-level person succeeds when they are fast at the production work but also bring thoughtfulness, empathy, collaboration, and self-awareness.
A lot of AI impact will be felt at the entry level because of the proportion of their work that is production-driven. That's the story: AI is replacing young grads. An NYT article from August 10th says entry-level hiring in software engineering has fallen precipitously. Here's a staggering quotation:
Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.
OK, can we take a minute to hear it for the Art History grads?
The article goes on to describe how students who just graduated are particularly hard hit because, in many cases, colleges are just now starting to teach AI coding tools, which are the most sought-after skills.
Going back to design, the story's not going to be much different, and it sounds bleak. It's a generational shift, and there is a window of young people who are in the wrong place at the wrong time. These are folks who had their early high school days shut down by the pandemic and have absorbed a lot of volatility and change. Graduating into a job market that is imploding on itself is something I wouldn't wish on anyone, and I'm saying that as someone who graduated right into the post-9/11 economy.
However, there is no obstacle for anyone at any level to deliver the other kind of value that comes with collaboration, thoughtfulness, and empathy.
These skills, typically called soft skills, are the glimmer of light here: the ability to differentiate yourself on soft skills, critical thinking, and collaboration. At the end of the day, I'm excited to see folks at this level, but it's going to be rough in the near term. Starting from scratch is like medicine: tastes like shit, but it's good for you in the long run.
Mid-level
At this level, you have learned the process and you have some production skills down pat. You are starting to gain some clout with other stakeholders and collaborators.
You are also expected to have a relatively strong sense of self-awareness and collaboration with others. Folks at this level can be trusted to manage others, such as running a meeting.
Your team will look to you to be highly experimental. You will be at the forefront of testing, evaluating, and adopting new tools to accelerate the workflow.
You need to get to know new tools as early as yesterday. The day to adopt AI-driven tooling, and quickly, has passed.
Your position in the middle of the team actually gives you the most leverage. Since you can see the process from end to end, you need to assess that process and move into the realm of automation.
If you develop an understanding of how to improve the process, and you bring others along with you on the need to do so, you will have a huge impact downward to the junior folks, laterally to your fellow designers, and upward to leadership.
You're being tasked with ingenuity in two directions:
- Designing for experiences that include AI
- Using AI to accelerate the design process
For designing experiences that include AI, the first step is to learn more about how AI works and how the outputs that come from interfacing with AI can impact your users.
Then, map out a product experience and think about how something like an agent or an automation with built-in prompting can be used to streamline the product experience.
There is an emerging field in this regard known as Applied AI, which I'll talk about more in the future.
For accelerating the design process: as a product designer, you already have a lot of the skills to identify where AI is and isn't useful. Map out the workflow for your role and all the systems in it. Service Blueprints are a great way to do this. Assess each part of the process and break it down to its individual components. Then consider re-composing the process with the AI tools that can help each part.
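One lightweight way to do that decomposition in practice is to model the blueprint as data and tag which steps are candidates for AI assistance. The steps and tools below are purely hypothetical examples, not a prescribed stack:

```python
# Sketch: a service blueprint as data, with each step tagged for whether
# an AI tool could plausibly assist. Steps and tools are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    name: str
    ai_assistable: bool
    candidate_tool: Optional[str] = None  # hypothetical tool, if any

blueprint = [
    Step("Synthesize interview notes", True, "LLM summarization"),
    Step("Stakeholder alignment workshop", False),   # keep human-led
    Step("Draft wireframe variations", True, "generative design tool"),
    Step("Final design decision", False),            # keep human-led
]

# Which parts of the workflow are candidates for acceleration?
candidates = [s.name for s in blueprint if s.ai_assistable]
print(candidates)
```

The point of the exercise isn't the code; it's that once the process is written down step by step, the question "where does AI help?" becomes a per-step decision instead of a vague debate.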
Being the designer of the design process is key, and mid-level or journeyman designers, you're in the best place to do that.
Also, as noted for Entry Level colleagues, you can really level up by investing in your own soft skills. Can you communicate the value of what you're doing with people outside your profession? Can you facilitate workshops and conversations that contribute to the process? Can you run meetings, guide others, and elevate other people's engagement in the process? These are, again, the things that AI is not going to do for us, and an area where we need to concentrate our expertise.
Leadership
This is probably the level that will have the greatest and most outsized impact on the profession.
Leaders bear the responsibility (though they are not the only contributors) for shaping the culture and values of an organization. Everyone contributes, but leadership carries the responsibility.
So... now your profession and the people who report to you are facing an existential disruption. Which way are you going to go? In short, there are two paths to choose based on newfound capacity.
Are you going to increase capacity so significantly, and unlock the potential of your team to do more and make a profoundly greater impact? Are you going to overtake other roles in the company with that increased capacity?
Are you going to maintain your capacity but reduce the headcount?
Can you empower your team to refine their workflows, to find efficiencies and enhance their process?
The answer has to be yes.
Tips
At the entry level, a few things
- It's a freakishly good time for entrepreneurial experimentation. Come up with 3-5 projects and start hacking away. Adopt the process of mapping out a workflow, then designing it, then vibe coding it. Then, find a way to test/validate it. How can you do it as quickly as possible? Where does the process break down or create errors?
- You could use a prompt like a Job To Be Done for something in your life, or something that would help someone you know.
- Find ways to present yourself that elevate communication skills, collaboration, and critical thinking. These things are sometimes best communicated through media other than graphic design or UX design deliverables, so explore writing and other kinds of content creation.
- Learn everything you can about AI. I have heard some chatter that entry level people who can demonstrate true fluency with native AI tools will be in increasing demand. Try every tool in every part of your design process. But more importantly, go further than just the tools. Really learn to speak the language. If you are concerned about the ethics of its use or the environmental impact, learn everything you can about that from reputable sources.
I also want to acknowledge... this is a tough time for the Entry Level folks in product, design, and software engineering. No one has gone through what you're going through before, so whatever you choose to do, if you really give it focus, is probably the right idea.
For mid-level, journeyman designers
Do what the entry level is doing, then expand into a few key areas...
- Really own the process. Shape the process based on what you learn, and what you learn from others. Frame up the process and explain why it works and why one tool is more important than another. And explore more sophisticated and powerful tooling, going beyond quick design mockups or quick prompts: start messing around with custom GPTs, agents, and more powerful automations.
- Bring others along with you. Your ability to facilitate work with others will be more important now than ever. All of your stakeholders and fellow designers are looking for anything and anyone who knows what they're talking about, any form of knowledge that they can use to establish a shared set of facts.
- First off, get familiar with what AI is capable of. Spoiler: more than just chat. It's not a bad time to take a class, if that's how you learn best (MIT Pro, Stanford).
For Leadership
- Help the team prioritize what elements of their work and process can be improved by adopting new tools
- Set the standard for critique; provide guidance to those who lead critique so that it's run properly and the process ultimately improves the work
- Model a practice of advocacy for your team; set the stage for the team to hype each other up; set up public forums to showcase their work. Guide your team on how to present their work and explain their process in ways that people outside your function can understand.
- Revisit the job leveling for your org; define what is expected at each title; get feedback from trusted members of the team and other leaders; communicate out that leveling ladder
- Define the change and make sure the team addresses it as a team, not cast to the wind to figure it out on their own
- Coach and mentor the team one-on-one to reaffirm their strengths and work on weaknesses; this can guide their ability to focus on those soft skills and develop accordingly
Careers Getting Murdered as AI Takes on Tasks
AI is significantly impacting product development job roles, but will do so differently at different organizations.
Ok, so… you’re a product designer… you heard that AI is coming for all our jobs. But is it coming for your job, your co-worker’s job, and your other co-worker’s job the same way?
AI tech will impact many roles previously safe from automation or offshoring.
To understand this moment, take a look at what experts are saying about AI's impact on a lot of different jobs. Here's an article from Forbes published in April of this year. The title? "These Jobs Will Fall First As AI Takes Over The Workplace"
The article goes on to describe the ways jobs will be impacted or eliminated by AI tech.
The first wave is jobs that perform repeated tasks: data entry, scheduling, and first-tier customer service. Tools are already replacing these roles.
The next wave is jobs that are repetitive but have some technical consideration: bookkeeping, financial modeling, and basic data analysis. Banks are building out these capabilities.
And the list goes on.
AI will impact tasks that are applied across many roles.
Another way to look at it is to break down jobs into different tasks.
The team at Visual Capitalist created a chart of skills like writing, active learning, critical thinking, etc. And each skill has two numbers: how long it takes with AI and how long it takes without AI.
The visualization sorts the tasks by fastest to slowest after the application of Generative AI.
I wanted to see which skills see the biggest impact: the biggest change from without Gen AI to with Gen AI. At first glance, this is where the biggest influence on whether a given job is phased out over time would be: does the necessary skill get recalibrated in a significant way? And if so, by how much?
There's no screaming obvious answer in this data. But the top skills? I'm getting a big Software Engineering vibe, with Programming being 2.9x faster with generative AI tools. But the lines are blurry: on the lower end, "Judgement and Decision Making" is now 1.8x faster(?) and would obviously cover a range of roles.
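To make the comparison concrete, here's a small sketch of the calculation behind that ranking. The hours below are hypothetical placeholders; only the 2.9x (Programming) and 1.8x (Judgement and Decision Making) multipliers come from the discussion above:

```python
# Sketch: ranking tasks by generative-AI speedup, in the spirit of the
# Visual Capitalist chart. Hours are hypothetical; only the 2.9x and
# 1.8x multipliers come from the text.
tasks = {
    # task: (hours_without_ai, hours_with_ai)
    "Programming": (29.0, 10.0),                   # 2.9x, per the text
    "Writing": (10.0, 4.0),                        # hypothetical
    "Judgement and Decision Making": (9.0, 5.0),   # 1.8x, per the text
}

speedups = {
    name: round(without / with_ai, 1)
    for name, (without, with_ai) in tasks.items()
}

# Sort by biggest speedup first: the tasks most recalibrated by Gen AI.
ranked = sorted(speedups.items(), key=lambda kv: kv[1], reverse=True)
for name, mult in ranked:
    print(f"{name}: {mult}x faster with Gen AI")
```

The speedup multiple is just time-without divided by time-with, which is why a big multiplier signals a skill getting recalibrated, not necessarily a whole job disappearing.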
These tasks align broadly and deeply with all three product development roles: design, product management, and engineering.
But what does that mean for the people involved?
Seeing how AI delivers impact on specific tasks, it's important to see what happens when we link all those tasks together.
AI's impact will be acute in cross-functional environments where there is high dependence on process.
For Product orgs, the way forward with AI looks different depending where you start as a team... but do all roads lead... to MURDER??
If you are a Designer in an organization where PM and Engineering want to move fast and break things, then they can now leverage AI-driven processes and tooling to get pretty far synthesizing user research, drafting prototypes, refining UI, and spec'ing out code without you. The Designer role just got killed! Oh nooooooo!
If you are a Designer where PM is lagging, but collaboration with Engineering is strong, then you can take your user research, frame it into JTBD, even model the potential business impact, then create your own designs with an AI-accelerated workflow, break it down into Jira tickets, and work directly with Engineering on prioritization. The Product Manager role just got killed! Aaaaaaa!
If you are a Designer where PM is strong, but Engineering is lagging, then you and the PM can get pretty far homing in on customer research, accelerating your synthesis, rapidly developing prototyping options with Figma Make and Lovable, then, once you get things working, reach out to an Engineering team to help with scaling and hardening... the Software Engineering role just got killed! Hey, somebody call the cops!
We know that in each of these scenarios there are huge gaps, but the reality is, this is playing out as we speak. It's not just reshaping or eliminating careers; it's having a grueling effect on the people involved. For that, we can take a more qualitative look. Stories are starting to surface of how this takes shape.
The impact of AI on process-driven roles will also be highly emotional.
A UX designer with 7 years of experience is having a significant, negative emotional reaction to the design work that their PM completed with AI tools.
I replied to them on Reddit, but in summary: this is going to be a journey fraught with emotional challenges, and those emotions can get in the way of our concentration, our flexibility, and our ability to perform at our best overall. I advised that UX designer to break the problem down, including identifying and separating out the emotions they felt, then addressing the problem through collaboration with the PM.
AI is impacting hard skills, which can be quantified, but soft skills remain beyond its reach.
High-performing, high-quality teams are bound together by respect, collaboration, ingenuity, and the psychological safety needed for growth. These 'soft skills' let a team level up their output together. That's when 1+1=3.
Leadership must seize the opportunity to move the whole team forward
This path ahead is paved by a few factors:
- the competency or maturity of the functional role before AI
- the relationships built around the others on the team
- the soft skills and abilities of those in each role to adapt, collaborate, assess, and learn
So if all roles are strong... and all roles take on accelerated workflows and tooling... and if the people on the team are supported through this insane time of transition... then you're not killing off roles, you're leveling up capacity. This is the REAL goal. This is the future we WANT.
Who knows, with all this newfound capacity, some of these products we build might actually be good!
So what do I do?
Well, we’re off to a great start.
Here are three suggestions to move forward
For individuals: experiment and get good. Yes, it's worth signing up for your own subscription to GPT Pro or Claude Pro or Lovable or Figma Pro. Experiment with having GPT open in one screen and just ask it questions along the way. New to vibe coding? Tell it what you want to do and ask how you should start. New to GPT? Ask it how to make the best prompts. It's sort of smart and dumb at the same time. Experimenting is the best way to level up.
Most importantly, once you get your bearings, see how tools can fit together. See what happens when you create something in Figma Make, then spin it up in Lovable. See what happens when you build an automation in Zapier that runs a prompt in ChatGPT. There's a lot of potential and, at least at the moment, the bar is pretty low in most organizations.
For teams: learn together. If you're learning on your own, regularly share what you learn. Break off a project and mess around. See if it's something you can do quickly, and clean up if you screw up.
For leaders: set the standard. This is where the real power is to scale within an organization. Join your team in experimenting and learning. Have the team identify what projects might be good for trying new methods, and if it fits, back them up. You can help set the metrics for the success of these experiments, the KPIs you're trying to hit. Help frame the team's ideas for what it means to get higher quality results.
Talk with other leaders about setting a culture that will elevate the disciplines you will invest in. Know what kind of org you are and will be. Find ways to emphasize soft skills, and set the expectation that your people have to be good at them.
Most importantly, like any individual, you gotta learn. You need to lead by example.
Those are some tips! I'd love to hear what you're up to, what's working, and what isn't.
AI Product Design as Risk Management: Workday Edition
When designing human-in-the-loop automations and AI workflows, you have to figure out how much human should be in the loop. It looks like Workday is finding this out the hard way.
Workday has a set of powerful human-in-the-loop workflows that ingest, screen, evaluate, and accept or reject candidates. They save a ton of time in an otherwise labor-intensive process.
The product design of this human-in-the-loop feature needs to be tuned so that humans can influence the process and then participate in the process effectively. For instance, have the AI match some criteria, but have a human make the final call.
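As a sketch of what that tuning might look like, here's a minimal, hypothetical routing function. This is not Workday's actual implementation; it just illustrates the design principle that the model only pre-sorts, while every final call lands in a human review queue:

```python
# Hypothetical human-in-the-loop screening gate (NOT Workday's actual
# system): the AI score only prioritizes review order; no branch
# auto-rejects a candidate without a human decision.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_match_score: float  # 0.0-1.0, produced by a screening model

def route(candidate: Candidate, review_threshold: float = 0.5) -> str:
    """Route a candidate to a review queue; humans make the final call."""
    if candidate.ai_match_score >= review_threshold:
        return "priority_human_review"   # strong match, review first
    # Weak matches are still reviewed by a person, never auto-rejected.
    return "standard_human_review"

queue = [Candidate("A", 0.91), Candidate("B", 0.32)]
decisions = {c.name: route(c) for c in queue}
print(decisions)  # every candidate lands in a human review queue
```

The design choice being illustrated: the riskiest pattern is an AI branch that terminates a candidate's process on its own, so the workflow is shaped so that the model can only reorder the human's work, not replace the human's judgement.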
It's a balance. The objectives are well-meaning, but the development of these products and the tactics shown in the features need to be handled carefully. If the process leans too heavily on AI to make decisions, without proper guardrails and without human judgement in the workflow, then it could expose the company (or a customer company) to risks like, say, discrimination lawsuits.
It looks like we're going to find out if that may have happened. Workday is facing a class action lawsuit charging that their algorithmic and AI-based applicant evaluation tools unfairly discriminated against applicants over 40.
In order to compile the dataset needed to determine if the charges are true, the judge in the case has ordered Workday to supply a list of ALL the companies that used these features, even those that used the features AFTER the lawsuit was filed.
At its core, this will probably play out like a 'product failure' case (I'm not a lawyer). Did it cause harm? Did Workday know there were these kinds of risks? And did they decide to move forward anyway? If yes, then they're liable.
It's one of the ways that product design, and the product design process, has real risk-management impact for companies. What kind of impact? While the case works through the courts, investors aren't waiting. Since the case moved forward on May 22, $WDAY started dropping and hasn't recovered. The company has lost 17.5%, or $12.2B, of its value. The AI lawsuit has been widely cited as a key reason investors have stopped backing Workday.
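For a quick sanity check on those figures, the implied market caps follow directly from the two numbers cited above, taking both as given:

```python
# Back-of-the-envelope check: if a 17.5% drop equals $12.2B of lost
# value, the implied pre- and post-drop market caps follow directly.
loss_pct = 0.175        # 17.5% decline cited above
loss_usd_b = 12.2       # $12.2B of value lost

pre_drop_cap_b = loss_usd_b / loss_pct        # implied starting market cap
post_drop_cap_b = pre_drop_cap_b - loss_usd_b

print(f"Implied pre-drop market cap:  ${pre_drop_cap_b:.1f}B")
print(f"Implied post-drop market cap: ${post_drop_cap_b:.1f}B")
```

That puts the company at roughly $70B before the decline, which is the scale of exposure a single product-design decision about human oversight can carry.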
Feedback: the POINT method
As we are disrupted by AI, a strong critique practice ensures the team has high-value insight into its performance and processes. Critique is where the team hones its ability to identify and apply feedback. I think of it as the atomic unit of the design process, and it can elevate the function and performance of the whole practice.
The POINT method
I have seen how bad it can get. Design teams can be hard on each other, and picking apart work becomes a sport. Stakeholders can come into a session with little context and decry the work in progress as useless. Designers creating the work can see conversations spiral off topic.
To avoid going off the rails, I have found that feedback is best structured using the following framework:
- Permission "Can I give some feedback?"
- Objective "What is the objective we're trying to achieve?"
- Inquiry "Does this work achieve the objective? Why or why not?"
- Next Steps "What are some ways to improve the work?"
- Timeline "When will we see those improvements?"
This provides multiple benefits. First, it is flexible enough to apply to those in the role of giving or receiving feedback. Second, it can be applied differently for providing professional feedback regarding a given work product vs providing personal feedback on someone's skills and attributes. Finally, it's easy enough to follow that stakeholders who participate in feedback sessions less frequently can adopt the method when called upon.
I will typically introduce this method and coach the team through it when I begin as a leader. I also provide a reminder at the beginning of each session. For instance, for remote critique, I can easily include it on a board in Figma, Miro, or whatever platform we're using. In person, I have posted it next to work being viewed, or left handouts around the room.
Giving Feedback
Design teams, and to some extent broader Product teams, have multiple opportunities to surface feedback. As a leader of critique, I invite members of the team to pitch their work in a certain way, then ask for feedback and discuss the implementation of that feedback.
The focus is on the work and not on the person doing the work.
Permission: "Can I give you some feedback?" Setting up the conversation is key. In a critique session, permission is implied, but it is best to say it aloud.
Objective: "What is the work trying to achieve here?" Stating the user objective and business objective is key as a reminder to those in the session.
Inquiry: "Does it achieve the objective? Why or why not?" Whether the work achieves the objective or not, inspecting the reasons behind that assessment helps us understand where there is potential for improvement.
Next Steps: "Here are some changes to consider." If the person providing the feedback has ideas, then they can be provided. After the session is over, the person doing the work is responsible for making the changes or for assessing why the changes don't work.
Timeline: "When do you think we'll see those changes?" This is the part of the POINT process that truly adapts it to the work environment. We need to set timelines and see results within each day, week, sprint, month, and quarter. That's just the reality of doing business. Typically, I advise my team to try all changes, then show evidence why one didn't work. There is some concern that this wastes time, and in some high-speed organizations that concern can be legitimate. However, with new design tooling, we're unlocking a new dimension of speed to value, so it is more feasible to try multiple ideas or rounds of improvement where historically it would have been time-prohibitive. With this in mind, I ask the team to make their best estimate while also encouraging them to give each other some grace while we adapt to new tooling and processes.
Receiving Feedback
This framework also applies to receiving feedback when the role is reversed. Briefly, that looks like this:
Permission "Can I get some feedback?" Let others know you're ready.
Objective "Here is the objective the work should achieve." Stating your assumptions helps set and clarify expectations.
Inquiry "Why do you think this works to meet the objective? Why not?" Sometimes you need to pull it out of them.
Next Steps "Here's what I hear are the next steps here." Repeating back the next steps reflects your understanding and demonstrates that you are practicing active listening.
Timeline "Here's when you can expect to see a new version." Taking accountability for enacting change and sticking to the timeline is a great way to build trust within a team.
Personal Feedback
At times, as a leader, you have to provide coaching to members of the team and providing objective feedback is a critical part of that effort. The POINT method can offer some support to the leader or the person being coached.
As a leader, you could leverage the POINT method to provide feedback when a direct report faces typical challenges like an office conflict or a meeting outcome that bothered them.
Permission "Can I give you some feedback?" Establish the expectation for the conversation.
Objective "What was the objective you were trying to achieve in that scenario?" Get their perspective on the circumstances.
Inquiry "Did you achieve the objective? Why or why not?" Determine their role in the situation.
Next Steps "What are some things you might do differently?" Discuss options at hand for addressing the issue or opportunities to develop a skill that would address the conflict.
Timeline "When will you have an opportunity to see those improvements?" Identify upcoming points in the schedule when they can put new practices to work. For instance, if the conflict was in a recurring meeting, then the next session will be an opportunity to put new skills into practice.
Diverse Feedback Sources
When a critique is run well, it's easier for outside people to join and be productive. Business stakeholders, subject matter experts, and customers can provide vital input and validation during a design process; however, they can struggle to articulate feedback in a way that is conducive to improving the work. It's not their fault. Different organizational cultures, industries, and job functions all have different communication styles and vernaculars.
Designers sometimes lose sight of this. Design education is set up for regular, near-constant feedback. As a result, Designers sometimes get frustrated with other stakeholders' inability to articulate their feedback and fail to build the necessary bridges for securing the proper input. The POINT method provides a simplified framework that people outside the design practice can use.
Permission "Can I give some feedback?" As a leader or facilitator, assure the stakeholder that the feedback session gives them this permission and that the team is ready for their feedback. Say it out loud.
Objective "What is the objective we're trying to achieve?" Articulate the objective. The stakeholder may see the objective differently or, as is typically common, may have the same objective but articulate it differently within their job role.
Inquiry "Does this work achieve the objective? Why or why not?" Here is where some stakeholders struggle. Sometimes it's a self-limiting belief that they aren't qualified to render an opinion. Reassure them that their perspective is valuable. Clarify that this isn't a matter of opinion; it's a matter of giving their best assessment of how well the work gets the job done. Stakeholders also play a HUGE role here in identifying risks that the designers may not be aware of.
Next Steps "What are some ways to improve the work?" Some stakeholders jump right to this step. Their minds work quickly: they have already processed the last three steps, made some functional assumptions, and pushed forward with their feedback. Don't push back; help guide them. Honor the method and walk them back through how they came to that conclusion.
Timeline "When will we see those improvements?" Stakeholders may or may not have insight into how long it takes work to get done. Clarify for the team when the next opportunity will come to share the work with this stakeholder.
Getting it done
I have put the method to work in all the above situations and found a few benefits. In Critique sessions with teams, it helps align the methodology and the expectations between team members. It helps clarify my own thinking, so I can bring others along by offering feedback. It helps establish clear communication in coaching situations when providing support. And it opens the door a little wider to outside support from stakeholders, which helps enrich the Design practice overall.
Feedback methodologies will be increasingly important as Design and Product teams infuse AI-enabled tooling and practices into Agile design and development methods. But human cognition and emotion still remain a little hard to pin down. AI can't tell you what your coworker thinks about the work at hand, but an efficient feedback practice is a valuable human-in-the-loop tool that unlocks a more effective final product.
Delta cracks Dynamic Pricing
More questions than answers: looks like a new day is dawning in e-commerce. Delta Airlines has successfully executed dynamic, personalized pricing on 3% of its sales.
It's, shall we say... concerning. Displaying different prices on the same goods to different customers carries an elevated risk of price discrimination. Amazon took the brunt of it back in 2000 when it spun up a test of dynamic pricing. The thing is, it showed different prices to people who were demographically different, sparking price discrimination claims.
https://abcnews.go.com/Technology/story?id=119399&page=1
Research done on more recent attempts at dynamic pricing has shown that the worst deals were given to the poorest people. "For example, a study of broadband internet offers to 1.1 million residential addresses showed the worst deals given to the poorest people."
https://consumerwatchdog.org/wp-content/uploads/2024/12/Surveillance-Price-Gouging.pdf
So the risks from outside the org (demographics) and from inside the org (guardrails) are real. Reputational and regulatory damage could both be substantial, but would it be existential? Or could Delta test and iterate its way through it?
Delta is happy to hype the new innovation for investors, but declined to tip its hand on strategy. Is it showing someone the lowest possible price to entice them to buy, guaranteeing full planes and safe margins? Or is it showing people the highest prices to test price elasticity and capture maximal value, even if that means a couple of empty seats?
This is a compelling example of how AI will be working its way into complex systems and consumer products. By integrating an AI-driven calculation into real-time price modeling, Delta has straddled the chasm between two kinds of companies: operational maximizers and profit drivers. Maximizers try to fill every seat, even if it means losing a bit of value, e.g., Broadway theaters that give away tickets at the last minute. Profit drivers are willing to leave some slack in the system but make it up on the back end by capturing greater profits from those who do buy tickets. Leveraging the real-time analysis that is now possible, Delta is able to move between both types, presumably for maximal benefit.
So what do you think, if you have read this far? Will this lead to smarter competition? Or is it the latest way that airlines will be squeezing customers without improving services?
How I Run 1:1s With My Design Team
As a design leader, I see 1:1s as one of the most important tools I have for supporting growth, building trust, and making space for honest conversations.
The frameworks below are devoid of subject matter, individual nuances, or changes based on job level within the IC or Manager roles. Those are applied quickly as I get to know people. Most importantly, these are framed for introspection and support, which are going to be critical as teams adapt design methodologies to leverage AI-driven tooling and processes. Managing people in an age of dynamic change will require rigorous transparency to assure their growth.
So, every 1:1 starts the same way:
“What’s on your mind?”
I picked this up over the years as a way to signal to the team that this time is for them to check in and surface what matters, not for me to lecture. I try to create a conversation that’s focused and frank while still being supportive.
1:1s With Individual Contributors
Focus: Growth, coaching, connection
With ICs, the goal is to support their development, not just as designers but as collaborators, and to open the door to exploring leadership.
Once we open with “What’s on your mind?”, I listen closely. If they bring up a challenge, something they’re stuck on, a tough dynamic, or something that just feels off, we’ll explore it together:
- “What have you already tried?”
- “What would a good outcome look like?”
- “What kind of support would be helpful from me?”
If the conversation turns toward wins, I look for ways to stretch them like leading critique instead of participating, mentoring a peer instead of commenting, or sharing work with a broader audience. These moves help ICs build confidence and influence within their role.
About once a quarter, I’ll zoom out and dig into the broader scope of career development:
- “What kind of work do you want more of?”
- “What’s a skill you’re trying to build?”
- “What do you want to be known for in a year?”
We’ll turn those answers into tangible next steps that they can own with support from me where needed.
At the end of each 1:1, I ask one more question:
“What’s something I could be doing better to support you?”
As a manager, it is critical to model this kind of invitation for input. I expect my team to take feedback, and I set the bar by inviting it myself. It's also critically important for monitoring my successes and weaknesses as a manager.
1:1s With Managers
Here's how I would support the same framework for Managers.
Focus: Leadership, team health, operational support
The opening is the same, "What's on your mind?" I expect managers to come with issues in hand. Some weeks it’s about a struggling team member. Other weeks it’s delivery risk, team morale, hiring, or areas of personal development like confidence in their own handling of a team issue.
I’ll ask questions to help them reflect and act:
- “Is this a person issue, a process issue, or a pattern?”
- “Where are you tempted to step in? Where might you step back?”
- “What would it look like to lead this change yourself?”
If they’re struggling to get value from their own 1:1s with their reports, we’ll talk about structure of those sessions, then dig in to ways to establish trust and provide coaching. I want them to know they have my support in managing tough conversations.
And just like with ICs, I ask for feedback at the end:
“What should I be doing differently to support you as a manager?”
Similarly, I'm modeling the behavior I want to see and establishing the expectations that feedback can always be given and received.
Why It Works
With ICs, I focus on their skill growth. With managers, I focus on helping them lead better. In both cases, I’m there to listen, coach, and adapt. And I always ask for feedback, because I’m still learning, too. 1:1s establish a feedback loop that is one of the most powerful levers we have to grow our people, our practice, and ourselves. Done right, it's a meeting we're looking forward to when the time comes!
The impact of Generative AI on Generative Research
Attacking Weaknesses in the Research Process
In many ways, AI-driven tools are what researchers have been waiting for. Agents to find and schedule your participants? Amazing. Quick organization, tagging, and summarization of interview data? Fantastic.
Desk Research
Before venturing to start, it's important to assemble some initial level of context. Compiling a range of sources is highly useful in establishing that initial context for the problem space.
The summarization and distillation abilities of ChatGPT can be useful for establishing that foundational context. This step is not without its risks, however. Hallucinations can sneak in, affecting your ability to orient yourself around the problem. I have overcome this by creating a custom GPT with strict guardrails for summarizing and synthesizing factual information. With these parameters in place, the information provided can be reliably verified.
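For illustration, guardrails of that kind can be expressed in a custom GPT's system instructions. This is a sketch of what such instructions might contain, not an actual production configuration:

```python
# Illustrative system prompt for a summarization assistant with guardrails
# against unverifiable claims. The wording is an assumption for
# demonstration, not a specific product's configuration.

SYSTEM_PROMPT = """You are a research summarization assistant.
Rules:
- Summarize only information present in the provided sources.
- Attribute every claim to a named source.
- If a fact cannot be traced to a source, reply "not found in sources"
  rather than guessing.
- Do not add outside knowledge, estimates, or speculation."""
```

The key move is making refusal ("not found in sources") an explicit, allowed output, so the model isn't pushed toward inventing an answer.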
Generative Research
In user research, there are two general areas of research. In this post, I am primarily focused on generative research; validation will be discussed in a later post. Generative research is the practice of learning about people, their circumstances, and a problem they face in order to develop a strategy for solving that problem. It is the process of defining the context in which a product will operate.
Take a research framework like Learn More Faster, a lightweight way to conduct research for product ideas from Michael Margolis and the team at Google Ventures, alongside ways to leverage AI-powered strategies for lightening the load and accelerating the process, particularly for researchers who find themselves working alone to lead the process.
- Agree on goals: It cannot be stressed enough that these must come from the stakeholders.
- Define your bullseye customer: while sources of customer data can be wide-ranging, a simple way to start this process is prompting an LLM to summarize the findings and identify behavioral trends that could characterize the bullseye customer.
- Recruit five bullseye customers: This is where I am perhaps most excited. If AI-enabled tools, especially agents, could locate, contact, and schedule potential customer interviews, it would be a major win.
- Choose your value props and three prototypes: producing value propositions can be a challenging writing exercise for anyone. Take a tip from professional writers: work in groups. At the Onion, individual writers pitch their hilarious headlines, but they refine them in groups. In Hollywood, screenwriters work in pairs (or more) to tag-team on scripts. You can develop value propositions by working in partnership with a GPT. Consider using a multi-shot prompt that provides significant customer context from the steps above, then provides examples of well-structured value propositions. Additionally, whether it's Figma's new tools or vibe coding platforms like Lovable, rapid prototyping is getting more rapid by the day. Different versions or variations of a given idea can be rendered in just a few minutes.
- Draft your interview guide: more paired writing, working with a GPT as a writing partner. Margolis gives some good examples of how to frame questions so they don't lead the participant, but provide open ended opportunities to dig into real issues.
- Learn more from every interview: interviews must be conducted by humans with humans. Never develop synthetic personas from the existing customer data as any more than a thought exercise. AI tools can record, transcribe, and summarize each interaction with participants. At Nooma, we have created 'recipes' that can run analysis on a given participant transcript and provide insights into the research questions nearly instantly. Final results? No. Faster preliminary findings? Yes.
- Plan a watch party: more a matter of coordination and participation than AI generation and analysis, but I'm interested to see where AI tools can be a 'watcher' down the road. For instance, could one of the tools used in corporate interview processes, which assess people against the stated skills of a job description, be used to assess an interview participant's account against the research questions while catching facial expressions, tone, and other aspects? It still feels kind of gross. But assistance from a service like that could eventually be refined.
- Analysis: Breaking down what you hear into components is commonly referred to as tagging. Can AI help automate this process? Some platforms like Dovetail are already in the process of doing just that. Without a platform like that, you could use a prompt chain through a GPT to help accelerate the process. For instance, prompt the GPT to break down the transcript into topics, assemble passages into a table, identify common themes, then apply those themes in the table alongside the quotations. The GPT would need to be trained to do this effectively on the first 3-5 transcripts, but the time savings could ultimately be very significant.
- Synthesis: Typically, a research campaign will also include a final presentation to decision makers where the 'so what?' ideas are shared. To achieve this, the trove of interview and research data needs to be synthesized into concise and valuable points that can serve as a foundation for accelerating clear-headed decision making. I have written prompts that can compile multiple transcripts to find trends while reducing hallucinations and erroneous compilations of unrelated data, though this approach is still in development.
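The tagging prompt chain sketched in the Analysis step above can be expressed in code. `call_llm` is a placeholder for whatever model client you use, and the prompts are illustrative assumptions, not a prescribed script:

```python
# Sketch of the transcript-tagging prompt chain: topics -> passage table ->
# themes -> themed table. `call_llm` is a stub; swap in a real client call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    raise NotImplementedError("Wire this to your LLM provider.")

def tag_transcript(transcript: str, llm=call_llm) -> list:
    """Run each step of the chain and collect the intermediate outputs."""
    chain = [
        f"Break this interview transcript into discrete topics:\n{transcript}",
        "Assemble the key passages for each topic into a two-column table "
        "(topic, quotation).",
        "Identify the common themes that recur across those passages.",
        "Add a 'theme' column to the table, applying the themes alongside "
        "each quotation.",
    ]
    outputs = []
    for prompt in chain:
        # With a stateless API, prior outputs are prepended as context so
        # each step builds on the last, as in a chat thread.
        context = "\n\n".join(outputs)
        outputs.append(llm(f"{context}\n\n{prompt}" if context else prompt))
    return outputs
```

After tuning the prompts against the first few transcripts, the same chain can be reused across the rest of the study.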
Where does the Research get put to use?
Prompt Engineering: Clearer understanding of users, their context, and their challenges provides the foundation for developing a prompt that will deliver the intended outcome.
Context Engineering: Context Engineering is emerging as a powerful tool in building and improving Agents. In order for an agent to efficiently maximize its capabilities, it must have not only a broad foundation of data but an effective contextual framework for understanding and implementing that data. This is then translated into the most efficient code implementation to make the Agent as accurate and nimble as possible. But before doing so, clear research provides the framing for how to inform the context.
Workshops: Customer problems, articulated as Jobs to Be Done, personas, or other tools, can align teams. This gives the team the best ability to curate the workshop context and ensure the best input for its participants.
So what's the impact?
I have been running analysis and synthesis steps on some internal Nooma projects. I'm finding time savings of about 50%-60% with a strong level of quality throughout. We're still working to refine the process and output quality.
More important for consulting is the impact this can have on accelerating client engagements and establishing a shared understanding of the client's business. Initial diagnostic research of any kind is the cornerstone of any consulting endeavor. We come in and make it clear that we don't know it all, no matter how many times we have worked in that client's industry, in their market, or with their same functions. Each consulting engagement relies on establishing a shared set of facts in order to move forward. Any way we can accelerate that path to understanding is critical.
Want to talk about it? Let's connect »
Design Process is (going to be) New All Over Again
It's the end of design as we know it, and I feel fine...
Design is changing fast. AI tools such as LLMs, Agents, and other automation tools perform some tasks remarkably well. As such, Designers can be empowered to execute parts of the workflow with greater speed, fluency, and accuracy.
Here's a summary of some areas where I'm seeing AI impact emerge, with more info in the coming weeks...
Research
User Research is critically important; however, it is often believed to be prohibitively labor intensive. AI tools are chipping away at that burden:
- Crunching large datasets to find potential interview participants
- Intelligent and automated scheduling of sessions
- Capturing, transcribing, and summarizing interviews
- Finding similarities and trends across interview data
...and so on. Not perfect. Not suitable for every occasion... but making inroads daily.
What is emerging in the development and usage of AI tools is the importance of Context Engineering. Effective user research is a critical input into defining the context of a given product and will likely play an outsized role in shaping context input (for organizations that can embrace it).
Ideation
Teams often need hours* to move beyond obvious ideas. AI is not replacing human brainstorms, but it can serve as a partner:
Research has shown that, in brainstorming situations, people need a substantial stretch of time* before they begin generating ideas that are truly effective.
I don't have the confidence (or naiveté) to believe that AI is replacing human ingenuity en masse.
Tools like a GPT can be put to use in the same way that a screenwriter will lean on a writing partner.
- Write an idea
- Ask the partner to come up its take on the same idea
- Compare the two
- Refine the original
...and so on.
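The writing-partner loop above can be sketched as a small function. The model call and the prompt wording here are placeholders, not a prescribed format:

```python
# Minimal sketch of the writing-partner loop: write an idea, get the
# partner's take, compare, refine, repeat. `llm` is any callable that
# takes a prompt string and returns text.

def partner_iterate(idea: str, llm, rounds: int = 2) -> str:
    """Alternate between the model's take and a merged refinement."""
    current = idea
    for _ in range(rounds):
        partner_take = llm(
            f"Here is an idea: {current}\nGive your own take on the same idea."
        )
        current = llm(
            "Compare these two versions and merge the strongest elements "
            f"into one refined idea:\n1. {current}\n2. {partner_take}"
        )
    return current
```

The human stays in the loop by reading each refinement and deciding when to stop iterating, rather than trusting the last output blindly.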
An LLM's ability to produce seemingly infinite quantity of ideas is often mistaken for its ability to produce those ideas at a level of quality that is both innovative and valuable.
Ideation sessions benefit from a kind of decomposition and reconstitution that AI does well. An LLM will examine a given input without the context (or baggage) that a team may have. It's a mechanical approach to the task, leaving the judgement and evaluation to the humans in the loop.
I spent time with the team at McKinsey working on this problem in a business innovation context and found it fascinating. Experts in a given field leveraged AI to generate new innovations and mine weaker outputs for salvageable parts. The team weighed the options presented, leading to a secondary conversation about whether any of those 'insufficient' ideas had something in them. This kind of fluency and malleability of ideas unlocks an invigorated innovation practice.
*I can't locate it at the moment, but I recall reading a paper reporting that teams had to spend something like three hours together before their ideas transcended mere repackaging of the present and really got good
Prototype Validation and Production
Vibe coding tools, such as Lovable and others, provide ways to communicate those new ideas through more persuasive artifacts of the experience. This goes beyond the 'clickable' into the 'doable.'
Hours upon hours have been spent trying to make a Figma or other type of prototype appear as if it were a coded, live app or site: copious states for an individual button or form field, so that the button could appear to respond to input from the user and changes in data from the system.
Now a vibe-coded prototype can easily project the experience with live data, created and iterated upon in near-instant intervals.
And with this in place, the transition to Production-level code purportedly gets smoother. An AI Copilot can anticipate the ways that prototype code needs to be hardened for the product at scale.
Don't hate, Operate
Tying this new process together is Design/Product/Dev operations. Still to be determined, perfected, and optimized, a new form of operations is taking shape. For example...
After an event (meeting, research call, scrum standup) an agent can transcribe, prioritize, and communicate the outputs of that call, while simultaneously creating Jira tickets for the parties that need to take action.
This is some of the automation we have started enabling at Nooma, with an agentic horizon not far behind. I'm looking forward to the oncoming changes as we all adapt to new tech.
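The post-event automation described above can be sketched roughly as a pipeline. Every integration below is a hypothetical stand-in: `transcribe` for a transcription service, `extract_actions` for an LLM extraction step, `notify` for chat or email, and `ticket` for a ticketing API such as Jira's:

```python
# Hypothetical sketch of a post-meeting pipeline: transcribe, pull out
# prioritized action items, then notify owners and file tickets. All
# integrations are injected as callables so the stubs can be swapped out.

from dataclasses import dataclass

@dataclass
class ActionItem:
    owner: str
    description: str
    priority: int  # 1 = highest

def process_meeting(audio_path, transcribe, extract_actions, notify, ticket):
    """Run the pipeline and return the prioritized action items."""
    transcript = transcribe(audio_path)
    actions = sorted(extract_actions(transcript), key=lambda a: a.priority)
    for action in actions:
        ticket(action)                     # e.g., create a tracking issue
        notify(action.owner, action.description)
    return actions
```

The value is less in any single step than in the hand-offs: the same transcript feeds prioritization, notification, and ticketing without anyone retyping notes.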
AI Basics: Prompt Design
Like everyone else, I have been learning how to use and abuse AI. Also like everyone else, I have found it to be compelling in its strengths and weaknesses. Here are some notes that helped me get my head around what's possible and useful when working with AI.
These notes are rough, but I find myself referring to them from time to time.
Prompting Principles make inputs and outputs more consistent.
- An Instructional Example: Teach the GPT to do something: “I want you to perform this task. Performing this task looks like this [example input and output]”
- This must include a single task and example, but can be augmented with multiple examples to create a Multi-shot prompt
- Once the learned task is established, prompt the GPT to conduct the task on the new information:
- For direct replication, repetitive formatting, or structured responses: “Apply the method you just learned to this input: [insert information].”
- For generalized replication that flexes to new contexts: “Transfer the approach you used earlier to this new example: [insert information].”
See techniques below
Prompt Structure
- I want you to perform this task. Performing this task looks like this [example]
- Apply the method you just learned to this input: [insert information]
Individual Prompt Development Process
The overall process of developing a prompt will be iterative:
- “I want you to perform this task. Performing this task looks like this [example]” = [Instructional example]
- <examine output>
- Refine the output: provide more context or further describe what you are looking for
- Request a prompt: “Write a prompt that will reliably produce the output of your previous reply. The prompt must be specific enough that the output will be reliably formatted”
Abstraction layers of prompting
System Prompt
- ROLE: this is who you are. CONTEXT: here’s information to keep in mind.
- TASK: here’s how to perform this action. EXAMPLE input, EXAMPLE process, EXAMPLE output; now create the OUTPUT with this CONTEXT / INFORMATION.
- Extracting skills
User prompt
🎯 Prompting Paradigms
| Technique | What It Does | Structure | Benefits |
| --- | --- | --- | --- |
| Zero-shot prompting | Asks for task execution with no prior examples | One simple instruction-only prompt | Fast, efficient, and useful when examples aren't available |
| Prompt chaining | Builds or refines output step-by-step across multiple interactions | Sequential turns between user and model | Enables complex task execution through manageable steps, improves reasoning |
| Multi-shot prompting | Provides multiple examples in one prompt to guide behavior | One large input block with several input-output pairs | Improves output consistency and helps the model learn the pattern through repetition |
| Decomposition (Prompt Chaining for Analysis) | Breaks down complex problems into smaller subproblems before solving | Starts with identifying subproblems, then tackles each via prompt chaining | Encourages structured thinking, increases accuracy on complex tasks |
| Self-Criticism (Iterative Prompting) | Uses the model to review and improve its own output | Prompt > Draft > Critique > Revision loop | Enhances quality through reflection and iteration, simulates peer review |
| Additional Information (Context) | Provides background or external context to improve output relevance | Context block before task instruction | Increases task alignment, helps with domain-specific or nuanced evaluations |
| Chain of Thought / Thought Generation | Forces the model to “show its work” step-by-step | Prompt includes instruction to output reasoning before final answer | Improves accuracy and transparency, exposes and corrects reasoning errors |
Zero-shot prompting
Example (asking for results with no examples)
- Simple exploration
- When neither output nor logical process is known
User: Think of A. Now take A, and come up with three ways it is like B. Explain why the first reason is the most important.
LLM: [Explanation]
Prompt Chaining
Example (multiple prompts)
Best used:
- Exploratory prompting when a logical process is known but the output is not
User: Create A
LLM: A
#
User: Now take A, and come up with three ways it is like B, in priority order
LLM: [Reason 1], [Reason 2], [Reason 3]
#
User: Explain why the first reason is the most important
LLM: [Explanation]
Multi-Shot Prompting
Example (single submission multi-prompt):
Simple prompt provides a self-contained iterative example that shapes the LLM’s logic.
Best used
- When context and output is known, but replication is needed
- Starting a task that will need specific outputs
Input: “Translate ‘Good morning’ to French.”
Output: “Bonjour”
Input: “Translate ‘Thank you’ to French.”
Output: “Merci”
Input: “Translate ‘Good night’ to French.”
Output:
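A small helper can assemble a multi-shot prompt like the one above from example pairs. The Input/Output labeling is one convention among many, not a requirement:

```python
# Assemble a multi-shot prompt from (input, output) example pairs, ending
# with an open Output: line for the model to complete.

def build_multi_shot(examples, query: str) -> str:
    """Concatenate example pairs, then the query with an open Output:."""
    lines = []
    for example_input, example_output in examples:
        lines.append(f'Input: "{example_input}"')
        lines.append(f'Output: "{example_output}"')
    lines.append(f'Input: "{query}"')
    lines.append("Output:")  # left open for the model to complete
    return "\n".join(lines)

prompt = build_multi_shot(
    [("Translate 'Good morning' to French.", "Bonjour"),
     ("Translate 'Thank you' to French.", "Merci")],
    "Translate 'Good night' to French.",
)
```

Keeping the pairs in a list makes it easy to add or swap examples as you tune the prompt for consistency.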
Decomposition (Prompt Chaining for Analysis)
Leveraging the GPT to break down problems into smaller problems before prompting
Best used
- When an output is known but the process is unknown but assumed to be complex
User: "I need to determine the return policy for a customer who bought a car with specific issues. First, what are the subproblems that would need to be solved to figure this out?"
LLM: "To determine the exact return policy for this scenario, I would need to:
1. Verify the customer's identity and purchase records.
2. Confirm the specific car model, year, and its condition (e.g., the 'small ding').
3. Ascertain the exact purchase date to check against return windows.
4. Review the details of any applicable insurance or warranty.
5. Consult the dealership's official return policy for that specific vehicle type and condition."
User: "Great. Now, for subproblem 1, how would I verify the customer's identity and purchase records?"
LLM: "To verify the customer's identity and purchase records, you would typically need to perform a database lookup using their name, customer ID, or the VIN of the vehicle. This would confirm if they are a registered customer and provide their transaction history."
(The conversation would continue, solving each subproblem until all necessary information is gathered for the final return policy determination.)
Self-Criticism (Iterative Prompting)
Leveraging the GPT to iterate on a prompt output
Best used
- Output refinement
- Identifying ways to refine processes
User: "Write a short blog post about the benefits of remote work for small businesses. Keep it concise and engaging."
LLM: "Remote work offers small businesses increased flexibility and reduced overhead costs, leading to higher employee satisfaction. It can also expand the talent pool beyond geographical limits. [Initial Draft]"
User: "Can you go and check your response, confirm that it's correct, or offer yourself some criticism?"
LLM: "Upon review, my initial draft is concise but could be more engaging. I could add a call to action or a more compelling opening. I also didn't specifically mention how it boosts productivity, which is a key benefit."
User: "That's great feedback. Why don't you go ahead and implement that to improve the blog post?"
LLM: "Absolutely. Here's a revised version:
'Unlock Growth: The Power of Remote Work for Your Small Business
Tired of limited talent and high office rents? Remote work isn't just a trend; it's a game-changer for small businesses. By embracing remote teams, you gain incredible flexibility, slash overhead costs, and access a global talent pool. Plus, studies show it often boosts productivity! Ready to transform your business? Consider the remote advantage. [Revised Draft]'"
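The Prompt > Draft > Critique > Revision loop can be sketched as a function, with the model call stubbed out and the prompt wording as an illustrative assumption:

```python
# Sketch of a self-criticism loop: draft, ask the model to critique its
# own draft, then revise. `llm` is any callable from prompt to text.

def self_critique(task: str, llm, rounds: int = 1) -> str:
    """Return the draft after one or more critique-and-revise rounds."""
    draft = llm(task)
    for _ in range(rounds):
        critique = llm(
            "Review this response, confirm it is correct, or offer "
            f"yourself some criticism:\n{draft}"
        )
        draft = llm(
            "Revise the response to address the critique.\n"
            f"Response: {draft}\nCritique: {critique}"
        )
    return draft
```

One or two rounds is usually the sweet spot; further rounds tend to churn the wording rather than improve the substance.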
Additional Information (Context)
Providing overall background that is not directly pertinent to the task, but provides supporting context.
Best used
- when initiating conversation and establishing context of initial tasks
- when refining outputs from previous tasks with additional context
User:
COMPANY PROFILE:
Company Name: "InnovateTech Solutions"
Industry: Enterprise Software (SaaS)
Core Products: Cloud-based CRM and Project Management tools.
Key Values: Customer satisfaction, robust feature sets, intuitive UX.
Customer Feedback Context: We differentiate by our support, even if our initial product releases might have some early-stage bugs. Negative feedback on 'bugs' is expected but must be weighed against positive 'support' comments.
---
Please analyze the sentiment of the following customer review for InnovateTech Solutions, considering our company profile and customer feedback context. Classify it as "Positive," "Negative," or "Mixed" and provide a brief rationale.
Customer Review: "The new feature is buggy and frustrating, but the support team was fantastic and resolved my issue quickly."
LLM: Sentiment: Mixed
Rationale: The review expresses **negative sentiment** regarding the "new feature" being "buggy and frustrating," which aligns with expected early-stage bug feedback mentioned in the company profile. However, it strongly balances this with **positive sentiment** for the "support team" being "fantastic" and resolving the issue quickly. Given InnovateTech Solutions' value on customer satisfaction and differentiation through support, the positive support experience significantly mitigates the negative feature feedback, resulting in a mixed overall sentiment.
Chain of Thought / Thought Generation
Forces the LLM to provide its ‘thinking’ throughout the process, exposing any reasoning (and reasoning flaws)
Best used
- When the process is just as important as the output
- When the process will need to be explained to someone else
User: Calculate the total cost, including 8% sales tax, for the following items. Make sure to write out all your reasoning and intermediate steps before providing the final answer.
Items:
- Item A: $25
- Item B: $40
LLM: **Reasoning Steps**:
1. **Calculate the subtotal of items**:
Item A price: $25
Item B price: $40
Subtotal = $25 + $40 = $65
2. **Calculate the sales tax amount**:
Sales tax rate: 8%
Tax amount = 8% of $65 = 0.08 * 65 = $5.20
3. **Calculate the total cost**:
Total cost = Subtotal + Tax amount
Total cost = $65 + $5.20 = $70.20
**Final Answer**: The total cost, including 8% sales tax, for Item A and Item B is **$70.20**.
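One benefit of exposed reasoning is that each intermediate step can be checked directly. The arithmetic from the example, reproduced as a quick sketch:

```python
# Reproduce the chain-of-thought arithmetic so every intermediate value
# can be verified, not just the final answer.

items = {"Item A": 25.00, "Item B": 40.00}
subtotal = sum(items.values())       # 25 + 40 = 65.00
tax = round(subtotal * 0.08, 2)      # 8% of 65 = 5.20
total = round(subtotal + tax, 2)     # 65 + 5.20 = 70.20
```

If the model's stated subtotal or tax line disagrees with these values, the reasoning error is localized immediately.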
MBA Questions and Reflections
A lot of people ask me questions about my MBA. I'll try to catalog the questions and answers here. I will update this over time if I can.
Why did you do this?
I've been thinking about getting an MBA for about 15+ years. I have often felt like I have a language barrier with business-oriented stakeholders. We want to solve the same thing. We are trying to solve the same thing. We are talking about the same thing. But we still don't see eye to eye. It felt like I was speaking English and they were speaking French; languages that are technically related but not understandable by each other. And this tension kept happening, over and over, even as I got higher up the ladder.
In the last 5 years, I also felt like I had hit a career ceiling. The inability to cross this language barrier was also present when I applied for higher-level jobs; I always seemed to come up short somehow. I was having a real problem getting over the fact that I had only managed small teams, despite the fact that I had managed those teams to excellent performance.
When I explored MBA programs, I was frustrated by the commitment in time and money. For instance, pursuing a full-time MBA at our local UCSD would run over $100,000 for two years. The part-time option for professionals is closer to $120,000. BU developed their all-online program at a total two-year cost of $24,000. That's a lot of money, but stretched out over two years, it could become manageable.
Besides the curriculum, what did you get out of it?
I got a few different benefits from the program.
Intellectual stimulation: In the first few weeks, I told my wife that I could feel parts of my brain physically turning on that had been dormant for years.
Intellectual challenges: It's very good for your brain to be challenged with learning new things. In other words, it's good to be doing something you're bad at with the intention of getting better. Data analysis, learning Excel, role play in new management situations, risk identification and assessment, and all forms of strategy were incredibly challenging.
It made me a more rigorous thinker: It changed how I think about design and business. I went into the program thinking I had a decent handle on things and thought the program would give me a boost. I was eventually humbled to learn that I didn't actually have a decent handle on things (ha!!); the learning curve got steeper, but the lessons were that much more rewarding. One of the semesters is focused on understanding and addressing risk. That class gave me a mental framework that I have shaped to address the design process as a risk management strategy. That, more than anything else, REALLY resonated in job interviews and in conversations with my clients over the last year. Business stakeholders that I've worked with this year (COO, CPO, VPs, PMs) just want to be successful in turbulent times. When I framed design research, prototyping, and iteration as one tool in a broader toolbox for improving the odds of success while reducing the odds of failure, they got it. I spent the last 5 months helping a team develop a strategy for launching a new division--something I don't think I would have EVER been able to do pre-MBA. But hey, you don't have to take my word for it!
It feels good: Growing fluent in topics that were once challenging and intimidating feels amazing.
It seemed to help significantly with the job hunt: My job pursuit over 2022 was very challenging, but I can comfortably say that the ability to speak to business issues in design terms helped move me from screening call to interview. So far, I seem to get the added benefit of signaling to others that I'm not just making it up when I talk about business challenges, so that's nice.
What are you going to do with your MBA?
This is probably the most common question and it is the perfect artifact of the bullshit way we look at education in our society.
Look, a degree is not a wrench that you can use to turn a bolt. It's not a weapon that you point at an adversary. You don't get a degree and then 'do something with it.'
It's an artifact of your own past learning journey. The only thing you can do with any degree is apply yourself into the future in a new, better way.
What I'm going to do is this: do more of what I already do and do it better.
What program was it? How did it work?
Program Name: Boston University Questrom School of Business Online MBA (OMBA)
Tuition: Total program, $24,000, made in payments of $4,000 upon registration each semester, about every 4 months. Lots of students were having their tuition covered by their employers. I paid out of pocket. The cost of tuition is a tax deduction for the self-employed.
Does the program require a GMAT?: The program does not require a GMAT. Their site says it helps with your application, so I studied for it. I did Manhattan Prep and took the test in the first weird couple weeks of the pandemic when we didn't really know what was going on. I did terribly and ended up applying without my GMAT scores.
What are other students like?: A lot of other students also had 10-15+ years of experience and were trying to break through to the next level of their careers; this was a common theme. Lots of parents. Lots of managers. I think the stereotypical MBA student--the hard-charging slicky boy--was the minority of students in this program.
What Software does the program use? The program uses Blackboard for course materials, Gsuite, Zoom. For the semester on data analysis you will have to specifically use Excel. There are one-off programs for things like
Were there other materials?: You will have to buy course packs from time to time containing articles and readings, usually ~$100 or less. Some people signed up for MBA Math in advance of starting the program, which is a really good way to dust off your math cobwebs.
How big is the class? You are part of a cohort of ~400-500 people who are entering the program at the same time. There is a new cohort every semester.
Do you do group work? Your cohort of 400-500 students is split into groups of 4-6 people. You will meet with this group weekly and do group projects together.
Weekly schedule: Each week you will have
- course materials that usually consist of readings, videos, and quizzes
- a live session with the professor in Boston
- team meeting to discuss work or collaborate on group assignments (team meeting schedule is up to the team)
Live sessions are conducted once a week via zoom, with the professor in Boston and students logging in from around the world. There is one session in the morning (8am ET) and one session in the evening (8pm ET). The session material is the same in each am/pm session.
Was anyone else in UX? There was one other product designer that I talked to. She's great.
Curriculum: Other programs are structured more traditionally; a semester of accounting, a semester of finance, etc. BU's OMBA is broken into 6 total semester-long modules. Each module covers different topics on a theme. For instance, you have a semester on understanding risk and you learn a framework for identifying, assessing and managing risk, which you then apply in different practical areas; inventory and supply chain management, legal risks, product failures, financial risks, etc. Most semesters have a final capstone project that you work on as a group.
Stats on my end: I'll just acknowledge some demographic privilege here... a cis/het/married/White guy living in the coastal US. I have over 20 years of experience in the design field, about 15+ of that spent specifically in UX. I have managed designers and design processes with non-designers since 2011. I have worked independently as a consultant and in-house at companies.
How is it similar or different from an Executive MBA?
The best way to answer this starts with considering in-person MBA programs... My understanding is that an Executive MBA has two factors distinguishing it from a 'regular' MBA;
- a schedule of classes that allows for evening and weekend participation
- a more generous reliance on electives that allows for some level of specialization that builds on your career experience and aspirations
This comparison of Wharton programs is a good explainer.
For Boston University, the Full-Time MBA would be the 'regular' in-person program, while the Professional Evening MBA would fit that description of the Executive example. They even have bundles of electives for specialization in key areas of study: Health, Social Impact, and Corporate Analytics.
Now, add the online aspect to the conversation. The OMBA program has the nights-and-weekends schedule of an executive program, but it does not have the electives and specialization in its curriculum. Everyone takes the same courses.
Where did this program fall short?
There's a lot to be said about the value of an MBA. Considering in-person programs, it is often said that you build much deeper bonds and friendships with your cohort. Your professional network becomes shaped by your MBA experience.
In this program, that was not truly the case, at least for me. Meeting people only online limits the emotional potential to build strong bonds. The age of the students in this program and their position in their careers brings with it the implication that your professional network is already in place. The university does not put much energy into strengthening the OMBA community as a professional network.
This is the biggest area where the program falls short, but it represents an opportunity for any student who is even barely capable of networking. Build out your professional network within your cohort on your own terms.
Another issue is that the program only barely touched on material that is specific to the internet age. On the one hand, you get well-grounded in business fundamentals that are still relevant today. On the other hand, the case studies for learning those fundamentals are often from the pre-internet age. There was some material specific to online platforms, primarily from Professor Marshall Van Alstyne, that I found fascinating, to the point that I think the program would be enhanced greatly with a semester on just that topic. On the other other hand... many people in the program (and much of our economy) work in industries that are not built online. We had people in industries like utilities, live performance, petrochemicals, agriculture... in all cases technology and industrial digitization is a factor and was thoroughly discussed, which was great. But, as someone who has spent their whole career in tech startups, I gained knowledge of business fundamentals but sometimes felt that contextualizing that work around internet-specific business would have been more helpful.
100:1
After several months of searching, I received, and accepted, a job offer last week. Here is the story of how it went.
Starting the search
After being laid off in January 2022, I took some time off to recuperate from the ups and downs of my last job and two years of pandemic parenting, one year of which included studying for an MBA. I was still ramping back up to capacity after experiencing debilitating burnout. After the severance ran out and my initial leads went cold, I took on some freelance work, aiming for 20-30 hours a week for work, balanced against ~20 hours a week for school.
My last job search took almost exactly six months. With that in mind, and hearing that other design leaders were seeing the same window of time, I started to plan a new job search. My school work continues until December 10th, and I wanted to maintain my reduced hours while still in school. My goal was to start a new job in January 2023; however, my final semester covers design thinking and innovation, an area where I have some experience, so I assumed I could handle the workload more efficiently. If a job came earlier, I could make it work. With my MBA coming to a close, I wanted to test the idea that I could put the credential of the degree and the lessons learned in the curriculum to use: would I be a more attractive candidate? Could I interview more effectively?
I started applying in July.
Searching and refining
Over time, I wanted to work through a few levels of uncertainty to determine fit with a new position.
Uncertainty 1: the job description isn't totally accurate. If it seemed like a 70% fit, it's worth applying.
Uncertainty 2: the recruiter/screener may not have accurate information. If I get to talk to a recruiter, they may have information that clarifies the job description, but they won't have information specific to the role and the inner workings of the team.
Uncertainty 3: the organization uses different vernacular. The design field still suffers from an inability to "define the damn thing" so if I get to talk to a hiring manager, I want to get clearer on what is expected in the position and try to clear through any jargon.
In order to narrow down these levels of uncertainty, I took a broad approach to applying, looking for 70% fit. I stayed open to any possible recruiter/screening call that came from an application or a cold connection through email or LinkedIn.
I kept track of everything
I created a Google Sheet, and for every application I added a row with the company, position, and status of the application, along with some notes and the date of the last event related to that job.
I interviewed for a Director of Research position and made it to the final round before getting declined. I updated the sheet.
I interviewed for a Design Manager position and had great conversations with the hiring manager. The company sent me a gift certificate to a service that lets you pick a box of snacks to be sent to your house, as a gesture of good faith for going through the interview process. We had a final-round interview scheduled four days out. They declined me in two days, and the snacks arrived the following week. This felt incredibly strange.
I talked with recruiters about jobs that would be a $25K pay cut for me (not accounting for the effects of inflation over the course of this period of time), and we parted ways when the compensation gap became clear.
I never heard back from dozens of companies. I created another column in the sheet that calculated the days since the last action. If an application was still stale after 20 days, I changed the status to "Ghosted." I kept track of whether I was ghosted without any contact or after each step in the process.
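The staleness check is simple enough to reproduce outside a spreadsheet. Here is a minimal sketch in Python of the same logic (the company names, dates, and 20-day threshold are illustrative, based on the description above):

```python
from datetime import date

GHOST_AFTER_DAYS = 20  # threshold used in the search log

# Hypothetical application log: one row per company, as in the sheet.
applications = [
    {"company": "Acme Co", "status": "Applied", "last_event": date(2022, 9, 1)},
    {"company": "Globex", "status": "Screening", "last_event": date(2022, 10, 10)},
]

def update_statuses(apps, today):
    """Compute days since last action and mark stale applications as Ghosted."""
    for app in apps:
        days_stale = (today - app["last_event"]).days
        app["days_since_last_action"] = days_stale
        if days_stale > GHOST_AFTER_DAYS and app["status"] not in ("Offer", "Rejected"):
            app["status"] = "Ghosted"
    return apps

update_statuses(applications, today=date(2022, 10, 15))
```

The equivalent in Google Sheets would be a `TODAY() - last_event` column plus a conditional status formula; capturing events as rows instead of statuses per company would also make timing questions (average time to rejection, to ghosting) answerable.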
A lot of people moved on with other candidates.
Notes on the search
Taking a look at the search overall shows an interesting climate for design leaders.
Ghosting--never hearing yes or no from a company--remains the most common way that companies handle rejecting applicants.
Rejection emails tend to follow a template: thanks for applying, we appreciate your time, we're moving on with other candidates, we'll keep you on file.
I also realized, about six weeks into this endeavor, that I had structured the data in a way that was not the most effective. I tracked job applications by company and gave them a current status. As a result, I am only able to capture the overall state of the applications. If I had it to do over again, god forbid, I would focus on capturing events so I could get a better sense of the timing of the search. What's the average time to ghosting? or rejection? or getting a call from a job? Rejections typically come within 2-3 weeks.
The endgame
In line with my broad approach to applying for jobs, I applied to just about every company you can think of, expanding out of tech product firms to consultancies as well: BCG and McKinsey.
In August, I received an email from BCG inviting me to join their 'talent community'; I gradually realized that this was their way of issuing a rejection.
A McKinsey recruiter contacted me and asked if I wanted to have a call. We talked in September and she expressed an interest in moving forward with my application, but at a higher level than the one I had applied for. I had seven more interviews with McKinsey over the course of September and October as I moved through portfolio reviews and exercises, along with extensive questions on my personal experience.
Interviewing with McKinsey is, in and of itself, a rigorous and interesting experience. They interview differently for the design discipline than for the consulting arm. There is a pretty active little industry dedicated to consultant interview coaching. I started picking apart their advice to determine what would be applicable to my interviews and where the typical consultant interview sequence differs from the design leadership track. I prepared with materials that McKinsey posts online. I reviewed my own work and revised my portfolio case studies with the perspectives I have honed through business school. I practiced stories from my experience that demonstrated leadership, conflict resolution, flexibility in practice, and entrepreneurial execution. I told the folks at McKinsey that I had mixed feelings about the company as a whole, as it has had a front-row seat to some significant bad behavior. They acknowledged and accepted this openly. They walked me through the policies and operations the company implemented to eliminate conflicts of interest and elevate ethical practices. I was struck by how candid they were on this topic.
In the meantime, I was keeping my search going and applying for jobs. While I was having good interviews, I was driven to continue applying in part out of perseverance, and in part out of superstition that if I took my foot off the gas, then the forces of the universe would conspire to create another dead end with McKinsey. By late October, two days before the final round with McKinsey, I reached 100 submitted applications. I told my wife, "I think I'm going to take a break in applying."
After completing the final round with McKinsey, I was told I would receive the decision on Monday. It was delayed to Tuesday and again to Wednesday. Ultimately, they made an offer and after taking a day to read it over and discuss some logistics, I accepted.
The search is over
I take some comfort that the MBA credential and the lessons learned in business school made a positive impact. I was able to speak on design topics and relate them back to business objectives more effectively than I possibly could have two years ago, before the program. The MBA experience has sharpened my perspectives on the role of design and given me new rigor in my thought processes.
I read somewhere that McKinsey only extends offers to 1% of applicants. And my job search resulted in only 1% of applications yielding an offer. And on top of that, I am not McKinsey material on paper. I didn't go to an Ivy League school. I didn't graduate from a top ten business school. I have not worked at a Fortune 100 company. How did I get here?! Onward and Upward!
Design is a Risk Management Strategy
Design leaders need to advocate for their teams and their practice on a near-constant basis.
My perspective on accomplishing this has really shifted in the last couple years. For a long time, I’ve talked about how close we are to our customers, push for aspirational levels of design quality, and execute a robust process. I never felt like that landed with business stakeholders and I was burning myself out trying to convince them.
More recently, in the last year especially, I’ve stopped framing it that way. I now argue that the business is facing significant risks:
- Product failure in market: people can’t use the app/site the way they need to
- Service failure in market: people can use the app/site, but it doesn’t deliver what people need
- Product misuse in market: people use the app/site in unexpected and unfavorable ways
- Failure to differentiate: the experience of the app/site/service fails to differentiate from competitors, driving the value of the company's strategy down
A strong product design practice is essential to reducing these risks as part of the company's overall risk management strategy. Any one of these risk areas can result in loss of market share, litigation, brand equity loss, and decline of company performance. Design methods for identifying customer needs, testing ideas, prototyping concepts, and validating experiences provide rich feedback that reduces these risks in a controlled environment before incurring risk in the market.
Teams that launch products without establishing a strong design practice--and without the valuable feedback loops it puts in place for refining the product pre- and post-launch--take on risk that could tank the company. The design team's contribution to reducing this risk is how we win as a team. This kind of risk-taking is overly lauded in startup hustle culture, which saturates many corners of the business world to, I believe, its detriment.
When I explain this approach to business stakeholders, they get it. They can visualize the impact of reducing risk. We can have more productive conversations about trade-offs as we refine the design process. It's also my job to understand the pressures to deliver and how I can modify the design process to achieve the speed to market we need. These adjustments to process are better understood by other stakeholders when the trade-offs in managing risk can, at a minimum, be discussed. Rarely does this risk need to be quantified, but it can be if needed.
On an individual level, I highlight when any individual contributes to this, embodies company values, or just does something rad. This has to be fine-tuned to accommodate the ways that individuals and team cultures respond to praise, attention, and individuation, but the positive feedback has to happen somehow.
Collaboration tips for a remote work world
Here are a few things that I have picked up that help keep the design process moving when you can't be in the same room putting post-its on a whiteboard together.
Set expectations with an agenda. Before you send out a meeting invite for any Design Thinking session, draft an agenda that FULLY explains the process and the expectations for the session. One of the expectations? It's OK to make mistakes, to go off on odd ideas, to explore new spaces—say it up front.
Assign a facilitator and a scribe in advance. When someone is running the meeting, that's a job in itself. It is difficult for them to also be a facilitator and an effective contributor. This can be tricky, but it makes for a much better meeting. Same with the scribe—assign someone to take notes.
Over-communicate in both channel and timeframe. The meeting invitation is a communication channel for the agenda, the expectations, and the process. So is email. So is slack. So is the meeting itself. Embrace the strengths and weaknesses of remote work by reiterating the schedule for the session and the value of the session in different channels so your participants can capture that information on their own time.
Break up the activity of the agenda between group collaboration and individual work. Research has shown that creative brainstorming or divergent thinking is best done alone. But research has also shown (featured in one of our readings this week) that 5 people can end a meeting with 5 different interpretations of a problem. Build on these findings early in the Design Thinking process.
- Have participants write their understanding of the problem in advance of the meeting
- Come together as a group to share these problem statements and come to alignment with a How Might We statement
- Then, for the initial act of divergent brainstorming, have everyone go off camera and mute for 15 minutes, draft their own ideas
- Come back together to share ideas (break into groups of 5 or less if needed)
Alternating between collaborative and solo work will be more productive and can be done very effectively through remote tools, rather than having everyone move from collaborative spaces to solo spaces.
Help ward off the vulnerability of collaboration with warm-ups and humor: When participating in divergent thinking, most people feel vulnerable. Sketching, brainstorming, divergent thinking—these aren't always everyone's strong suit, and it can be uncomfortable. This sense of vulnerability can, for some, be worse remotely, where a sense of being 'alone' colors the working environment. I usually kick off with a warm-up exercise to get everyone going, like discussing a favorite local restaurant and what makes it great. When sharing remotely, I sometimes use sound effects (applause, fanfare, 'ooos and ahhhs') to help add some levity to the circumstances.
Use Collaborative Documents as Creative "Spaces" in ways that you can't in real life. While BU recommends MS Whiteboard, I can't speak to this tool personally as I have never used it. I can, however, speak to Miro, Mural, and Figjam as options for team collaboration. Some of these have features that you couldn't take advantage of if you were doing the session face to face. For instance, when using Miro, you can make a list of ideas in a separate text document, then copy and paste that list into your Miro board, and the text will be instantly converted into sticky notes that you can use in your collaboration exercises. All these platforms also let you attach links to your notes, so you can include contextual references and documents. All of that would be hard IRL.
… and these are not remote-specific, but they will help you conduct your sessions smoothly (depending on your company culture)
An important rule: no job titles. I have seen benefits in abolishing job titles and hierarchy for the duration of any collaboration session, divergent or convergent. Good leaders are usually amenable to me saying something like "Tom, I know you're the boss, but not for the next 90 minutes. You can go back to being the boss after lunch."
Take a minute to praise 'outsiders' in your cross-functional participants. If you have the good fortune of bringing someone into a session that you don't usually get to work with, then call it out; thank them in front of everyone. For instance, at one firm where I worked, it was very difficult to get customer service representatives to participate in these sessions due to some organizational constraints. If I did get one or more of them involved, I always called it out and thanked them for their time and their expertise.
Set up incentives for productive behavior and disincentives for unproductive behavior. Have you ever used a "swear jar," where anyone who swears has to put $1 in the jar? I am facilitating a session in a few weeks, and I am considering a "No jar": anyone who says No to an idea during the convergent session has to Venmo me $5 (which I will donate to a charity of the team's choosing at the end of the session).
Where would you begin with this team?
Let me know if you've worked on a project like this.
Think of a product team that's working on some kind of online platform; we'll call them TechSite. Someone on the TechSite team, a leader of some kind, decides that an initiative is important and declares it a priority to launch a new product. The team creates a prototype of some kind and reviews it internally. They come to agreement on the direction and the leader advocates for launching it. It doesn't do well--not a total failure, but customers are ambivalent. The leader believes the core of their idea is in place and wants to use testing as a means to improve on the project. The team creates a variety of A/B tests, multivariate tests, and surveys. The tests come back and they don't point to a conclusive direction; winners are not clearly statistically significant. The surveys come back, and the responses don't point to a conclusive direction; sometimes responses target unrelated issues. The leader shifts the team's priorities to a more urgent matter, but there's no intention to sunset the product, and it lives on in a state of suspended animation, launched and partially improved. The team is left with a sense of missed opportunity, and TechSite customers don't really get much at all.
This scenario is representative of so many projects that I have either heard about or been a part of that I am convinced that it has become a standard operating procedure for many product development teams. The team is trying to move quickly, make bold decisions, make big bets, and conduct testing on customers. These are the ubiquitous tenets of a nimble product team, aren't they? Why is the outcome so poor?
Why are teams of smart, hardworking people making such lame-ass products?
It's the leader's fault for pushing their agenda! It's the team's fault for weak execution! It's the team's fault for not designing a better solution or writing better code! It's the customer's fault for not understanding how to use it!
From my perspective, the team never stood a chance, not even the leader. The team thought they had what they needed to succeed, but one of the inputs into their process was insufficient and without it, the team could never make something that their customers would be willing to pay for.
What do you think? If you could fix one thing with this team, what would you address first?
Managing risks beyond the product
Without risk, there is no opportunity.
Any business, any organization--even any individual person--will face their own responsibility for identifying risks, assessing the probability and consequences, and managing the risk to the best possible outcome. Some people are better at it than others, and some firms are better at it than others.
Designers can find themselves in a position to identify and assess risks that are incurred when the product meets the public, often before anyone else, but taking action to manage these risks requires added perspective.
Risk in the organization
The field of Risk Management has developed as a rigorous, quantitatively rich practice within many organizations that leads the charge of identifying potential risks, assessing their impact, and managing them through tools like internal policy changes, strategic activities, financial instruments, or insurance policies to name a few.
There is much to be said on the types of risk affecting an organization, however for a firm with a consumer-facing product, one of the risks with the highest consequence is Product Failure Risk. If the product fails to operate properly and this results in harm to the user of that product, then it could be examined to see if the firm is legally responsible for that harm. To determine legal responsibility, the court would first determine if the firm had identified a better design for the product at some point, and secondly whether the firm made a decision not to implement it. It is worth noting that, in the legal context, a design decision is really anything impacting the product and service experience, not just visually-impactful or brand-reinforcing design decisions.
Businesses typically assess product failure risk from the perspective of the firm: if the product fails in market, what's the likelihood that there will be consequences and how severe could those consequences be? This insular perspective, evaluating risk only in the context of the firm itself, is reinforced by the concerns of multiple parties. Investors may be concerned that a product failure will jeopardize their likelihood of seeing a return. Managers will be concerned that this will affect the bottom line. Marketing and Sales leaders may be concerned that this will affect the brand and reputation.
This risk is typically managed by enforcing a feedback-rich and effective product development methodology with rigorous internal and external quality control. This assures that the best possible product reaches the market.
MBA students around the world are steeped in this perspective and take this approach with them as they lead organizations.
In reality, there is further complexity to the landscape.
Designers see beyond the firm
Most effective design organizations have an intrinsic commitment to identifying, assessing, and managing risks. This is ingrained into the organization through the use of a feedback-rich product development process that helps identify potential risks, assess their severity, and develop strategies for managing them before the product is developed for the market.
In any scenario where a designer gathers feedback on a work in progress, this work can be evaluated against the risks facing the firm. More than most disciplines within an organization, designers are uniquely positioned to identify the probability and severity of harm that a product can inflict on users and other parties. The design process, done well, absorbs much of this need for risk management.
In launching a product, especially in consumer-facing software, which typically includes pilot launches, beta programs, user acceptance testing, and other observational methods, designers are often faced with a conundrum: in using the product successfully, another problem may be created.
In using AirBnB correctly, hosts contribute to shifting real estate market conditions by setting price points that make AirBnB rentals more attractive.
In using Facebook or YouTube correctly, individual users may be radicalized when algorithmic recommendations expose them to content that builds upon their outrage.
Neither of these examples is a product failure--AirBnB, Facebook, and YouTube all function properly. However, in operating properly, these products have, in turn, inflicted harm.
Handling the moment when the risk of harm is identified
A Designer's handling of this discovery is critical. In many cases, my own career included, the Designer is often cast as a Cassandra within the organization, warning whoever will listen, but rarely believed. A friend of mine put it well, "I feel like the scientist in the Godzilla movies who figures it out before everyone else, then runs around yelling and dropping his papers."
Many designers and design leaders have felt the same way. When designers talk amongst themselves, much emphasis is placed on working in “the right organization” or one that “listens to designers.” The question remains: are designers speaking the language needed in order to be heard?
To communicate the risk of harm effectively within the organization, any discovery of harm calls for answering a few questions:
- If harm is discovered, is there a venue for articulating identified risks back to business leaders? If not, then why not?
- Is the risk of harm communicated through factual observations?
- Is the impact of the harm understood? Is the probability and severity of the harm understood? What are they?
- Is the harmed party a willing participant in the transaction? Or is the harm inflicted through externalities?
- Does the impact of the harm translate back to the company’s strategic goals? Does it jeopardize activities central to the company’s strength? Does it jeopardize differentiation from competitors?
- Are there recommendations for reducing harm? Is the origin of the circumstances understood? What are they?
- Do you have a recommendation for the timeline of addressing this issue? What questions do you (designer) need answered to make a recommendation?
The question is often where the firm's responsibility ends, and whether the harm inflicted through the product or resulting externalities should be addressed. Framing an identified risk in this manner--as something that starts externally but comes back to the firm--can help designers be more effective at benefitting their users while providing opportunities for the firm to build lasting benefit and value for their customers.
I wouldn't call it a fair fight
Unfortunately, this is not the norm, and it is not internalized in most organizations. Addressing potential harm is often dismissed; it's a matter of company culture and leadership principles. Addressing harm is often seen as capitalizing on an opportunity rather than righting previous wrongs.
This perspective is endemic in academic and legal perspectives on business. I asked a business professor, "What happens if there is risk created by the proper use of the product?" and was met with an ambiguous answer: "In the event of externalities, regulators typically step in." This professor provided an excellent class that has greatly improved my understanding of risk; to be clear, I am not offering an indictment of the professor. Instead, consider that academics and business leaders alike view the firm's responsibility as having a tighter boundary, and companies like Facebook and AirBnB actively avoid addressing the harms that fall outside it.
To that end, I don't think there can be any realistic expectation that corporations will be held liable, legally or culturally, for the harms their products inflict on the economy and society. Oil firms will not be held responsible for pollution to the degree that it has impacted our climate. Firearms manufacturers will not be held responsible for harm from gun violence. Will tech companies be held responsible when their products are used effectively to nefarious ends? I don't expect it.
Nonetheless, Designers have an opportunity, perhaps a unique opportunity, to lead the charge within organizations in launching products that achieve objectives while eliminating or minimizing potential external harm.
Career Values
The Knowdell Career Values Card Sort is a straightforward online card-sort tool that helps prioritize which aspects of your career you value and which you might pursue. I completed this assessment, among many others, as part of my MBA program. My report indicates the following variables as key to my work satisfaction, in order of importance.
- Advancement: Be able to get ahead rapidly, gaining opportunities for growth and seniority from work well-done.
- Environment: Work on tasks that have a positive effect on the natural environment.
- Challenging Problems: Engage continually with complex questions, demanding tasks, trouble-shooting and problem-solving as a core part of my job.
- Influence People: Be in a position to change attitudes or opinions of others.
- Time Freedom: Have responsibilities at which I can work according to my time schedule; no specific working hours required.
- High Earnings Anticipated: Be able to purchase essentials and the luxuries of life that I wish.
- Creativity: Create new ideas, programs, organized structures or anything else not following a format developed by others.
- Help Others: Be involved in helping people directly, either individually or in small groups.
The test indicates a second tier of variables as well, including Independence, Authority, Work-Life Balance, Family, Fun & Humor, Work with others, Creative Expression, Intellectual status, and others.
I'm using these values to evaluate potential opportunities, grading each value on a scale of 3, 2, 1, 0, or -1.
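To make that grading concrete, here's a minimal sketch of how I tally a score for an opportunity. The value names come from my report above; the example opportunity and its per-value grades are hypothetical, not a real evaluation.

```python
# Score a potential opportunity against career values.
# Each value is graded from 3 (strong fit) down to -1 (works against the value).
VALUES = [
    "Advancement",
    "Environment",
    "Challenging Problems",
    "Influence People",
    "Time Freedom",
    "High Earnings Anticipated",
    "Creativity",
    "Help Others",
]

def total_score(grades: dict) -> int:
    """Sum the per-value grades; each grade must be 3, 2, 1, 0, or -1."""
    assert all(g in (3, 2, 1, 0, -1) for g in grades.values())
    return sum(grades.get(v, 0) for v in VALUES)

# Hypothetical example: a design leadership role at a climate-tech startup.
example = {
    "Advancement": 2,
    "Environment": 3,
    "Challenging Problems": 3,
    "Influence People": 2,
    "Time Freedom": 1,
    "High Earnings Anticipated": 0,
    "Creativity": 2,
    "Help Others": 1,
}
print(total_score(example))  # 14
```

Ranking a handful of opportunities by total is crude, but it forces the comparison back to the stated values instead of gut feel.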
What assessments reveal about me
Over the course of my MBA, I have taken a series of assessments to determine strengths, weaknesses, and skills. These are mostly provided by my program in an effort to help every student become more attuned to their abilities and strategize ways to address their weaknesses, whether that means taking on new skills or developing ways to cope with shortcomings.
Summary
Highlighting strengths...
My peers recognize that I am a strong written and spoken communicator, in addition to being a dependable listener who is concerned for others.
I resolve conflict that arises in our team or project work by modeling effective, supportive problem solving and prioritizing collaboration. I work with the team, or individually, to find effective solutions.
I have fairly high emotional intelligence, with specific strengths in understanding others' emotions, problem solving, managing conflict, and staying true to my values.
I have a reliable ability to read people’s facial expressions and emotions, key for remote environments.
I am an experiential learner, relying on my senses to interpret information and act quickly.
I am motivated by external factors like results and rewards.
Building on opportunities...
I am augmenting my abilities and deepening my learning with greater analytical skills gained through continuing education.
I am committed to supporting my team's development through collaboration and engaged feedback, in addition to engaging in more strategic development.
I am personally less motivated by internal factors like mastery, purpose, or meaning; however, I seek to lead a team that maintains its own sense of intrinsic motivation.
360 Feedback
Rounds of anonymous feedback were conducted in September 2021 using the SpiderGap360 platform. Participants included managers and peers.
Top 5 strengths
- Communicating clearly
- Self Development
- Listening
- Concern for others
- Writing
Top 5 areas to improve
- Collaboration
- Developing strategy
- Giving feedback
- Prioritization
- Support others' development
These results were troublesome for me, because I have a track record of being strong in areas where the feedback said I needed to improve.
This is likely reflective of the time when the feedback was given. The company was going through some turmoil, and we had turnover at every level. My own assessment of the areas I need to improve varied a bit from the feedback. For instance, I gave myself low marks on "support others' development" because, amid the turmoil and turnover, I felt responsible for not providing a career path that would retain team members. I thought I was being a strong collaborator, but that clearly wasn't the case; in hindsight, it's a theme in other areas, where I volunteer to do work and get it done, but perhaps not in a collaborative fashion in the moment. Since then, I have become more proactive in seeking out and structuring collaboration activities.
Conflict Intelligence
Conflict Intelligence is a mechanism for gauging how people navigate different types of conflict across different power dynamics. It is conducted by the Morton Deutsch International Center for Cooperation and Conflict Resolution at Teachers College, Columbia University. It's available for free.
Taking the test, you are given a short case in which a conflict is taking place and provided with five options for how you would respond. My tendencies appear below, in order of how likely I am to respond to conflict in each way. I also included how I differed from common trends identified by the Center for Cooperation and Conflict Resolution's research.
- Benevolence: An Active-Cooperative Orientation--This is an approach in which people take responsibility for problems and engage in constructive leadership behaviors (such as being a good role model and engaging in group problem solving). This is usually associated with feelings of genuine concern for the other party in the conflict. (This is my top score, meaning it is my most likely response to conflict. I am roughly just as likely as most people to respond this way)
- Autonomy: An Orientation of Independence--This is an approach in which people are mainly focused on achieving their own goals, so they seek to disengage from the conflict and find ways to unilaterally achieve their goals outside the conflictual relationship. This strategy may be related to stronger feelings of indifference to the conflict. (This is my second highest score, meaning it is another common way that I respond to conflict. I am far more likely than most people surveyed to respond this way.)
- Support: An Orientation of Willing Support and Dependence--This is an approach in which people value the support of the other disputant and engage in respectful followership behaviors, including seeking clarification from the other and attending carefully to those in positions of higher power. It is worth noting that situations of cooperative dependence can also induce a sense of anxiety and confusion in response to conflict. (This is my third highest score, meaning it is another way I tend to respond. This is roughly in line with most people surveyed)
I found the results of this somewhat conflicting. In another assessment I took during a class at Georgetown, the Thomas-Kilmann Conflict Mode Instrument, I also tested very, very high--highest in the class, if I recall--in demonstrating 'collaborating' behaviors in response to conflict. This means my response is "both assertive and cooperative—the complete opposite of avoiding. Collaborating involves an attempt to work with others to find some solution that fully satisfies their concerns. It means digging into an issue to pinpoint the underlying needs and wants of the two individuals. Collaborating between two persons might take the form of exploring a disagreement to learn from each other’s insights or trying to find a creative solution to an interpersonal problem." I want us to see eye to eye, and find a way to resolve conflict by making 1+1=3.
It was interesting to me that I showed an Autonomy tendency in the more recent Conflict Intelligence assessment, especially as it was so far out of band from other test respondents. In reflecting on these results, I feel they almost form a narrative for the nature of collaboration in the face of conflict: I can help you, I also have my own needs, I support your needs, too.
Emotional Intelligence
Emotional Intelligence is defined as:
...an ability to recognize the meanings of emotions and their relationships, and to reason and problem-solve on the basis of them. Emotional intelligence is involved in the capacity to perceive emotions, assimilate emotion-related feelings, understand the information of those emotions, and manage them.
Mayer et al., 1999
This test is commercially available through Psychology Today. It is considered critical for leaders, as it is necessary to manage conflict and establish working environments where it is safe to share perspectives and foster growth.
This was an interesting one for me. Overall, this is a pretty good score, including high rankings in:
- Emotional Understanding (89/100)
- Awareness of Strengths and Limitations (92/100)
- Problem-Solving (100/100)
- Emotional Integration (94/100)
- Conflict Management Knowledge (95/100)
- Conflict Resolution Behavior (95/100)
- Values Integrity (92/100)
These and others indicate that my overall Emotional IQ is good, and that I can recognize others' emotions effectively and perform well in other areas where emotions must be detected and managed.
Naturally, there are areas for improvement, cited in the "Limitations" portion of the report:
- You seem to struggle to act independently
- You are somewhat flexible
- Your impulse control is satisfactory
- You do not ruminate excessively, which is good, but you also may not give things sufficient thought
- You show some self-control (this one might be my favorite)
This is an interesting one to me, as it is a conundrum of leadership: the tension between moving with consensus and moving independently.
Another interesting component for me: my results on this test have changed significantly since the last time I took it, in September of 2021--and a lot has changed since then, including losing my job two weeks ago. I remember experiencing significant burnout at the time, even depressive symptoms, and wondering whether it would have a negative effect on my outcome. I feel like I have my answer: now, in the process of recovering from burnout, I'm becoming better attuned to my emotions, which I feel will provide a better foundation for my next step, whatever that will be.
The Kolb Experiential Learning Profile
The Kolb Experiential Learning Profile, or as it is swimmingly known, the KELP, helps reflect back to you a comprehensive way to understand how you learn. It creates a "kite" (which I would call a 4-point spider diagram) that illustrates the degree to which you engage a certain portion of the learning cycle. It's tempting to say that it "tells you how you learn" but what it really tells you is your go-to strategy for learning, while still recognizing engagement in the overall cycle of learning.
This video explains the cycle: you experience the environment, you reflect on what you observe, you think through conclusions, and you act on the decision. In reality, life is a little messier, and we each have preferences and tendencies, based on many factors in our lives, that affect our preference for learning.
It's clear in my results that I am an experiential learner: I learn from being there, taking it in, feeling and observing with my senses. This also means I would benefit from improving the opposite tendency, the thinking side of the cycle.
These results make sense to me. In a narrative sense, I would experience a situation, reflect on it a bit, and take action on my reflections. To balance this learning cycle, I would need to engage the "thinking" phase of the cycle more. The test identifies this approach as a "flex" strategy and highlights activities and opportunities that would help improve my learning cycle. In particular...
Learning strengths
- Building deep personal relationships
- Strong intuition focused by reflection and action
- Open to new experiences
Learning challenges
- Understanding theory
- Systematic planning
- Critical evaluation
I'm optimistic here, because I have been doing just that, thanks to the methodologies I have learned recently that give me a greater analytical approach to problem solving and learning. I have since found that I am enjoying the challenge.
Motivation
This one isn't publicly available, but was made possible through a professor in one of my grad school classes.
Through a series of questions, this tool identifies the extent to which you are motivated by intrinsic factors and extrinsic factors. It was scored only in relation to other people taking the survey in the same class, rather than on a defined scale.
Intrinsic motivation: 5 (mean: 5.66, SD: 0.93). This means my intrinsic motivation was somewhat lower than the rest of the class, about 0.71 SD below the mean.
Intrinsic motivation factors include:
- Autonomy
- Mastery
- Purpose
- Control
- Growth
- Meaning
Extrinsic motivation: 5.42 (mean: 4.82, SD: 0.86). This means my extrinsic motivation was significantly higher (about 0.70 SD) than the others in the class.
Extrinsic motivation factors include:
- External benefits
- Compensation
- Rewards
- Avoiding punishment
- Results
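For what it's worth, the comparisons above are just z-scores: the distance between my score and the class mean, measured in standard deviations. A quick sketch using the numbers reported above:

```python
def z_score(score: float, mean: float, sd: float) -> float:
    """How many standard deviations a score sits from the group mean."""
    return (score - mean) / sd

# Intrinsic: my 5.0 against the class mean of 5.66 (SD 0.93)
print(round(z_score(5.0, 5.66, 0.93), 2))   # -0.71
# Extrinsic: my 5.42 against the class mean of 4.82 (SD 0.86)
print(round(z_score(5.42, 4.82, 0.86), 2))  # 0.7
```

So the intrinsic and extrinsic scores sit almost symmetrically around their respective class means, roughly 0.7 SD below and above.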
This is an interesting one to think about as a designer. I think a lot of designers land in intrinsic motivation, in a professional sense, in that they are driven by craft.
DANVA
This was an interesting one. The Diagnostic Analysis of Nonverbal Accuracy measures your ability to read facial expressions. Over the course of the test, you look at pictures of people's faces and select from a list of emotions which best matches the emotion depicted in the photo.
I came out average with an 18. Looking for more information on the scale of results, I couldn't find any additional details; it looks like the psychology researchers who ran the test at Emory University may have moved on.
Burnout Recovery
Last week, I was let go in a re-organization. It was a pretty great ride, and the people I have met during my tenure there have been remarkable partners and coworkers. No hard feelings.
The reality is that I've been dealing with pretty significant burnout for about... 9 months, maybe longer. Eventually, I read articles like this one, identifying the five stages of burnout, and I couldn't deny seeing myself in the later stages: stage 4, teetering on 5.
While I have grappled with the realities of significant burnout for a while, I have struggled with the idea of addressing it. That's the funny thing about burnout: it is inherently slow-moving, creeping, cunning. And all the while, while parenting in a pandemic, you're also telling yourself "this is just a one-time problem" or "it will change" or "I should be able to deal with this."
The most significant hurdle is that I have a family to support and that comes with responsibilities that I can't ignore. In other words, I always knew that a departure from work would be necessary for healing, but I couldn't just quit and figure it out later. That's part of the reason I'm actually so grateful that I was laid off, as though fate made a choice that I couldn't bring myself to make. The most important thing now is to make the most of it, both in recovery and in the next opportunity.
I'm still putting the pieces together, but a plan is starting to take shape, based on a framework in this article. I'm still refining it with more insight. For instance, the framework is aimed at someone who is still working: they need to recover from burnout while employed, and then figure out how to make a change and leave their job. Lucky break for your boy here--that part took care of itself. So here's where I am at the moment.
Stage 1: Admitting there's a problem
Gotcha. Next.
Stage 2: Distancing from work
Hey, guess what? Next.
Stage 3: Restoring health
OK, here's where I'm already in motion. At time of writing, I'm 25 days into a plan to drop weight, get stronger, and meditate often.
Here's a screenshot of the Google sheet I use to keep myself honest.
Meditation has been the hardest habit to start and maintain. Meditation is also a proven burnout remedy, so I'll be working to dial this up. I'm also proactively trying to sleep better and more. This one may take a while.
Stage 4: Questioning values
Up next, digging back into assessments from business school that outline what you're good at, what you're not good at, what you value in your professional life. Time to get dialed in to all of these things.
One assessment, the Knowdell Career Values report, outlined the values I look for in work, in order of importance:
- Advancement: Be able to get ahead rapidly, gaining opportunities for growth and seniority from work well-done.
- Environment: Work on tasks that have a positive effect on the natural environment.
- Challenging problems: Engage continually with complex questions, demanding tasks, trouble-shooting and problem-solving as a core part of my job.
- Influence people: Be in a position to change attitudes or opinions of others.
- Time Freedom: Have responsibilities at which I can work according to my time schedule; no specific working hours required.
... along with a few others. Interesting!
I'm digging into other assessments like the Kolb Experiential Learning Profile, Emotional Intelligence test, 360 feedback, and Conflict Intelligence tests.
Also interesting: my perspective on the role of Design in an organization has been evolving, due in large part to my experience in school studying organizational development, operational performance, and risk management, and I'm probably going to write about that soon. This may be an important value to pursue.
Stage 5: Exploring work possibilities
At this point, this one comes with a mixed bag.
On the up side, I was flooded with well-wishes and interest when I announced that I was laid off. Additionally, the job market is white hot right now.
On the down side, no matter how hot the job market, design leadership positions are rare and competitive. Now, with top-tier companies offering strong salaries to combat inflation and stave off competition from other companies, the positions are less rare but seemingly more competitive. So the environment is objectively better than it was a few years ago because there are more positions, but qualitatively, this might be a tough ride.
Either way, I still have to build a more recent portfolio and reframe it around my current experience and perspective. I have a lot to say about design and business these days, thanks to ongoing lessons through my MBA.
I think this article from HBR also makes a good point: in a work context, you have to make an assessment of what you can change and what is fixed. In a life context, nearly everything can be changed on a long enough time horizon, so the considerations may vary.
I'm also giving a lot of thought to some points later in the same HBR article regarding relationships, and how important they are to dodging burnout. I'm actually flashing back to my old agency days, where we worked insane hours under unpredictable conditions, but we were thrilled to do it because a group of us were so tightly knit that we would do it together. It kept the burnout at bay. My life overall was very different back then, though, so the pieces may not fit together the way they once did.
Stage 6: Making a break, making a change
I've already made the break from the old company. The next thing to figure out is what direction matches the values above.
There is much to do to make this a legitimate plan, so I'm working through that in between sessions on my portfolio and homework for my integrated risk management class. But plans, for me, are good. I'm a planner. Even if they change, a plan helps me put a frame around the unknown in ways that make it real.
Now to decide what's real...
Avon Calling
We used to go here in the summer, to the Outer Banks. I have so many wonderful memories of time spent on the beach and in beach houses and at mini golf and getting ice cream and eating crabs and so many more things.
One year, maybe in the early 2000s, I remember hearing on the radio that an environmental study had found that the ocean would encroach on the Outer Banks. It was not a matter of if it would happen, but when and by how much. 20 years? 30 years? 2 inches? 2 feet? There were ranges in the report, and none of them good. But the radio piece focused not on the severity of the report, but on the fact that the state government declined to formally accept the report as facts upon which to guide their policy, instead considering it an 'advisement' or something. I wish I could still find that report.
It stuck with me then, that facts are to be 'accepted' by those in positions of power. Or, to put it another way, that those in positions of power have any sort of ability to decide on facts.
If I recall correctly, it was the real estate and tourism industries that lobbied to ignore the report, fearing it would scare people away and therefore drive down real estate prices and tourism opportunities. The irony of course is that they were correct to be fearful.
I wish things were different.