Right Kind of Wrong
- Author: Josh Haines (@joshhaines)
Introduction and Author
This is the second book I've read from Amy Edmondson. Her first book, The Fearless Organization, was amazing and was one of two books that introduced me to the field of Psychological Safety. The other was The Culture Code by Daniel Coyle. Amy's writing was impeccable, and her ability both to explain the science and to persuade the reader of the importance of the findings was powerful. She's currently a professor at Harvard Business School. My friend William Belcher is attending HBS and told me he walked past her office the other day. I probably would have gone full fangirl and taken a selfie with her door name tag or something.
Either way, that book was powerful as I worked towards improving how I lead teams. This new book promised to shed light on failing well with the same clarity and impact. The book is divided into two sections: The Failure Landscape and Practicing the Science of Failing Well.
The Failure Landscape
In the first part of the book, the author works to explain the landscape of failure and chart some waypoints. She discusses three kinds of failures: Intelligent, Complex, and Basic.
Intelligent Failures
Intelligent failures are the best place to learn and make progress. They are all about embracing reasonable risks for the reward and knowledge an intelligent failure can offer. A failure can be intelligent when it meets four criteria:
- It takes place in new territory
- It helps you work towards a goal
- It is informed with available knowledge
- It is as small as possible while still providing valuable insights
She also uses intelligent failures to describe a bias for iteration. I love this way of looking at failures. Good failures should be almost expected, with small blast radii. If you've done a good job up front, then when things go wrong it should be clear how to use the new knowledge to run the next small experiment and get closer to your goal with each iteration.
Basic Failures
Basic failures are next. This is the domain of errors, mistakes, and oopsies! The key point is that these are unintended and likely could have been prevented with care in the moment. They tend not to be hugely impactful (although some are!) and are a good source of learning. Basic failures tend to have some consistent characteristics:
- Occur in known territory
- Have single/simple causes
- Stem from inattention, distraction, or neglect
- Involve incorrect assumptions
This tends to be a great place to build a practice of blameless reporting or blameless postmortems. On our team, we have written blameless postmortem documents for important and potentially serious mistakes and failures. We've also written up simple common mistakes and documented better practices. Building the muscle of reporting, owning, and sharing our failures widely builds our habits for when serious failures happen and reporting could be more visible, impactful, or potentially embarrassing.
Complex Failures
Complex failures tend to be the result of many small things going wrong at once. This reminds me of the commonly referenced “Swiss Cheese” model from Dr. James Reason in the 1990s, which is common in the aerospace industry (among others). In this model, safety is a multi-layered process of known vulnerabilities (holes in the cheese) with processes designed to ensure the holes never align and daylight cannot get through. I've had this model drilled into my head from my years in aerospace at both Honeywell and Rolls-Royce. The author notes this model in the book as well.
From the book, complex failures share common attributes like:
- Have many potentially small causes
- Take place in familiar settings
- Are preceded by small, missed warning signs
- Often involve external and/or unusual factors
One final note: she discusses the concept of embracing false alarms. I really liked this idea of practicing recognizing small warning signs, knowing that most won't turn out to be real or valid. If we embrace and react to small warning signs and make that part of our normal behavior, we have a chance of avoiding a complex failure because the warning signs preceding it won't get overlooked. This is great.
Practicing the Science of Failing Well
Now that we have an understanding of the three major types of failures, the second part of the book goes into detail on what to do about failure and how to ensure we're maintaining a healthy relationship with it. Below are two high-level points from this section that hit home for me:
React Slowly
The author uses this section to make the mandatory offering all recent research books make (and rightly so): a reference to the pivotal Thinking, Fast and Slow by Daniel Kahneman. I won't belabor the point except to say the reference hits home perfectly; few things drive a fast and wrong reaction more than a failure. Failures put us into intense emotional states and trigger some of the most powerful animalistic emotions we have, e.g. fear, shame, vulnerability, embarrassment, and anxiety. By purposely training ourselves to react to failure using the high road of Slow thinking, we can see the failure in the proper light and gain the most from it.
NOTE
As I'm typing this, I'm reading Brené Brown's Dare to Lead and thinking about her past books on the power of shame (e.g. Daring Greatly), and considering how strongly those three concepts are connected: Shame/Failure/Fast Reactions. Powerful stuff.
Culture & Failure Sharing
It's easy and common to talk about failure these days, and you frequently hear phrases like “move fast and break things” or “failure culture,” but actually living those values is difficult. Growing a team to the point that it becomes normal to share failures openly, without fear of retribution or retaliation, is a goal any decent leader should push for. In some ways, achieving this level of safety is a mandatory guidepost on the way towards true innovation. This feels like a simple thing to say and understand, but an extremely difficult thing to achieve.
Wrap Up
Overall, this book was an absolute powerhouse of value and insights. I've come to expect nothing less from Amy Edmondson, and this book didn't disappoint. I haven't yet read her other book, Teaming, but it's high on my backlog. Although I read this through my software/tech/product-building lens, I feel like some of my friends and colleagues in more traditional engineering and product-safety roles would also get a lot from this book.