Saturday, June 25, 2016

Four Rules for Giving Terrible Feedback

Great sports analogies focus on players and coaches but rarely on referees. Whether it's Thursday evening youth lacrosse or Sunday afternoon Premier League or NFL football, referees of all stripes (pun intended) get enormous amounts of immediate and overwhelmingly negative performance feedback. They have a lot to teach us about terrible feedback. 

Referees are taught to ignore taunts from the sidelines but there are some circumstances that cannot be ignored. The 4Ps of Misconduct are guidelines about when a referee should directly address feedback from coaches, parents or players by issuing a warning for misconduct. They are: Public, Persistent, Personal and Profane. 

  • Public: feedback delivered in front of others
  • Persistent: feedback delivered multiple times
  • Personal: feedback about a person rather than a behavior
  • Profane: feedback that is irreverent or offensive

A referee's decision to deliver a warning depends on the depth and breadth of these factors. Questioning whether the ref needs glasses is public but not persistent, personal or profane. However, continuing to question the referee's eyesight after every call may warrant intervention (perhaps more at the youth level than the professional level). Using profanity about the referee's mother, even just once, would certainly be considered misconduct without being persistent or public. In the end, a good referee will assess how extreme the feedback is and how many of the 4Ps are involved before deciding to warn the offender.

Public, persistent, personal and profane feedback in the workplace is more common than we'd like. The 4Ps of Misconduct highlight terrible feedback in sports as well as in professional settings. But these same 4Ps can be translated into guidelines for avoiding terrible feedback:

  • Non-Public: Was the feedback provided in a setting conducive to learning and development? This means more than just delivering it one-on-one; it also means choosing a time when the feedback is likely to be heard and understood.
  • Non-Persistent: Was the feedback clear and specific about the root cause so that the behavior could be corrected? The goal is to address the primary cause rather than just the symptoms, so the feedback can be internalized and acted on without requiring continuous repetition.
  • Non-Personal: Was the feedback focused on the behavior rather than the person? People are not likely to change who they are, but they can take responsibility for changing what they do, so delivering feedback about an observed behavior is far more effective. For example, "Your attitude about your workload..." is very different from "Your complaining about your workload..." One is about the person, the other about their behavior.
  • Non-Profane: Was the feedback delivered in a way that conveys respect? Extreme and hyperbolic language often distracts people from hearing and internalizing feedback. Profanity doesn't always require bad language but any feedback delivered without fundamental respect for the person is profane and likely to be ineffective.

In Good to Great, Jim Collins wrote that "Good is the enemy of great." But sometimes good is just the absence of terrible. Avoiding the 4Ps of Misconduct in professional feedback won't make anyone great at feedback, but it will help managers become more effective by avoiding the pitfalls that prevent feedback from being heard, internalized and acted upon.

Sunday, January 24, 2010

Breakthrough Innovation: Battlefield Medicine

Atul Gawande first examined the lethality of combat wounds in a 2004 New England Journal of Medicine article. He noted that despite the invention and use of even more devastating weapons, the wounded-in-action lethality rate (that is, the rate of death among those wounded in combat) had fallen in every major US war until "settling" at roughly 25% after WWII. Even the Persian Gulf War in 1991 had a lethality rate of about 24%. But then, somewhat inexplicably, the combat lethality rate in Iraq and Afghanistan since 2001 dropped to just under 12%.

The notion that the lethality rate "settled" at 25% after WWII is somewhat misleading. The rate actually follows a fairly predictable, logarithmic path. If you were a military planner in 2001 asked to estimate the lethality rate for a long war in Iraq or Afghanistan, you would've used this logarithmic model. It would have predicted a lethality rate of 20.5%. The exact rate, of course, could not be predicted with much precision, so it would have been useful to know the range of lethality rates to expect. The model would predict that the actual rate would likely fall between 14.2% and 26.8%. This would've been consistent with an ongoing improvement over the most recent experience in the Persian Gulf in 1991, when the lethality rate was roughly 24% (albeit on a very small base of wounded). It turns out that the rate for the current war, as of January 2010, is only 11.7%, which is not just much lower than the expected 20.5% but well below even the low end of the expected range.
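
For readers who want to see what such a logarithmic model looks like in practice, here is a minimal sketch in Python. It is not the model behind the figures above: the data points are placeholders using only the two rates quoted in this post (roughly 25% around WWII and about 24% in the 1991 Persian Gulf War), so its output will not reproduce the 20.5% estimate or the 14.2%-26.8% range, which would require the full historical series from Gawande's article.

```python
import numpy as np

# Placeholder data: only the two rates quoted in the post. Substitute the full
# historical series (Revolutionary War onward) to reproduce the post's figures.
years_since_1775 = np.array([170.0, 216.0])   # ~WWII era, Persian Gulf 1991
lethality_pct    = np.array([25.0, 24.0])

# One reading of a "logarithmic path": rate = a + b * ln(t), with b < 0
b, a = np.polyfit(np.log(years_since_1775), lethality_pct, 1)

def predicted_rate(t):
    """Predicted wounded-in-action lethality (%) for a war starting t years after 1775."""
    return a + b * np.log(t)

print(round(predicted_rate(228), 1))  # rough estimate for a war beginning around 2003
```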

There are two ways to think about this. The first, explored by Gawande, is that the improvements prior to 2001 were the result of a process of discovery that led to incremental improvements year-over-year. This explains the near-constant decrease in the lethality rate in every major war since the Revolutionary War. Gawande attributes the significant improvement in the 2001-2010 rate in Iraq and Afghanistan to a change in the basic principles of R&D used by the military. After all, there have been no advances in medicine since the Persian Gulf War in 1991 significant enough to explain the transformational improvements suggested by the steep decline in the lethality rate. But what if this seemingly smooth process of improvement is really the result of a series of big breakthroughs? These would lead to significant reductions in lethality followed by relatively flat periods.

This explanation is plausible. Before WWI, antiseptics and anesthetics came into wider use after 1800, amputations were more widely used in the Napoleonic period, and the use of combat medics was pioneered in the Civil War. During and after WWII, field hospitals and MEDEVACs became regular features of combat. These inventions could explain a step-function pattern rather than a smooth process of continuous improvement. In this context, the recent improvements in lethality are simply the result of another significant innovation. Without a significant advance in medicine to explain the decline, this step-function improvement must be in the systems and processes rather than in the quality of the medicine or surgical techniques. Indeed, Gawande outlines several reasons for the recent improvement, which include:
  1. More widespread use of body-armor and eye-protection
  2. The development of Forward Surgical Teams (FSTs): leaner and more mobile units of 20 surgeons, nurses, medics and other support personnel positioned farther forward, closer to the battle.
  3. A military surgical strategy focused on damage control, not definitive repair, "whatever is necessary to stop bleeding and control contamination without allowing the patient to lose body temperature... Surgeons seek to limit surgery to two hours or less, and then ship the patient off to a Combat Support Hospital (CSH), the next level of care."

These are examples of improvements in systems and processes rather than the invention of new medicines or surgical techniques. Gawande's thesis is that the recent gains have been driven by a focus on systems and process performance. But this transformation is better understood as simply the latest in a series of step-function breakthroughs.

There are two general conclusions from Gawande's observations. First, true innovation and improvement in disciplines like R&D and Product Development are more often the result of step-function breakthroughs than of continuous improvement. Second, a focus on performance can lead to further breakthroughs as the process of discovery reaches diminishing returns.

Sources: http://content.nejm.org/cgi/content/full/351/24/2471 and Better: A Surgeon's Notes on Performance by Atul Gawande

Saturday, October 10, 2009

The Rise of Post-Journalism: Mea Culpa?

I appreciate the irony of this post. And I hope you do too. If you do, it means that you've read Mark Bowden's latest article in The Atlantic describing the collapse of journalism or, if you prefer, the rise of what he calls post-journalism. The central feature of this emerging genre is not the search for the truth but the pursuit of victory. He writes:

I would describe [the] approach as post-journalistic. It sees democracy, by definition, as perpetual political battle. The blogger’s role is to help his side. Distortions and inaccuracies, lapses of judgment, the absence of context, all of these things matter only a little, because they are committed by both sides, and tend to come out a wash. Nobody is actually right about anything, no matter how certain they pretend to be. The truth is something that emerges from the cauldron of debate. No, not the truth: victory, because winning is way more important than being right. Power is the highest achievement. There is nothing new about this. But we never used to mistake it for journalism. Today it is rapidly replacing journalism, leading us toward a world where all information is spun, and where all “news” is unapologetically propaganda.

The irony, of course, is that Bowden classifies the typical blogger as what's known in the trade as a thumbsucker, "a lazy columnist who rarely stirs from behind his desk, who for material just reacts to the items that cross it."

For me this is an important distinction. In most of my writing, I try to offer thoughtful observations from a common perspective in a way that adds something original to the discussion. I prefer to think of posts like this as serving a slightly different but also important role: raising your awareness of others who are trying to do the same.

Saturday, August 22, 2009

Recommended Reading (Part I: Non-Fiction)

I buy many more books than I read. More accurately, I buy many more books than I finish reading. Here are some off-the-beaten-path suggestions that held my attention to the end and have found a permanent place in my library:

E=mc2: A Biography of the World's Most Famous Equation by David Bodanis and Simon Singh :: a narrative that links the people, places and politics of 20th century physics

Watching Baseball Smarter: A Professional Fan's Guide for Beginners, Semi-experts, and Deeply Serious Geeks by Zack Hample :: an easy to read guide that will help anyone enjoy baseball more fully regardless of your current understanding of the game

13 Things that Don't Make Sense: The Most Baffling Scientific Mysteries of Our Time (Vintage) by Michael Brooks :: a detailed yet succinct examination of things that the best minds in the world still don't understand

A Short History of Nearly Everything by Bill Bryson :: who couldn't use this? Bill Bryson is a humorist but this is an extremely well-researched examination of questions that we all have about why the world is the way it is

A Whole New Mind: Why Right-Brainers Will Rule the Future by Daniel H. Pink :: a thoughtful thesis on the rising value of creativity and intuition

Freakonomics: A Rogue Economist Explores the Hidden Side of Everything (P.S.) by Steven D. Levitt and Stephen J. Dubner :: like seeing a circus clown in church, John Bates Clark Medal winner Steven Levitt exports his professional talent as an economist into an unconventional setting - daily life

How We Decide by Jonah Lehrer :: it turns out that every decision we make is a product of our intuitive, subconscious mind and our conscious "thinking" is really just a rationalization of the decision. Lehrer makes behavioral psychology both fascinating and accessible to those of us without time to get a PhD

Moneyball: The Art of Winning an Unfair Game by Michael Lewis :: on the surface a book about baseball but underneath it's an examination of business, psychology and the politics of institutions

Crucial Conversations: Tools for Talking When Stakes are High by Kerry Patterson, Joseph Grenny, Ron McMillan, and Al Switzler :: this book will make you a better parent, manager, teacher, leader and person

Jesus: Uncovering the Life, Teachings, and Relevance of a Religious Revolutionary by Marcus J. Borg :: a profoundly original and ultimately sensible articulation of who Jesus was and what it means to be a Christian

How Soccer Explains the World: An Unlikely Theory of Globalization by Franklin Foer :: another selection that sounds as if it belongs in the sports genre but pushes well beyond the boundaries of sports

Wednesday, August 12, 2009

The MLS Cartogram

This is a Rorschach test. What do you see? To some, it may look like a random distribution of different colored squares. It is actually a cartogram - a map in which another variable is substituted for land area, in this case population. It shows metropolitan statistical areas (MSAs) in the US and Canada with a population over 1M. New York is the large red square in the northeast. Los Angeles is the large red square in the far southwest. Miami is in white in the southeast corner. Chicago, Dallas and Houston are the three red squares in the middle. Portland, Seattle and Vancouver are the three red squares in the northwest.
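
For readers who want to reproduce the idea, here is a minimal sketch of a square cartogram in Python. It is not the original graphic: the populations are rough 2009 approximations, the coordinates are hand-placed for illustration, and only a handful of the markets discussed below are included.

```python
import math
import matplotlib.pyplot as plt

# (name, approx. 2009 MSA population in millions, rough (x, y) position, MLS franchise?)
metros = [
    ("New York",       19.0, (8.3, 5.3), True),
    ("Chicago",         9.6, (5.8, 5.0), True),
    ("Los Angeles",    12.9, (0.5, 2.5), True),
    ("Atlanta",         5.5, (7.0, 2.3), False),
    ("Miami",           5.5, (8.0, 0.7), False),
    ("Columbus",        1.8, (6.6, 4.5), True),
    ("Salt Lake City",  1.1, (2.3, 4.3), True),
]

fig, ax = plt.subplots(figsize=(8, 5))
for name, pop, (x, y), mls in metros:
    side = 0.15 * math.sqrt(pop)  # square AREA proportional to population, not land area
    ax.add_patch(plt.Rectangle((x, y), side, side,
                               facecolor="red" if mls else "white",
                               edgecolor="black"))
    ax.annotate(name, (x, y - 0.25), fontsize=7)
ax.set_xlim(0, 10)
ax.set_ylim(0, 7)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```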

Each red square is an MSA with a Major League Soccer franchise. If you look closely at the map, you may notice that there are several large markets like Detroit, Montreal, Atlanta and Miami without franchises. As a matter of fact, there is an entire region - south of DC and east of Houston - that has no franchise at all. You may also notice several unusually small markets like Columbus, Kansas City and Salt Lake that have teams.

Does this distribution pattern make sense? A franchise model is only loosely planned, especially when there is no physical product to distribute. It evolves based on where both franchisees and the owner of the brand are interested in locating a franchise. A purely rational process would put franchises in cities with the greatest potential return on both the franchisee's and the owner's investment. However, there is a lot to consider: how many soccer fans are there in the city? How likely are they to spend on tickets and merchandise? What is the competitive environment (will the team compete with football, baseball or other local sports and entertainment)?

The large markets like NYC, LA, Chicago, Dallas, Philadelphia and Houston are big enough to support multiple professional sports teams. Those unusually small markets like Columbus and Salt Lake City do not have significant competition from baseball, professional football or other summer/fall sports that would compete with soccer (yes, the Buckeyes are like religion in Ohio but Columbus has a significant enough soccer population to offset the Buckeye effect). Kansas City is a curious market for MLS since it must compete with both the Chiefs and the Royals. But what about those mid-size markets without teams like Detroit, Montreal, Minneapolis, Phoenix and the entire southeast? SEC Football may explain why the southeast doesn't have an MLS franchise but Atlanta and Miami are the seventh and eighth largest MSAs in the US. So if Kansas City - with the Chiefs and Royals - and Columbus - with its Big 10 Buckeyes - can support MLS franchises, why couldn't Atlanta or Miami?

The four markets MLS has most recently entered or announced plans to enter are Seattle, Philadelphia, Portland and Vancouver. Seattle and Philly both have football and baseball teams but are large enough markets to support a soccer franchise. Portland and Vancouver look more like SLC and Columbus - small but without any competition from baseball or football. Also, both Vancouver and Portland have teams currently playing in the USL that are moving up to MLS. Philly did not have an existing team but built a new stadium as part of its pitch.

So, where would you put the next franchise? If the two (implicit) strategies are (i) large markets and (ii) small, less competitive markets with existing teams, then the most likely choices are (i) Miami or Atlanta and (ii) Charlotte, Montreal, Raleigh, San Antonio or Austin.


Sunday, August 9, 2009

The Organic Fallacy

The Omnivore’s Delusion: Against the Agri-intellectuals by Blake Hurst in The American reminded me of a friend I've known for almost 25 years. She's a physician who specializes in alternative medicine. On more than one occasion she's scolded me for serving my family (gasp) non-organic foods. She is convinced that genetically altered foods, synthetic fertilizers and herbicides are slowly polluting our bodies. It's not clear if this is really true. To be sure, some chemicals - like DDT - have contributed to significant ecological and human damage in the past. But there is science supporting both sides of the argument. For example, worldwide life expectancy has more than doubled and the population has roughly quadrupled since 1900, in part because of advances in genetically altered foods, synthetic fertilizers and herbicides.

This leads to the real question: would the world be better off without industrial agriculture? The answer is clearly no. The Nobel Laureate and father of the Green Revolution, Norman Borlaug, estimates that there is only enough natural nitrogen available on earth to feed 4 billion people (fertilizer is basically nitrogen, and nitrogen turns out to be the scarcest resource for farming). With a world population of roughly 6.8 billion, that means about 40% of the people alive today would not be here without synthetic sources of nitrogen (i.e., synthetic fertilizers). Moreover, Hurst, who is a real farmer, argues convincingly that organic farming is more expensive and more harmful to the environment.

We embrace advances in technology in almost every other industry but "expect farmers to use 1930s techniques to raise food." Advanced technology in food production has allowed more people to live and those people to live longer. Insisting on organic foods is asking farmers to produce less at a higher cost, do more damage to the environment, underserve demand and drive worldwide food prices beyond the reach of a third of the world's population (clearly the poorest will bear the burden). If we demand more organic foods, farmers will certainly supply them. But should we?

Wednesday, July 29, 2009

Imprimatur

Word of the week: Imprimatur.

My friend Ben McAllister (one of the 'X or So') suggested this one. Latin words used in English, like et cetera, ad hoc and de facto, are often printed in italics, as if to say "we have no equivalent so we're just borrowing this from Latin." I'm not sure why we don't similarly acknowledge words we borrow from French or Farsi with italics. In any event, according to Merriam-Webster, Imprimatur is defined as: a license to print or publish; a mark of approval or distinction.

After further exploration (i.e., Wikipedia), this word has an even more intriguing context. According to Wikipedia, an Imprimatur is an official declaration from the hierarchy of the Roman Catholic Church that a literary or similar work is free from error in matters of Roman Catholic doctrine, and hence acceptable reading for faithful Roman Catholics. No implication is contained therein that those who have granted the Imprimatur agree with the contents, opinions or statements expressed. The term is also used more generally to mean any official endorsement (not necessarily by a church).