Twenty-five years ago, Chris Argyris summarized a long-term study of management consultants, typically recent MBA graduates from a handful of top business schools, as they struggled to learn how to improve their work.

Published in Harvard Business Review in 1991, “Teaching Smart People How to Learn” has lessons for improvement in health care, where there are lots of smart people, too. (https://hbr.org/1991/05/teaching-smart-people-how-to-learn; also available at https://www.ncsu.edu/park_scholarships/pdf/chris_argyris_learning.pdf, accessed 20 July 2016. Page numbers reference this PDF.)

What does Argyris mean by learning? How could academically accomplished MBAs not know how to learn? After all, they had great grade point averages and superior standardized test scores.

Argyris coined the distinction between single-loop learning and double-loop learning. He found that the MBAs were often incapable of double-loop learning.

“First, most people define learning too narrowly as mere ‘problem solving,’ so they focus on identifying and correcting errors in the external environment. Solving problems is important. But if learning is to persist, managers and employees must also look inward. They need to reflect critically on their own behavior, identify the ways they often inadvertently contribute to the organization’s problems, and then change how they act. In particular, they must learn how the very way they go about defining and solving problems can be a source of problems in its own right.”

“I have coined the terms ‘single loop’ and ‘double loop’ learning to capture this crucial distinction. To give a simple analogy: a thermostat that automatically turns on the heat whenever the temperature in a room drops below 68 degrees is a good example of single-loop learning. A thermostat that could ask, ‘Why am I set at 68 degrees?’ and then explore whether or not some other temperature might more economically achieve the goal of heating the room would be engaging in double-loop learning.”(p.4)

In other words, double-loop learning builds on the simple feedback and adjustment of behavior in a defined environment that characterizes single-loop learning. In double-loop learning, we’re open to using information from the environment to change our mental model and decision-making rules. (See the clear explanation and picture in the Wikipedia article on double-loop learning.)
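Argyris’s thermostat analogy can be sketched in code. This is only an illustration of the two loops, not anything from the article: the comfort range, setpoint value, and function names are my own assumptions.

```python
SETPOINT = 68  # degrees Fahrenheit

def single_loop(temperature, setpoint=SETPOINT):
    """Single-loop learning: correct the gap, never question the goal."""
    return "heat on" if temperature < setpoint else "heat off"

def double_loop(temperature, comfort_range):
    """Double-loop learning: first re-examine the setpoint itself,
    here by picking the cheapest temperature that still meets the
    comfort goal, then apply single-loop control to the revised goal."""
    low, high = comfort_range
    revised_setpoint = low  # heating only to the bottom of the range costs least
    return revised_setpoint, single_loop(temperature, revised_setpoint)
```

The single-loop thermostat acts only on the gap between temperature and setpoint; the double-loop version may change the setpoint, its decision rule, before acting.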

Argyris’s subjects all appreciated this distinction and were committed to continuous improvement in performance. What prevented their embrace of double-loop learning?

When the MBAs found themselves in situations where single-loop learning failed to produce good results, the failure provoked defensiveness and anxiety: psychological states unlikely to promote effective learning.

Argyris concluded that a major reason for this reaction was predictable: the elite management consultants had succeeded spectacularly in school and seldom, if ever, experienced failure. They didn’t know how to learn from failure. When confronted with evidence of failure to achieve expected performance, their defensiveness and anxiety overwhelmed effective analysis.

This brings me back to the advice on testing developed by my colleagues at API, summarized in Table 7.1 in The Improvement Guide, 2nd edition. They give three factors to guide the choice of the scale of a test, including failure cost.

Argyris has helped me reinterpret failure cost.

Failure cost certainly includes costs such as harm to patients and loss of revenue or reputation. However, Argyris convinces me that failure cost should also include psychological costs: the defensiveness and anxiety that prevent effective learning by smart people unused to failure.

With that interpretation, the cost of failure may loom large for professionals and generate behavior that appears as “resistance to change”.

API’s table points us to very small-scale tests when cost of failure is large.
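The spirit of that guidance, not the table itself, which I haven’t reproduced here, can be sketched as a toy decision rule; the category labels and thresholds below are my own assumptions:

```python
def scale_of_test(degree_of_belief, failure_cost):
    """Toy sketch in the spirit of Table 7.1: the larger the cost of
    failure, or the lower the belief that the change will succeed,
    the smaller the scale of the test should be."""
    if failure_cost == "large" or degree_of_belief == "low":
        return "very small-scale test"
    if degree_of_belief == "high" and failure_cost == "small":
        return "larger-scale test"
    return "small-scale test"
```

Note that a large failure cost forces a very small test even when belief in the change is high, which is exactly the point about psychological costs above.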

Very small-scale tests, with modest or negligible consequences of failure, appear to give smart people a way to gain experience with less-than-perfect performance.

Gradual exposure to, and mastery of, very small failures, combined with genuine inquiry into how people actually work and a willingness to modify mental models, looks like a way to reduce anxiety and fear.

In testing, people can “…begin to identify the inconsistencies between their espoused and actual theories of action. They can face up to the fact that they unconsciously design and implement actions that they do not intend.” (p. 11)

Note on problem-solving

Argyris’s single- and double-loop learning are cousins of the first- and second-order problem-solving I discussed here. Second-order problem-solving involves reflection and study to prevent recurrence of problems, not just immediate fixes that permit work to continue.


This diagram of the management system we tested this past spring with ambulatory surgical centers shows a link between escalation, integration with leaders, and problem-solving.

The high-performance organizations we described in our IHI white paper get these components working together.

For ambulatory surgical centers, my colleague Richard Scoville sketched a logical flow-chart to show what to do with an issue surfaced either directly by staff or by managers who are observing the way work actually happens.

The flow-chart shows how to triage any issue viewed as a problem, that is, a gap between the expected result and the actual result.
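The triage logic might be sketched roughly as below. The branch conditions and labels are my assumptions, not the actual contents of Richard’s flow-chart, and a real center would need the practical details discussed later to make any of this operational:

```python
def triage(issue):
    """Hypothetical triage of a surfaced issue: a gap between the
    expected result and the actual result."""
    if issue.get("immediate_risk"):
        # top branch: escalate right away, e.g. with an SBAR report
        return "escalate to management/clinical leaders"
    if issue.get("solvable_locally"):
        # the work unit analyzes and solves the problem itself
        return "local problem-solving by the work unit"
    # lower branch: bigger gaps become chartered improvement projects
    return "charter a quality improvement project"
```

The point of the sketch is the ordering: safety first, then local capacity, with formal improvement projects reserved for gaps too large to close locally.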

While there are a lot of practical details to make the sketch operational, the diagram has helped our discussion with one center as managers and staff revise the way problems are identified and addressed.

That center has invested in education and training for staff in problem-solving methods and so is increasing the capacity of people in each work unit to analyze and solve problems locally. The center also has a formal process to commission or charter quality improvement projects, which matches the lower branch of Richard’s diagram.

Analogy to Clinical Escalation

Escalation to address quality or safety procedure issues (management issues) is like the challenge of escalating clinical care for a patient who needs clinical “rescue.” The barriers to escalation in clinical rescue are the same as the barriers you face in escalating management issues:

  • Missing or unclear escalation protocols
  • Inability to identify the appropriate point of escalation
  • Unavailability of senior staff to address the issues
  • Fear of negative response
  • Insufficient tools and methods for communication

See https://www.rmf.harvard.edu/Clinician-Resources/Newsletter-and-Publication/2014/AMC-PSO-Patient-Safety-Alert-Issue-22  accessed 13 July 2016.

Note on SBAR

Adapted by clinicians at Kaiser Permanente from a U.S. Navy method, SBAR (Situation, Background, Assessment, Recommendation) is an effective and efficient way to communicate important information. SBAR offers a simple way to help standardize communication and allows parties to have common expectations related to what is to be communicated and how the communication is structured:

S=Situation (a concise statement of the problem)

B=Background (pertinent and brief information related to the situation)

A=Assessment (analysis and considerations of options — what you found/think)

R=Recommendation (action requested/recommended — what you want)

See http://www.ihi.org/resources/pages/tools/sbartoolkit.aspx accessed 13 July 2016.
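The four SBAR elements amount to a simple fixed-order record. As an illustration only, the class name, field names, and example wording below are my own, not part of the IHI toolkit:

```python
from dataclasses import dataclass

@dataclass
class SBAR:
    situation: str       # concise statement of the problem
    background: str      # pertinent, brief context
    assessment: str      # what you found and what you think
    recommendation: str  # what you want done

    def render(self):
        """One line per element, always in S-B-A-R order."""
        return "\n".join([
            f"S: {self.situation}",
            f"B: {self.background}",
            f"A: {self.assessment}",
            f"R: {self.recommendation}",
        ])
```

The value of the structure is that both parties know what is coming and in what order, which is the standardization the toolkit describes.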

The top branch of Richard’s flowchart recommends SBAR to report on issues that need immediate attention. For example, in the ambulatory surgical setting, any time a safety issue is declared as part of the “Concern-Uncomfortable-Safety Issue” method promoted in the HRET Surgical Safety training, the center should have agreement on how to escalate the issue for management and clinical leader resolution.



I’m starting to work with two new projects. Each project brings together a number of organizations, with a common aim, an agreement to try changes and share lessons, and a belief that they can make substantial progress in 12 months.

Over the past 15 years, I’ve been involved in about a dozen of this kind of project, derived from the design proposed by colleagues at the Institute for Healthcare Improvement and known as a Breakthrough Collaborative.

In every collaborative project, most of the participating teams start out with optimism, even though change is hard for individuals and hard for organizations. The challenge for project leaders is to help the teams temper unrealistic optimism through project structure.

Kelly McGonigal wrote a popular psychology book in 2012 (The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do to Get More of It, Avery: New York, http://kellymcgonigal.com/books/); she describes the situation for individuals who want to change habits. Her summary seems to apply to organizations, too:

“Vowing to change fills us with hope. We love to imagine how the change will transform our lives, and we fantasize about the person we will become. Research shows that deciding to start a diet makes people feel stronger, and planning to exercise makes people feel taller…The bigger the goal, the bigger the burst of hope. And so when we decide to change, it’s tempting to give ourselves some very large assignments. Why set a modest goal when setting a gigantic goal will make us feel even better? Why start small when you can dream big?”

“Unfortunately, the promise of change—like the promise of reward and the promise of relief—rarely delivers what we’re expecting. Unrealistic optimism may make us feel good in the moment, but it sets us up to feel much worse later on. The decision to change is the ultimate in instant gratification—you get all the good feelings before anything’s been done. But the challenge of actually making a change can be a rude awakening, and the initial rewards are rarely as transformative as our most hopeful fantasies. As we face our first setbacks, the initial feel-good rush of deciding to change is replaced with disappointment and frustration. Failing to meet our expectations triggers the same old guilt, depression, and self-doubt, and the emotional payoff of vowing to change is gone. At this point, most people will abandon their efforts altogether. It’s only when we are feeling out of control and in need of another hit of hope that we’ll once again vow to change—and start the cycle all over.” (p. 152)

The inventors of the Breakthrough Collaborative designed multiple features to help teams succeed despite the common psychological issues McGonigal describes.

The Model for Improvement is one key feature. With the project aim in mind, the inventors advise teams to start with very small tests, carried out with curiosity. Teams should invite the people who might be affected by the change to take part in the tests. The invitation to test recognizes attachment to current practice and asks only willingness to try: “if the idea really doesn’t work out, we can always go back to doing things the old way.”

Very small tests—attempting a change in minutes or at most a few hours—anchor teams in reality while bounding the consequences of failure. Very small tests, in the service of the larger aim, are a bridge between the initial hope for transformation and the challenge of actually making changes that stick.
