Kerry Patterson is coauthor of four New York Times bestsellers: Change Anything, Crucial Conversations, Crucial Confrontations, and Influencer.
Dear Crucial Skills,
After initiating a crucial conversations effort, how do you evaluate the outcome of the effort? What objective outcome assessment do you use?
Over the years, we’ve used a variety of methods to assess the effectiveness of Crucial Conversations Training. Our most rigorous methods draw on the full range of research and statistical tools science can provide. Prior to the training course, we measure trainees’ knowledge with a paper-and-pencil test. With five possible responses per question, pre-training scores average 20 percent, or what you’d expect from chance. After the training, however, almost everyone earns a perfect score. The thesis here is simple: if trainees don’t understand the concepts, they won’t be able to put them into practice.
Next, we measure people in action. We give trainees a problem situation and ask them to resolve the issue in a role play. We tape the interaction and then have experts code the presence or absence of ten different skills. Before the training, subjects typically earn only a couple of points; after the training, they average around 9.5. Now we’ve seen that they can actually do what they’ve studied.
But we’re still not through. Just because people understand the new skills and can demonstrate them on demand, do they actually want to do what they’ve been taught? After all, you just might teach something people find hokey or even risky to put into action. So, we ask people if and where they’ll use the skills at home and at work. The vast majority report that they want to put the skills into action. They see how the skills will help them solve problems and achieve key personal, family, and business objectives.
Finally, we see if participants actually practice the skills at work. Peers evaluate one another before and after the training. Now we’re no longer relying on self-report data, nor are we judging people under test conditions. We’re measuring people on the job, and we’re gathering the data from their coworkers. When the training is implemented well, participants show remarkable improvements in their crucial conversations skills at work.
And there’s still more to measure. In three different studies we’ve asked university researchers to explore the relationship between improvements in crucial skills and key corporate measures such as costs, productivity, and profitability. After all, companies implement Crucial Conversations Training as a means to solve corporate problems and increase overall corporate health, not simply as a means to enhance communication skills. In one study, we found that an increase in candid, honest, and crucial communication yielded an increase in productivity of 93 percent and also reduced customer-care expenses by $20 million, among other bottom-line results.
Now, at the less formal level, we’ve received hundreds of letters and e-mails reporting the immediate and beneficial effects of using crucial conversations skills at home and at work. For instance, one woman held a conversation with her mother that helped heal years of separation and recrimination. Another participant talked with his boss about a leadership tactic that was driving him nuts; the conversation ended up solving the problem and strengthening their relationship. The list of success stories is long and varied.
I suppose I enjoy the anecdotal evidence as much as the scientific body of knowledge we’ve built. These one-of-a-kind incidents shore up our numbers with poignant and memorable stories that often pull at our heartstrings. Of course, when the goal is to convince a skeptical audience that learning crucial conversations skills can lead to changes in behavior that in turn lead to changes in key corporate indicators, we fall back on two decades of research demonstrating that the training works.
Editor’s note: The measurement tools mentioned above are used during custom client interventions. The resources are not made available to the public.