Evidence Based Coaching
Contributors: Stu McMillan
Man, I hate that term.
Primarily because of what it has come to represent – not its original intent. While much has been written about evidence-based practice over the last couple of decades, it is clear that there is still no universal understanding of what it really means to coaches, or of what responsibility, if any, the scientific community has in coach education.
As I see it, there are currently two challenges with respect to ‘evidence-based practice’:
- Bridging the gap between the (predominantly) old-school traditionalists, who are cautious (or even suspicious) of anything outside of their direct experience, and the hard-core (principally well-meaning) scientists-researchers (and occasionally coach educators), who take science as their starting point, and confuse scientific research with evidence.
- Coaches are forgetting how to actually coach. Growing up in an ‘evidence-based’ science culture, and confused about what ‘evidence-based’ is intended to mean, younger coaches are threatening to co-opt the profession, turning it into an offshoot of sport science.
Within this short article, I would like to dig into what ‘evidence-based’ really means, what this means to coaches and their development, and what responsibility ‘sport science’ has to communicate research.
Firstly, some background on terminology:
The ‘traditional’ form of coaching has been termed belief-based coaching (BBC). It is guided primarily by the coaches’ personal experiences, as well as their understanding of current practice as it sits with their peers. It is generally subjective, biased, and unstructured (Rushall, 2002). And it has formed the basis of sports coaching for generations. In fact, in many sports, it still does.
Evidence-based coaching (EBC) is an extension of evidence-based medicine (EBM), which first came to prominence in the 1990s (although its roots go back to the 19th century). EBM was first introduced to shift emphasis in decision-making from “intuition, unsystematic clinical experience, and pathophysiologic rationale to scientific, clinically-relevant research” (Guyatt et al., 1992). Initially, EBM was explained as clinical decision-making based not only on research evidence, but also on clinical expertise and experience, and patient values and preferences (Sackett, 1996). Aligning with these principles, evidence-based coaching is generally objective, structured, and (theoretically) highly predictive.
Regrettably, evidence-based practice has evolved to ignore the experience of the practitioner (or coach) and the perspective of the patient (or athlete).
*I will use the acronym EBP (evidence-based practice) throughout this article. This term is to be understood as synonymous with EBM and EBC.
As a starting point, I would argue that the old-timers have a point: generally, we place too much importance on scientific ‘evidence’, and not enough on intuition and experience. But that doesn’t mean we have the right to ignore ‘science’. It is still our best means of generating novel insight, and it is still the only way we can produce consistent, reproducible results.
That being said, the dogmatism that currently exists in many scientific circles serves only to feed the mistrust that many experienced coaches have about ‘science’. This dogma is especially prevalent in nutrition, therapy, and sports medicine – basically any profession where it is easy to take a binary view on a subject.
A more appropriate view, however, is that almost nothing is black and white, good or bad. Nuance exists in every circumstance – especially as it relates to the human body. What we know about how our bodies work is still far outweighed by what we do not know, and to assume otherwise is simply arrogant.
This is especially evident in the overly-emotional world of elite sport – where everyone is battling for their small piece of the cake. The reality of current scientific practice in sport simply does not allow for binary conclusions: beyond suffering from various personal and professional biases, statistical limitations, population specificity, straight-up errors, and misinterpretations, the linear-causal, mechanistic, reductionist simplicity of the typical scientific investigation frequently just does not fit the complex, ever-changing, interdependent, chaotic sporting world, where multiple, poorly-understood cause-effect interactions are the prevailing state of affairs.
It’s nearly impossible to do ‘good’ sport science – especially in an elite population. To detect a meaningful difference, the sample size has to be sufficient relative to the size of the effect. The reality is that in elite sport we are dealing with the 1% of the 1%; how do we develop good evidence when the sample size is so small? Sport science is constrained by the limitations of our environment – robust scientific evidence will always be tough to come by.
Now – this is not to say that reverting to a belief-based model is the answer. Evidence-based practice is still the way forward – but with an acceptance of its original intention, involving the expertise and experience of (in this case) the coach, as well as the individual circumstances of (in this case) the athlete(s): science helping to inform the decision-making process – but not dictating it.
Coaches are increasingly ready to generate their own knowledge and understanding with a more scientific approach to training, and I in no way want to discourage this practice. No doubt this requires significant effort and expertise, and it’s an essential part of the coaching journey.
But this journey stalls when communication is compromised – whether through poor science, poor communication, or poor understanding. Additionally, less and less attention is being paid to the ‘soft sciences’, and we are producing relatively ‘smarter’ (more educated) coaches who cannot communicate. Science and practice are simultaneously growing further apart and coming closer together. As an industry, we need to do a better job of understanding this dynamic, and of working together better.
I contend that the current model is not working, and serious thought and discussion is required on the following:
- Evidence: Firstly, what do we mean by this? Does the industry (and by ‘industry’, I mean coaches and other support team personnel who work in elite sport, and the sports scientists who study in it) have a common definition? To many, ‘evidence’ is synonymous with ‘research’ – but it is important to understand that ‘research’ is just a part of the bigger picture. Evidence has been defined as “the available body of facts or information indicating whether a belief or proposition is true or valid” (Oxford Dictionary), and includes personal experience, intuition, anecdotes, and testimonials in addition to ‘scientific research’ (Guyatt). More specifically, what do we mean by evidence as it relates to us as coaches? I like Sam Leahy’s ‘new’ definition of evidence-based coaching: “EBC is open, thoughtful, and professional coaching decision-making about the handling of athletes/clients that integrates the best available science evidence, research evidence, coaching experience, and the athlete/client values, preferences, and circumstances while considering the larger social context and logistics in which coaching services are provided.” Within this definition exists a launchpad from which we can communicate what evidence is valid, and how more traditional forms can be respected in a more science-centric world.
- Communication: Generally, it’s a growing reality that people are not really listening to each other anymore. We are less open to discussion. We begin with our gut feeling, and every piece of information we choose thereafter is chosen specifically to strengthen this feeling. We are interested only in information that supports our current beliefs, and tend to shy away from anything that challenges us. This categorical, simplistic attitude is manifesting in an increasingly binary ‘us versus them’, black and white, and – ultimately – divisive world (the current political landscape being the most salient example). As a society, our choice eventually comes down to talk or fight – there is no middle ground. Not finding a common stance to communicate from just leads to greater division; this is as true in the sports world as it is in society as a whole. Within our coaching world, one challenge is to bridge the tension that exists between (generally less educated, less ‘science-based’) coaches with professional expertise and experience and well-informed (generally more-educated) coaches and scientists, and to understand how this tension affects the possibility and quality of communication. Within this, we need to address how and when scientific evidence is being disseminated to coaches. Too often – in the rush to publish – scientific ‘evidence’ is presented that acts only to further muddy the waters, rather than bringing clarity to an already complicated world. This is most easily seen in the nutritional sciences, where seemingly every food is either cancer-causing or cancer-protective, but it exists in all research realms.
- The role of the scientist: The scientist (and the scientific community as a whole) has to pivot from primarily communicating with other scientists to communicating with coaches and other practitioners in a manner that is more understandable and relatable. Talking over the heads of the less educated has become an enjoyable pastime for many researchers; this practice is simply self-indulgent ego massage, and does nothing to move the profession forward. Finally, scientists need to accept the limitations of the controlled, mechanistic world they live in, and understand that there is as much they can learn from the practitioner as the practitioner can learn from them.
- Coaching education: Typical post-secondary education fails to deliver the tools to find out what works, instead teaching us how to regurgitate facts. Coaching education typically does no better: information is often decades old; there is little demand for comprehension of even basic methodology; there is rarely any requirement to appreciate systemic interrelationships; there is no respect for the role that applied anatomy and biomechanics play in the coaching process; and little attention is given to communication, psychology, and belief. Curricula are traditionally more interested in how coaches learn the syllabus than in what will make them better coaches. I understand the difficulty of building high-quality educational content that is effectively communicable, but there are a great many excellent coach-educators in the world who have for years pointed out the problems with traditional coaching education pathways; yet rarely do we see broader acceptance of this, or an attempt to educate more creatively. Too often, ‘education’ has become a CEU-driven, ‘tick the box’ exercise, where ‘educators’ are more worried about collecting money than about actually educating their members. This is not to mention the rise of the independent guru culture: we live in an age of unparalleled access to information; not only do we have a world of knowledge at our fingertips, but we are increasingly bombarded by the latest and greatest guru offerings in a traveling roadshow that is generally more an attempt to cash in on the growth-dilution-desperation of the industry than it is to actually educate the masses.
- Research utility: Firstly, we need to do a better job of asking better questions, searching for better answers, and striving to increase the accuracy of our knowledge. Research has become a means to justify employment, rather than a true search for meaningful ‘truth’. Too many people waste time debating whether something is perfectly correct, when they should be focusing on whether it is practically useful. There is simply far too much useless research now being published (there has been a 300% increase in medical research in the last 25 years, for example), and like anything else that grows in numbers, there is a natural dilution in quality. More information isn’t necessarily a good thing; more information just means more bad information. It is perhaps too late to turn that ship around – now that researchers are on a never-ending quest to publish more and more – but the industry needs to think seriously about its publishing protocols; perhaps early (or new) research should not be reported until it is reproduced? Bond and Campbell (2008) proposed the following criteria for research within the confines of EBP: “clearly defined, designates the target group for whom it is intended, shown effective in a set of rigorous research studies, independently replicated by at least two research groups, addresses important needs in the target population, and capable of implementation in a wide range of settings”. To count as ‘evidence’, research should be translatable into practice. Coaches and scientists must work together to find ways to better integrate current elite sport practice into study design – paying heed to the unique environment we exist within.
- Respecting experience and intuition: ‘Scientific evidence’ is just one of three core components of EBP; the expertise and experience of the practitioner, as well as the patient’s personal preferences and unique concerns, were seen as equally clinically relevant. Originally, EBP was not about replacing traditional means of medicine, but about better understanding the value of research, and how it could be implemented into a responsible treatment plan. EBP appreciated that, for example, more experienced clinicians can have deeper contextual conversations with their patients, and are better able to understand how each patient’s experiences potentially influence their treatment. Yet in our science-centric world – where scientific research is seen as the ultimate validation of an idea – other forms of evidence are becoming marginalized. There is great wisdom in experience and tradition, and while we cannot accept an idea based on belief alone, we can – and should – respect empiricism more than we currently do. Seeking certainty, we make the error of rationalism – being blinded by ‘what makes sense’ (Taleb). Intuition is also increasingly derided – in many cases for good reason, but let’s not throw the baby out with the bathwater. There is an increasing body of research on the validity of intuition (the work of Kahneman and Tversky is particularly instructive). “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.” (attributed to Einstein). In practice – within the chaos that is coaching – System 1 thinking is paramount. We have a plan and an objective going into each day, but it is what we see in front of us that determines the details. If we get too caught up in gathering scientific data, we lose the ability to understand and process what we see, and this impedes the necessary ability to adjust on the fly.
It’s OK if we don’t fully understand something. It’s OK if we don’t yet have scientific validity; if something is working, let’s take advantage of that while we figure out why.
With the current obsession with data, monitoring, and science, coaches have forgotten how to coach; I often wonder whether many coaches actually even want to coach – it seems most would prefer to be scientists. If you truly do want to coach, you must remember that science should be seen as the beginning of an inquiry, rather than an unquestionable truth. Science should help inform our opinions, but we should always question the source. We should act on facts, and on the most accurate interpretation of them, using the best scientific information, but understand that we don’t need to have 100% ‘evidence’ about everything. We should all question our knowledge – with an understanding that introspection and a readiness to admit uncertainty of opinion is the key to moving forward – both as individuals and as a profession.
Science seeks to understand complex processes by reducing them to their essential actions and studying the interplay of those actions – we then form conclusions based upon these studies. This reductionist scientific process expands our vision and understanding of how we work, but we must be careful that we do not assume that because we know the how, we then know the what. Greek physician-philosopher Menodotus of Nicomedia, a member of the Empirical School, sought practitioners who embodied techne (know-how) and not epistêmê (know-that). He had no use for the doctor who knew how the heart worked without knowing how to operate on it.
Know-how was more important than know-that.
“Doubt is the origin of wisdom.” – Descartes, Meditations on First Philosophy (1641)
For further reading on evidence-based coaching, I highly recommend Sam Leahy’s excellent overview, as well as Brent Russell’s classic Coaching Development and the Second Law of Thermodynamics.
Evidence-based Medicine original research:
Guyatt et al. (1992). Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine. https://www.cebma.org/wp-content/uploads/EBM-A-New-Approach-to-Teaching-the-Practice-of-Medicine.pdf
Sackett (1996). Evidence based medicine: what it is and what it isn’t. https://www.cebma.org/wp-content/uploads/Sackett-Evidence-Based-Medicine.pdf