Evidence Based Coaching
Contributors: Stu McMillan
Man, I hate that term.
Primarily because of what it has come to represent – not what the original intent was. While there has been much written about evidence-based practice over the course of the last couple of decades, it is clear that there is still no universal understanding of what this really means to coaches, and what responsibility, if any, the scientific community has in coach education.
As I see it, there are currently two challenges with respect to ‘evidence-based practice’:
- Bridging the gap between the (predominantly) old-school traditionalists, who are cautious (or even suspicious) of anything outside of their direct experience, and the hard-core (principally well-meaning) scientists-researchers (and occasionally coach educators), who take science as their starting point, and confuse scientific research with evidence.
- Coaches are forgetting how to actually coach. Growing up in an ‘evidence-based’ science culture, and confused as to what ‘evidence-based’ is intended to mean, younger coaches are threatening to co-opt the profession, turning it into an offshoot of sport science.
Within this short article, I would like to dig into what ‘evidence-based’ really means, what this means to coaches and their development, and what responsibility ‘sport science’ has to communicate research.
Firstly, some background on terminology:
The ‘traditional’ form of coaching has been termed belief-based coaching (BBC). It is guided primarily by the coaches’ personal experiences, as well as their understanding of current practice as it sits with their peers. It is generally subjective, biased, and unstructured (Rushall, 2002). And it has formed the basis of sports coaching for generations. In fact, in many sports, it still does.
Evidence-based coaching (EBC) is an extension of evidence-based medicine (EBM), which first came to prominence in the 1990s (although its roots go back to the 19th century). EBM was first introduced to shift emphasis in decision-making from “intuition, unsystematic clinical experience, and pathophysiologic rationale to scientific, clinically-relevant research“ (Guyatt et al., 1992). Initially, EBM was explained as clinical decision-making based not only on research evidence, but also on clinical expertise and experience, and patient values and preferences (Sackett, 1996). Aligning with these principles, evidence-based coaching is generally objective, structured, and (theoretically) highly predictive.
Regrettably, evidence-based practice has evolved to ignore the experience of the practitioner (or coach) and the perspective of the patient (or athlete).
*I will use the acronym EBP (evidence-based practice) throughout this article. This term is to be understood as synonymous with EBM and EBC.
As a starting point, I would argue that the old-timers have a point: generally, we place too much importance on scientific ‘evidence’, and not enough on intuition and experience. But that doesn’t mean we have the right to ignore ‘science’. It is still our best means of generating novel insight, and it is still the only way we can produce consistent, reproducible results.
That being said, the dogmatism that currently exists in many scientific circles serves only to feed the mistrust that many experienced coaches have about ‘science’. This dogma is especially prevalent in nutrition, therapy, and sports medicine – basically any profession where it is easy to take a binary view on a subject.
A more appropriate view, however, is that almost nothing is black and white / good or bad. Nuance exists in every circumstance – especially as it relates to the human body. We still know far less about how we work than we think we do, and to assume otherwise is simply arrogant.
This is especially evident in the overly-emotional world of elite sport – where everyone is battling for their small piece of the cake. The reality of current scientific practice in regard to sport simply does not allow for binary conclusions: besides suffering from various personal and professional biases, statistical limitations, population specificity, straight-up errors, and misinterpretations, the linear-causal, mechanistic, and reductionist simplicity of the typical scientific investigation frequently just does not fit into the complex, ever-changing, interdependent, chaotic sports world, where multiple, poorly-understood cause-effect interactions are the prevailing state of affairs.
It’s near impossible to do ‘good’ sport science – especially in an elite population. To detect a meaningful difference, the sample size has to be sufficient relative to the size of the effect. The reality is that in elite sport we are dealing with the 1% of the 1%; so how do we develop good evidence when the sample size is so small? Sport science is constrained by the limitations of our environment – robust scientific evidence will always be tough to get.
Now – this is not to say that reverting back to a belief-based model is the answer. Evidence-based practice is still the way forward – but with an acceptance of the original intention, involving the expertise and experience of (in this case) the coach, as well as the individual circumstances of (in this case) the athlete(s): science helping to inform the decision-making process – but not dictating it.
Coaches are increasingly ready to generate their own knowledge and understanding with a more scientific approach to training, and I in no way want to discourage this practice. No doubt this requires significant effort and expertise, and it’s an essential part of the coaching journey.
But this journey is stalled when communication is compromised – whether by poor science, poor communication, or poor understanding. Additionally, less and less interest is being placed on the ‘soft sciences’, and we are producing relatively ‘smarter’ (more educated) coaches who cannot communicate. Science and practice are simultaneously growing further apart and coming closer together. As an industry, we need to do a better job of understanding what this dynamic is, and how we can work together better.
I contend that the current model is not working, and serious thought and discussion is required on the following:
- Evidence: Firstly, what do we mean by this? Does the industry (and by ‘industry’, I mean coaches and other support team personnel who work in elite sport, and the sports scientists who study in it) have a common definition? To many, ‘evidence’ is synonymous with ‘research’ – but it is important to understand that ‘research’ is just a part of the bigger picture. Evidence has been defined as “the available body of facts or information indicating whether a belief or proposition is true or valid” (Oxford Dictionary), and includes personal experience, intuition, anecdotes, and testimonials in addition to ‘scientific research’ (Guyatt). More specifically, what do we mean by evidence as it relates to us as coaches? I like Sam Leahy’s ‘new’ definition of evidence-based coaching: “EBC is open, thoughtful, and professional coaching decision-making about the handling of athletes/clients that integrates the best available science evidence, research evidence, coaching experience, and the athlete/client values, preferences, and circumstances while considering the larger social context and logistics in which coaching services are provided.” Within this definition exists a launchpad from which we can communicate what evidence is valid, and how more traditional forms can be respected in a more science-centric world.
- Communication: Generally, it’s a growing reality that people are not really listening to each other anymore. We are less open to discussion. We begin with our gut feeling, and every piece of information we choose thereafter is chosen specifically to strengthen this feeling. We are interested only in information that supports our current beliefs, and tend to shy away from anything that challenges us. This categorical, simplistic attitude is manifesting in an increasingly binary ‘us versus them’, black and white, and – ultimately – divisive world (the current political landscape being the most salient example). As a society, our choice eventually comes down to talk or fight – there is no middle ground. Not finding a common stance to communicate from just leads to greater division; this is as true in the sports world as it is in society as a whole. Within our coaching world, one challenge is to bridge the tension that exists between (generally less educated, less ‘science-based’) coaches with professional expertise and experience and well-informed (generally more-educated) coaches and scientists, and to understand how this tension affects the possibility and quality of communication. Within this, we need to address how and when scientific evidence is being disseminated to coaches. Too often – in the rush to publish – scientific ‘evidence’ is being presented that acts only to further muddy the waters, rather than bringing clarity to an already complicated world. This is most easily seen in the nutritional sciences, where seemingly every food is either cancer-causing or cancer-protective, but it exists in all research realms.
- The role of the scientist: The role of the scientist (and of the scientific community as a whole) has to pivot from one where scientists primarily communicate with each other to one where scientists communicate with coaches and other practitioners in a manner that is more understandable and relatable. Talking over the heads of the less educated has become an enjoyable pastime for many researchers; this practice is simply self-indulgent ego massage, and does nothing to move the profession forward. Scientists also need to accept the limitations of the controlled, mechanistic world they live in, and understand that there is as much they can learn from the practitioner as the practitioner can learn from them.
- Coaching education: Typical post-secondary education fails to deliver the tools to find out what works, instead teaching us how to regurgitate facts. Coaching education typically does no better – information is often decades old, there is little demand for comprehension of even basic methodology, there is rarely any requirement for an appreciation of systemic interrelationships, no respect for the role that applied anatomy and biomechanics play in the coaching process, and little attention given to communication, psychology, and belief. Curricula are traditionally more interested in how coaches learn the syllabus than in what will make them better coaches. I understand the difficulty in building high-quality educational content that is effectively communicable, but there are a great many excellent coach-educators in the world who have for years pointed out the problems with traditional coaching-education pathways; yet rarely do we see a more global acceptance of this, and an attempt to educate more creatively. Too often, ‘education’ has become a CEU-driven, ‘tick the box’ exercise, where ‘educators’ are more worried about collecting money than about actually educating their members. This is not to mention the rise of the independent guru culture: we live in an age of unparalleled access to information; not only do we have a world of knowledge at our fingertips, but we are increasingly bombarded by the latest and greatest guru offerings in a traveling roadshow that is generally more an attempt to cash in on the growth-dilution-desperation of the industry than it is to actually educate the masses.
- Research utility: Firstly, we need to do a better job of asking better questions, searching for better answers, and striving to increase the accuracy of our knowledge. Research has become a means to justify employment, rather than a true search for meaningful ‘truth’. Too many people waste time debating whether something is perfectly correct, when they should be focusing on whether it is practically useful. There is simply far too much useless research now being published (there has been a 300% increase in medical research in the last 25 years, for example). And like anything else that grows in numbers, there is a natural dilution in quality. More information isn’t necessarily a good thing; more information just means more bad information. It is perhaps too late to turn that ship around – now that researchers are on this never-ending quest to publish more and more – but the industry needs to think seriously about its publishing protocols; perhaps early (or new) research should not be reported until it is reproduced? Bond and Campbell (2008) proposed the following criteria for research within the confines of EBP: “clearly defined, designates the target group for whom it is intended, shown effective in a set of rigorous research studies, independently replicated by at least two research groups, addresses important needs in the target population, and capable of implementation in a wide range of settings”. To be counted as ‘evidence’, research should be translatable into practice. Coaches and scientists must work together to find ways to better integrate current elite-sport practice into study design – paying heed to the unique environment that we exist within.
- Respecting experience and intuition: ‘Scientific evidence’ is just one of three core components of EBP; the expertise and experience of the practitioner, as well as the patient’s personal preferences and unique concerns, were seen as equally clinically relevant. Originally, EBP was not about replacing traditional means of medicine, but about better understanding the value of research, and how it would be implemented into a responsible treatment plan. EBP appreciated that, for example, more experienced clinicians can have deeper contextual conversations with their patients, and are better able to understand how each patient’s experiences potentially influence their treatment. Yet in our science-centric world – where scientific research is seen as the ultimate validation of an idea – other forms of evidence are becoming marginalized. There is great wisdom in experience and tradition, and while we cannot accept an idea that is based on belief alone, we can – and should – respect empiricism more than we currently do. Seeking certainty, we make the error of rationalism – being blinded by ‘what makes sense’ (Taleb). Intuition is also increasingly derided – in many cases for good reason, but let’s not throw the baby out with the bathwater. There is a growing body of research on the validity of intuition (the work of Kahneman and Tversky is particularly instructive). “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.” (attributed to Einstein). In practice – within the chaos that is coaching – system 1 thinking is paramount. We have a plan and an objective going into each day, but it is what we see in front of us that determines the details. If we get too caught up in gathering scientific data, we lose the ability to understand and process what we see, and this erodes the necessary ability to adjust on the fly.
It’s OK if we don’t fully understand something. It’s OK if we don’t yet have scientific validity; if something is working, let’s take advantage of that while we figure out why.
With the current obsession with data, monitoring, and science, coaches have forgotten how to coach; I often wonder whether many coaches actually even want to coach – it seems most would prefer to be scientists. If you truly do want to coach, you must remember that science should be seen as the beginning of an inquiry, rather than an unquestionable truth. Science should help inform our opinions, but we should always question the source. We should act on facts, and on the most accurate interpretation of them, using the best scientific information, but understand that we don’t need to have 100% ‘evidence’ about everything. We should all question our knowledge – with an understanding that introspection and a readiness to admit uncertainty of opinion are the key to moving forward – both as individuals and as a profession.
Science seeks to understand complex processes by reducing them to their essential actions and studying the interplay of those actions – we then form conclusions based upon these studies. This reductionist scientific process expands our vision and understanding of how we work, but we must be careful that we do not assume that because we know the how, we then know the what. Greek physician-philosopher Menodotus of Nicomedia, a member of the Empirical School, sought practitioners who embodied techne (know-how) and not epistêmê (know-that). He had no use for the doctor who knew how the heart worked without knowing how to operate on it.
Know-how was more important than know-that.
“Doubt is the origin of wisdom.” – Descartes, Meditations on First Philosophy (1641)
For further reading on evidence-based coaching, I highly recommend Sam Leahy’s excellent overview, as well as Brent Russell’s classic Coaching Development and the Second Law of Thermodynamics
Evidence-based Medicine original research:
Guyatt, G., et al. (1992). Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine. https://www.cebma.org/wp-content/uploads/EBM-A-New-Approach-to-Teaching-the-Practice-of-Medicine.pdf
Sackett, D. L., et al. (1996). Evidence based medicine: what it is and what it isn’t. https://www.cebma.org/wp-content/uploads/Sackett-Evidence-Based-Medicine.pdf
Excellent article Stu! Thanks! Altus seems to have an open-door policy for apprentice coaches to watch and learn, but my limited experience seems to indicate that many organizations and/or coaches are not so willing to discuss ideas or “truths”. My experiences with national teams in skiing and bobsleigh tend to make me think that the competitive side of things forces coaches to be secretive and insulated. So communication is not so easy. The “powerhouses” rarely share, and then usually only with the weak little brothers. I have had the honor of learning from some great coaches, but I don’t know how open they were when first starting out. I am more open now, after almost 30 years in the business, but was not as a young coach, because 1) I thought I knew a hell of a lot, and 2) I was too insecure to admit that I had questions. As a scientist I have had the privilege of truly testing the “best” in the world in their sport, but have seen that lab tests have little value in predicting performance. Science indeed has a place in our “education” as coaches, but I think that nowadays science has replaced religion as being infallible and “true”. We need to figure out a balance of scientific support for sport performance and/or coach education. But if a certain nation or federation has found a positive and constructive way to do this, they are certainly not going to share this strategy, because we are in “competition”. I have seen some coaches and federations be very open; others can be very secretive. The recent doping scandals have once again shown how insidious competition can be. I love reading and talking about this, but see it more as a philosophical exercise. My most valuable learning experiences are often over a beer with a coaching colleague. Your blogs and articles continue to stimulate and motivate me to learn and coach better.
Super article. Stopping the rot of second class psychological studies is central to my approach working with coaches and athletes. Your point about elites is especially well-taken: Science makes statements about populations – sport is about individuals.
Thank you for providing this insightful article. It highlights the complexity of this issue and the direct ramifications for coach education. This means considering how education can be structured to assist coaches at all levels to ‘improve their practice’, drawing on principles associated with learning and teaching, plus valuing what ‘experts’ offer and the innovations they bring. It’s about making an effort to create engaging and meaningful education programs for coaches, focusing on and valuing ‘their practice’ as the central point of development.
Thanks for this Stu. It sounds like we think alike about the opportunity in coaching education. The polarization of adult learning between (often intuition-centric, tradition-based) education and (often behavioural, technocratic, reified) training has become a problem across all industries in the last 40 years. In the quest to make work more efficient, measurable, and objective we often forget that the outputs of any work by communities of people are the result of ever-changing practices that are situated, complex, and emergent. The attributes of the system are not always visible in the attributes of the parts of the system. Looking forward to reading and hearing more!
“…talking over the heads of the less educated has become an enjoyable past-time for many researchers; this practice is simply self-indulgent ego massage, and does nothing to move the profession forward.”
If all we had to do was get ‘the science right’, then all of our jobs would already be done. But of course we still must get the science right!:)
There is no such thing as objectivity in a vacuum–our human lens is all we have.
Thank you for a very poignant and needed article.
Very interesting article. I am in education and also a coach, and this is a bugbear for me, as we are always pushed to deliver the syllabus and not the reality, because the learners are tested on the syllabus. When they finish the course they are not ready for the real world.
Good morning from Greece, and best season’s greetings! I am Christine; I’m a veterinarian and small animal practitioner, and I have been training in the middle distances for a couple of years on my own, based strictly on scientific data I have been collecting from abroad, since in my country the sport has broadly been not scientifically evidence-based but rather experience-based. Please allow me to add a few thoughts on this article:
1. It is impossible for non-scientists to utilize scientific conclusions. Practically, it is as if you give a tailor the task of performing surgery. Undoubtedly they both are capable of designing a construction, cutting, and sewing, but the former is not trained to perform a medical activity. Thus, maybe the issue is co-working, and implementing one’s conclusions in practice.
2. Einstein was a mathematical genius, and for brains of that extent and breadth, intuition is a very commonly met component. The common mind, though, even with a very high IQ score, is thousands of miles from that point, so it is not safe to make comparisons and draw conclusions.
3. From personal experience, the effectiveness of a coaching programme vastly depends on the mentality of the athlete. An athlete with scientific knowledge himself wants to know the structure of his sport, the goals, the biochemical and biomechanical paths through which it functions, and the chemical elements that are involved.
Because he realizes there’s so much underneath. Such an athlete demands a knowledgeable coach and definitely can’t stay with one who just designs a training program based on his “guts” and asks the athlete to just keep silent and perform.
4. Physiotherapy and nutrition, in my experience and in my country, largely lack a scientific basis. The results are very variable, and a lot of athletes suffer a great deal of time out of training due to misconceptions and poor therapeutic choices.
To cut the story short, the sciences were not designed to create doubts as to their effectiveness, but to provide hints that need to be evaluated in large samples, so as to get data that will be strong feedback for further research and investigation. And all that with ethical criteria, always.
Thank you very much for permitting the readers to state their opinions.
I am a sport scientist (or rather, I was a sport scientist, since I now work in another medical field), but in any case I agree with some parts of the article, which raises arguments that are actually well known and discussed within the scientific community. I agree with the comments above that it is not clear whether this is a critique of the scientific world or of the coaching world; I think it should be a critique of both. Sackett (BMJ, 1996) wrote: “Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.” And again: “Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough.” This is a famous paper published 20 years ago, and it simply reinforced what Guyatt had written a few years before. It is clear that the role of experience AND evidence was never negated. The problem is how this approach (EBP) has been applied and interpreted – by some scientists, for sure, but also by many coaches. For example, nowadays several coaches use monitoring tools and think that the more they measure, the more they are using a scientific, data-driven approach. But science and EBM are not the use of tools; EBM is the use of knowledge (together with experience). So there is a problem in both “worlds”, and it is not because EBP is wrong (okay, it can be improved, but this is another issue); it is because it has been applied in the wrong way. As editors of scientific journals, we are trying to close the practice-theory gap by sensitising researchers to it. I expect/hope the coaches will do the same.
Just a couple of main remarks: sport science has long recognised the uniqueness of elite populations; indeed, as in other scientific fields, there is increased attention to individual responses, in terms of research designs, statistical analysis, and interpretation. But to be fair, you should admit that there are several coaches (or sport data analysts, or whatever) using pseudo-science to support their claims (showing weird and not-understandable graphs, providing tons of data, and attempting to build a scientific rationale on what they do) while at the same time denigrating science and scientists. In other words, they try to take advantage of the “reputation” of science while refusing at the same time to follow the rules of science. You wrote: “With the current obsession with data, monitoring, and science, coaches have forgotten how to coach; I often wonder whether many coaches actually even want to coach – it seems most would prefer to be scientists.” I completely agree. To be precise, coaches don’t need to be scientists, but they should learn (or someone should teach them) to understand scientific information, and scientists should learn (or someone should teach them) to better communicate scientific findings to practitioners.
Great article. I agree that sport scientists have been too dogmatic about all practice requiring research evidence. I actually like the term “rationale-based practice” where the coach must have a reasoning that can be defended for doing what he or she does. This rationale can be based on “the best available science evidence, research evidence, coaching experience, and the athlete/client values, and preferences…”
I am no elite coach or athlete. I’ve come across this via Twitter and I found the article gripping as it echoes the problems we once had in the field of medicine.
Following the evolution of evidence based medicine we faced a similar dilemma particularly with regard to patients who are diagnosed with cancer. Essentially, these are a very complex group of patients with very complex needs and treatment regimes. Interestingly, I think this particular group mirror some of the issues you highlight in elite sports:
1) sample sizes tend to be extremely small and the ‘quality’ of research extremely variable.
2) patients’ responses to treatment vary considerably.
3) the volume of generated research for new oncological treatments is vast and confusing, even to experienced clinicians.
4) multifaceted patient needs beyond the simple: take this 3x a day.
The current best approach for this, at least in England where I practice, is a multi-disciplinary team (MDT) approach, where the oncology doctor is part of a much larger team. I propose that the same or a similar approach be applied to elite coaching, where coaches and researchers are part of a team, rather than each specialist attempting to cannibalise the other’s field. In a typical weekly MDT we have an MDT co-ordinator, clinicians, nurses, physiotherapists, and nutritionists meeting under a single roof to discuss individual patient needs under an evidence-based umbrella. Conflict does arise: for example, Surgery A is proven to work best, but the clinician’s (read: head coach’s) experience suggests the patient would be unable to tolerate the surgery. Thus one moves to the second-best treatment strategy. I propose that the final say must rest with the coach, who has first-hand experience of the athlete.
Forgive me, I’m not aware whether this is a current approach or not. In summary, it is prudent that specialists do not attempt to enter a ‘turf war’ with each other, and that the individual athlete remains at the heart of the training plan.
The single advantage that the field of medicine has is that MDTs often share their experience and patients. Our particular MDT often has a 15-minute slot to discuss patients from smaller hospitals who lack the experience we have built over the years. Unfortunately, this is unlikely to be feasible in coaching.