Last week, I wrote about how L&D get research so very wrong at nearly every turn.
So let’s clarify some things we often lump into the same bucket as research when they aren’t research at all.
- Anecdotes are not research. Anecdotes are just that. Stories. Powerful, yes. Insightful, yes. And always subjective. From one person’s point of view. Anecdotes add power when many others report similarly (think #MeToo), and when they add emotional weight to research (think charity campaigns focusing on the plight of one person).
- Case studies are not research. They are an example of how something worked in a very specific context, and they are strong precisely because of that deep context. Case studies are very helpful for learning about new settings and different contexts, and they can highlight a peculiarity worth studying deeper and further. But an example, however rich, is not research.
- White papers are a mixed bag. I could write a white paper tomorrow about my thoughts on leadership. It would be classed as a thought leadership piece. It wouldn’t have to cite where I make any assertions or provide other sources for reference. At the same time, there are white papers which are robust, thorough, comprehensive and insightful because of the data behind them. The label alone tells you nothing about which kind you’re reading.
- A random poll isn’t research. Often, it’s one person’s way of gaining insight into a question that matters to them. Polls can be highly effective research tools when they address well-defined research questions. What makes a poll powerful and predictive is when it is driven by research methodology, not by one person’s attempt to justify their opinion.
And there are different ways we can be more evidence driven. Now, here comes the challenging element with research and evidence. Sometimes, we see evidence and assume it equates to research. It does not. Researchers take years to pull together data from myriad points, use this to assess whether their findings are valid, and are heavily driven by the data. They have to look for counter-data to account for other interpretations and influences. They have to test against different conditions to ensure they’re not just getting their desired set of results. Research is a laborious, time-intensive process, often taking years before conclusions can be drawn and shared. Even then, it becomes open to scrutiny, challenge and criticism. It is picked apart and debated. That’s often before it hits mainstream attention – if indeed it ever gets that far.
So, now that we’ve established a baseline, how can L&D get better at critically evaluating evidence and research?
- Don’t take it at face value that the research or evidence you’re being shown is as robust as I’ve described above. It is incredibly common for presenters, trainers and consultants to share models and theories that have very little evidence to support their usage, and little to no research. For example, a leadership consultant argues that they have found 7 key behaviours that lead to better leadership. Sounds great, right? Based on what evidence? Who carried out the research? What were the results? What other factors and variables could have accounted for leadership success? Is it from just one context (say, sports) or from multiple industries and sectors? Or a personal development consultant argues they know how to unlock human potential. A very common claim. So once again, based on what? Their personal experience? That’s valid, but limited. Have their methods worked with people facing hundreds of different life problems, or only with a certain kind of person? When counselling and psychiatric interventions have a far stronger research and evidence base, how is this practice better than those? Whose theories are they drawing on? Are those theories validated? Often, anecdotes and case studies are offered as evidence of success. That’s valuable, but limited.
- Learn about research methodology. Research methodology is a rigorous process. Often, the results from research address specific research questions and are not meant to be extrapolated to other contexts. The Mehrabian Myth is probably one of the most quoted examples: the findings were never meant to encompass every example of communication, let alone be applied as widely as they have been. Things like double-blind studies, control groups, statistics, question design, validity and reliability all have specific definitions when it comes to research, beyond our everyday understanding of those terms. Just because we share the English language doesn’t mean we always understand what we’re being told.
- Get comfortable with being a critical thinker on research and evidence. It is ok to be that person who wants to know more. In L&D we like to think we are critical thinkers. A lot of us really aren’t. We will readily swallow any graph or stat or data presented to us as ample evidence and quote it back to others willingly. There are many good sources of evidence and research available, and many times, L&D just don’t bother with them.
- L&D really needs to stop trying to pretend they are psychologists. If you have heard or read some fantastic piece of work or research and are enthusiastic about it, that is fantastic. Having read the stuff doesn’t make you a psychologist. It just makes you a well-read person with a better appreciation of the human condition. If you like the work, and think it is relevant to your organisation or client needs, get that person in to do the work. They will understand it far better than you will in your enthusiasm and excitement. Your job as a trainer or facilitator or consultant is not to be the fount of all knowledge.
- You don’t have to be a researcher to appreciate good research or evidence. You do need to understand it, because that makes you a good L&Der. When we are delivering our solutions and interventions, it’s better to be armed with solid research and evidence than with personal feelings and opinions alone. Personal feelings and opinions are helpful, and carry more weight when there is solid research and evidence to back them up.
- Claiming that anyone can do their own research, or that you can find evidence for anything, shows a deep misunderstanding of both good-quality research and evidence gathering. It shows an unfounded sense of superiority. It shows disrespect for a set of practices which can significantly improve what we do in L&D.
When it comes to the design of L&D solutions, there are several things we can and should be doing.
- Find out what research or evidence exists for the models/theories we’re trying to use. If the research/evidence looks weak, don’t use it. Find something more relevant/applicable.
- If you’re interested in a model/theory, find out from the consultant/vendor what evidence base or research they have for it. If the model/theory has any weight, there will be research/evidence in its favour, and often the researchers themselves will be willing to talk to you to make sure you are getting good information. If it’s been designed by a consultancy or an individual, the research and evidence is often lacking. Instead they will share case studies, white papers or anecdotes. These are forms of evidence, but should be treated as weak forms of evidence. This is often where claims about the research behind a product should be called into question.
- Test your model/theory with others before including it in a solution. Good research comes from a place of trying to disprove your approach. Give people a choice of approaches and ask them to evaluate which is more effective. If you only test one approach, you are highly likely to receive positive feedback and little criticism of any value.
So when it comes to getting better at being a research-led and evidence-based profession, it takes education and time to better evaluate what we do and how we do it.