L&D, Research and Evidence

Last week, I wrote about how L&D get research so very wrong at nearly every turn.

So let’s clarify some things we often mistake for research when they aren’t.

  • Anecdotes are not research. Anecdotes are just that. Stories. Powerful, yes. Insightful, yes. And always subjective. From one person’s point of view. Anecdotes add power when many others report similarly (think #MeToo), and when they add emotional weight to research (think charity campaigns focusing on the plight of one person).
  • Case studies are not research. They are an example of how something worked in a very specific context. Case studies are very helpful for learning about new settings and different contexts, and their strength is that deep context. But they are not research; they highlight a peculiarity which is worth studying further.
  • White papers are a mixed bag. I could write a white paper tomorrow about my thoughts on leadership. It would be classed as a thought leadership piece. It wouldn’t have to cite sources for any assertions I make or provide references. At the same time, there are white papers which are robust, thorough, comprehensive and insightful because of the data behind them.
  • A random poll isn’t research. Often, it’s one person’s way of gaining insight into a question important to them. Polls can be highly effective forms of research when addressing key research questions. The thing which makes a poll powerful and predictive is when it is driven by research methodology, and not by one person’s attempt at justifying their opinion.

And there are different ways we can be more evidence driven. Now, here comes the challenging element with research and evidence. Sometimes we see evidence and assume it equates to research. It does not. Researchers take years to pull together data from myriad points, use this to assess whether their research is valid or not, and are heavily driven by the data. They have to look for counter-data to account for other interpretations and influences. They have to test against different conditions to ensure they’re not just getting their desired set of results. Research is a laborious and time-intensive approach, often taking years before conclusions are drawn and can be shared. Even then, it becomes open to scrutiny, challenge and criticism. It is picked apart and debated. That’s often before it hits mainstream attention – if indeed it ever gets that far.

So, now we’ve got some baseline established, how can L&D get better at critically evaluating evidence and research?

  1. Don’t take it at face value that the research or evidence you’re being shown is as robust as I’ve described above. It is incredibly common for presenters, trainers and consultants to share models and theories that have very little evidence to support their usage, and little to no research. For example, a leadership consultant argues that they have found seven key behaviours that lead to better leadership. Sounds great, right? Based on what evidence? Who carried out the research? What were the results? What other factors and variables could have accounted for leadership success? Is it just one context we’re dealing with (sports) or is it from multiple industries and sectors? Or a personal development consultant argues they know how to unlock human potential. A very common claim. So once again, based on what? Their personal experience? That’s valid, but limited. Have their methods worked with people facing hundreds of different life problems, or are they working with a certain kind of person? When counselling and psychiatric interventions are more research- and evidence-based, how is this practice better than those? Whose theories are they drawing on? Are those theories validated? Often, anecdotes and case studies are used as evidence of success. That’s valuable, but limited.
  2. Learn about research methodology. Research methodology is a rigorous process. Often the results from research address specific research questions and are not meant to be extrapolated to other contexts. The Mehrabian Myth is probably one of the most quoted examples, and yet the original study was never meant to encompass every example of communication, nor to be applied so widely. Things like double-blind studies, control groups, stats, design of questions, validity and reliability all have specific definitions when it comes to research, beyond our everyday understanding of those terms. Just because we use the English language commonly doesn’t mean we always understand what we’re being told.
  3. Get comfortable with being a critical thinker on research and evidence. It is ok to be the person who wants to know more. In L&D we like to think we are critical thinkers. A lot of us really aren’t. We will readily swallow any graph or stat or data presented to us as ample evidence and quote it back to others willingly. There are many good sources of evidence and research available, and many times, L&D just don’t bother.
  4. L&D really needs to stop trying to pretend they are psychologists. If you have heard or read some fantastic piece of work or research and are enthusiastic about it, that is fantastic. Having read the stuff doesn’t make you a psychologist. It just makes you a well-read person who has a better appreciation of the human condition. If you like the work, and think it will have relevance to your organisation or client needs, get that person in to do the work. They will understand it far better than you will in your enthusiasm and excitement. Your job as a trainer or facilitator or consultant is not to be the fount of all knowledge.
  5. You don’t have to be a researcher to appreciate good research or evidence. You do need to understand it, because that makes you a good L&Der. When we are delivering our solutions and interventions, it’s better to be armed with solid research and evidence than with personal feelings and opinions. Personal feelings and opinions are helpful, and carry more weight when there is solid research and evidence to back them up.
  6. Claiming that anyone can do their own research, or that you can show evidence for anything, shows a deep misunderstanding of both good quality research and gathering evidence. It shows a superiority which is unfounded. It shows a disrespect for a set of practices which can significantly improve what we do in L&D.

When it comes to the design of L&D solutions, there are several things we can and should be doing.

  • Find out what research or evidence exists for the models/theories we’re trying to use. If the research/evidence looks weak, don’t use it. Find something more relevant/applicable.
  • If you’re interested in a model/theory, find out from the consultant/vendor what evidence base or research they have for it. If the model/theory has any weight, there will be research/evidence in its favour. Often, the researchers will want to talk to you themselves to make sure you are getting good information. If it’s been designed by a consultancy or an individual, the research and evidence is often lacking. Instead they will share case studies, white papers or anecdotes. These are forms of evidence, but light ones, and should be weighed accordingly. Often, this is where claims about the value of research are called into question.
  • Test your model/theory with others before including it in a solution. Good research comes from a place of trying to disprove your approach. Give people a choice of approaches and ask them to evaluate which is more effective. If you’re only testing one approach, you are highly likely to receive positive feedback and little criticism of any value.
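The testing idea above – comparing your approach against an alternative rather than evaluating it in isolation – can be sketched as a simple two-condition comparison. This is a minimal illustration only, not a substitute for proper research methodology, and the ratings, group sizes and function names are invented for the example:

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=42):
    """Estimate how likely the observed difference in mean scores would be
    if the two approaches were actually equivalent (a permutation test)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        # Randomly reassign participants to the two conditions and see how
        # often chance alone produces a difference as big as the one observed.
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical effectiveness ratings (1-10) from two groups trialling
# different approaches -- numbers invented purely for illustration.
approach_a = [7, 8, 6, 9, 7, 8, 7]
approach_b = [6, 5, 7, 6, 5, 6, 7]

p_value = permutation_test(approach_a, approach_b)
print(f"p-value: {p_value:.3f}")
```

The point of the sketch: if you only ever collect feedback on one approach, there is nothing to permute and nothing to compare – every result looks like a win. Real research design goes much further (sample sizes, blinding, pre-registration), but even this small comparison is more honest than a single-condition happy sheet.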

So, when it comes to getting better at being a research-led and evidence-based profession, it takes education and time to better evaluate what we do and how we do it.

What L&D get wrong about research

We’re living in a period where it’s increasingly acceptable to dismiss experts’ subject matter knowledge in favour of someone’s personal opinion. It is genuinely a troubling attitude when someone feels so entitled to their opinion that they construct a well-articulated argument in its favour and willingly disregard those who have clear evidence suggesting better practice.

Amongst the other societal challenges this trend represents, it is indicative of lazy thinking and a real lack of critical thinking faculty.

In L&D we’ve been guilty of such lazy thinking and lack of critical thinking for a long time.

There are a number of factors that play a part in the lazy thinking and lack of critical thinking.


Many trainers and facilitators in L&D believe they are so good at what they do as a trainer or facilitator, that it overrules any need for being fact or evidence driven in their approach.

They will run exercises and activities that are designed to do nothing genuinely insightful and are simplistic at best. Such stuff is superficial and is often a result of some creative thinking on how to get someone to think about a topic in a different way.

The personal experience they create is often mistaken for, and misrepresented as, effective learning.


Researchers take years to become subject matter experts. They ask multiple questions like:

  • Why does this happen?
  • What literature is helpful for this research?
  • What do the results tell us?
  • Is our data biased?
  • Have we studied the right things?
  • What else could be a factor?
  • What does the research help us understand better?

There are many more questions researchers ask.

L&D do not do research. Not even close.

We have multiple case studies and anecdotes. In nearly every case, we lack actual research.


It is very common that the multitudinous models and theories used are nothing more than one person’s view of the world and lack anything close to evidence.

Actual proper research and evidence would show things like:

  • Here’s how this model/theory tangibly made a difference based on these factors
  • Here are the control groups who received nothing, the groups who received something different, and the group who used our model – and here are the results from all of those groups
  • Here’s the variety of different work groups / sectors / industries the model was tested in, and these are the results
  • Here are the limitations of this model and how it should not be used
  • Here are replicated studies of other people using our model and the results they got
  • Here are peer reviews of our work and their criticisms of the model

There are so many models and theories put forward in L&D, and I would wager 90% of them lack credible research and evidence.

Case studies are not the same thing. Anecdotes are not the same thing. Both of those are examples of marking your own homework.


In so many cases, we just don’t have the time for actual research. When the likes of Saba, Cornerstone OnDemand, SumTotal or Pluralsight put their stuff out there, they’re not showing us evidence-based results.

They believe they are, because they are in multiple big clients and have volumes of clients using their products. But widespread use of a product does not constitute evidence. They do not have validated research from external sources to verify that their products deliver the successes they claim. They have anecdotes and customer success stories. Those are not the same thing.

When leadership consultancies and personal development trainers claim their methods work, it is often because they have no evidence to the contrary. If all you are using are your own methods and models, you’re not testing them against anything else, so all you’re going to get are positive results in your favour. That is a heavily skewed result.

Doing the research properly would take years. Most people have mouths to feed and bills to pay. Life gets in the way. So the evidence is of much reduced quality, and we’re often asked to move forward based on faith and trust in the person, and not worry about the models / theories / methods.


There have been many times I’ve asked a potential vendor for actual research to back up their methods / approaches, and many times I got very poor responses ranging from no research, to them providing case studies and anecdotes, to them claiming research doesn’t matter.

If we can’t be confident the methods being used are tested and have validity and reliability then why are we moving forward with them?

I’ll also get people giving me alternative results from other research, which supports their original thinking, even though it is poor research in and of itself.

I’ll also get people telling me research can be anything we want it to be. This just displays such a lack of understanding about research and why it’s important.


Many in L&D don’t want to ask the question about research. They don’t care about it, they don’t see the relevance of it, and it gets in the way of them doing what they perceive as their job.

It is a problem which is inherent across everything we do. E-learning, in-person sessions, virtual training, mobile solutions – all have such poor research into what good actually looks like that we are often left with sub-par solutions and asked to trust in the person more than the tools or resources they put forward.

I have no problem trusting a person. If the tools and stuff they use are flawed, then that basis for trust is automatically problematic. If they do not understand fundamentals about research, it only adds to the problem.

What solutions can L&D provide if courses aren’t the answer?

A couple of weeks ago, Myles Runham wrote this really interesting piece on whether or not L&D can work without courses.

His main provocation is this:
In L&D… we seek problems for which the course, a programme, an event and some content are the answer. These may or may not be learning, training or performance problems… We like these problems because we are ready to create (these) solutions, not because they are the most valuable problems to solve.

So I’d like to posit some solutions which are not event-based in any shape. Many of you will recognise some of these as part of stuff you’ve either delivered or taken part in yourselves.

  • Define the actual performance problem not the training problem. Starting from there means better exploration of solutions for the manager/leader you’re working with.
  • Ask yourself: what practical digital resources will most effectively help the people who need them?
  • Work with the manager/leader to figure out what operational/tactical/day to day stuff is preventing the desired performance from taking place. Work to resolve that.
  • One of the ways Twitter has grown its user base is using hashtags for regular chats. It creates good habits around community and a healthy space for exploration of topics of interest.
  • Find people in your organisation who can answer stuff on your behalf – be that managers, HR Business Partners, comms folks – a lot of people are willing to help.
  • If you’re dealing with compliance issues, learn more about behavioural science and less about mandatory training.
  • If you want leaders to improve, expose them to what great looks like and let them figure it out.
  • Coaching is a powerful tool to enable ownership of issues and developing personal thinking capabilities.
  • If you’re dealing with diversity issues, look at your organisational design systems and how you can influence those, rather than worrying about diversity training.
  • If you’re looking at inclusion, look at your culture and organisational development practices and how you can influence those, rather than worrying about thought leadership pieces.
  • If you’re looking at leadership development, give them proper organisational problems to resolve, with the right support in place to enable success.

These are just some examples of how L&D can add value and provide solutions without needing to resort to an event of any sort being the default answer. Sometimes, a course/event/training is necessary – for things like core skill development. For most organisational needs and problems, a course won’t be the solution. I have friends whose bread and butter is to deliver courses. That’s good and I’m not arguing against delivery of courses – they often serve a clear need. In many organisational issues, we should be looking at better solutions.

If not Learning Styles, then what?

Last week I wrote this blog post about why Learning Styles should be consigned to the bin. Not just my opinion on it, but because there is no research of any kind that supports the use of Learning Styles in the design or delivery of training/learning solutions.

For those who have trained for years using this type of model – and don’t forget, Learning Styles is many theories not just one thing – it can leave a sour taste. If I don’t use Learning Styles to design a course, then what do I use?

It’s a valid question.

The answer lies in our understanding of training problems / learning needs.

Often, business leaders will inform us of the problem they face. My team need communication skills training. My team don’t deliver their sales targets.

Then comes the request. Can you deliver the training course we need to fix either of those two problems?

A lot of trainers / L&Ders see this as their mandate to deliver for the business. That they can show their value by doing what was asked for. That they will have clear learning objectives.


Before you’ve even started designing or thinking about how you could deliver on the solution, consider these further lines of enquiry back to the business leader.

What change are you seeking from the training?

What are you doing to manage that need yourself?

How are you giving the team feedback on these needs?

What are the structures in place that either support what you’re asking for or hinder the outcome you’re expecting?

What have you tried already?

Who else is doing something impressive that you have seen?

How are the team measured on these needs?

Any one of those questions will start a further line of enquiry with the business leader. What I like about this performance consultancy approach is that you’re engaging the business leader and asking them to take responsibility for the results they’re seeking without defaulting to your training course / learning solution.

That whole process above will nearly always result in a different set of solutions – one of which may still be a course – but that doesn’t become the default option trainers are used to.

Then when it comes to the design of the thing, don’t start from the content side of things. The content is only important if you understand the context the team are working in.

Spend time with the team. Sit with them to understand what challenges they face with the training need identified. Use their input to inform the content you need to design for. Ask questions on what kind of practical solutions they want a training/learning solution to provide. Then design for those things specifically.

At no point in that process does it matter what kind of learning preference the individuals may or may not have. You’re not delivering against their learning preferences – that’s not the business need. You’re delivering against their performance needs. That’s what will be measured.

There are many other options when it comes to thinking about the design of training / learning solutions, including areas like user experience, behavioural economics, the 70:20:10 model, and experiential learning. Learning Styles has no place in the work we do.

It can be hard to have to re-learn what we know and trust, and if we can’t demonstrate that in ourselves then how are we hoping our learners will fare any better?

Why does it matter if Learning Styles is still used?

In the world of training and learning and development, there are certain practices that just do not go away. A lot of us in our early days of becoming trainers were taught about some variation of learning styles. How we have to design learning for people who need to hear things (auditory), for people who need to see things (visual), and for people who need to do things (kinaesthetic) – and possibly even people who like to read things (reading).

It was staple training 101.

And then some smart researchers decided they would test out the theory to see how effective it actually is in learning outcomes. So they tested learning design against the proposed theory(ies).

In empirical study after empirical study, learning styles has been left wanting. Never once has the use of learning styles in the design of learning made a tangible difference to learning outcomes. It’s been tested in all sorts of settings – corporate, schools, universities, public sector. And it’s been tested and retested year after year. We’re not talking about out-of-date research. There is an absolute mass of research telling us that learning styles as a theory has no validity, has no reliability, and should not be used for designing anything related to learning or training.

Yet it persists. Like a bad rash, it has potency. Its potency lies in nothing more than its simplicity as a theory. Even though the theory is wrong, it has the semblance of offering some insight. And just like a bad rash, the more you scratch at it and pick at it, the more it persists and doesn’t go away.

So does it matter if trainers and some L&Ders still use learning styles? Yes, yes it does because they’re designing and delivering things in a fundamentally flawed way.

In a world where we are peddled many mistruths about many things, and where evidence and research are available to help us make good decisions, we should avail ourselves of them. If we don’t, we are essentially being lazy about our craft and arrogant in our knowledge. Ego has a place in any work, but should never come before insight and better work.

Does it harm anyone if learning styles is used? Not direct harm, no.

The harm comes from our learners’ blind acceptance that the design of the training they’ve been given is ethical and research-based. Otherwise we’re not helping our people genuinely learn in a helpful way; we’re just touting a belief based on nothing more than faith, and religion is not part of the training/learning function.

We should just let people be! I hear some of you say. It’s just a tool! It can be helpful to some people!

Mates, I don’t want the ambition of the stuff we do to be helpful to some people.

If we’re using design principles that are unhelpful to most people, then why in the world are we insisting the theory has any value at all?

If there are better tools for design of learning/training that can help more people why are we pretending that it’s acceptable to use a poor theory to design our solutions?

We have the good fortune of access to many strong L&D designers and thinkers in this age. There are some solid frameworks we can work with that can enable the kind of outcomes we’re seeking to provide.

As always, let me know what you make of this. Very interested to hear others’ thoughts and comments.

Lessons in personal resilience

Life has thrown some stuff my way in recent weeks, and it’s meant I’ve had to really look at my own practice when it comes to my personal resilience. I need to keep resilient and positive for those around me, ensure I have a fair handle on the reality of stuff, and stay aware of others’ needs and supportive where I can be.

In no particular order, here’s how I’m trying to maintain my personal resilience:

  • I’m audio journalling far more than I have done previously. I’m checking in with myself every other day or so. Asking myself questions like: What is today’s situation? How is it different from yesterday? What happens next? How am I being affected? What does that mean for me? What is different about my own behaviour? Is anything happening of concern? What patterns of thought / emotion / action are happening I need to be watchful of?
  • Talking regularly with friends and family. Different people ask different questions. Different people need different answers. Some I trust more and can talk more freely. Others I say what I need and no more. But talking is super helpful. It helps me make sense of what I think is happening and what I think will be happening next.
  • Making sure I’m eating well and sleeping well. Most evenings by 10pm I’m exhausted from the day and sleep easily until 8am the next day. I am getting more than enough sleep! But I know I need it. I’m also really trying not to just eat junk food. It’s easy for me to do, so I’ve been making super conscious choices about what I’m eating. Have discovered I’m a really big fan of salads.
  • Talking to the professionals and trusting their choices. They are experts in what they are doing, and the best I can do is listen and understand. From that I can make as informed a set of decisions as I can.
  • Accepting there are days when I will wane. I have had a few low days, and that’s ok. I don’t force myself to feel positive. I also don’t wallow. I just let that down day happen. The body and mind have their own way of preserving our condition, and it’s important to pay attention.
  • I have massively restricted my social media output. I’m known to be a prolific tweeter, and post across multiple different social media. I know that I will readily throw myself into conversations of all sorts. I know that I will readily burrow down rabbit holes of content and topics of interest. I’m keeping fairly well disciplined about limiting all that. My emotional resilience is significantly lessened by me actively engaging in social media, and it won’t allow me to be well for those around me.

There are some things I haven’t been able to maintain as well as I’d like:

  • Pretty much all gym and physical activity routine has been put on hold. Normal routines are not in play for now. The best I’ve been able to do is pilates at home. I’m keen to restart swimming and the gym, but I’m trying not to put undue pressure on myself to go, nor to feel guilty about not going. This period demands other focus, so that’s what I’m doing.
  • Blogging and podcasting. Both are normally good outlets for me. Pursuant to the above point on limiting how I engage with social media, I miss producing content. It’s such a regular part of my life, that not doing that stuff is noticeable for me.
  • I’ve also pulled back on the different groups and networks I’m connected to. I just don’t have the energy or the attention I could so easily give before. It’ll return once this stuff settles, so I’m giving myself permission to let those things idle along as they best can.

One final note. I am not writing this for sympathy, nor for solutions, nor for any kind of self-aggrandising. I am not seeking anyone’s well-wishes or solidarity. I’m writing this as I think it’s important to have that level of congruence that I write about wellbeing and resilience, and I live it as best I can. The above, I hope, is a personal example.

The deep bias facing L&D

As we continue our understanding of bias, prejudice and privilege in society, we also start to develop the capability of interrogating our own spheres. I have written before many times how there is a problem of diversity of people of colour in L&D. The most obvious place we can observe this is on the conference circuit. In the main, speakers will be white. It is incredibly rare to see Black people, Chinese people, Indian people, taking the stage. I’m not saying it doesn’t happen, I’m saying it’s rare. In the L&D space, it’s hard to find leaders who fit those demographics, and who are willing to speak on the conference circuit.

But that’s just one place where we can readily observe the lack of diversity.

If we think about things from a systems perspective, we start to see just how the L&D ecosystem is perpetuating completely and thoroughly white perspectives on the world, and it’s in every aspect of everything we experience. I’m specifically talking about L&D here – not the wider societal impacts of white thinking.

And I want to be very clear here – I am being critical of the complete lack of diversity in the system. I am not being critical of any individuals, nor their thinking, nor their contributions to L&D. In a lot of cases, we have very strong L&D thinkers, leaders, practitioners and consultants who design and deliver fantastic solutions and products. I applaud all of that, thoroughly.

What I am seeking to highlight here is that at the vast majority of our events that we hold, from the books that are written, from the speakers we hear, from the consultants who design, we are getting – in the majority – white perspectives. Yes, there are those who are of colour doing good work in our space, and they are in the minority.

What I see and experience is that we perpetuate and roll along very willingly with all this, and it seems like there is little effort to positively make a difference. White voices are heard, white voices make decisions in our profession, white voices determine the models and theories we follow, white people lead the vendors – do you see?

And let me be clear – I am not saying this makes any of us racist. It is just how it is. If we look at this more critically, it means we’re all complicit in the perpetuation of the same.

This isn’t about people of colour not taking opportunities to present or put themselves forward – some already do that. It’s that through the systemic ways in which we operate, so much is done through a white person’s lens that we wilfully neglect, and do not consider, that there is a lack of any voice other than white ones.

What do I mean? When a call is made to a vendor and an account manager takes the call and puts forward the brief to their team. When a project needs to be managed and the project manager decides on the actions to be taken and who’s accountable for what. When a conference organising team is a handful of people. When we decide which academics to laud, talk about, listen to, and whose models and theories to learn more about. When the books we read are given accolades. When awards are judged and decisions are made about the winners. When a new product is launched and there’s a big marketing campaign.

In nearly every one of those scenarios, I’m willing to bet that there are more white voices involved than there are active voices from people of colour. I am not suggesting we stop working as we do. I am highlighting that the system enforces the voice of the white person.

This is the deep bias.

Our problem in L&D is that we think we’re above prejudice, above bias, above discrimination. We think we are more inclusive than most, more accommodating of needs than most, more aware of bias and prejudice than most. And yet, it would be tangibly very difficult for most of us to genuinely put forward multiple examples of where diversity isn’t so clearly lacking. I’m saying multiple examples because one or two examples from your own experience isn’t enough. I’m talking everyday actions, not specific moments in time.

Our problem in L&D is the same problem in society at large. We believe that our everyday interactions and actions are as genuine as they can be and that we’re treating people well. This isn’t about how well you personally treat others, or what you personally do to make a difference. This is about how the system at large is designed so heavily in favour of white voices that we don’t even recognise the lack of non-white voices.

And for completeness, we are woefully biased against so many other demographics, which I’ve not even tried to address here – disability, social class, sexual orientation, gender orientation, formal education or not, and so many more.

This is the deep bias.

There are no easy solutions here. I’m not asking for solutions. I’m also purposefully not proposing solutions. This blog post isn’t about that. It’s to cause debate. It’s to state observations which I believe to be blatant and very present. This blog post is to bring this discussion to the fore.

Discussing this topic won’t get you in trouble for being racist – unless you use racist language, or say things in racist and discriminatory ways. In the main, most of us will understand how to not do these things, so your contributions and thoughts will be welcome. If you’re not comfortable commenting in the open space, then DM me on Twitter or send me a private LinkedIn message. It is from discussion that we can keep things moving forward. When we don’t discuss things like this openly, we remain complicit in perpetuating the strength of white voices and do not do enough to include voices from people of colour.

Final Point – I hope in this writing you will have seen that I haven’t accused anyone of anything. I’m not singling out any one individual. I’m being quite measured in my language and the points I’m making. I am talking about the L&D ecosystem which is all of us.