I had a really good conversation with a friend about how we assess the impact of online learning tools.
It got me thinking about Amazon and their approach to selling products. Ultimately, you know whether it was worth placing your product on Amazon if it sold. That’s the ultimate feedback and measurement right there.
Because of their sophisticated systems, they can tell you all sorts of information about the buying experience: how long someone was on the site or the page, what else they viewed, how many items were in their basket before they bought, whether a special offer made a difference, and so on. But ultimately none of that information matters. Did the product sell? That’s all that matters.
In the world of L&D, and online systems specifically, what matters is performance improvement. Did performance improve because of the time spent using the tool? The hard part of answering that question is getting the client to define, and be very clear about, what performance improvement looks like.
If they can’t, then all the supplier/provider can do is report on the metrics of the experience. That will be useful information for the client to know, but they are the ones who need to define what an improvement in performance looks like.