AI and machine learning. Big data. Learning Evaluations, ROI, ROE (Return on Expectations).
All very exciting terms being thrown about at the moment, with ‘first movers’ trying to grab a piece of the action before anyone really knows the future applications of these technologies. So here’s my take on the future, and on where big data is going to be really useful. But first, we need to look to the other side of the training: the outcomes.
Various texts I’ve read complain about assessment results: candidates’ learning reduced to a simple pass/fail, or a percentage. Usually cast aside in this collected assessment data is a mine of useful information, both about the candidate and about the training itself.
There is a minefield of issues around this data. From a purely deterministic point of view, we should pre-assess everyone in order to measure improvement. Topics where a high percentage of candidates struggle raise questions about how those topics are presented: in assessing the learner, we are also assessing the training. This is where big data and groups come in. As a purely average subject, a learner is a known quantity; the deviations from the reference groups become the points of interest – the strengths and weaknesses. The assessment needs to capture areas where further training may be needed, or flag that a learner has an affinity for a certain subject. It’s just that creating an assessment that captures all this data is a logistical nightmare. Do we even have the tools?
Something not entirely new but which I’d be keen to see more of is 3D Assessment – where a candidate answers a question, but also gives a rating on how sure they are about the answer they have given. If a candidate selects the right answer and is 100% certain, that’s a full mark. If they select a wrong answer, but are only 20% sure of that answer, for example, the mark would be better than if they select a wrong answer but are sure that it is right. In other words, the candidate is rewarded for showing that they know what they know and what they don’t (this reminds me of the whole ‘known unknowns vs. stuff we don’t even know exists’ argument).
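To make the idea concrete, here is a minimal sketch of how such a rule might be scored. The linear weighting below is purely my own assumption – any rule that rewards a candidate for being confidently right and penalises them for being confidently wrong would fit the description:

```python
def score_answer(correct: bool, confidence: float) -> float:
    """Score a 3D-assessment response on a 0..1 scale.

    `confidence` is the candidate's self-rating in [0, 1].
    This linear rule is one illustrative choice:
      - a right answer held with full confidence earns a full mark;
      - a wrong answer held with full confidence earns nothing;
      - uncertainty pulls both cases towards the midpoint.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if correct:
        return 0.5 + 0.5 * confidence
    return 0.5 * (1.0 - confidence)

# Right and 100% sure: full mark.
print(score_answer(True, 1.0))    # 1.0
# Wrong but only 20% sure beats wrong and fully sure.
print(score_answer(False, 0.2))   # 0.4
print(score_answer(False, 1.0))   # 0.0
```

So the candidate who knows what they don’t know still scores something, which is the whole point of the third dimension.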
At an individual level, the data is pretty subjective. But if you start getting areas where learners come out confident in answers that are wrong, you have a problem with the training materials. Learners coming out uncertain of their answers in particular areas, whether those answers are right or not, would suggest a need for further training.
I see three issues currently stopping this from happening:
- Actually creating the assessments
- Lack of suitable tools
- Lack of numbers – this is a real ‘more users, more useful’ scenario
And there’s a danger for the designers of the training: the assessment could finish up as such a rigid and complex framework that it limits the options available when creating training to fit it – when the outputs are so clearly defined, how can the trainer miss?
As always, I’m interested to hear any thoughts on this – has anyone used 3D assessment in this way?