Risk-based Oversight & the Chemicals Industry

Risk-based Oversight Strategies were the ‘hot topic’ when I was at EASA, and I see the documents have now been published:

https://www.easa.europa.eu/document-library/general-publications/practices-risk-based-oversight

I have suddenly got rather excited about this topic and its application in the Petrochemicals Industry. If it isn’t a big thing already, it soon will be once the potential savings and benefits are properly understood.

To understand Risk-based approaches, it’s useful to compare them with a traditional Procedure-based approach. Think of the procedure as a checklist. Before a plane takes off, the crew go through a list and tick items off. That list must contain everything that might conceivably go wrong, so the approach takes time. A Risk-based approach concentrates on the areas where failures are more likely – a more focused approach, where the same resources are used more effectively. Think of a group of inspectors at an airport: they can’t check every plane, so they focus on the older models, or on operators with poorer safety records, and let the others through with more trust and less oversight (trust is a key issue in Risk assessment).
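To make the idea concrete, here’s a rough sketch in Python of ranking who gets the inspectors’ attention – the operators, weights and numbers are all invented for illustration, and any real oversight body would use far richer data:

    operators = [
        {"name": "Operator A", "fleet_age_years": 22, "findings_last_year": 9},
        {"name": "Operator B", "fleet_age_years": 6,  "findings_last_year": 1},
        {"name": "Operator C", "fleet_age_years": 14, "findings_last_year": 4},
    ]

    def risk_score(op):
        # Older fleets and poorer recent records push an operator up the list.
        return op["fleet_age_years"] * 0.5 + op["findings_last_year"] * 2.0

    # Spend the limited inspection hours from the top of this list downwards.
    for op in sorted(operators, key=risk_score, reverse=True):
        print(f"{op['name']}: risk score {risk_score(op):.1f}")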

In the real world, the two approaches complement each other. A team working on a plant needs its procedures and checklists, and the oversight team needs to focus its efforts on the areas where it knows the risks are greatest. The oversight team also needs to take those risks into account when revising the procedures to mitigate the areas it has identified. And the plant teams need to establish a base of trust with the oversight team by showing they follow the procedures correctly. And so it becomes a cycle.

Any risk analysis accepts that there is a risk of failure. The ‘allowable risk’ is calculated from probability vs. consequences, and is ultimately a figure representing cost. And safety is a cost. But it isn’t a linear relationship: 100% more money spent won’t mean 100% more safety, any more than spending twice as much in a restaurant guarantees the food tastes twice as good. Any safety-critical industry is balancing levels of safety against profitability – ultimately, it must be in everyone’s interest that the resources are used in the most effective ways possible.
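As a back-of-the-envelope illustration (my own made-up numbers, not anyone’s actual method), the probability vs. consequences sum is essentially likelihood multiplied by impact, and it shows why a rare catastrophe can still dominate the priorities:

    def expected_cost(probability_per_year, consequence_cost):
        # Expected annual cost of one failure mode: likelihood times impact.
        return probability_per_year * consequence_cost

    # Two invented failure modes on a plant:
    pump_seal_leak = expected_cost(0.05, 200_000)       # frequent, relatively cheap
    tank_rupture = expected_cost(0.0001, 500_000_000)   # rare, catastrophic

    print(f"Pump seal leak: {pump_seal_leak:,.0f} per year")   # 10,000
    print(f"Tank rupture:   {tank_rupture:,.0f} per year")     # 50,000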

One thing missing in this data-driven world – something that was a notable feature of the aviation and transport industries – is a common incident database. The aviation one is ECCAIRS ( http://www.aviationreporting.eu/AviationReporting/ ) – I wonder if something similar could exist for the Chemicals Industry? Probably industrial secrecy prevents it? I’m going to find out more…

ROI vs. ROE

Working on a chemical plant in Saudi’s Eastern Region has been educational, especially in the time I’ve been granted to consider things. And bearing in mind the plant contains all sorts of nasty chemicals, which could wipe out entire towns if it were ever to go bang, the effectiveness of the training being given is one of those things worth considering.

Anyone working in training will be familiar with Return on Investment (ROI). It’s that notoriously-difficult-to-pin-down figure that training managers use to justify the existence of their departments to management. It’s difficult to pin down because it is not always possible to show the effectiveness of a training program in terms of business efficiency. It’s also folly to link gains or losses to a training program when there are a thousand other factors that might influence how the business is running – new competitors in the marketplace, or an increased cost of raw materials, for instance.

Finally, returning to the plant, and safety issues, the ROI in training is hopefully an absence of significant accidents. The cost of even one such event could be staggering. But how to quantify the value of something not happening?  

The ‘Safety Culture’ places great value on ‘Near Miss’ incidents – where lessons can be learned from events that didn’t have any major consequences, but could have done. When they happen, investigations take place and systems and processes are modified. But even this approach relies on incidents in the first place to use as test cases. So what do we look at when we are aiming for total prevention?

This is where Expectations – and Return on Expectations (ROE) – come in.

Let’s take proper use of gloves as an example. The training expectation is that, once they have done this training, employees not only know how to use the right gloves, but will always use them as intended from that point forward. Spot checks at monthly intervals might be needed to confirm this. And, assuming the spot checks check out, we can confirm the ROE for the training has been met. There is no obvious cost figure to attach to this success, but no one has been hurt over the period covered by the evaluation.
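A rough sketch of what ‘the spot checks check out’ could look like in practice – the 95% threshold and the monthly figures are invented, purely for illustration:

    EXPECTATION = 0.95  # invented target: 95% of spot checks find correct glove use

    monthly_checks = {
        "Jan": [True, True, True, False, True],
        "Feb": [True, True, True, True, True],
        "Mar": [True, True, True, True, False],
    }

    for month, results in monthly_checks.items():
        rate = sum(results) / len(results)
        status = "met" if rate >= EXPECTATION else "not met"
        print(f"{month}: {rate:.0%} compliance - expectation {status}")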

We can follow the path back to the training, too. It makes sense to define the outputs before we start. Kirkpatrick’s model helps – the output we want is at Level 3, Behaviour. The Level 4 goal of having no injuries in the organization is linked, but will follow from that Level 3 behaviour.

Knowing this, the Instructional Designer has a clear job. There’s the information level of the training: which gloves are needed for which tasks. A tactile exercise would be nice – trying them on and performing tasks. You might drive the message home with pictures of hand injuries that resulted from bad choices. You could have classroom activities with pictures of work situations where everyone holds up the right glove. That’s half the story covered.

Of course, outside the classroom, there are things that contribute too: making sure everyone has a proper set that fits, personalized with their own name, and somewhere accessible to keep them. Finally, notices in work areas warning employees of the penalties for going glove-free may, in the end, have more effect than anything. This last part is arguably the most important, because it brings the knowledge out of the training and into the daily grind.

It’s a culture change that’s needed. In the right circumstances, culture change can be deceptively fast.  

Big Data and Learning Evaluations

AI and machine learning. Big data. Learning Evaluations, ROI, ROE (Return on Expectations).

All very exciting terms at the moment, being thrown about – with ‘first movers’ trying to grab a bit of the action before anyone really knows the future applications of these technologies. So here’s my take on the future, and where big data is going to be really useful. But first, we need to look to the other side of the training, to the outcomes.

Various texts I’ve read complain about assessment results: a candidate’s learning reduced to a simple pass/fail, or a percentage. What usually gets cast aside in that collected assessment data is a mine of useful information, both about the candidate and about the training itself.

There is a minefield of issues around this data. From a purely deterministic point of view, we should pre-assess everyone in order to measure improvement. Topics where a high percentage of candidates struggle raise questions about how those topics are presented – in assessing the learner, we are also assessing the training. And this is where Big Data and groups come in. Treated as a purely average subject, a learner can be a known quantity; the deviations from the reference group become the points of interest – the strengths and weaknesses. The assessment needs to capture areas where further training may be needed, or flag that a learner has an affinity for a certain subject. It’s just that creating an assessment that measures all this is a logistical nightmare. Do we even have the tools?
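Here’s one crude way it could work, sketched in Python – the topics, scores and the one-standard-deviation thresholds are all invented – comparing a single learner against the reference group, topic by topic:

    group_stats = {                     # topic: (cohort mean, cohort std dev)
        "glove selection":  (82, 6),
        "chemical hazards": (74, 9),
        "permit to work":   (68, 12),
    }
    learner_scores = {"glove selection": 90, "chemical hazards": 55, "permit to work": 70}

    for topic, score in learner_scores.items():
        mean, std = group_stats[topic]
        z = (score - mean) / std        # distance from the reference group
        if z >= 1.0:
            note = "possible affinity"
        elif z <= -1.0:
            note = "flag for further training"
        else:
            note = "within the usual range"
        print(f"{topic}: z = {z:+.1f} ({note})")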

Something not entirely new but which I’d be keen to see more of is 3D Assessment – where a candidate answers a question, but also gives a rating on how sure they are about the answer they have given. If a candidate selects the right answer and is 100% certain, that’s a full mark. If they select a wrong answer, but are only 20% sure of that answer, for example, the mark would be better than if they select a wrong answer but are sure that it is right. In other words, the candidate is rewarded for showing that they know what they know and what they don’t (this reminds me of the whole ‘known unknowns vs. stuff we don’t even know exists’ argument).
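A minimal sketch of how such a confidence-weighted mark could be calculated – the weighting rule here is my own assumption, and real 3D-marking schemes will differ:

    def score(correct, confidence):
        # Reward confident correct answers; penalize confident wrong ones.
        # 'confidence' is the candidate's own sureness, from 0.0 to 1.0.
        return confidence if correct else -confidence

    print(score(True, 1.0))    #  1.0 - right and certain: full mark
    print(score(False, 0.2))   # -0.2 - wrong, but hedged
    print(score(False, 1.0))   # -1.0 - wrong and certain: worst case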

At an individual subject level, the data is pretty subjective. But if you start getting areas where learners come out thinking they have learnt things but have the wrong answers, you have a problem with the training materials. Learners coming out with areas where they are uncertain of answers, whether the answers are right or not, would suggest a need for further training.
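And a sketch of how those per-topic records might be rolled up into flags – again, the thresholds and data are invented, purely to show the idea:

    topic_answers = {
        # topic: list of (answer_correct, candidate_confidence 0.0-1.0)
        "valve isolation": [(False, 0.9), (False, 0.8), (True, 0.9)],
        "glove selection": [(True, 0.4), (True, 0.3), (False, 0.5)],
    }

    for topic, answers in topic_answers.items():
        confident_wrong = sum(1 for ok, conf in answers if not ok and conf >= 0.7)
        unsure = sum(1 for ok, conf in answers if conf < 0.5)
        if confident_wrong / len(answers) > 0.3:
            print(f"{topic}: learners are sure of wrong answers - review the materials")
        elif unsure / len(answers) > 0.3:
            print(f"{topic}: learners are unsure - further training suggested")
        else:
            print(f"{topic}: no flags")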

I see three issues currently stopping this from happening:

  • Actually creating the assessments
  • Lack of suitable tools
  • Lack of numbers – this is a real ‘more users, more useful’ scenario

And there is a danger for the designers of the training – the assessment could end up as such a rigid and complex framework that it limits the options available when creating training to fit that frame. Then again, when the outputs are so clearly defined, how can the trainer miss?

As always, I’m interested to hear any thoughts on this – has anyone used 3D assessment in this way?