David Tyler
(MM) Secondary Read Validation
Updated: Sep 7, 2018
This is part of our Missing Market series of suggested additional transactions and actions. #MissingMarket
The Hypothesis
Read validation rules should be applied as part of settlement processing to identify cases of significant anomaly. Current read validation is applied on upload to determine whether a read should be accepted, but once a read is loaded it is used, almost regardless of the consequences.

The Context
Read validation processes in both the MOSL and CMA systems are geared towards validating the latest read against a defined series of rules. But once you're in, you're in. Even if a reading fails initial validation, a user can override the error and ensure that the read is submitted. That is a sound process in terms of enforcing discipline, but it is not flawless: behavioural science shows that this type of "are you sure?" warning has a diminishing effect.
For example, a user may check the read itself and be satisfied that it is correct. But what if there has been a missed meter exchange along the way? The read may be correct for the site but not for the meter. The correct action would be to perform the meter exchange first and then upload the read. That often needs coordination between retailer and wholesaler and can take time, all of which makes it more likely that the easy path - approving the read as it is - is chosen.
What's the problem?
Most commonly, there is no problem. The majority of meters will look like Meter A in the graph above, with reads incrementing steadily. Poor reads can have a major impact on settlement, however. Sometimes this is only a timing issue: the reads for Meter B are generally incrementing on a fairly steady trajectory, and although there appears to be an error in one or both of reads 3 and 4, it "comes out in the wash". It is not immediately clear which read is at fault, but you could remove one or the other (or both) and end up with a suitable-looking profile.
Sometimes the effect can be more profound. Meter C has a suspect read 5 (assuming the rollover indicator is false), because it is lower than both of the previous two reads. On the face of it, read 5 looks like the one in error, which may well be the case. Perhaps the read itself was simply taken incorrectly, or perhaps it was correct for a different meter - either because of an exchange or because the wrong meter was read. If it is just a one-off error, normal progression will return when the next read is received, as with Meter B (although the fluctuation is greater). Otherwise, the error needs to be corrected before the problem builds further.
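To make that rule concrete, here is a minimal sketch of the check in Python. Everything in it is illustrative - the record structure, field names and sample values are assumptions for the example, not taken from the MOSL or CMA systems - but it shows the shape of the test: a read that goes backwards without a rollover indicator is flagged as suspect.

from dataclasses import dataclass

@dataclass
class MeterRead:
    # Illustrative read record; the field names are assumptions, not a market schema.
    value: float            # register reading
    rollover: bool = False  # True when the register has wrapped past its maximum

def flag_suspect_reads(reads: list[MeterRead]) -> list[int]:
    # Return indices of reads that go backwards against the previous read
    # without a rollover indicator to explain the drop (the Meter C pattern).
    return [
        i for i in range(1, len(reads))
        if reads[i].value < reads[i - 1].value and not reads[i].rollover
    ]

# Meter C shape: the final read drops below the reads before it, with no rollover flagged.
meter_c = [MeterRead(v) for v in (100.0, 215.0, 330.0, 450.0, 310.0)]
print(flag_suspect_reads(meter_c))  # -> [4], i.e. the fifth read is suspect

Run against the Meter C shape, only the final read is flagged; a Meter A profile produces no flags at all.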
What's the idea?
In essence, we suggest defining acceptance criteria for when a read should be ignored in settlement calculations and reported out for investigation. The main difficulty is that an effective approach requires a decent amount of sophistication* to reliably pull out error cases, so the main cases to capture and enforce in the Codes would be ones like Meter C, where the latest reads appear to trace an incongruous path and require further investigation, with settlement presumably continuing on an estimation path until the situation is resolved.
It may require a backstop that says the reads will be used after a certain number of failures, or at RF (the final settlement run for a period). The rules should also look at rollover indicators, which creates another exception category. This can get complex, so it is perhaps best run periodically as an offline validation exercise that informs what the formal rules should be. Ideally, the analysis algorithms would be learning ones that improve over time.
* Like our own validation algorithms!
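As a rough sketch of how the acceptance criteria and backstop above could fit together, the following Python is illustrative only: the threshold, parameter names and the estimation fallback are assumptions made for the example, not anything defined in the Codes today.

from dataclasses import dataclass

@dataclass
class ReadStatus:
    # Illustrative settlement-time decision for one read; the names are assumptions.
    use_actual: bool       # settle on the submitted read
    flag_for_review: bool  # report out for investigation

def settlement_read_decision(
    value: float,
    previous_value: float,
    rollover: bool,
    failure_count: int,     # how many settlement runs this read has already failed
    is_final_run: bool,     # True at RF, the final settlement run for the period
    max_failures: int = 3,  # hypothetical backstop threshold
) -> ReadStatus:
    # A read that goes backwards without a rollover is held out of settlement and
    # estimated, unless the backstop applies (too many failures, or the RF run).
    looks_anomalous = value < previous_value and not rollover
    if not looks_anomalous:
        return ReadStatus(use_actual=True, flag_for_review=False)
    if failure_count >= max_failures or is_final_run:
        # Backstop: use the read anyway, but keep it reported for investigation.
        return ReadStatus(use_actual=True, flag_for_review=True)
    # Otherwise continue forward on an estimation path until the situation is resolved.
    return ReadStatus(use_actual=False, flag_for_review=True)

# A backwards read on its first failure is estimated and reported...
print(settlement_read_decision(310.0, 450.0, rollover=False, failure_count=0, is_final_run=False))
# ...but at the RF run the backstop kicks in and the read is used, still flagged.
print(settlement_read_decision(310.0, 450.0, rollover=False, failure_count=0, is_final_run=True))

The point of the sketch is the split between the normal path (hold out and estimate) and the backstop (use the read but keep it flagged), not the particular numbers chosen.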