Meridian Design Doc 03: Evaluation dissected
MERidian stands for Measure, Evaluate, Reward: the three steps of the impact evaluator framework. The Meridian project aims to create an impact evaluator for "off-chain" networks, i.e. networks of nodes that do not maintain a shared ledger or blockchain of transactions.
This doc proposes a framework and model for Meridian that initially caters to both the Saturn payouts system and the SPARK Station module. As a bonus, it should be able to cater to any Station module. We believe that trying to generalise beyond these few use cases at this point may be counterproductive.
We will structure this design doc around the three steps of an Impact Evaluator (IE): measure, evaluate, and reward.
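To make the three steps concrete, here is a minimal TypeScript sketch of the IE pipeline. All of these names (`Log`, `Measurement`, `Evaluation`, `Reward`, `ImpactEvaluator`) are hypothetical and purely illustrative, not the actual Meridian API:

```typescript
// Hypothetical types illustrating the three IE steps; none of these
// names are the actual Meridian API.
interface Log {
  nodeId: string;
  timestamp: number;
  payload: unknown;
}

// Aggregated, fraud-filtered statistics for one measurement window.
interface Measurement {
  nodeId: string;
  honestRequestCount: number;
}

// Per-node scores for one payment epoch.
interface Evaluation {
  scores: Map<string, number>;
}

interface Reward {
  nodeId: string;
  amount: bigint;
}

interface ImpactEvaluator {
  // Runs often (e.g. every 20 minutes): ingest raw logs, filter out
  // fraud, aggregate the rest.
  measure(logs: Log[]): Measurement[];
  // Runs once per payment epoch: turn measurements into scores.
  evaluate(measurements: Measurement[]): Evaluation;
  // Split the epoch's reward pool according to the scores.
  reward(evaluation: Evaluation, pool: bigint): Reward[];
}
```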
Fraud detection happens in multiple stages of the system, so we cover it first before going deeper into the rest of the system.

Fraud detection happens during measure; additional fraud filtering happens before evaluate.
```mermaid
flowchart TD
    subgraph Measure
        subgraph Logs
            L1[Log]
            L2[Log]
            L3[Log]
        end
        subgraph Buckets
            BH[Honest logs]
            BF[Fraudulent logs]
        end
        Logs -- "detect fraud + aggregate" --> Buckets
        L1 --> BH
        L2 --> BH
        L3 --> BF
    end
    BH -- "detect fraud + evaluate" --> Evaluation
```
This section applies to fraud detection algorithms that can be cheated by tweaking their inputs, and where fraud detection therefore needs to be as opaque a process as possible. This applies to Saturn, and doesn't apply to Spark.
With a public function, gaming the system becomes easy; the algorithm therefore needs to be concealed. More research is needed on how to safely publish fraud detection algorithms.
The results of fraud detection will also need to stay private, because otherwise machine learning can be used to infer the fraud detection function. To still give participants some insight into how much fraud there is, the percentage of fraudulent records will be exposed as part of the evaluate results.
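A small sketch of this, building on the hypothetical `Buckets` type above: only the aggregate percentage is published, never the per-record classifications.

```typescript
// Only this aggregate is published; per-record classifications stay
// private so the fraud function cannot be reverse-engineered from them.
interface PublicFraudStats {
  totalRecords: number;
  fraudulentPercentage: number;
}

function publicFraudStats(buckets: Buckets): PublicFraudStats {
  const total = buckets.honest.length + buckets.fraudulent.length;
  return {
    totalRecords: total,
    fraudulentPercentage:
      total === 0 ? 0 : (100 * buckets.fraudulent.length) / total,
  };
}
```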
Intuitively, fraud detection is part of evaluate. The lesson from Saturn, however, is that fraud detection is part of measure, not evaluate: fraud detection needs to look at every single request log, which is too large a chunk of work for the once-per-payment-epoch evaluate step. Therefore, fraud is detected during measure, where the Orchestrator periodically (e.g. every 20 minutes) measures and aggregates logs.
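A sketch of that cadence, assuming hypothetical `fetchNewLogs` and `commitMeasurements` I/O boundaries (not the actual Orchestrator API) and the `bucketLogs` helper from above:

```typescript
const MEASURE_INTERVAL_MS = 20 * 60 * 1000; // e.g. every 20 minutes

// Hypothetical I/O boundaries, not the actual Orchestrator API.
declare function fetchNewLogs(): Promise<Log[]>;
declare function commitMeasurements(honest: Log[]): Promise<void>;

async function measureLoop(): Promise<void> {
  while (true) {
    const logs = await fetchNewLogs();
    // Fraud detection runs here, over every single request log, so the
    // once-per-epoch evaluate step never has to touch raw logs.
    const { honest } = bucketLogs(logs);
    await commitMeasurements(honest);
    await new Promise<void>((resolve) =>
      setTimeout(() => resolve(), MEASURE_INTERVAL_MS)
    );
  }
}
```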
One significant downside of detecting fraud during measure is that results are committed on chain and can't be changed later on. This means that a bug in fraud detection can mess up the data for an entire payment epoch.
Whether fraud detection could be moved to evaluate is unclear at this point.