👉 Meridian Design Doc 02: Orchestrator model in depth

Introduction

Meridian stands for Measure, Evaluate, Reward: the three steps of the impact evaluator framework. The Meridian project aims to create an impact evaluator for “off-chain” networks, i.e. networks of nodes that do not maintain a shared ledger or blockchain of transactions.

This doc proposes a framework and model for Meridian that initially caters for both the Saturn payouts system and the SPARK Station module. As a bonus, it should be able to cater for any Station module. We believe that trying to generalise beyond these few use cases at this point may be counterproductive.

We structure this design doc around the three steps of an Impact Evaluator (IE): measure, evaluate and reward. At each step, we examine our use cases against key criteria.

Measure

In the measurement step, we refer to each atomic item that gets measured as a job. For example, each retrieval served by a Saturn node is a job. For Spark, each retrieval made from an SP is a job. In order to proceed to the Evaluation step, we need to gather together a set of the jobs, or some summary statistics about the jobs, in one location for each evaluation epoch. We begin with the naive solution.
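To make the measurement step concrete, here is a minimal sketch of a job record and a per-epoch aggregation. All field names (`node_id`, `epoch`, `bytes_served`) are hypothetical and not taken from Saturn's or Spark's actual schemas; the point is only to show jobs being collapsed into per-node summary statistics for one evaluation epoch.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical job record; field names are illustrative, not Meridian's schema.
@dataclass(frozen=True)
class Job:
    node_id: str       # node that performed the work (Saturn node / Spark checker)
    epoch: int         # evaluation epoch the job falls into
    bytes_served: int  # payload size of the retrieval

def summarize_epoch(jobs, epoch):
    """Collapse raw jobs into per-node summary statistics for one epoch."""
    per_node = Counter()
    for job in jobs:
        if job.epoch == epoch:
            per_node[job.node_id] += job.bytes_served
    return dict(per_node)

jobs = [Job("node-a", 1, 100), Job("node-b", 1, 50), Job("node-a", 2, 10)]
print(summarize_epoch(jobs, 1))  # {'node-a': 100, 'node-b': 50}
```

Whether raw jobs or only such summaries reach the Evaluation step is exactly the design question the models below explore.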

Models

Naive Model


The naive solution is for each Impact Evaluator system (Saturn, Spark) to have a measurement smart contract on chain, and for each node in the network to submit every job it performs to this contract. This would allow the evaluation smart contract to access all the jobs in each epoch. However, the naive solution is unsuitable for the majority of use cases, for the following reasons:

  1. Scalability: The chain may not be able to deal with a large number of jobs submitted. E.g. Saturn creates 137M jobs each day and, for $l$ jobs in an epoch, we have $O(l)$ transactions on chain per epoch.
  2. Privacy: The details included in each job may be private.
  3. Fraud: The nodes in the network may be able to submit fraudulent logs to increase their impact, and therefore their rewards, in the network.
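A back-of-the-envelope calculation makes the scalability concern concrete. Using the 137M jobs/day figure from the text:

```python
# Back-of-the-envelope check of criterion 1 (scalability).
JOBS_PER_DAY = 137_000_000       # Saturn's daily job count, from the text
SECONDS_PER_DAY = 24 * 60 * 60

tx_per_second = JOBS_PER_DAY / SECONDS_PER_DAY
print(f"{tx_per_second:.0f} tx/s")  # ~1586 tx/s, sustained, for Saturn alone
```

Sustaining on the order of 1,600 transactions per second for a single network, before accounting for any other chain traffic, is well beyond what a one-job-per-transaction design can reasonably assume.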

We analyse Saturn and Spark in the naive solution, based on these criteria:

|        | Scalability | Privacy | Fraud |
| ------ | ----------- | ------- | ----- |
| Saturn | We cannot submit all the logs on chain; there are far too many. | Clients of Saturn are highly unlikely to be happy with retrieval job info being publicly available. | Saturn nodes can easily self-deal if they are allowed to submit jobs onto the chain. |
| Spark  | The Spark orchestrator (smart contract) can rate limit the number of jobs. However, it would be better if not all jobs were submitted on chain, so that Spark can scale. | This data could be made public. In fact, if Spark joins the SP Reputation WG, the jobs could end up in a public database. | (1) Since job records can be cross-referenced with orchestrator records, nodes cannot self-deal. (2) Since nodes may be asked to retrieve from, and create a signature chain with, any SP, they cannot fake the signatures without being detected. |
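The fraud check in the Spark row can be sketched as follows. This is a minimal illustration only: HMAC with a shared key stands in for Spark's real public-key signature chain (an assumption, since the actual scheme is not specified here), and the record format is invented.

```python
import hmac
import hashlib

# HMAC stands in for the SP's real signature; key and record are hypothetical.
SP_KEY = b"sp-secret-key"

def sp_sign(record: bytes) -> bytes:
    """The SP's signature over a job record."""
    return hmac.new(SP_KEY, record, hashlib.sha256).digest()

def evaluator_accepts(record: bytes, signature: bytes) -> bool:
    """The evaluator only counts jobs whose signature verifies."""
    return hmac.compare_digest(sp_sign(record), signature)

record = b"retrieval-job:bytes=1024"
good_sig = sp_sign(record)
print(evaluator_accepts(record, good_sig))        # True
print(evaluator_accepts(record, b"\x00" * 32))    # False: forged record rejected
```

Because a checker cannot produce a valid SP signature itself, a fabricated job record fails verification, which is why the table concludes Spark nodes cannot fake jobs without being detected.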

We now look at some more advanced solutions that aim to move the needle on one or more of the three criteria.

IPC Subnets

To target scalability, one option is to look into Filecoin’s scalability solution, IPC subnets: could they handle the volume of jobs that these networks generate? However, IPC subnets would not move the needle on privacy or fraud beyond the naive solution.