Public project doc: Storage Metrics DAO
Parts that are under discussion and do not currently belong to Protocol v0.1 are marked in brown.
We have an initial consortium of N auditor parties P_1, ..., P_N. At this stage, the aim of these nodes is to scan the network based on the storage deals recorded on-chain (each storage deal is required to provide retrieval).
In particular, this means that the parties being audited are essentially storage providers (SPs). In future versions of the protocol, we will open the possibility for any IPFS node to be audited. To do so, we will put in place mechanisms such as indices, which will allow non-SP parties to be audited according to what they claim to store and keep available for retrieval.
Step 0: Auditors join the Consortium
For v0.1 we assume the auditor parties to be pre-determined and known. Thus, there is no particular joining protocol to become an auditor. This will change in future versions of the protocol, where we will open the possibility for other parties to join as Auditors.
Step 1: Auditors scan the network
Each party in the consortium periodically queries (ideally) all the SPs involved in a storage deal, checking whether retrieval of the file is successful. Each SP is then scored by auditors according to a set of metrics (see Step 3). A sketch of this scanning loop is given after the note below.
Note (TBD): Given that some of the metrics are influenced by proximity/location, it can be useful to ask auditor parties to be spread all over the world (in local clusters) so that measurements are as accurate as possible. Note that accuracy of measurement is a priority for Auditors: the more the Auditors' measurements reflect reality, the more their work is valued by clients.
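A minimal sketch of the scanning loop in Python; the retrieve callback, the deal format, and the recorded fields are illustrative placeholders, not part of the protocol:

```python
import time

def scan_round(deals, retrieve, timeout_s=30):
    """One auditing round: attempt to retrieve each on-chain deal from its SP.

    `deals` is an iterable of (sp_id, cid) pairs taken from on-chain storage
    deals; `retrieve` is a hypothetical client callback that fetches `cid`
    from `sp_id` and raises on failure.
    """
    observations = {}
    for sp_id, cid in deals:
        start = time.monotonic()
        try:
            retrieve(sp_id, cid, timeout=timeout_s)
            observations[(sp_id, cid)] = {
                "success": True,
                "latency_s": time.monotonic() - start,
            }
        except Exception:
            observations[(sp_id, cid)] = {"success": False, "latency_s": None}
    return observations
```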
Step 2: Auditors take part in a Survey Protocol and produce a Report
Option 1: Fixed Aggregator (MVP). Results are sent to an aggregator node that aggregates survey results into a report, checking the correspondence between what it received from auditors and what the auditors committed on-chain.
A report is valid once it is signed by the aggregator node. Once valid, the report is posted on-chain and serves as the metrics for that round.
Option 1b: Random Aggregator [not MVP]. The aggregator node is chosen at random and rewarded once the report is shipped. The rest of Step 2 then follows Option 1.
Option 2: Consortium Internal Agreement [not MVP]. Results are shared inside the consortium and aggregated into a report (for instance via round robin). The report is signed by all the Auditor parties that agree.
A report is valid if it is signed by at least 50% + 1 of the auditors. Once valid, the report is posted on-chain and serves as the metrics for that round.
On the reason for asking auditors to commit and reveal: given that we are using a majority-based mechanism, we want to ensure Auditors do not change their mind to match the majority once results are aggregated. Moreover, in Options 1 and 1b this commit-and-reveal approach allows for public verifiability even in the presence of an aggregator node.
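To make the commit-and-reveal step concrete, here is a minimal sketch assuming a SHA-256 hash commitment over a canonical JSON serialization of the observations; the serialization and the on-chain interface are placeholders:

```python
import hashlib
import json
import secrets

def commit(observations):
    """Phase 1: an auditor posts sha256(serialized results || salt) on-chain.
    `observations` must be JSON-serializable in this sketch."""
    salt = secrets.token_hex(16)
    payload = json.dumps(observations, sort_keys=True)
    digest = hashlib.sha256((payload + salt).encode()).hexdigest()
    return digest, salt  # digest goes on-chain; salt stays private until reveal

def verify_reveal(onchain_digest, observations, salt):
    """Phase 2: the aggregator (or anyone) checks a reveal against the commitment."""
    payload = json.dumps(observations, sort_keys=True)
    return hashlib.sha256((payload + salt).encode()).hexdigest() == onchain_digest
```

Because commitments land on-chain before any result is shared, anyone can re-run verify_reveal against the aggregator's report, which is exactly the public-verifiability property mentioned above.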
Step 3: Metrics
Metrics are still a work in progress, but a good example is the set of metrics that the Bedrock Project is considering.
Those metrics should be known and public.
Step 4: Retrieval SPs are ranked according to their metrics in the Report
Once the report is published, it is used for assigning points to SPs for that round (the final result is a ranking acting as a reputation system).
Option 0: SPs are ranked according to their metrics
Metrics gathered by Auditors and aggregated by the aggregator node result in a ranking of SPs for each metric. As a result, we will have a board showing each SP's quality of service according to each metric.
Option 1: All or Nothing (binary)
There are base metrics, and there is a fixed number of points given for meeting them. An SP gets 0 points if any of the metrics is not met, regardless of by how much. The global score of a miner is (TBD).
Option 2: Decreasing Score [weighted average]
The maximum number of points is granted to SPs that satisfy all the metrics. SPs that do not satisfy all the metrics are given points according to a (decreasing) scoring function (TBD). A sketch of both options is given below.
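A sketch of the two scoring rules; the metric names, thresholds, weights, and the particular decreasing function below are illustrative placeholders (the actual values are TBD):

```python
# Illustrative base metrics: higher is better for success_rate,
# lower is better for ttfb_s (time to first byte). All values are placeholders.
THRESHOLDS = {"success_rate": (0.95, "min"), "ttfb_s": (2.0, "max")}
WEIGHTS = {"success_rate": 0.7, "ttfb_s": 0.3}
MAX_POINTS = 100

def meets(metric, value):
    threshold, kind = THRESHOLDS[metric]
    return value >= threshold if kind == "min" else value <= threshold

def score_all_or_nothing(metrics):
    """Option 1: full points iff every base metric is met, otherwise 0."""
    return MAX_POINTS if all(meets(m, v) for m, v in metrics.items()) else 0

def score_decreasing(metrics):
    """Option 2: weighted average; each missed metric loses its weighted share
    (one possible decreasing function among many)."""
    return MAX_POINTS * sum(WEIGHTS[m] for m, v in metrics.items() if meets(m, v))

print(score_all_or_nothing({"success_rate": 0.99, "ttfb_s": 2.5}))  # 0
print(score_decreasing({"success_rate": 0.99, "ttfb_s": 2.5}))      # 70.0
```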
Step 5: Auditors are Scored with auditing points [WIP, not mandatory for MVP]
For each SP that is audited, all auditors agreeing with the majority are given points.
In order to disincentivize collusion, we have a fixed number of points per round. This incentivizes auditors not to share their observations with other auditor parties, since sharing that information potentially lowers the number of points they get.
Auditor Score Overview (WIP)
There is a total number of points R for each auditing round.
The total R is divided by the number M of SPs being audited, so each audited SP_j accounts for a share R_j = R/M.
For each storage provider SP_j, the share R_j is divided among the W > N/2 auditors who agreed on SP_j's score and voted accordingly (we are using majority vote). This means that each of the W auditors gets R_j/W points. See the sketch below.
Auditors whose observation does not agree with the majority, and auditors who do not sign or do not open their commitment, do not get any points.
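A sketch of this distribution rule, assuming an auditor's vote for an SP is its reported score and that auditors who did not sign or open their commitment are simply absent from the tally (all names are illustrative):

```python
from collections import Counter

def distribute_points(R, votes_per_sp, n_auditors):
    """Split the round total R across audited SPs, then split each SP's share
    R_j among the W > N/2 auditors that voted for the majority outcome.

    `votes_per_sp` maps an SP id to {auditor_id: reported_score}; auditors who
    did not sign/open their commitment do not appear and thus earn nothing.
    """
    points = Counter()
    r_j = R / len(votes_per_sp)  # equal share per audited SP
    for sp, votes in votes_per_sp.items():
        outcome, w = Counter(votes.values()).most_common(1)[0]
        if w <= n_auditors / 2:  # no strict majority: no points for this SP
            continue
        for auditor, vote in votes.items():
            if vote == outcome:  # each of the W majority auditors gets R_j/W
                points[auditor] += r_j / w
    return points

# Example: 3 auditors, 2 SPs, total R = 10 points for the round.
print(distribute_points(10, {"SP_1": {"A": 1, "B": 1, "C": 0},
                             "SP_2": {"A": 1, "B": 1, "C": 1}}, 3))
```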
Note: Irrationality of collusion. One might think that having a fixed number of points per round incentivizes Auditors to collude. Indeed, at first glance the number of points per winning auditor is maximized if exactly 50% + 1 auditors agree, and goes down if more than 50% + 1 auditors do so. This can be true for a single share R_j, but it does not hold in the grand scheme of things. Indeed, following the above, the best strategy would be for Auditors to fully share their observations and artificially arrange that in each round exactly 50% + 1 auditors agree on a certain outcome, while the remaining 50% - 1 agree on the opposite. This would maximize the points earned by each winning Auditor and give 0 to the others.
Given that all auditors would take part in this strategy, every auditor would earn the maximum amount of points for some SP_j and win 0 for some other SP_j'. We show that this strategy is equivalent to the one where all the auditors always agree.
Indeed, let’s consider the following toy example (which can be fully generalized):
Example: We have 5 Auditors. They fully coordinate so that everyone is part of the majority the same number of times and part of the minority the same number of times. In order to do so, we need to consider N = 5 shares of points R_1, ..., R_5 (otherwise this is not achievable). This means that everyone earns R_i/3 points in 3 of the 5 rounds and wins 0 in the other 2. It follows that, assuming all the R_i are equal, Total_Points = 3 · (R_i/3) = R_i.
Now, if everyone agrees with everyone else in all of the 5 rounds, everyone has Total_Points = (R_i/5) · 5 = R_i.
This means that colluding and trying to maximize the points earned across all the auditors corresponds to the strategy where everyone agrees on everything.
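The equivalence can be checked mechanically; a small script for the 5-auditor toy example, assuming all shares R_i are equal to a common R:

```python
N = 5             # auditors
R = 15.0          # per-SP share (all R_i assumed equal)
MAJ = N // 2 + 1  # 50% + 1 = 3

# Rotating-majority collusion: each auditor sits in the majority for 3 of the
# 5 shares, earning R/MAJ each time, and earns 0 for the other 2 shares.
collusion_total = MAJ * (R / MAJ)   # = R

# Full agreement: every auditor is in the majority for all 5 shares,
# earning R/N each time.
unanimity_total = N * (R / N)       # = R

assert collusion_total == unanimity_total == R  # the strategies are equivalent
```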
Idea: introducing Auditor Score weights from v0.1. Even in Protocol v0.1 we could put in place some sort of Auditor Score in the form of weights (given that we are not envisioning an actual token reward for auditors at this stage). It could work as follows:
Pro: we could have a sort of reward (via points) even in v0.1 "for free", in order to disincentivize collusion.
Cons: Bribing?
Auditor Anonymity (not for MVP)
For now, we are working under the assumption that Auditors will find their own countermeasures to the fact that SPs can identify them as auditors and behave differently with auditors than with non-auditors.
We stress that it is in the Auditors' interest to do so, given that metrics that are accurate and reflect reality are what give value to the consortium itself.