Timelines
- Early January: Bacalhau’s LDN DataCap request should be approved, and our FIL+ incentive structure will be confirmed.
- End of January: Bacalhau’s integration with Spade should be complete, and we can begin offering FIL+ incentives to Storage Providers.
- TBD: Sandbox environment and demo of an SP running a Compute Provider node
High Level Value Proposition
You can now earn DataCap by running Bacalhau jobs: your compute node bids on any workload approved by the Bacalhau team. Our integration with Spade (formerly Evergreen) means that storage deals made from the outputs of moderated jobs will earn DataCap, increasing your chance of earning FIL block rewards for storing the results of the Bacalhau jobs you run.
Please see the example flow here for more detail.
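To make that flow concrete, here is a minimal sketch in Python. Every function and field name below is hypothetical; none of this is a real Bacalhau or Spade API. It only names the steps: run an approved job against the unsealed copy, store the output via Spade as a verified deal, and receive DataCap for it.

```python
# Hypothetical sketch of the incentive flow above. These names exist only to
# make the steps concrete; they are not real Bacalhau or Spade APIs.
from dataclasses import dataclass


@dataclass
class StorageDeal:
    result_cid: str
    datacap_awarded_tib: float  # DataCap granted for the verified deal


def execute_on_unsealed_copy(workload: str) -> str:
    """Stand-in for the SP's compute node running an approved Bacalhau job."""
    return f"bafy-result-of-{workload}"  # placeholder result CID


def spade_store(result_cid: str) -> StorageDeal:
    """Stand-in for storing the job output via Spade (formerly Evergreen)."""
    return StorageDeal(result_cid=result_cid, datacap_awarded_tib=0.01)


def run_incentivized_job(workload: str) -> float:
    result_cid = execute_on_unsealed_copy(workload)  # 1. run the moderated job
    deal = spade_store(result_cid)                   # 2. store output as a verified deal
    return deal.datacap_awarded_tib                  # 3. earn DataCap, improving block-reward odds


print(run_incentivized_job("approved-workload"))
```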
Compute Provider Profitability Calculator
“If you store this much data, you will have to keep this much unsealed, you should expect to run this many jobs, and you will earn this much in FIL+.”
Inputs: amount of verified data stored
Outputs: amount of data to keep unsealed, number of jobs to be run, expected FIL+ DataCap for result datasets
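A hedged sketch of what the calculator could compute, assuming simple linear relationships. The default ratios (unsealed fraction, jobs per unsealed TiB, DataCap per job) are placeholder assumptions, not confirmed values.

```python
# Minimal sketch of the profitability calculator. All default ratios below
# are illustrative assumptions and would need to be replaced with real figures.

def profitability(verified_data_tib: float,
                  unsealed_fraction: float = 0.10,
                  jobs_per_unsealed_tib: float = 5.0,
                  datacap_per_job_tib: float = 0.01) -> dict:
    """Estimate what an SP must keep unsealed and what it can expect to earn.

    verified_data_tib      -- total verified (FIL+) data stored, in TiB
    unsealed_fraction      -- share of the dataset kept as a hot/unsealed copy
    jobs_per_unsealed_tib  -- expected Bacalhau jobs per TiB of unsealed data
    datacap_per_job_tib    -- DataCap (TiB) earned per job's result dataset
    """
    unsealed_tib = verified_data_tib * unsealed_fraction
    expected_jobs = unsealed_tib * jobs_per_unsealed_tib
    expected_datacap_tib = expected_jobs * datacap_per_job_tib
    return {
        "unsealed_copy_tib": unsealed_tib,
        "expected_jobs": expected_jobs,
        "expected_datacap_tib": expected_datacap_tib,
    }


# Example: an SP storing 1 PiB (1024 TiB) of verified data.
print(profitability(1024.0))
```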
Open Questions
- Which subset of the customer's entire sealed dataset (typically multi-PB) needs to be maintained as a hot/unsealed copy?
- How could we incentivize SPs to maintain an unsealed copy of their datasets for use with Compute jobs? Are FIL+ rewards enough?
- What resources should they make available for executing against the unsealed copy? E.g. is 1 GB of memory and 1 vCPU enough?
- How many times would they need to run the job in order to earn the DataCap for the resulting dataset? E.g. is running it once enough?
Alternative Incentives