We decided to build the SP Retrieval Checker as the first Zinnia module and the first module paying rewards to Station operators. You can find the design discussions in Station Module: SP Retrieval Checker (Spark).

The milestones below outline the engineering work. In parallel with the technical work, we need to answer the business & product questions, especially the most important one: Who is going to fund the FIL pool for paying out rewards?

We need to start working on rewards early, in parallel with the technical work; see M3 Oct 5th: Rewards Alpha.

M1 June 13th: Reward-less Walking Skeleton

[19 days = 5 weeks of work]

Build a walking skeleton covering several functional areas. Implement as little functionality as possible while still delivering a meaningful system.

  1. Spark API - job scheduling (web2-style) [5 days of work]
  2. A Zinnia module to perform the retrieval checks; a rough end-to-end sketch follows this list. [5 days of work]
  3. Improve Zinnia DX for building Station modules [6 days]
  4. Spark API - Ingester [2 days]
  5. DevOps & Monitoring [1 day]
  6. Security [already covered]

    The trouble: Station installations are permissionless and anonymous. Anybody can run a Station, and it’s easy to run thousands to millions of Station instances concurrently. Because the code of Station Modules is open source, attackers can inspect which HTTP APIs we are calling and call them in an automated way. It’s easy to flood our backend services.
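
To make the skeleton concrete, here is a minimal sketch of how the pieces above could fit together, written as a TypeScript-flavoured Zinnia module. The Spark API base URL, the `/retrievals` and `/measurements` endpoints, the response shapes, and the `Zinnia.activity` / `Zinnia.jobCompleted` hooks are all assumptions for illustration, not the final interface.

```ts
// Assumed shape of the Zinnia runtime global -- for illustration only.
declare const Zinnia: {
  activity: { info(message: string): void };
  jobCompleted(): void;
};

// Hypothetical Spark API base URL and endpoint names -- placeholders only.
const SPARK_API = "https://api.filspark.example";

interface RetrievalTask {
  id: string;
  cid: string; // content to retrieve and check
}

async function runOneCheck(): Promise<void> {
  // 1. Ask the Spark API (job scheduling, item 1) for a retrieval task.
  const res = await fetch(`${SPARK_API}/retrievals`, { method: "POST" });
  const task: RetrievalTask = await res.json();

  // 2. Perform the retrieval check (item 2). In M1 this is a plain HTTP
  //    request to a public IPFS gateway; M2 replaces it with Lassie.
  const started = Date.now();
  let statusCode = 0;
  try {
    const retrieval = await fetch(`https://ipfs.io/ipfs/${task.cid}`);
    statusCode = retrieval.status;
    await retrieval.arrayBuffer(); // drain the body to measure the full transfer
  } catch {
    statusCode = -1; // network-level failure
  }

  // 3. Report the measurement to the Spark API ingester (item 4).
  await fetch(`${SPARK_API}/measurements`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      taskId: task.id,
      statusCode,
      durationMs: Date.now() - started,
    }),
  });

  // Surface progress to the Station UI via the (assumed) Zinnia hooks.
  Zinnia.activity.info(`Checked ${task.cid}`);
  Zinnia.jobCompleted();
}

// Keep checking in a loop, pausing between iterations.
while (true) {
  await runOneCheck();
  await new Promise((resolve) => setTimeout(resolve, 60_000));
}
```

The point of the walking skeleton is that every hop in this loop exists and is exercised end to end, even if each individual piece is trivial.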

M2 Jun 30th (+/- 1 week): Lassie Retrievals

[12 days + 6 days for unknown unknowns = 3-5 weeks]

Replace the code making HTTP requests to the IPFS Gateway with a retrieval client like Lassie.

Important: Retrieval requests from this module should be indistinguishable from “legit” requests made by other actors in the network (e.g. Saturn). Otherwise, SPs could prioritise checker requests over regular traffic.
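
One possible shape for this change, assuming the module talks to a Lassie instance running as a local HTTP daemon (the port and the exact request options are assumptions here): instead of hitting a public gateway, the module asks Lassie to fetch the CID as a CAR, and Lassie retrieves it from storage providers over the same transfer protocols other retrieval clients use.

```ts
// Sketch: swap the gateway fetch from M1 for a request to a local Lassie
// HTTP daemon. The port (62156) is an assumption; Lassie's HTTP interface
// serves CAR responses for GET /ipfs/<cid>.
async function retrieveViaLassie(
  cid: string,
): Promise<{ statusCode: number; bytes: number; durationMs: number }> {
  const started = Date.now();
  let statusCode = 0;
  let bytes = 0;
  try {
    const res = await fetch(`http://127.0.0.1:62156/ipfs/${cid}`, {
      headers: { accept: "application/vnd.ipld.car" },
    });
    statusCode = res.status;
    const car = await res.arrayBuffer(); // drain to measure the full transfer
    bytes = car.byteLength;
  } catch {
    statusCode = -1; // daemon unreachable or retrieval failed
  }
  return { statusCode, bytes, durationMs: Date.now() - started };
}
```

Whether this satisfies the “indistinguishable from legit requests” requirement still needs verification, e.g. the module must not add any identifying headers or request parameters of its own.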