Strategy Summary

We hardcode an IPNS key into a kubo release and configure the nodes that upgrade to that release to request this key every day. Put simply, the record published under the IPNS key includes the switch date and time. This way, everyone gets to “hear” about the switch date, as long as they fetch and check the record daily. On the switch date, every node follows its migration plan according to its role in the network (e.g., DHT server vs. DHT client); more details below.
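As a rough illustration, the daily check could be as simple as periodically resolving the hardcoded name through kubo's HTTP RPC API. The sketch below is a minimal, standalone version of that loop; the IPNS name is a placeholder for the real key that would ship with the release, and the endpoint assumes a local kubo daemon on the default port.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Placeholder for the IPNS key that would be hardcoded into the release.
const migrationIPNSName = "k51qzi5uqu5d..." // hypothetical

// resolveMigrationRecord asks the local kubo daemon to resolve the IPNS
// name to the path it currently points at.
func resolveMigrationRecord() (string, error) {
	resp, err := http.Post(
		"http://127.0.0.1:5001/api/v0/name/resolve?arg="+migrationIPNSName,
		"application/json", nil)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out struct{ Path string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Path, nil
}

func main() {
	// Check the record once a day, as described above.
	for {
		if path, err := resolveMigrationRecord(); err == nil {
			fmt.Println("migration record now points at:", path)
		}
		time.Sleep(24 * time.Hour)
	}
}
```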

Migration components, parameters and key players

IPNS Key: the IPNS key plays the role of the main switch that notifies everyone, by providing a date and time, that the migration is happening. It is the component through which we can define migration strategies and options for the different players. The structure of the record published under the IPNS key is still being worked on as of April 4, 2023, but it is expected to include, at a minimum, the switch date and time.

The same IPNS key will be used to communicate when support for the old DHT will be dropped.
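Purely as illustration, since the record structure was still under design at the time of writing, a payload along these lines would carry both the switch date and the later drop date for old-DHT support. All field names and sample dates below are assumptions, not decided values.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// MigrationRecord is a hypothetical shape for the payload published under
// the migration IPNS key; the real structure was not yet finalized.
type MigrationRecord struct {
	SwitchDate time.Time `json:"switch_date"`  // when the network moves to the new DHT
	OldDHTDrop time.Time `json:"old_dht_drop"` // when old-DHT support ends (hypothetical field)
}

func main() {
	// Sample record with placeholder dates.
	raw := []byte(`{"switch_date":"2023-09-01T00:00:00Z","old_dht_drop":"2023-09-08T00:00:00Z"}`)
	var rec MigrationRecord
	if err := json.Unmarshal(raw, &rec); err != nil {
		panic(err)
	}
	fmt.Printf("switch at %s, old DHT dropped at %s\n", rec.SwitchDate, rec.OldDHTDrop)
}
```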

Switch date: the Switch Date will be set by the IPFS team and communicated to the community for approval, once X% of network nodes have upgraded to the kubo release that includes the double-hash DHT.

Transition period: the period of time during which peers in the network have the option to use both DHTs. The suggested duration is 1 week, chosen with the objective of keeping network and node overhead to a minimum; the final duration will be decided and communicated soon. The end of the period can be either hard-coded, i.e., a given date, or programmatically calculated, e.g., based on the number of upgraded nodes seen in the node's routing table (see the sketch below).
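The following Go sketch shows one way the "programmatically calculated" option could look: the transition is considered complete once a threshold fraction of routing-table peers advertise the new DHT protocol. The new protocol ID and the threshold are assumptions; "/ipfs/kad/1.0.0" is kubo's current DHT protocol ID.

```go
package main

import "fmt"

const newDHTProtocol = "/ipfs/kad/2.0.0" // hypothetical new protocol ID

type peerInfo struct {
	protocols []string
}

func speaksNewDHT(p peerInfo) bool {
	for _, proto := range p.protocols {
		if proto == newDHTProtocol {
			return true
		}
	}
	return false
}

// transitionComplete reports whether the fraction of upgraded peers in the
// routing table has reached the (assumed) threshold.
func transitionComplete(table []peerInfo, threshold float64) bool {
	if len(table) == 0 {
		return false
	}
	upgraded := 0
	for _, p := range table {
		if speaksNewDHT(p) {
			upgraded++
		}
	}
	return float64(upgraded)/float64(len(table)) >= threshold
}

func main() {
	table := []peerInfo{
		{protocols: []string{"/ipfs/kad/1.0.0"}},
		{protocols: []string{"/ipfs/kad/1.0.0", newDHTProtocol}},
		{protocols: []string{newDHTProtocol}},
	}
	fmt.Println(transitionComplete(table, 0.5)) // true: 2 of 3 peers upgraded
}
```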

Bootstrapper nodes: for the duration of the transition period, these nodes will keep returning peers that run both the old and the new DHT protocol. The intention is to support peers operating on either the old or the new DHT.

DHT Clients: we will consider different options to make sure that content remains discoverable, regardless of whether its provider has migrated or not. DHT Clients will be configured so that they find content while keeping overhead to a minimum. There are different options here, which will be considered closer to the date. One of them is to query the new DHT first, which indirectly adds delay when content is provided to the old DHT only, and thereby puts pressure on Content Providers to upgrade: if content is not available on the new DHT, it is still discoverable on the old DHT, but only after the request to the new DHT has timed out, hence with some non-negligible delay. Furthermore, clients are configured to fetch the IPNS Key as their first action when coming online. This ensures that clients that have been offline for longer than the transition period itself are not left operating on the old DHT (where content won't be discoverable).
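A minimal sketch of that "new DHT first" lookup order, assuming a generic Router interface and a placeholder timeout (the real values and interfaces are not decided here):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Router abstracts provider lookup over either DHT (an assumption for
// illustration, not kubo's actual interface).
type Router interface {
	FindProviders(ctx context.Context, cid string) ([]string, error)
}

// findProviders queries the new DHT first with a bounded timeout, so that
// un-migrated content only pays that delay before the old-DHT fallback.
func findProviders(ctx context.Context, newDHT, oldDHT Router, cid string) ([]string, error) {
	newCtx, cancel := context.WithTimeout(ctx, 10*time.Second) // assumed timeout
	defer cancel()
	if provs, err := newDHT.FindProviders(newCtx, cid); err == nil && len(provs) > 0 {
		return provs, nil
	}
	// Content not (yet) on the new DHT: fall back to the old one.
	return oldDHT.FindProviders(ctx, cid)
}

// stubRouter is a trivial in-memory Router for the usage example.
type stubRouter struct{ provs []string }

func (s stubRouter) FindProviders(ctx context.Context, cid string) ([]string, error) {
	if len(s.provs) == 0 {
		return nil, errors.New("not found")
	}
	return s.provs, nil
}

func main() {
	newDHT := stubRouter{}                         // content not migrated yet
	oldDHT := stubRouter{provs: []string{"peerA"}} // still on the old DHT
	provs, err := findProviders(context.Background(), newDHT, oldDHT, "bafy...")
	fmt.Println(provs, err)
}
```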

DHT Servers: DHT Servers (i.e., the nodes that serve requests for provider records) run both DHTs for some period of time, until they deprecate the old DHT and continue with the new one only. This lets them store and serve records that have been published to either the old or the new DHT. As with clients, DHT Servers are configured to fetch the IPNS Key as their first action when coming online, which ensures that servers that have been offline for longer than the transition period itself are not left operating on the old DHT.
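One way this dual-DHT serving window could look, sketched under assumed names: records are kept in separate per-protocol stores, and writes arriving on the old DHT are rejected once it has been deprecated. "/ipfs/kad/1.0.0" is kubo's current DHT protocol ID; the deprecation date would come from the migration IPNS record.

```go
package main

import (
	"fmt"
	"time"
)

type recordStore map[string][]string // key -> provider peer IDs

type dualServer struct {
	oldRecords, newRecords recordStore
	oldDHTDeprecated       time.Time // taken from the migration IPNS record
}

// handlePut stores an incoming provider record in the store matching the
// protocol it arrived on.
func (d *dualServer) handlePut(protocol, key, provider string, now time.Time) error {
	if protocol == "/ipfs/kad/1.0.0" {
		if now.After(d.oldDHTDeprecated) {
			return fmt.Errorf("old DHT deprecated since %s", d.oldDHTDeprecated)
		}
		d.oldRecords[key] = append(d.oldRecords[key], provider)
		return nil
	}
	d.newRecords[key] = append(d.newRecords[key], provider)
	return nil
}

func main() {
	srv := &dualServer{
		oldRecords:       recordStore{},
		newRecords:       recordStore{},
		oldDHTDeprecated: time.Now().Add(7 * 24 * time.Hour), // assumed 1-week window
	}
	fmt.Println(srv.handlePut("/ipfs/kad/1.0.0", "hash-of-cid", "peerA", time.Now()))
}
```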

Content Providers: by default they switch to the new DHT, but can manually opt to stay on the old one as well. They have the option to publish provider records to: i) the new DHT only, ii) the old DHT only, or iii) both DHTs. Operating on both DHTs means the extra load of publishing the same content twice; it also adds load on DHT Servers running both DHTs, which will have to store and serve twice the number of records.
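Those three publish options could be modeled as a simple mode switch, as in the sketch below; the PublishMode type, the dht handle, and its Provide method are assumptions for illustration, not kubo's actual API.

```go
package main

import (
	"context"
	"fmt"
)

type PublishMode int

const (
	NewOnly PublishMode = iota // default after the switch
	OldOnly
	Both // doubles publish load, and storage load on dual-DHT servers
)

// dht stands in for a handle to one of the two DHTs.
type dht struct{ name string }

func (d dht) Provide(ctx context.Context, cid string) error {
	fmt.Printf("provided %s on %s DHT\n", cid, d.name)
	return nil
}

// provide publishes the record to the DHT(s) selected by mode.
func provide(ctx context.Context, mode PublishMode, oldDHT, newDHT dht, cid string) error {
	if mode == NewOnly || mode == Both {
		if err := newDHT.Provide(ctx, cid); err != nil {
			return err
		}
	}
	if mode == OldOnly || mode == Both {
		return oldDHT.Provide(ctx, cid)
	}
	return nil
}

func main() {
	_ = provide(context.Background(), Both, dht{"old"}, dht{"new"}, "bafy...")
}
```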

Timeline & Sequence of Events

Pre-release

Release