{% extends "_layout.html" %} {% block title %}How the Network Database (netDb) Works{% endblock %} {% block content %}
Updated July 2010, current as of router version 0.8
I2P's netDb is a specialized distributed database, containing just two types of data - router contact information (RouterInfos) and destination contact information (LeaseSets). Each piece of data is signed by the appropriate party and verified by anyone who uses or stores it. In addition, the data has liveliness information within it, allowing irrelevant entries to be dropped, newer entries to replace older ones, and protection against certain classes of attack.
The netDb is distributed with a simple technique called "floodfill", where a subset of all routers, called "floodfill routers", maintains the distributed database.
When an I2P router wants to contact another router, it needs to know some key pieces of data - all of which are bundled up and signed by the router into a structure called the "RouterInfo", which is distributed under the key derived from the SHA256 of the router's identity. The structure itself contains:
The following text options, while not strictly required, are expected to be present:
These values are used by other routers for basic decisions. Should we connect to this router? Should we attempt to route a tunnel through this router? The bandwidth capability flag, in particular, is used only to determine whether the router meets a minimum threshold for routing tunnels. Above the minimum threshold, the advertised bandwidth is not used or trusted anywhere in the router, except for display in the user interface and for debugging and network analysis.
Additional text options include a small number of statistics about the router's health, which are aggregated by sites such as stats.i2p for network performance analysis and debugging. These statistics were chosen to provide data crucial to the developers, such as tunnel build success rates, while balancing the need for such data with the side-effects that could result from revealing this data. Current statistics are limited to:
The data published can be seen in the router's user interface, but is not used or trusted within the router. As the network has matured, we have gradually removed most of the published statistics to improve anonymity, and we plan to remove more in future releases.
RouterInfos have no set expiration time. Each router is free to maintain its own local policy to trade off the frequency of RouterInfo lookups with memory or disk usage. In the current implementation, there are the following general policies:
RouterInfos are periodically written to disk so that they are available after a restart.
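As an illustration of the keying described above, here is a minimal sketch (not the router's actual code) of deriving the netDb store key from a serialized router identity:
<pre><code>
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class StoreKeyExample {
    /** netDb store key: the SHA256 hash of the serialized router identity (32 bytes). */
    static byte[] storeKey(byte[] routerIdentityBytes) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(routerIdentityBytes);
    }
}
</code></pre>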
The second piece of data distributed in the netDb is a "LeaseSet" - documenting a group of tunnel entry points (leases) for a particular client destination. Each of these leases specifies the tunnel's gateway router (by the hash of its identity), the tunnel ID on that router to which messages should be sent (a 4-byte number), and when that tunnel will expire. The LeaseSet itself is stored in the netDb under the key derived from the SHA256 of the destination.
In addition to these leases, the LeaseSet includes the destination itself (namely, the destination's 2048-bit ElGamal encryption key, 1024-bit DSA signing key, and certificate) and additional signing and encryption public keys. The additional encryption public key is used for end-to-end encryption of garlic messages. The additional signing public key was intended for LeaseSet revocation but is currently unused.
Lease specification
LeaseSet specification
Lease Javadoc
LeaseSet Javadoc
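The following simplified Java sketch mirrors, but does not reproduce, the real net.i2p.data classes linked above; it just shows the fields described in the preceding paragraphs. The LeaseSet's netDb store key is derived analogously to the RouterInfo's, as the SHA256 of the serialized destination.
<pre><code>
// Simplified sketch of the structures described above; the actual classes live in net.i2p.data.
class Lease {
    byte[] gatewayHash;    // SHA256 hash of the gateway router's identity
    int tunnelId;          // 4-byte tunnel ID on that gateway
    long endDate;          // when this tunnel expires (ms since epoch)
}

class LeaseSetSketch {
    byte[] destination;    // serialized destination: ElGamal encryption key, DSA signing key, certificate
    byte[] encryptionKey;  // additional public key, used for end-to-end garlic encryption
    byte[] signingKey;     // additional signing key, intended for revocation but currently unused
    Lease[] leases;        // the tunnel entry points published for this destination
    byte[] signature;      // signature by the destination's signing key, verified by anyone storing it
}
</code></pre>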
A LeaseSet for a destination used only for outgoing connections is unpublished. It is never sent for publication to a floodfill router. "Client" tunnels, such as those for web browsing and IRC clients, are unpublished.
A LeaseSet may be revoked by publishing a new LeaseSet with zero leases. Revocations must be signed by the additional signing key in the LeaseSet. Revocations are not fully implemented, and it is unclear if they have any practical use. This is the only planned use for that signing key, so it is currently unused.
In an encrypted LeaseSet, all Leases are encrypted with a separate DSA key. The leases may only be decoded, and thus the destination may only be contacted, by those with the key. There is no flag or other direct indication that the LeaseSet is encrypted. Encrypted LeaseSets are not widely used, and it is a topic for future work to research whether the user interface and implementation of encrypted LeaseSets could be improved.
All Leases (tunnels) are valid for 10 minutes; therefore, a LeaseSet expires 10 minutes after the earliest creation time of all its Leases.
There is no persistent storage of LeaseSet data since LeaseSets expire so quickly.
The netDb is decentralized; however, you do need at least one reference to a peer so that the integration process ties you in. This is accomplished by "reseeding" your router with the RouterInfo of an active peer - specifically, by retrieving their routerInfo-$hash.dat file and storing it in your netDb/ directory. Anyone can provide you with those files - you can even provide them to others by exposing your own netDb directory. To simplify the process, volunteers publish their netDb directories (or a subset) on the regular (non-i2p) network, and the URLs of these directories are hardcoded in I2P. When the router starts up for the first time, it automatically fetches from one of these URLs, selected at random.
The floodfill netDb is a simple distributed storage mechanism. The storage algorithm is simple: send the data to the closest peer that has advertised itself as a floodfill router. Then wait 10 seconds, pick another floodfill router, and ask it for the entry, verifying its proper insertion / distribution. If the verification peer doesn't reply, or doesn't have the entry, the sender repeats the process. When a peer in the floodfill netDb receives a netDb store from a peer not in the floodfill netDb, it floods the entry to other nearby floodfill peers, as described below.
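The following Java sketch illustrates that store-and-verify sequence. The FloodfillClient interface and its methods are hypothetical stand-ins for the router's actual netDb machinery, used here only to make the steps concrete.
<pre><code>
import java.util.concurrent.TimeUnit;

// Hypothetical facade over the router's netDb operations; illustration only.
interface FloodfillClient {
    byte[] closestFloodfill(byte[] key);                   // closest known floodfill to the key
    byte[] anotherFloodfill(byte[] key, byte[] exclude);   // a different floodfill near the key
    void sendStore(byte[] peer, byte[] key, byte[] entry); // send a DatabaseStoreMessage
    byte[] lookup(byte[] peer, byte[] key);                // send a lookup; null if not found
}

class StoreAndVerify {
    /** Store an entry, wait 10 seconds, then verify it via a different floodfill; repeat on failure. */
    static void store(FloodfillClient netDb, byte[] key, byte[] entry) throws InterruptedException {
        while (true) {
            byte[] target = netDb.closestFloodfill(key);
            netDb.sendStore(target, key, entry);
            TimeUnit.SECONDS.sleep(10);                     // wait before verifying
            byte[] verifier = netDb.anotherFloodfill(key, target);
            if (netDb.lookup(verifier, key) != null)
                return;                                     // entry found: store verified
            // no reply, or entry missing: repeat the process
        }
    }
}
</code></pre>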
Determining who is part of the floodfill netDb is trivial - it is exposed in each router's published routerInfo as a capability.
Floodfills have no central authority and do not form a "consensus" - they only implement a simple DHT overlay.
Unlike Tor, where the directory servers are hardcoded and trusted, and operated by known entities, the members of the I2P floodfill peer set need not be trusted, and change over time.
To increase reliability of the netDb, and minimize the impact of netDb traffic on a router, floodfill is automatically enabled only on routers that are configured with high bandwidth limits. Routers with high bandwidth limits (which must be manually configured, as the default is much lower) are presumed to be on lower-latency connections, and are more likely to be available 24/7. The current minimum share bandwidth for a floodfill router is 128 KBytes/sec.
In addition, a router must pass several additional tests for health (outbound message queue time, job lag, etc.) before floodfill operation is automatically enabled.
With the current rules for automatic opt-in, approximately 6% of the routers in the network are floodfill routers.
While some peers are manually configured to be floodfill, others are simply high-bandwidth routers who automatically volunteer when the number of floodfill peers drops below a threshold. This prevents any long-term network damage from losing most or all floodfills to an attack. In turn, these peers will un-floodfill themselves when there are too many floodfills outstanding.
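A minimal sketch of that opt-in decision follows, assuming illustrative names and thresholds; the router's actual checks and limits may differ.
<pre><code>
class FloodfillOptIn {
    static final int MIN_SHARE_BANDWIDTH_KBPS = 128;  // minimum share bandwidth noted above

    /** Illustrative only: volunteer as floodfill when bandwidth, health, and need all line up. */
    static boolean shouldVolunteer(int shareBandwidthKBps, boolean passesHealthChecks,
                                   int knownFloodfillCount, int floodfillThreshold) {
        if (shareBandwidthKBps < MIN_SHARE_BANDWIDTH_KBPS)
            return false;                                  // too little bandwidth to be a floodfill
        if (!passesHealthChecks)
            return false;                                  // e.g. outbound message queue time, job lag
        return knownFloodfillCount < floodfillThreshold;   // only volunteer when floodfills are scarce
    }
}
</code></pre>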
The only services a floodfill router provides beyond those of a non-floodfill router are accepting netDb stores and responding to netDb queries. Since floodfills are generally high-bandwidth, they are more likely to participate in a high number of tunnels (i.e. be a "relay" for others), but this is not directly related to their distributed database services.
The netDb uses a simple Kademlia-style XOR metric to determine closeness. The SHA256 Hash of the key being looked up or stored is XOR-ed with the hash of the router in question to determine closeness (there is an additional daily keyspace rotation to increase the costs of Sybil attacks, as explained below).
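A short sketch of that XOR closeness test (omitting the daily keyspace rotation, which is illustrated further below):
<pre><code>
import java.math.BigInteger;

class XorDistance {
    /** XOR the 32-byte key with the router hash; a smaller result means "closer". */
    static BigInteger distance(byte[] key, byte[] routerHash) {
        byte[] xor = new byte[key.length];
        for (int i = 0; i < key.length; i++)
            xor[i] = (byte) (key[i] ^ routerHash[i]);
        return new BigInteger(1, xor);   // interpret as an unsigned 256-bit integer
    }

    /** True if router a is closer to the key than router b. */
    static boolean isCloser(byte[] a, byte[] b, byte[] key) {
        return distance(key, a).compareTo(distance(key, b)) < 0;
    }
}
</code></pre>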
I2NP DatabaseStoreMessages containing the local RouterInfo are exchanged with peers as a part of the initialization of an NTCP or SSU transport connection.
I2NP DatabaseStoreMessages containing the local LeaseSet are periodically exchanged with peers by bundling them in a garlic message along with normal traffic from the related Destination. This allows an initial response, and later responses, to be sent to an appropriate Lease, without requiring any LeaseSet lookups, or requiring the communicating Destinations to have published LeaseSets at all.
A router publishes its own RouterInfo by directly connecting to a floodfill router and sending it an I2NP DatabaseStoreMessage with a nonzero Reply Token. The message is not end-to-end garlic encrypted, as this is a direct connection, so there are no intervening routers (and no need to hide this data anyway). The floodfill router replies with an I2NP DeliveryStatusMessage, with the Message ID set to the value of the Reply Token.
Storage of LeaseSets is much more sensitive than for RouterInfos, as a router must take care that the LeaseSet cannot be associated with the router.
A router publishes a local LeaseSet by sending an I2NP DatabaseStoreMessage with a nonzero Reply Token over an outbound client tunnel for that Destination. The message is end-to-end garlic encrypted using the Destination's Session Key Manager, to hide the message from the tunnel's outbound endpoint. The floodfill router replies with an I2NP DeliveryStatusMessage, with the Message ID set to the value of the Reply Token. This message is sent back to one of the client's inbound tunnels.
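In both cases, the store and its acknowledgement are correlated through the Reply Token. A hedged sketch of that correlation, with sendDatabaseStore and waitForDeliveryStatus as hypothetical stand-ins for the actual message plumbing:
<pre><code>
import java.util.concurrent.ThreadLocalRandom;

class PublishExample {
    /** Illustrative only: a store is acknowledged by a DeliveryStatusMessage echoing its Reply Token. */
    static boolean publish(byte[] key, byte[] entry) {
        long replyToken = ThreadLocalRandom.current().nextLong(1, Long.MAX_VALUE); // nonzero token
        sendDatabaseStore(key, entry, replyToken);
        Long ackMessageId = waitForDeliveryStatus();        // Message ID from the DeliveryStatusMessage
        return ackMessageId != null && ackMessageId == replyToken;  // must echo the Reply Token
    }

    // Hypothetical stand-ins for sending the I2NP store and awaiting the status reply.
    static void sendDatabaseStore(byte[] key, byte[] entry, long replyToken) { }
    static Long waitForDeliveryStatus() { return null; }
}
</code></pre>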
After a floodfill router receives a DatabaseStoreMessage containing a valid RouterInfo or LeaseSet which is newer than that previously stored in its local NetDb, it "floods" it. To flood a NetDb entry, it looks up the 7 floodfill routers closest to the key of the NetDb entry. (The key is the SHA256 Hash of the RouterIdentity or Destination)
It then directly connects to each of the 7 peers and sends it an I2NP DatabaseStoreMessage with a zero Reply Token. The message is not end-to-end garlic encrypted, as this is a direct connection, so there are no intervening routers (and no need to hide this data anyway). The other routers do not reply or re-flood, as the Reply Token is zero.
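Reusing the XorDistance helper from the sketch above, the selection of flood targets might look roughly like this; sendDirectStore is a hypothetical helper for the direct DatabaseStoreMessage.
<pre><code>
import java.util.Arrays;
import java.util.Comparator;

class Flooder {
    static final int FLOOD_REDUNDANCY = 7;   // flood to the 7 floodfills closest to the key

    /** Flood a newly received, newer entry to the closest floodfill routers. */
    static void flood(byte[] key, byte[] entry, byte[][] knownFloodfills) {
        byte[][] byCloseness = knownFloodfills.clone();
        Arrays.sort(byCloseness, Comparator.comparing((byte[] peer) -> XorDistance.distance(key, peer)));
        int targets = Math.min(FLOOD_REDUNDANCY, byCloseness.length);
        for (int i = 0; i < targets; i++)
            sendDirectStore(byCloseness[i], key, entry);  // zero Reply Token: no reply, no re-flood
    }

    // Hypothetical stand-in: connect directly and send an I2NP DatabaseStoreMessage.
    static void sendDirectStore(byte[] peer, byte[] key, byte[] entry) { }
}
</code></pre>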
The I2NP DatabaseLookupMessage is used to request a netdb entry from a floodfill router. Lookups are sent out one of the router's outbound exploratory tunnels. The replies are specified to return via one of the router's inbound exploratory tunnels.
Lookups are generally sent to the two "good" floodfill routers closest to the requested key, in parallel.
If the key is found locally by the floodfill router, it responds with an I2NP DatabaseStoreMessage. If the key is not found locally by the floodfill router, it responds with an I2NP DatabaseSearchReplyMessage containing a list of other floodfill routers close to the key.
Lookups are not encrypted and thus are vulnerable to snooping by the outbound endpoint (OBEP) of the client tunnel.
As the requesting router does not reveal itself, there is no recipient public key for the floodfill router to encrypt the reply with. Therefore, the reply is exposed to the inbound gateway (IBGW) of the inbound exploratory tunnel. An appropriate method of encrypting the reply is a topic for future work.
(Reference: Hashing it out in Public Section 2.3 for terms below in italics)
Due to the relatively small size of the network, the flooding redundancy of 8x, and a lookup redundancy of 2x, lookups are currently O(1) rather than O(log n) -- a router is highly likely to know a floodfill router close enough to the key to get the answer on the first try. Neither recursive nor iterative routing for lookups is implemented.
Node IDs are verifiable in that we use the router hash directly as both the node ID and the Kademlia key. Given the current size of the network, a router has detailed knowledge of the neighborhood of the destination ID space.
Queries are sent through multiple routes simultaneously to reduce the chance of query failure.
After network growth of 5x - 10x, there will be a significant chance of lookup failure due to the O(1) lookup strategy, and implementation of an iterative lookup strategy will be required. See below for more information.
To verify that a RouterInfo store was successful, a router simply waits about 10 seconds, then sends a lookup to another floodfill router close to the key (but not the one the store was sent to). Lookups are sent out one of the router's outbound exploratory tunnels. Lookups are end-to-end garlic encrypted to prevent snooping by the outbound endpoint (OBEP).
To verify that a LeaseSet store was successful, a router simply waits about 10 seconds, then sends a lookup to another floodfill router close to the key (but not the one the store was sent to). Lookups are sent out one of the outbound client tunnels for the destination of the LeaseSet being verified. To prevent snooping by the OBEP of the outbound tunnel, lookups are end-to-end garlic encrypted. The replies are specified to return via one of the client's inbound tunnels.
As for regular lookups, the reply is unencrypted, thus exposing the reply to the inbound gateway (IBGW) of the reply tunnel, and an appropriate method of encrypting the reply is a topic for future work. As the IBGW for the reply is one of the gateways published in the LeaseSet, the exposure is minimal.
Exploration is a special form of netdb lookup, where a router attempts to learn about new routers. It does this by sending a floodfill router an I2NP DatabaseLookupMessage, looking for a random key. As this lookup will fail, the floodfill would normally respond with an I2NP DatabaseSearchReplyMessage containing hashes of floodfill routers close to the key. This would not be helpful, as the requesting router probably already knows those floodfills, and it would be impractical to add ALL floodfill routers to the "don't include" field of the lookup. For an exploration query, the requesting router instead adds a router hash of all zeros to the "don't include" field of the DatabaseLookupMessage. The floodfill will then respond only with non-floodfill routers close to the requested key.
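A small sketch of how such an exploration query might be assembled; sendLookup is a hypothetical stand-in for building and sending the DatabaseLookupMessage.
<pre><code>
import java.security.SecureRandom;

class Exploration {
    /** Exploration: look up a random key, excluding an all-zeros hash to request non-floodfill peers. */
    static void explore() {
        byte[] randomKey = new byte[32];
        new SecureRandom().nextBytes(randomKey);   // this lookup is expected to fail...

        byte[][] dontInclude = { new byte[32] };   // all-zeros hash: "return non-floodfill routers"
        sendLookup(randomKey, dontInclude);        // ...so the reply lists previously unknown routers
    }

    // Hypothetical stand-in for constructing and sending an I2NP DatabaseLookupMessage.
    static void sendLookup(byte[] key, byte[][] dontInclude) { }
}
</code></pre>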
Destinations may be hosted on multiple routers simultaneously, by using the same private and public keys (traditionally named eepPriv.dat files). As both instances will periodically publish their signed LeaseSets to the floodfill peers, the most recently published LeaseSet will be returned to a peer requesting a database lookup. As LeaseSets have (at most) a 10 minute lifetime, should a particular instance go down, the outage will be 10 minutes at most, and generally much less than that. The multihoming function has been verified and is in use by several services on the network.
Also discussed on the threat model page.
A hostile user may attempt to harm the network by creating one or more floodfill routers and crafting them to offer bad, slow, or no responses. Some scenarios are discussed below.
There are currently almost 100 floodfill routers in the network. Most of the following attacks will become more difficult, or have less impact, as the network size and number of floodfill routers increase.
Via flooding, all netdb entries are stored on the 8 floodfill routers closest to the key.
All netdb entries are signed by their creators, so no router may forge a RouterInfo or LeaseSet.
Each router maintains an expanded set of statistics in the peer profile for each floodfill router, covering various quality metrics for that peer. The set includes:
Each time a router needs to make a determination on which floodfill router is closest to a key, it uses these metrics to determine which floodfill routers are "good". The methods, and thresholds, used to determine "goodness" are relatively new, and are subject to further analysis and improvement. While a completely unresponsive router will quickly be identified and avoided, routers that are only sometimes malicious may be much harder to deal with.
An attacker may mount a Sybil attack by creating a large number of floodfill routers spread throughout the keyspace.
(In a related example, a researcher recently created a large number of Tor relays.) If successful, this could be an effective DOS attack on the entire network.
If the floodfills are not sufficiently misbehaving to be marked as "bad" using the peer profile metrics described above, this is a difficult scenario to handle. Tor's response can be much more nimble in the relay case, as the suspicious relays can be manually removed from the consensus. Some possible responses in the I2P case, none of them satisfactory:
This attack becomes more difficult as the network size grows.
An attacker may mount a Sybil attack by creating a small number (8-15) of floodfill routers clustered closely in the keyspace, and distribute the RouterInfos for these routers widely. Then, all lookups and stores for a key in that keyspace would be directed to one of the attacker's routers. If successful, this could be an effective DOS attack on a particular eepsite, for example.
As the keyspace is indexed by the cryptographic (SHA256) Hash of the key, an attacker must use a brute-force method to repeatedly generate router hashes until he has enough that are sufficiently close to the key. The amount of computational power required for this, which is dependent on network size, is unknown.
As a partial defense against this attack, the algorithm used to determine Kademlia "closeness" varies over time. Rather than using the Hash of the key (i.e. H(k)) to determine closeness, we use the Hash of the key appended with the current date string, i.e. H(k + YYYYMMDD). This is done by a function called the "routing key generator", which transforms the original key into a "routing key". In other words, the entire netdb keyspace "rotates" every day at UTC midnight. Any partial-keyspace attack would have to be regenerated every day, as after the rotation, the attacking routers would no longer be close to the target key, or to each other.
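A sketch of that routing key generator, assuming the date string is the UTC date in YYYYMMDD form appended to the key before hashing (the real implementation lives in the router's data classes):
<pre><code>
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

class RoutingKeyGenerator {
    /** Routing key = H(key + YYYYMMDD), so the keyspace rotates at UTC midnight. */
    static byte[] routingKey(byte[] key) throws NoSuchAlgorithmException {
        String today = LocalDate.now(ZoneOffset.UTC)
                                .format(DateTimeFormatter.BASIC_ISO_DATE);  // e.g. "20100715"
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(key);
        sha.update(today.getBytes(StandardCharsets.US_ASCII));
        return sha.digest();                       // the key used for closeness comparisons today
    }
}
</code></pre>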
This attack becomes more difficult as the network size grows.
One consequence of daily keyspace rotation is that the distributed network database may become unreliable for a few minutes after the rotation -- lookups will fail because the new "closest" router has not received a store yet. The extent of the issue, and methods for mitigation (for example netdb "handoffs" at midnight) are a topic for further study.
An attacker could attempt to boot new routers into an isolated or majority-controlled network by taking over a reseed website, or tricking the developers into adding his reseed website to the hardcoded list in the router.
Several defenses are possible, and most of these are planned:
See also lookup (Reference: Hashing it out in Public Section 2.3 for terms below in italics)
Similar to a bootstrap attack, an attacker using a floodfill router could attempt to "steer" peers to a subset of routers controlled by him by returning their references.
This is unlikely to work via exploration, because exploration is a low-frequency task. Routers acquire a majority of their peer references through normal tunnel building activity. Exploration results are generally limited to a few router hashes, and each exploration query is directed to a random floodfill router.
For floodfill router references returned in an I2NP DatabaseSearchReplyMessage response to a lookup, these references are not immediately followed. The requesting router does not trust that the references are actually closer to the key (i.e. they are not verifiably correct), and the references are not immediately queried. In other words, the Kademlia lookup is not iterative. This makes the query capture attack described in Hashing it out in Public much less likely, until iterative lookups are implemented.
(Reference: Hashing it out in Public Section 3)
This doesn't have much to do with floodfill, but see the peer selection page for a discussion of the vulnerabilities of peer selection for tunnels.
Moved to the netdb discussion page.
After network growth of 5x - 10x, there will be a significant chance of lookup failure due to the O(1) lookup strategy, and implementation of an iterative lookup strategy will be required.
End-to-end encryption of additional netDb lookups and responses.
{% endblock %}