From 8b9ee4dfd743a93eff67712607dba32e807d1746 Mon Sep 17 00:00:00 2001
From: jrandom
Date: Thu, 17 Feb 2005 00:48:18 +0000
Subject: [PATCH] updated to reflect what was implemented

---
 router/doc/tunnel-alt.html | 119 ++++++++++++++++++++-----------------
 1 file changed, 65 insertions(+), 54 deletions(-)

diff --git a/router/doc/tunnel-alt.html b/router/doc/tunnel-alt.html
index 31b7808092..4708765639 100644
--- a/router/doc/tunnel-alt.html
+++ b/router/doc/tunnel-alt.html
@@ -1,4 +1,4 @@
-$Id: tunnel-alt.html,v 1.5 2005/01/19 18:13:10 jrandom Exp $
+$Id: tunnel-alt.html,v 1.6 2005/01/25 00:46:22 jrandom Exp $
 1) Tunnel overview
 2) Tunnel operation
@@ -91,8 +91,8 @@ requested.

tunnel ID to listen for messages with and what tunnel ID they should be forwarded on as to the next hop, and each hop chooses the tunnel ID which they receive messages on. Tunnels themselves are short lived (10 minutes at the
-moment), but depending upon the tunnel's purpose, and though subsequent tunnels
-may be built using the same sequence of peers, each hop's tunnel ID will change.

+moment), and even if subsequent tunnels are built using the same sequence of
+peers, each hop's tunnel ID will change.
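
To make the forwarding rule above concrete, the following is a minimal sketch of the per-hop state a participating router could keep, keyed by the tunnel ID it listens on. The class and field names are invented for illustration and are not taken from the router codebase:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class HopConfig {
        final long receiveTunnelId;   // ID this hop listens for (chosen by the hop itself)
        final long nextTunnelId;      // ID messages are forwarded on as, to the next hop
        final byte[] nextHopHash;     // 32 byte hash of the next router's identity
        final long expiration;        // tunnels are short lived (~10 minutes)

        HopConfig(long receiveTunnelId, long nextTunnelId, byte[] nextHopHash, long expiration) {
            this.receiveTunnelId = receiveTunnelId;
            this.nextTunnelId = nextTunnelId;
            this.nextHopHash = nextHopHash;
            this.expiration = expiration;
        }
    }

    class ParticipantTable {
        private final Map<Long, HopConfig> active = new ConcurrentHashMap<>();

        /** Register a hop; even if the same peers rebuild a tunnel, new IDs are used. */
        void add(HopConfig cfg) { active.put(cfg.receiveTunnelId, cfg); }

        /** Look up how to forward a message that arrived on the given tunnel ID. */
        HopConfig lookup(long receiveTunnelId) { return active.get(receiveTunnelId); }

        /** Drop state for tunnels past their lifetime. */
        void expire(long now) { active.values().removeIf(c -> c.expiration < now); }
    }

Nothing in this state references the tunnel's creator or any hop beyond the immediately adjacent one.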

2.1) Message preprocessing

@@ -103,9 +103,9 @@ each I2NP message should be handled by the tunnel endpoint, encoding that data into the raw tunnel payload:

-

Which to use? no padding is most efficient, random padding is what
-we have now, fixed size would either be an extreme waste or force us to
-implement fragmentation. Padding to the closest exponential size (ala freenet)
-seems promising. Perhaps we should gather some stats on the net as to what size
-messages are, then see what costs and benefits would arise from different
-strategies? See gathered
-stats. The current plan is to pad to a fixed 1024 byte message size with
-fragmentation.

+

These padding strategies can be used on a variety of levels, addressing the
+exposure of message size information to different adversaries. After gathering
+and reviewing some statistics
+from the 0.4 network, as well as exploring the anonymity tradeoffs, we're starting
+with a fixed tunnel message size of 1024 bytes. Within this, however, the fragmented
+messages themselves are not padded by the tunnel at all (though for end to end
+messages, they may be padded as part of the garlic wrapping).
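
As a sketch of the fixed-size property only (the method name, the use of random filler, and the error handling are assumptions here, not the preprocessing spec), a gateway could bring whatever payload it has assembled up to the 1024 byte boundary like this:

    import java.security.SecureRandom;

    class FixedSizePadder {
        static final int TUNNEL_MESSAGE_SIZE = 1024;   // fixed tunnel message size
        private static final SecureRandom RANDOM = new SecureRandom();

        /** Pad the assembled payload (headers + fragments) to exactly 1024 bytes. */
        static byte[] padToFixedSize(byte[] assembledPayload) {
            if (assembledPayload.length > TUNNEL_MESSAGE_SIZE)
                throw new IllegalArgumentException("payload exceeds the fixed tunnel message size");
            byte[] message = new byte[TUNNEL_MESSAGE_SIZE];
            System.arraycopy(assembledPayload, 0, message, 0, assembledPayload.length);
            byte[] filler = new byte[TUNNEL_MESSAGE_SIZE - assembledPayload.length];
            RANDOM.nextBytes(filler);                  // filler for the tunnel message itself
            System.arraycopy(filler, 0, message, assembledPayload.length, filler.length);
            return message;
        }
    }

Note that only the tunnel message is brought to a fixed size; the fragments inside it are carried as-is, with any end to end padding left to the garlic wrapping.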

2.6) Tunnel fragmentation

To prevent adversaries from tagging the messages along the path by adjusting
-the message size, all tunnel messages are a fixed 1KB in size. To accommodate
+the message size, all tunnel messages are a fixed 1024 bytes in size. To accommodate
larger I2NP messages as well as to support smaller ones more efficiently, the gateway splits up the larger I2NP messages into fragments contained within each tunnel message. The endpoint will attempt to rebuild the I2NP message from the fragments for a short period of time, but will discard them as necessary.
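
A rough sketch of the endpoint's reassembly behavior described above; the 60 second window, the fragment numbering scheme, and all of the names are illustrative assumptions rather than the actual fragment encoding:

    import java.util.HashMap;
    import java.util.Map;

    class FragmentReassembler {
        private static final long MAX_WAIT_MS = 60_000;  // assumed "short period of time"

        private static class Pending {
            final long firstSeen = System.currentTimeMillis();
            final Map<Integer, byte[]> fragments = new HashMap<>(); // fragment number -> data
            int expectedCount = -1;  // known once the fragment marked "last" arrives
        }

        private final Map<Long, Pending> pending = new HashMap<>();

        /** Returns the rebuilt I2NP message, or null while fragments are still missing. */
        byte[] receive(long messageId, int fragmentNum, boolean isLast, byte[] data) {
            Pending p = pending.computeIfAbsent(messageId, id -> new Pending());
            p.fragments.put(fragmentNum, data);
            if (isLast)
                p.expectedCount = fragmentNum + 1;
            if (p.expectedCount < 0)
                return null;                              // have not seen the last fragment yet
            int total = 0;
            for (int i = 0; i < p.expectedCount; i++) {
                byte[] frag = p.fragments.get(i);
                if (frag == null)
                    return null;                          // a middle fragment is still missing
                total += frag.length;
            }
            byte[] message = new byte[total];
            int offset = 0;
            for (int i = 0; i < p.expectedCount; i++) {
                byte[] frag = p.fragments.get(i);
                System.arraycopy(frag, 0, message, offset, frag.length);
                offset += frag.length;
            }
            pending.remove(messageId);
            return message;
        }

        /** Discard partially rebuilt messages that have waited too long. */
        void expire() {
            long now = System.currentTimeMillis();
            pending.values().removeIf(p -> now - p.firstSeen > MAX_WAIT_MS);
        }
    }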

+

Routers have a lot of leeway as to how the fragments are arranged, whether
+they are stuffed inefficiently as discrete units, batched for a brief period to
+fit more payload into the 1024 byte tunnel messages, or opportunistically padded
+with other messages that the gateway wanted to send out.
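
For illustration, a gateway exercising that leeway might buffer fragments briefly and flush either when a tunnel message is full or when the oldest fragment has waited long enough; the capacity and delay constants below are made up for the sketch, not tuned values:

    import java.util.ArrayList;
    import java.util.List;

    class GatewayBatcher {
        static final int PAYLOAD_CAPACITY = 1000;    // assumed space left for fragments after headers
        static final long MAX_BATCH_DELAY_MS = 100;  // assumed "brief period" to hold fragments

        private final List<byte[]> queued = new ArrayList<>();
        private int queuedBytes = 0;
        private long oldestQueuedAt = 0;

        /**
         * Queue a fragment (the fragmenter has already cut it to fit one tunnel message).
         * Returns a batch to send if this fragment would no longer fit, otherwise null.
         */
        List<byte[]> offer(byte[] fragment) {
            List<byte[]> batch = null;
            if (queuedBytes + fragment.length > PAYLOAD_CAPACITY)
                batch = flush();
            if (queued.isEmpty())
                oldestQueuedAt = System.currentTimeMillis();
            queued.add(fragment);
            queuedBytes += fragment.length;
            return batch;
        }

        /** Send what we have anyway if the oldest fragment has been waiting too long. */
        List<byte[]> flushIfStale() {
            if (queued.isEmpty() || System.currentTimeMillis() - oldestQueuedAt < MAX_BATCH_DELAY_MS)
                return null;
            return flush();
        }

        private List<byte[]> flush() {
            List<byte[]> batch = new ArrayList<>(queued);
            queued.clear();
            queuedBytes = 0;
            return batch;
        }
    }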

+

2.7) Alternatives

2.7.1) Adjust tunnel processing midstream

@@ -268,17 +271,17 @@ along the same peers.

2.7.3) Backchannel communication

-

At the moment, the preIV values used are random values. However, it is
+

At the moment, the IV values used are random values. However, it is possible for that 16 byte value to be used to send control messages from the gateway to the endpoint, or on outbound tunnels, from the gateway to any of the
-peers. The inbound gateway could encode certain values in the preIV once, which
+peers. The inbound gateway could encode certain values in the IV once, which
the endpoint would be able to recover (since it knows the endpoint is also the creator). For outbound tunnels, the creator could deliver certain values to the
-participants during the tunnel creation (e.g. "if you see 0x0 as the preIV, that
+participants during the tunnel creation (e.g. "if you see 0x0 as the IV, that
means X", "0x1 means Y", etc). Since the gateway on the outbound tunnel is also
-the creator, they can build a preIV so that any of the peers will receive the
+the creator, they can build an IV so that any of the peers will receive the
correct value. The tunnel creator could even give the inbound tunnel gateway
-a series of preIV values which that gateway could use to communicate with
+a series of IV values which that gateway could use to communicate with
individual participants exactly one time (though this would have issues regarding collusion detection).
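
A small sketch of how a participant could act on such pre-agreed values (the class and method names are hypothetical): the creator registers the control IVs at build time, and each incoming IV is checked against that table, with matches consumed exactly once as noted above:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    class IVBackchannel {
        /** Control values agreed upon during tunnel creation (IV bytes -> meaning). */
        private final Map<String, String> controlValues = new HashMap<>();

        void register(byte[] iv, String meaning) {
            controlValues.put(toKey(iv), meaning);
        }

        /** Returns the control meaning if this IV was pre-agreed, else null (a normal random IV). */
        String check(byte[] iv) {
            return controlValues.remove(toKey(iv));   // one-time use, as described above
        }

        private static String toKey(byte[] iv) {
            return Arrays.toString(iv);               // the 16 byte IV itself is the lookup key
        }
    }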

@@ -308,17 +311,14 @@ still exists as peers could use the frequency of each size as the carrier (e.g. two 1024 byte messages followed by an 8192). Smaller messages do incur the overhead of the headers (IV, tunnel ID, hash portion, etc), but larger fixed size messages either increase latency (due to batching) or dramatically increase
-overhead (due to padding).

+overhead (due to padding). Fragmentation helps amortize the overhead, at the
+cost of potential message loss due to lost fragments.
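
As a back-of-the-envelope illustration of that amortization, assume (purely for the sake of the example) roughly 26 bytes of per-message header: the 16 byte IV, the 4 byte tunnel ID, and an assumed hash portion plus delimiters. A large message fragmented across fixed 1024 byte tunnel messages then costs on the order of ten percent overhead:

    class OverheadEstimate {
        public static void main(String[] args) {
            int tunnelMessageSize = 1024;
            int assumedHeaderBytes = 26;                                // illustrative, not the spec
            int payloadPerMessage = tunnelMessageSize - assumedHeaderBytes;  // 998 bytes

            int i2npMessageSize = 8192;
            int messagesNeeded = (i2npMessageSize + payloadPerMessage - 1) / payloadPerMessage; // 9
            int bytesOnWire = messagesNeeded * tunnelMessageSize;                               // 9216
            double overhead = 1.0 - (double) i2npMessageSize / bytesOnWire;                     // ~11%

            System.out.printf("%d byte message -> %d tunnel messages, %.1f%% overhead%n",
                    i2npMessageSize, messagesNeeded, overhead * 100);
        }
    }

Per-fragment headers would add a little more, but the point stands: the fixed size is paid for mostly in latency and in the risk that a single lost fragment costs the whole message.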

-

Perhaps we should have I2CP use small fixed size messages which are
-individually garlic wrapped so that the resulting size fits into a single tunnel
-message so that not even the tunnel endpoint and gateway can see the size. We'll
-then need to optimize the streaming lib to adjust to the smaller messages, but
-should be able to squeeze sufficient performance out of it. However, if the
-performance is unsatisfactory, we could explore the tradeoff of speed (and hence
-userbase) vs. further exposure of the message size to the gateways and endpoints.
-If even that is too slow, we could then review the tunnel size limitations vs.
-exposure to participating peers.

+

Timing attacks are also relevant when reviewing the effectiveness of fixed
+size messages, though they require a substantial view of network activity
+patterns to be effective. Excessive artificial delays in the tunnel will be
+detected by the tunnel's creator, due to periodic testing, causing that entire
+tunnel to be scrapped and the profiles for peers within it to be adjusted.
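
A sketch of the detection side mentioned above, with an assumed threshold; the interfaces, names, and the single fixed cutoff are illustrative only, and the real implementation presumably applies more nuance than one constant:

    class TunnelTester {
        private static final long MAX_ACCEPTABLE_RTT_MS = 5_000;  // assumed threshold

        interface Tunnel {
            Iterable<String> peerHashes();   // peers participating in the tunnel
            void scrap();                    // stop using this tunnel and build another
        }

        interface ProfileManager {
            void penalize(String peerHash);  // worsen the profile of a suspect peer
        }

        /** Called with the measured round trip of a periodic test message. */
        void onTestResult(Tunnel tunnel, long roundTripMs, ProfileManager profiles) {
            if (roundTripMs <= MAX_ACCEPTABLE_RTT_MS)
                return;                                            // tunnel looks healthy
            tunnel.scrap();                                        // excessive delay: stop using it
            for (String peer : tunnel.peerHashes())
                profiles.penalize(peer);                           // we cannot tell which hop delayed
        }
    }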

3) Tunnel building

@@ -364,6 +364,10 @@ the hop after A may be B, B may never be before A. Other configuration options include the ability for just the inbound tunnel gateways and outbound tunnel endpoints to be fixed, or rotated on an MTBF rate.

+

In the initial implementation, only random ordering is supported,
+though more strict ordering will be developed and deployed over time, as well
+as controls for the user to select which strategy to use for individual clients.
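
For reference, the random ordering currently in place amounts to no more than a shuffle of the selected peers before the hop order is fixed; a minimal sketch (the class and method names are illustrative):

    import java.security.SecureRandom;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class HopOrdering {
        private static final SecureRandom RANDOM = new SecureRandom();

        /** Random ordering: any selected peer may end up at any position in the tunnel. */
        static List<String> randomOrder(List<String> selectedPeers) {
            List<String> ordered = new ArrayList<>(selectedPeers);
            Collections.shuffle(ordered, RANDOM);
            return ordered;
        }
    }

Stricter strategies would replace the shuffle with an ordering that honors constraints such as "if the hop after A may be B, B may never be before A".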

+

3.2) Request delivery

As mentioned above, once the tunnel creator knows what peers should go into
@@ -372,11 +376,11 @@ messages, each containing the necessary information for that peer. For instance participating tunnels will be given the 4 byte nonce with which to reply with, the 4 byte tunnel ID on which they are to send out the messages, the 32 byte hash of the next hop's identity, the 32 byte layer key used to
-remove a layer from the tunnel, and a 32 byte layer IV key used to transform the
-preIV into the IV. Of course, outbound tunnel endpoints are not
+remove a layer from the tunnel, and a 32 byte IV key used to encrypt the IV.
+Of course, outbound tunnel endpoints are not
given any "next hop" or "next tunnel ID" information. To allow replies, the request contains a random session tag and a random session key with
-which the peer may garlic encrypt their decision, as well as the tunnel to which
+which the peer should garlic encrypt their decision, as well as the tunnel to which
that garlic should be sent. In addition to the above information, various client specific options may be included, such as what throttling to place on the tunnel, what padding or batch strategies to use, etc.
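
Collecting the fields listed above in one place, a per-hop request could be modeled roughly as follows; the container below is purely illustrative and says nothing about how the request is actually encoded on the wire:

    class HopRequest {
        long replyNonce;          // 4 byte nonce to include in the reply
        long nextTunnelId;        // 4 byte tunnel ID to send messages out on
        byte[] nextHopHash;       // 32 byte hash of the next hop's identity (absent for outbound endpoints)
        byte[] layerKey;          // 32 byte key used to remove a layer from the tunnel
        byte[] ivKey;             // 32 byte key used to encrypt the IV
        byte[] replySessionTag;   // random session tag for garlic encrypting the decision
        byte[] replySessionKey;   // random session key for garlic encrypting the decision
        long replyTunnelId;       // tunnel to which the garlic wrapped reply is sent
        byte[] replyGateway;      // router on which that reply tunnel listens
        // plus client specific options: throttling, padding and batch strategies, etc.
    }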

@@ -391,6 +395,13 @@ router on which that tunnel listens). Upon receipt of the reply at the tunnel creator, the tunnel is considered valid on that hop (if accepted). Once all peers have accepted, the tunnel is active.

+

Peers may reject tunnel creation requests for a variety of reasons, though
+a series of four increasingly severe rejections are known: probabilistic rejection
+(due to approaching the router's capacity, or in response to a flood of requests),
+transient overload, bandwidth overload, and critical failure. When received,
+those four are interpreted by the tunnel creator to help adjust their profile of
+the router in question.
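
A sketch of how those four severities might feed back into the creator's profiles; the enum names and penalty weights are invented for illustration and are not the protocol's reply codes:

    class TunnelRejection {
        enum Severity { PROBABILISTIC_REJECT, TRANSIENT_OVERLOAD, BANDWIDTH_OVERLOAD, CRITICAL_FAILURE }

        /** The more severe the rejection, the more the creator marks down the peer. */
        static double penaltyFor(Severity severity) {
            switch (severity) {
                case PROBABILISTIC_REJECT: return 0.1;  // busy, but a reasonable refusal
                case TRANSIENT_OVERLOAD:   return 0.3;
                case BANDWIDTH_OVERLOAD:   return 0.6;
                case CRITICAL_FAILURE:     return 1.0;  // avoid this router for a while
                default: throw new IllegalArgumentException();
            }
        }
    }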

+

3.3) Pooling

To allow efficient operation, the router maintains a series of tunnel pools,