From 5e1cff3fdca770c09a31e2e2d3aa432cd9a59d0c Mon Sep 17 00:00:00 2001 From: zzz Date: Wed, 4 Aug 2010 14:19:34 +0000 Subject: [PATCH] - tunnel-alt-creation rework - More how_crypto and i2np_spec fixups - Quick NTCP fixup, move discussion to new page --- www.i2p2/pages/how_cryptography.html | 4 +- www.i2p2/pages/i2np_spec.html | 43 +- www.i2p2/pages/ntcp.html | 687 +++--------------------- www.i2p2/pages/ntcp_discussion.html | 559 +++++++++++++++++++ www.i2p2/pages/tunnel-alt-creation.html | 220 ++++++-- 5 files changed, 831 insertions(+), 682 deletions(-) create mode 100644 www.i2p2/pages/ntcp_discussion.html diff --git a/www.i2p2/pages/how_cryptography.html b/www.i2p2/pages/how_cryptography.html index 02863d79..00340dc6 100644 --- a/www.i2p2/pages/how_cryptography.html +++ b/www.i2p2/pages/how_cryptography.html @@ -35,8 +35,8 @@ block is formatted (in network byte order):

The H(data) is the SHA256 of the data that is encrypted in the ElGamal block, and is preceded by a random nonzero byte. The data encrypted in the block -can be up to 222 bytes long. Specifically, see -[the code]. +can be up to 223 bytes long. See +the ElGamal Javadoc.

ElGamal is never used on its own in I2P, but instead always as part of ElGamal/AES+SessionTag. diff --git a/www.i2p2/pages/i2np_spec.html b/www.i2p2/pages/i2np_spec.html index 99dcedc1..2a90607a 100644 --- a/www.i2p2/pages/i2np_spec.html +++ b/www.i2p2/pages/i2np_spec.html @@ -174,7 +174,7 @@ iv_key :: SessionKey reply_key :: SessionKey length -> 32 bytes -reply_iv :: Integer +reply_iv :: data length -> 16 bytes flag :: Integer @@ -182,6 +182,7 @@ flag :: Integer request_time :: Integer length -> 4 bytes + Hours since the epoch, i.e. current time / 3600 send_message_id :: Integer length -> 4 bytes @@ -191,17 +192,27 @@ padding :: Data source -> random +total length: 223 + encrypted: toPeer :: Hash length -> 16 bytes encrypted_data :: ElGamal-2048 encrypted data - length -> 514 + length -> 512 + +total length: 528 {% endfilter %} +

Notes

+

+ See also the tunnel creation specification. +
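+The request_time field above is expressed in whole hours since the epoch, not seconds or
+milliseconds. A minimal illustration in Java (a sketch; the helper name is ours, not from
+the I2P source):
+{% filter escape %}
+// Hypothetical helper: truncate the current time to hours since the Unix epoch.
+static long requestTimeHours() {
+    // "current time / 3600" with time in seconds, i.e. milliseconds / 3,600,000
+    return System.currentTimeMillis() / (60L * 60 * 1000);
+}
+{% endfilter %}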

+ +

BuildResponseRecord

 {% filter escape %}
@@ -224,9 +235,17 @@ byte  527  : reply
 
 encrypted:
 bytes 0-527: AES-encrypted record (note: same size as BuildRequestRecord!)
+
+total length: 528
+
 {% endfilter %}
 
+

Notes

+

+ See also the tunnel creation specification. +

+

Messages

@@ -667,6 +686,11 @@ Total size: 8*528 = 4224 bytes {% endfilter %} +

Notes

+

+ See also the tunnel creation specification. +

+

TunnelBuildReply

@@ -675,6 +699,11 @@ same format as TunnelBuild message
 {% endfilter %}
 
+

Notes

+

+ See also the tunnel creation specification. +

+

VariableTunnelBuild

 {% filter escape %}
@@ -697,9 +726,19 @@ Total size: 1 + $num*528
 {% endfilter %}
 
+

Notes

+

+ See also the tunnel creation specification. +

+

VariableTunnelBuildReply

 {% filter escape %}
 same format as VariableTunnelBuild message
 {% endfilter %}
 
+ +

Notes

+

+ See also the tunnel creation specification. +

diff --git a/www.i2p2/pages/ntcp.html b/www.i2p2/pages/ntcp.html index c21f9997..16d0698d 100644 --- a/www.i2p2/pages/ntcp.html +++ b/www.i2p2/pages/ntcp.html @@ -2,20 +2,25 @@ {% block title %}NTCP{% endblock %} {% block content %} -

NTCP (NIO-based TCP)

+Updated August 2010 for release 0.8 + +

NTCP (NIO-based TCP)

-NTCP was introduced in I2P 0.6.1.22. -It is a Java NIO-based transport, enabled by default for outbound -connections only. Those who configure their NAT/firewall to allow -inbound connections and specify the external host and port -(dyndns/etc is okay) on /config.jsp can receive inbound connections. -NTCP is NIO based, so it doesn't suffer from the 1 thread per connection issues of the old TCP transport. +NTCP +is one of two transports currently implemented in I2P. +The other is SSU. +NTCP +is a Java NIO-based transport +introduced in I2P release 0.6.1.22. +Java NIO (new I/O) does not suffer from the 1 thread per connection issues of the old TCP transport.

-As of 0.6.1.29, NTCP uses the IP/Port +By default, +NTCP uses the IP/Port auto-detected by SSU. When enabled on config.jsp, -SSU will notify/restart NTCP when the external address changes. +SSU will notify/restart NTCP when the external address changes +or when the firewall status changes. Now you can enable inbound TCP without a static IP or dyndns service.

@@ -23,71 +28,47 @@ The NTCP code within I2P is relatively lightweight (1/4 the size of the SSU code because it uses the underlying Java TCP transport.

-

Transport Bids and Transport Comparison

+

NTCP Protocol Specification

+ + +

Standard Message Format

-I2P supports multiple transports simultaneously. -A particular transport for an outbound connection is selected with "bids". -Each transport bids for the connection and the relative value of these bids -assigns the priority. -Transports may reply with different bids, depending on whether there is -already an established connection to the peer. -

- -To compare the performance of UDP and NTCP, -you can adjust the value of i2np.udp.preferred in configadvanced.jsp -(introduced in I2P 0.6.1.29). -Possible settings are -"false" (default), "true", and "always". -Default setting results in same behavior as before -(NTCP is preferred unless it isn't established and UDP is established). -

- -The table below shows the new bid values. A lower bid is a higher priority. -

-

- - - - - - -
i2np.udp.preferred setting -
Transportfalsetruealways -
NTCP Established252525 -
UDP Established501515 -
NTCP Not established707070 -
UDP Not established10006520 -
- - - -

NTCP Transport Protocol

- - + The NTCP transport sends individual I2NP messages AES/256/CBC encrypted with + a simple checksum. The unencrypted message is encoded as follows:
- * Coordinate the connection to a single peer.
- *
- * The NTCP transport sends individual I2NP messages AES/256/CBC encrypted with
- * a simple checksum.  The unencrypted message is encoded as follows:
  *  +-------+-------+--//--+---//----+-------+-------+-------+-------+
- *  | sizeof(data)  | data | padding | adler checksum of sz+data+pad |
+ *  | sizeof(data)  | data | padding | Adler checksum of sz+data+pad |
  *  +-------+-------+--//--+---//----+-------+-------+-------+-------+
- * That message is then encrypted with the DH/2048 negotiated session key
- * (station to station authenticated per the EstablishState class) using the
- * last 16 bytes of the previous encrypted message as the IV.
- *
- * One special case is a metadata message where the sizeof(data) is 0.  In
- * that case, the unencrypted message is encoded as:
+
+ That message is then encrypted with the DH/2048 negotiated session key + (station to station authenticated per the EstablishState class) using the + last 16 bytes of the previous encrypted message as the IV. +

+ +

+0-15 bytes of padding are required to bring the total message length +(including the six size and checksum bytes) to a multiple of 16. +The maximum message size is currently 16 KB. +Therefore the maximum data size is currently 16 KB - 6, or 16378 bytes. +The minimum data size is 1. +
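+As a concrete illustration of the framing rules above, the sketch below (ours, not the
+router's code) builds the unencrypted frame: the 2-byte size, the data, 0-15 bytes of
+random padding to a 16-byte boundary, and the 4-byte Adler checksum over size+data+padding.
+It assumes the checksum is stored in network byte order and uses only the JDK:
+{% filter escape %}
+import java.security.SecureRandom;
+import java.util.zip.Adler32;
+
+// Sketch only: build the unencrypted NTCP frame (before the AES/256/CBC layer).
+static byte[] frame(byte[] data) {
+    if (data.length < 1 || data.length > 16378)
+        throw new IllegalArgumentException("data must be 1 to 16378 bytes");
+    int unpadded = 2 + data.length + 4;                // size field + data + checksum
+    int padding = (16 - (unpadded % 16)) % 16;         // 0-15 bytes, to a multiple of 16
+    byte[] frame = new byte[unpadded + padding];
+
+    frame[0] = (byte) (data.length >> 8);              // 2-byte size, network byte order
+    frame[1] = (byte) data.length;
+    System.arraycopy(data, 0, frame, 2, data.length);
+    byte[] pad = new byte[padding];
+    new SecureRandom().nextBytes(pad);                 // random padding
+    System.arraycopy(pad, 0, frame, 2 + data.length, padding);
+
+    Adler32 adler = new Adler32();                     // checksum of sz + data + pad
+    adler.update(frame, 0, frame.length - 4);
+    long sum = adler.getValue();
+    int off = frame.length - 4;                        // checksum is the last 4 bytes
+    frame[off]     = (byte) (sum >> 24);
+    frame[off + 1] = (byte) (sum >> 16);
+    frame[off + 2] = (byte) (sum >> 8);
+    frame[off + 3] = (byte) sum;
+    return frame;
+}
+{% endfilter %}
+The resulting frame is what is then AES/256/CBC encrypted with the negotiated session key
+and the rolling IV described above.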

+ +

Time Sync Message Format

+

+ One special case is a metadata message where the sizeof(data) is 0. In + that case, the unencrypted message is encoded as: +

  *  +-------+-------+-------+-------+-------+-------+-------+-------+
  *  |       0       |      timestamp in seconds     | uninterpreted             
  *  +-------+-------+-------+-------+-------+-------+-------+-------+
- *          uninterpreted           | adler checksum of sz+data+pad |
+ *          uninterpreted           | Adler checksum of bytes 0-11  |
  *  +-------+-------+-------+-------+-------+-------+-------+-------+
- * 
- *
 
+Total length: 16 bytes. The time sync message is sent at approximately 15 minute intervals. + +
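+A matching sketch of the 16-byte time sync frame (again ours; the checksum byte order is
+assumed to be network byte order):
+{% filter escape %}
+import java.security.SecureRandom;
+import java.util.zip.Adler32;
+
+// Sketch only: build the unencrypted metadata (time sync) frame.
+static byte[] timeSyncFrame() {
+    byte[] frame = new byte[16];
+    // bytes 0-1: sizeof(data) == 0 marks this as a metadata message (already zero)
+    long now = System.currentTimeMillis() / 1000L;     // timestamp in seconds
+    frame[2] = (byte) (now >> 24);
+    frame[3] = (byte) (now >> 16);
+    frame[4] = (byte) (now >> 8);
+    frame[5] = (byte) now;
+    byte[] filler = new byte[6];                       // bytes 6-11: uninterpreted
+    new SecureRandom().nextBytes(filler);
+    System.arraycopy(filler, 0, frame, 6, 6);
+    Adler32 adler = new Adler32();                     // bytes 12-15: Adler checksum of bytes 0-11
+    adler.update(frame, 0, 12);
+    long sum = adler.getValue();
+    frame[12] = (byte) (sum >> 24);
+    frame[13] = (byte) (sum >> 16);
+    frame[14] = (byte) (sum >> 8);
+    frame[15] = (byte) sum;
+    return frame;
+}
+{% endfilter %}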

Establishment Sequence

In the establish state, the following communication happens. There is a 2048-bit Diffie Hellman exchange. For more information see the cryptography page. @@ -99,571 +80,33 @@ For more information see the cryptography pa * E(#+Alice.identity+tsA+padding+S(X+Y+Bob.identHash+tsA+tsB+padding), sk, hX_xor_Bob.identHash[16:31])---> * <----------------------E(S(X+Y+Alice.identHash+tsA+tsB)+padding, sk, prev) + +Todo: Explain this in words. + + +

Check Connection Message

Alternately, when Bob receives a connection, it could be a check connection (perhaps prompted by Bob asking for someone to verify his listener). -It does not appear that 'check connection' is used. -However, for the record, check connections are formatted as follows: -
-     * a check info connection will receive 256 bytes containing:
-     * - 32 bytes of uninterpreted, ignored data
-     * - 1 byte size
-     * - that many bytes making up the local router's IP address (as reached by the remote side)
-     * - 2 byte port number that the local router was reached on
-     * - 4 byte i2p network time as known by the remote side (seconds since the epoch)
-     * - uninterpreted padding data, up to byte 223
-     * - xor of the local router's identity hash and the SHA256 of bytes 32 through bytes 223
+Check Connection is not currently used.
+However, for the record, check connections are formatted as follows.
+     A check info connection will receive 256 bytes containing:
+
    +
  • 32 bytes of uninterpreted, ignored data +
  • 1 byte size +
  • that many bytes making up the local router's IP address (as reached by the remote side) +
  • 2 byte port number that the local router was reached on +
  • 4 byte i2p network time as known by the remote side (seconds since the epoch) +
  • uninterpreted padding data, up to byte 223 +
  • xor of the local router's identity hash and the SHA256 of bytes 32 through bytes 223 +
+

Discussion

+Now on the
NTCP Discussion Page. -

NTCP vs. SSU Discussion, March 2007

-

NTCP questions

-(adapted from an IRC discussion between zzz and cervantes) -
-Why is NTCP preferred over SSU, doesn't NTCP have higher overhead and latency? -It has better reliability. -
-Doesn't streaming lib over NTCP suffer from classic TCP-over-TCP issues? -What if we had a really simple UDP transport for streaming-lib-originated traffic? -I think SSU was meant to be the so-called really simple UDP transport - but it just proved too unreliable. - -

"NTCP Considered Harmful" Analysis by zzz

-Posted to new Syndie, 2007-03-25. -This was posted to stimulate discussion, don't take it too seriously. -

-Summary: NTCP has higher latency and overhead than SSU, and is more likely to -collapse when used with the streaming lib. However, traffic is routed with a -preference for NTCP over SSU and this is currently hardcoded. +

Future Work

+

The maximum message size should be increased to approximately 32 KB.

-

Discussion

-

-We currently have two transports, NTCP and SSU. As currently implemented, NTCP -has lower "bids" than SSU so it is preferred, except for the case where there -is an established SSU connection but no established NTCP connection for a peer. -

- -SSU is similar to NTCP in that it implements acknowledgments, timeouts, and -retransmissions. However SSU is I2P code with tight constraints on the -timeouts and available statistics on round trip times, retransmissions, etc. -NTCP is based on Java NIO TCP, which is a black box and presumably implements -RFC standards, including very long maximum timeouts. -

- -The majority of traffic within I2P is streaming-lib originated (HTTP, IRC, -Bittorrent) which is our implementation of TCP. As the lower-level transport is -generally NTCP due to the lower bids, the system is subject to the well-known -and dreaded problem of TCP-over-TCP -http://sites.inka.de/~W1011/devel/tcp-tcp.html , where both the higher and -lower layers of TCP are doing retransmissions at once, leading to collapse. -

- -Unlike in the PPP over SSH scenario described in the link above, we have -several hops for the lower layer, each covered by a NTCP link. So each NTCP -latency is generally much less than the higher-layer streaming lib latency. -This lessens the chance of collapse. -

- -Also, the probabilities of collapse are lessened when the lower-layer TCP is -tightly constrained with low timeouts and number of retransmissions compared to -the higher layer. -

- -The .28 release increased the maximum streaming lib timeout from 10 sec to 45 -sec which greatly improved things. The SSU max timeout is 3 sec. The NTCP max -timeout is presumably at least 60 sec, which is the RFC recommendation. There -is no way to change NTCP parameters or monitor performance. Collapse of the -NTCP layer is [editor: text lost]. Perhaps an external tool like tcpdump would help. -

- -However, running .28, the i2psnark reported upstream does not generally stay at -a high level. It often goes down to 3-4 KBps before climbing back up. This is a -signal that there are still collapses. -

- -SSU is also more efficient. NTCP has higher overhead and probably higher round -trip times. when using NTCP the ratio of (tunnel output) / (i2psnark data -output) is at least 3.5 : 1. Running an experiment where the code was modified -to prefer SSU (the config option i2np.udp.alwaysPreferred has no effect in the -current code), the ratio reduced to about 3 : 1, indicating better efficiency. -

- -As reported by streaming lib stats, things were much improved - lifetime window -size up from 6.3 to 7.5, RTT down from 11.5s to 10s, sends per ack down from -1.11 to 1.07. -

- -That this was quite effective was surprising, given that we were only changing -the transport for the first of 3 to 5 total hops the outbound messages would -take. -

- -The effect on outbound i2psnark speeds wasn't clear due to normal variations. -Also for the experiment, inbound NTCP was disabled. The effect on inbound -speeds on i2psnark was not clear. -

-

Proposals

- - - -

Response by jrandom

-Posted to new Syndie, 2007-03-27 -

-On the whole, I'm open to experimenting with this, though remember why NTCP is -there in the first place - SSU failed in a congestion collapse. NTCP "just -works", and while 2-10% retransmission rates can be handled in normal -single-hop networks, that gives us a 40% retransmission rate with 2 hop -tunnels. If you loop in some of the measured SSU retransmission rates we saw -back before NTCP was implemented (10-30+%), that gives us an 83% retransmission -rate. Perhaps those rates were caused by the low 10 second timeout, but -increasing that much would bite us (remember, multiply by 5 and you've got half -the journey). -

- -Unlike TCP, we have no feedback from the tunnel to know whether the message -made it - there are no tunnel level acks. We do have end to end ACKs, but only -on a small number of messages (whenever we distribute new session tags) - out -of the 1,553,591 client messages my router sent, we only attempted to ACK -145,207 of them. The others may have failed silently or succeeded perfectly. -

- -I'm not convinced by the TCP-over-TCP argument for us, especially split across -the various paths we transfer down. Measurements on I2P can convince me -otherwise, of course. -

- - -The NTCP max timeout is presumably at least 60 sec, which is the RFC -recommendation. There is no way to change NTCP parameters or monitor -performance. - -

- - -True, but net connections only get up to that level when something really bad -is going on - the retransmission timeout on TCP is often on the order of tens -or hundreds of milliseconds. As foofighter points out, they've got 20+ years -experience and bugfixing in their TCP stacks, plus a billion dollar industry -optimizing hardware and software to perform well according to whatever it is -they do. -

- - -NTCP has higher overhead and probably higher round trip times. when using NTCP -the ratio of (tunnel output) / (i2psnark data output) is at least 3.5 : 1. -Running an experiment where the code was modified to prefer SSU (the config -option i2np.udp.alwaysPreferred has no effect in the current code), the ratio -reduced to about 3 : 1, indicating better efficiency. - -

- - -This is very interesting data, though more as a matter of router congestion -than bandwidth efficiency - you'd have to compare 3.5*$n*$NTCPRetransmissionPct -./. 3.0*$n*$SSURetransmissionPct. This data point suggests there's something in -the router that leads to excess local queuing of messages already being -transferred. -

- - -lifetime window size up from 6.3 to 7.5, RTT down from 11.5s to 10s, sends per -ACK down from 1.11 to 1.07. - - -

- -Remember that the sends-per-ACK is only a sample not a full count (as we don't -try to ACK every send). Its not a random sample either, but instead samples -more heavily periods of inactivity or the initiation of a burst of activity - -sustained load won't require many ACKs. -

- -Window sizes in that range are still woefully low to get the real benefit of -AIMD, and still too low to transmit a single 32KB BT chunk (increasing the -floor to 10 or 12 would cover that). -

- -Still, the wsize stat looks promising - over how long was that maintained? -

- -Actually, for testing purposes, you may want to look at -StreamSinkClient/StreamSinkServer or even TestSwarm in -apps/ministreaming/java/src/net/i2p/client/streaming/ - StreamSinkClient is a -CLI app that sends a selected file to a selected destination and -StreamSinkServer creates a destination and writes out any data sent to it -(displaying size and transfer time). TestSwarm combines the two - flooding -random data to whomever it connects to. That should give you the tools to -measure sustained throughput capacity over the streaming lib, as opposed to BT -choke/send. -

- - -1A) - -This is easy - -We should flip the bid priorities so that SSU is preferred for all traffic, if -we can do this without causing all sorts of other trouble. This will fix the -i2np.udp.alwaysPreferred configuration option so that it works (either as true -or false). - -

- - -Honoring i2np.udp.alwaysPreferred is a good idea in any case - please feel free -to commit that change. Lets gather a bit more data though before switching the -preferences, as NTCP was added to deal with an SSU-created congestion collapse. -

- - -1B) -Alternative to 1A), not so easy - -If we can mark traffic without adversely affecting our anonymity goals, we -should identify streaming-lib generated traffic -and have SSU generate a low bid for that traffic. This tag will have to go with -the message through each hop -so that the forwarding routers also honor the SSU preference. - -

- - -In practice, there are three types of traffic - tunnel building/testing, netDb -query/response, and streaming lib traffic. The network has been designed to -make differentiating those three very hard. - -

- - -2) -Bounding SSU even further (reducing maximum retransmissions from the current -10) is probably wise to reduce the chance of collapse. - -

- - -At 10 retransmissions, we're up shit creek already, I agree. One, maybe two -retransmissions is reasonable, from a transport layer, but if the other side is -too congested to ACK in time (even with the implemented SACK/NACK capability), -there's not much we can do. -

- -In my view, to really address the core issue we need to address why the router -gets so congested to ACK in time (which, from what I've found, is due to CPU -contention). Maybe we can juggle some things in the router's processing to make -the transmission of an already existing tunnel higher CPU priority than -decrypting a new tunnel request? Though we've got to be careful to avoid -starvation. -

- - -3) -We need further study on the benefits vs. harm of a semi-reliable protocol -underneath the streaming lib. Are retransmissions over a single hop beneficial -and a big win or are they worse than useless? -We could do a new SUU (secure unreliable UDP) but probably not worth it. We -could perhaps add a no-ACK-required message type in SSU if we don't want any -retransmissions at all of streaming-lib traffic. Are tightly bounded -retransmissions desirable? - - -

- -Worth looking into - what if we just disabled SSU's retransmissions? It'd -probably lead to much higher streaming lib resend rates, but maybe not. -

- - -4) -The priority sending code in .28 is only for NTCP. So far my testing hasn't -shown much use for SSU priority as the messages don't queue up long enough for -priorities to do any good. But more testing needed. - - -

- -There's UDPTransport.PRIORITY_LIMITS and UDPTransport.PRIORITY_WEIGHT (honored -by TimedWeightedPriorityMessageQueue), but currently the weights are almost all -equal, so there's no effect. That could be adjusted, of course (but as you -mention, if there's no queuing, it doesn't matter). -

- - -5) -The new streaming lib max timeout of 45s is probably still too low. The TCP RFC -says 60s. It probably shouldn't be shorter than the underlying NTCP max timeout -(presumably 60s). - -

- - -That 45s is the max retransmission timeout of the streaming lib though, not the -stream timeout. TCP in practice has retransmission timeouts orders of magnitude -less, though yes, can get to 60s on links running through exposed wires or -satellite transmissions ;) If we increase the streaming lib retransmission -timeout to e.g. 75 seconds, we could go get a beer before a web page loads -(especially assuming less than a 98% reliable transport). That's one reason we -prefer NTCP. -

- - -

Response by zzz

-Posted to new Syndie, 2007-03-31 -

- - -At 10 retransmissions, we're up shit creek already, I agree. One, maybe two -retransmissions is reasonable, from a transport layer, but if the other side is -too congested to ACK in time (even with the implemented SACK/NACK capability), -there's not much we can do. -
-In my view, to really address the core issue we need to address why the -router gets so congested to ACK in time (which, from what I've found, is due to -CPU contention). Maybe we can juggle some things in the router's processing to -make the transmission of an already existing tunnel higher CPU priority than -decrypting a new tunnel request? Though we've got to be careful to avoid -starvation. -
-

- -One of my main stats-gathering techniques is turning on -net.i2p.client.streaming.ConnectionPacketHandler=DEBUG and watching the RTT -times and window sizes as they go by. To overgeneralize for a moment, it's -common to see 3 types of connections: ~4s RTT, ~10s RTT, and ~30s RTT. Trying -to knock down the 30s RTT connections is the goal. If CPU contention is the -cause then maybe some juggling will do it. -

- -Reducing the SSU max retrans from 10 is really just a stab in the dark as we -don't have good data on whether we are collapsing, having TCP-over-TCP issues, -or what, so more data is needed. -

- - -Worth looking into - what if we just disabled SSU's retransmissions? It'd -probably lead to much higher streaming lib resend rates, but maybe not. - -

- -What I don't understand, if you could elaborate, are the benefits of SSU -retransmissions for non-streaming-lib traffic. Do we need tunnel messages (for -example) to use a semi-reliable transport or can they use an unreliable or -kinda-sorta-reliable transport (1 or 2 retransmissions max, for example)? In -other words, why semi-reliability? -

- - -(but as you mention, if there's no queuing, it doesn't matter). - -

- -I implemented priority sending for UDP but it kicked in about 100,000 times -less often than the code on the NTCP side. Maybe that's a clue for further -investigation or a hint - I don't understand why it would back up that much -more often on NTCP, but maybe that's a hint on why NTCP performs worse. - -

- -

Question answered by jrandom

-Posted to new Syndie, 2007-03-31 -

-measured SSU retransmission rates we saw back before NTCP was implemented -(10-30+%) -

- -Can the router itself measure this? If so, could a transport be selected based -on measured performance? (i.e. if an SSU connection to a peer is dropping an -unreasonable number of messages, prefer NTCP when sending to that peer) -

- - - -Yeah, it currently uses that stat right now as a poor-man's MTU detection (if -the retransmission rate is high, it uses the small packet size, but if its low, -it uses the large packet size). We tried a few things when first introducing -NTCP (and when first moving away from the original TCP transport) that would -prefer SSU but fail that transport for a peer easily, causing it to fall back -on NTCP. However, there's certainly more that could be done in that regard, -though it gets complicated quickly (how/when to adjust/reset the bids, whether -to share these preferences across multiple peers or not, whether to share it -across multiple sessions with the same peer (and for how long), etc). - - -

Response by foofighter

-Posted to new Syndie, 2007-03-26 -

- -If I've understood things right, the primary reason in favor of TCP (in -general, both the old and new variety) was that you needn't worry about coding -a good TCP stack. Which ain't impossibly hard to get right... just that -existing TCP stacks have a 20 year lead. -

- -AFAIK, there hasn't been much deep theory behind the preference of TCP versus -UDP, except the following considerations: - -

-

- - -On that background, a small diversity of transports (as many as needed, but not -more) appears sensible in either case. Which should be the main transport, -depends on their performance-wise. I've seen nasty stuff on my line when I -tried to use its full capacity with UDP. Packet losses on the level of 35%. -

- -We could definitely try playing with UDP versus TCP priorities, but I'd urge -caution in that. I would urge that they not be changed too radically all at -once, or it might break things. - -

- -

Response by zzz

-Posted to new Syndie, 2007-03-27 -

- -AFAIK, there hasn't been much deep theory behind the preference of TCP versus -UDP, except the following considerations: - - -

- -These are all valid issues. However you are considering the two protocols in -isolation, whether than thinking about what transport protocol is best for a -particular higher-level protocol (i.e. streaming lib or not). -

- -What I'm saying is you have to take the streaming lib into consideration. - -So either shift the preferences for everybody or treat streaming lib traffic -differently. - -That's what my proposal 1B) is talking about - have a different preference for -streaming-lib traffic than for non streaming-lib traffic (for example tunnel -build messages). -

- - - -On that background, a small diversity of transports (as many as needed, but -not more) appears sensible in either case. Which should be the main transport, -depends on their performance-wise. I've seen nasty stuff on my line when I -tried to use its full capacity with UDP. Packet losses on the level of 35%. - - -

- -Agreed. The new .28 may have made things better for packet loss over UDP, or -maybe not. - -One important point - the transport code does remember failures of a transport. -So if UDP is the preferred transport, it will try it first, but if it fails for -a particular destination, the next attempt for that destination it will try -NTCP rather than trying UDP again. -

- - -We could definitely try playing with UDP versus TCP priorities, but I'd urge -caution in that. I would urge that they not be changed too radically all at -once, or it might break things. - -

- -We have four tuning knobs - the four bid values (SSU and NTCP, for -already-connected and not-already-connected). -We could make SSU be preferred over NTCP only if both are connected, for -example, but try NTCP first if neither transport is connected. -

- -The other way to do it gradually is only shifting the streaming lib traffic -(the 1B proposal) however that could be hard and may have anonymity -implications, I don't know. Or maybe shift the traffic only for the first -outbound hop (i.e. don't propagate the flag to the next router), which gives -you only partial benefit but might be more anonymous and easier. -

- -

Results of the Discussion

-... and other related changes in the same timeframe (2007): - - {% endblock %} diff --git a/www.i2p2/pages/ntcp_discussion.html b/www.i2p2/pages/ntcp_discussion.html new file mode 100644 index 00000000..ce013d3f --- /dev/null +++ b/www.i2p2/pages/ntcp_discussion.html @@ -0,0 +1,559 @@ +{% extends "_layout.html" %} +{% block title %}NTCP Discussion{% endblock %} +{% block content %} + +Following is a discussion about NTCP that took place in March 2007. +It has not been updated to reflect current implementation. +For the current NTCP specification see the main NTCP page. + +

NTCP vs. SSU Discussion, March 2007

+

NTCP questions

+(adapted from an IRC discussion between zzz and cervantes) +
+Why is NTCP preferred over SSU, doesn't NTCP have higher overhead and latency? +It has better reliability. +
+Doesn't streaming lib over NTCP suffer from classic TCP-over-TCP issues? +What if we had a really simple UDP transport for streaming-lib-originated traffic? +I think SSU was meant to be the so-called really simple UDP transport - but it just proved too unreliable. + +

"NTCP Considered Harmful" Analysis by zzz

+Posted to new Syndie, 2007-03-25. +This was posted to stimulate discussion, don't take it too seriously. +

+Summary: NTCP has higher latency and overhead than SSU, and is more likely to +collapse when used with the streaming lib. However, traffic is routed with a +preference for NTCP over SSU and this is currently hardcoded. +

+ +

Discussion

+

+We currently have two transports, NTCP and SSU. As currently implemented, NTCP +has lower "bids" than SSU so it is preferred, except for the case where there +is an established SSU connection but no established NTCP connection for a peer. +

+ +SSU is similar to NTCP in that it implements acknowledgments, timeouts, and +retransmissions. However SSU is I2P code with tight constraints on the +timeouts and available statistics on round trip times, retransmissions, etc. +NTCP is based on Java NIO TCP, which is a black box and presumably implements +RFC standards, including very long maximum timeouts. +

+ +The majority of traffic within I2P is streaming-lib originated (HTTP, IRC, +Bittorrent) which is our implementation of TCP. As the lower-level transport is +generally NTCP due to the lower bids, the system is subject to the well-known +and dreaded problem of TCP-over-TCP +http://sites.inka.de/~W1011/devel/tcp-tcp.html , where both the higher and +lower layers of TCP are doing retransmissions at once, leading to collapse. +

+ +Unlike in the PPP over SSH scenario described in the link above, we have +several hops for the lower layer, each covered by a NTCP link. So each NTCP +latency is generally much less than the higher-layer streaming lib latency. +This lessens the chance of collapse. +

+ +Also, the probabilities of collapse are lessened when the lower-layer TCP is +tightly constrained with low timeouts and number of retransmissions compared to +the higher layer. +

+ +The .28 release increased the maximum streaming lib timeout from 10 sec to 45 +sec which greatly improved things. The SSU max timeout is 3 sec. The NTCP max +timeout is presumably at least 60 sec, which is the RFC recommendation. There +is no way to change NTCP parameters or monitor performance. Collapse of the +NTCP layer is [editor: text lost]. Perhaps an external tool like tcpdump would help. +

+ +However, running .28, the i2psnark reported upstream does not generally stay at +a high level. It often goes down to 3-4 KBps before climbing back up. This is a +signal that there are still collapses. +

+ +SSU is also more efficient. NTCP has higher overhead and probably higher round +trip times. when using NTCP the ratio of (tunnel output) / (i2psnark data +output) is at least 3.5 : 1. Running an experiment where the code was modified +to prefer SSU (the config option i2np.udp.alwaysPreferred has no effect in the +current code), the ratio reduced to about 3 : 1, indicating better efficiency. +

+ +As reported by streaming lib stats, things were much improved - lifetime window +size up from 6.3 to 7.5, RTT down from 11.5s to 10s, sends per ack down from +1.11 to 1.07. +

+ +That this was quite effective was surprising, given that we were only changing +the transport for the first of 3 to 5 total hops the outbound messages would +take. +

+ +The effect on outbound i2psnark speeds wasn't clear due to normal variations. +Also for the experiment, inbound NTCP was disabled. The effect on inbound +speeds on i2psnark was not clear. +

+

Proposals

+ + + +

Response by jrandom

+Posted to new Syndie, 2007-03-27 +

+On the whole, I'm open to experimenting with this, though remember why NTCP is +there in the first place - SSU failed in a congestion collapse. NTCP "just +works", and while 2-10% retransmission rates can be handled in normal +single-hop networks, that gives us a 40% retransmission rate with 2 hop +tunnels. If you loop in some of the measured SSU retransmission rates we saw +back before NTCP was implemented (10-30+%), that gives us an 83% retransmission +rate. Perhaps those rates were caused by the low 10 second timeout, but +increasing that much would bite us (remember, multiply by 5 and you've got half +the journey). +

+ +Unlike TCP, we have no feedback from the tunnel to know whether the message +made it - there are no tunnel level acks. We do have end to end ACKs, but only +on a small number of messages (whenever we distribute new session tags) - out +of the 1,553,591 client messages my router sent, we only attempted to ACK +145,207 of them. The others may have failed silently or succeeded perfectly. +

+ +I'm not convinced by the TCP-over-TCP argument for us, especially split across +the various paths we transfer down. Measurements on I2P can convince me +otherwise, of course. +

+ + +The NTCP max timeout is presumably at least 60 sec, which is the RFC +recommendation. There is no way to change NTCP parameters or monitor +performance. + +

+ + +True, but net connections only get up to that level when something really bad +is going on - the retransmission timeout on TCP is often on the order of tens +or hundreds of milliseconds. As foofighter points out, they've got 20+ years +experience and bugfixing in their TCP stacks, plus a billion dollar industry +optimizing hardware and software to perform well according to whatever it is +they do. +

+ + +NTCP has higher overhead and probably higher round trip times. when using NTCP +the ratio of (tunnel output) / (i2psnark data output) is at least 3.5 : 1. +Running an experiment where the code was modified to prefer SSU (the config +option i2np.udp.alwaysPreferred has no effect in the current code), the ratio +reduced to about 3 : 1, indicating better efficiency. + +

+ + +This is very interesting data, though more as a matter of router congestion +than bandwidth efficiency - you'd have to compare 3.5*$n*$NTCPRetransmissionPct +./. 3.0*$n*$SSURetransmissionPct. This data point suggests there's something in +the router that leads to excess local queuing of messages already being +transferred. +

+ + +lifetime window size up from 6.3 to 7.5, RTT down from 11.5s to 10s, sends per +ACK down from 1.11 to 1.07. + + +

+ +Remember that the sends-per-ACK is only a sample not a full count (as we don't +try to ACK every send). Its not a random sample either, but instead samples +more heavily periods of inactivity or the initiation of a burst of activity - +sustained load won't require many ACKs. +

+ +Window sizes in that range are still woefully low to get the real benefit of +AIMD, and still too low to transmit a single 32KB BT chunk (increasing the +floor to 10 or 12 would cover that). +

+ +Still, the wsize stat looks promising - over how long was that maintained? +

+ +Actually, for testing purposes, you may want to look at +StreamSinkClient/StreamSinkServer or even TestSwarm in +apps/ministreaming/java/src/net/i2p/client/streaming/ - StreamSinkClient is a +CLI app that sends a selected file to a selected destination and +StreamSinkServer creates a destination and writes out any data sent to it +(displaying size and transfer time). TestSwarm combines the two - flooding +random data to whomever it connects to. That should give you the tools to +measure sustained throughput capacity over the streaming lib, as opposed to BT +choke/send. +

+ + +1A) + +This is easy - +We should flip the bid priorities so that SSU is preferred for all traffic, if +we can do this without causing all sorts of other trouble. This will fix the +i2np.udp.alwaysPreferred configuration option so that it works (either as true +or false). + +

+ + +Honoring i2np.udp.alwaysPreferred is a good idea in any case - please feel free +to commit that change. Lets gather a bit more data though before switching the +preferences, as NTCP was added to deal with an SSU-created congestion collapse. +

+ + +1B) +Alternative to 1A), not so easy - +If we can mark traffic without adversely affecting our anonymity goals, we +should identify streaming-lib generated traffic +and have SSU generate a low bid for that traffic. This tag will have to go with +the message through each hop +so that the forwarding routers also honor the SSU preference. + +

+ + +In practice, there are three types of traffic - tunnel building/testing, netDb +query/response, and streaming lib traffic. The network has been designed to +make differentiating those three very hard. + +

+ + +2) +Bounding SSU even further (reducing maximum retransmissions from the current +10) is probably wise to reduce the chance of collapse. + +

+ + +At 10 retransmissions, we're up shit creek already, I agree. One, maybe two +retransmissions is reasonable, from a transport layer, but if the other side is +too congested to ACK in time (even with the implemented SACK/NACK capability), +there's not much we can do. +

+ +In my view, to really address the core issue we need to address why the router +gets so congested to ACK in time (which, from what I've found, is due to CPU +contention). Maybe we can juggle some things in the router's processing to make +the transmission of an already existing tunnel higher CPU priority than +decrypting a new tunnel request? Though we've got to be careful to avoid +starvation. +

+ + +3) +We need further study on the benefits vs. harm of a semi-reliable protocol +underneath the streaming lib. Are retransmissions over a single hop beneficial +and a big win or are they worse than useless? +We could do a new SUU (secure unreliable UDP) but probably not worth it. We +could perhaps add a no-ACK-required message type in SSU if we don't want any +retransmissions at all of streaming-lib traffic. Are tightly bounded +retransmissions desirable? + + +

+ +Worth looking into - what if we just disabled SSU's retransmissions? It'd +probably lead to much higher streaming lib resend rates, but maybe not. +

+ + +4) +The priority sending code in .28 is only for NTCP. So far my testing hasn't +shown much use for SSU priority as the messages don't queue up long enough for +priorities to do any good. But more testing needed. + + +

+ +There's UDPTransport.PRIORITY_LIMITS and UDPTransport.PRIORITY_WEIGHT (honored +by TimedWeightedPriorityMessageQueue), but currently the weights are almost all +equal, so there's no effect. That could be adjusted, of course (but as you +mention, if there's no queuing, it doesn't matter). +

+ + +5) +The new streaming lib max timeout of 45s is probably still too low. The TCP RFC +says 60s. It probably shouldn't be shorter than the underlying NTCP max timeout +(presumably 60s). + +

+ + +That 45s is the max retransmission timeout of the streaming lib though, not the +stream timeout. TCP in practice has retransmission timeouts orders of magnitude +less, though yes, can get to 60s on links running through exposed wires or +satellite transmissions ;) If we increase the streaming lib retransmission +timeout to e.g. 75 seconds, we could go get a beer before a web page loads +(especially assuming less than a 98% reliable transport). That's one reason we +prefer NTCP. +

+ + +

Response by zzz

+Posted to new Syndie, 2007-03-31 +

+ + +At 10 retransmissions, we're up shit creek already, I agree. One, maybe two +retransmissions is reasonable, from a transport layer, but if the other side is +too congested to ACK in time (even with the implemented SACK/NACK capability), +there's not much we can do. +
+In my view, to really address the core issue we need to address why the +router gets so congested to ACK in time (which, from what I've found, is due to +CPU contention). Maybe we can juggle some things in the router's processing to +make the transmission of an already existing tunnel higher CPU priority than +decrypting a new tunnel request? Though we've got to be careful to avoid +starvation. +
+

+ +One of my main stats-gathering techniques is turning on +net.i2p.client.streaming.ConnectionPacketHandler=DEBUG and watching the RTT +times and window sizes as they go by. To overgeneralize for a moment, it's +common to see 3 types of connections: ~4s RTT, ~10s RTT, and ~30s RTT. Trying +to knock down the 30s RTT connections is the goal. If CPU contention is the +cause then maybe some juggling will do it. +

+ +Reducing the SSU max retrans from 10 is really just a stab in the dark as we +don't have good data on whether we are collapsing, having TCP-over-TCP issues, +or what, so more data is needed. +

+ + +Worth looking into - what if we just disabled SSU's retransmissions? It'd +probably lead to much higher streaming lib resend rates, but maybe not. + +

+ +What I don't understand, if you could elaborate, are the benefits of SSU +retransmissions for non-streaming-lib traffic. Do we need tunnel messages (for +example) to use a semi-reliable transport or can they use an unreliable or +kinda-sorta-reliable transport (1 or 2 retransmissions max, for example)? In +other words, why semi-reliability? +

+ + +(but as you mention, if there's no queuing, it doesn't matter). + +

+ +I implemented priority sending for UDP but it kicked in about 100,000 times +less often than the code on the NTCP side. Maybe that's a clue for further +investigation or a hint - I don't understand why it would back up that much +more often on NTCP, but maybe that's a hint on why NTCP performs worse. + +

+ +

Question answered by jrandom

+Posted to new Syndie, 2007-03-31 +

+measured SSU retransmission rates we saw back before NTCP was implemented +(10-30+%) +

+ +Can the router itself measure this? If so, could a transport be selected based +on measured performance? (i.e. if an SSU connection to a peer is dropping an +unreasonable number of messages, prefer NTCP when sending to that peer) +

+ + + +Yeah, it currently uses that stat right now as a poor-man's MTU detection (if +the retransmission rate is high, it uses the small packet size, but if its low, +it uses the large packet size). We tried a few things when first introducing +NTCP (and when first moving away from the original TCP transport) that would +prefer SSU but fail that transport for a peer easily, causing it to fall back +on NTCP. However, there's certainly more that could be done in that regard, +though it gets complicated quickly (how/when to adjust/reset the bids, whether +to share these preferences across multiple peers or not, whether to share it +across multiple sessions with the same peer (and for how long), etc). + + +

Response by foofighter

+Posted to new Syndie, 2007-03-26 +

+ +If I've understood things right, the primary reason in favor of TCP (in +general, both the old and new variety) was that you needn't worry about coding +a good TCP stack. Which ain't impossibly hard to get right... just that +existing TCP stacks have a 20 year lead. +

+ +AFAIK, there hasn't been much deep theory behind the preference of TCP versus +UDP, except the following considerations: + +

+

+ + +On that background, a small diversity of transports (as many as needed, but not +more) appears sensible in either case. Which should be the main transport, +depends on their performance-wise. I've seen nasty stuff on my line when I +tried to use its full capacity with UDP. Packet losses on the level of 35%. +

+ +We could definitely try playing with UDP versus TCP priorities, but I'd urge +caution in that. I would urge that they not be changed too radically all at +once, or it might break things. + +

+ +

Response by zzz

+Posted to new Syndie, 2007-03-27 +

+ +AFAIK, there hasn't been much deep theory behind the preference of TCP versus +UDP, except the following considerations: + + +

+ +These are all valid issues. However you are considering the two protocols in +isolation, whether than thinking about what transport protocol is best for a +particular higher-level protocol (i.e. streaming lib or not). +

+ +What I'm saying is you have to take the streaming lib into consideration. + +So either shift the preferences for everybody or treat streaming lib traffic +differently. + +That's what my proposal 1B) is talking about - have a different preference for +streaming-lib traffic than for non streaming-lib traffic (for example tunnel +build messages). +

+ + + +On that background, a small diversity of transports (as many as needed, but +not more) appears sensible in either case. Which should be the main transport, +depends on their performance-wise. I've seen nasty stuff on my line when I +tried to use its full capacity with UDP. Packet losses on the level of 35%. + + +

+ +Agreed. The new .28 may have made things better for packet loss over UDP, or +maybe not. + +One important point - the transport code does remember failures of a transport. +So if UDP is the preferred transport, it will try it first, but if it fails for +a particular destination, the next attempt for that destination it will try +NTCP rather than trying UDP again. +

+ + +We could definitely try playing with UDP versus TCP priorities, but I'd urge +caution in that. I would urge that they not be changed too radically all at +once, or it might break things. + +

+ +We have four tuning knobs - the four bid values (SSU and NTCP, for +already-connected and not-already-connected). +We could make SSU be preferred over NTCP only if both are connected, for +example, but try NTCP first if neither transport is connected. +

+ +The other way to do it gradually is only shifting the streaming lib traffic +(the 1B proposal) however that could be hard and may have anonymity +implications, I don't know. Or maybe shift the traffic only for the first +outbound hop (i.e. don't propagate the flag to the next router), which gives +you only partial benefit but might be more anonymous and easier. +

+ +

Results of the Discussion

+... and other related changes in the same timeframe (2007): + + +{% endblock %} diff --git a/www.i2p2/pages/tunnel-alt-creation.html b/www.i2p2/pages/tunnel-alt-creation.html index 8612f978..e07ef24c 100644 --- a/www.i2p2/pages/tunnel-alt-creation.html +++ b/www.i2p2/pages/tunnel-alt-creation.html @@ -2,32 +2,51 @@ {% block title %}Tunnel Creation{% endblock %} {% block content %} -Note: This documents the current tunnel build implementation as of release 0.6.1.10. -
-
-1) Tunnel creation
-1.1) Tunnel creation request record
-1.2) Hop processing
-1.3) Tunnel creation reply record
-1.4) Request preparation
-1.5) Request delivery
-1.6) Endpoint handling
-1.7) Reply processing
-2) Notes
-
+This page documents the current tunnel build implementation. +Updated August 2010 for release 0.8 -

1) Tunnel creation encryption:

+

Tunnel Creation Specification

+ +

+This document specifies the details of the encrypted tunnel build messages +used to create tunnels using a "non-interactive telescoping" method. +See the tunnel build document +for an overview of the process, including peer selection and ordering methods.

The tunnel creation is accomplished by a single message passed along the path of peers in the tunnel, rewritten in place, and transmitted back to the tunnel creator. This single tunnel message is made up -of a fixed number of records (8) - one for each potential peer in +of a variable number of records (up to 8) - one for each potential peer in the tunnel. Individual records are asymmetrically encrypted to be read only by a specific peer along the path, while an additional symmetric layer of encryption is added at each hop so as to expose the asymmetrically encrypted record only at the appropriate time.

-

1.1) Tunnel creation request record

+

Number of Records

+Not all records must contain valid data. +The build message for a 3-hop tunnel, for example, may contain more records +to hide the actual length of the tunnel from the participants. +There are two build message types. The original +Tunnel Build Message (TBM) +contains 8 records, which is more than enough for any practical tunnel length. +The recently-implemented +Variable Tunnel Build Message (VTBM) +contains 1 to 8 records. The originator may trade off the size of the message +with the desired amount of tunnel length obfuscation. +

+In the current network, most tunnels are 2 or 3 hops long. +The current implementation uses a 5-record VTBM to build tunnels of 4 hops or less, +and the 8-record TBM for longer tunnels. +The 5-record VTBM (which fits in three 1 KB tunnel messages) reduces network traffic +and increases build success rate, because larger messages are less likely to be dropped.

+The reply message must be the same type and length as the build message. + + +

Request Record Specification

+ +Also specified in the +I2NP Specification

Cleartext of the record, visible only to the hop being asked:

   bytes     0-3: tunnel ID to receive messages as
@@ -49,49 +68,79 @@ endpoint, they specify where the rewritten tunnel creation reply
 message should be sent.  In addition, the next message ID specifies the
 message ID that the message (or reply) should use.

-

The flags field currently has two bits defined:

- bit 0: if set, allow messages from anyone
- bit 1: if set, allow messages to anyone, and send the reply to the
-        specified next hop in a tunnel message
+

The flags field contains the following: +

+ Bit order: 76543210 (bit 7 is MSB)
+ bit 7: if set, allow messages from anyone
+ bit 6: if set, allow messages to anyone, and send the reply to the
+        specified next hop in a tunnel message
+ bits 5-0: Undefined
+
-

That cleartext record is ElGamal 2048 encrypted with the hop's +Bit 7 indicates that the hop will be an inbound gateway (IBGW). +Bit 6 indicates that the hop will be an outbound endpoint (OBEP). + +
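+As a small illustration of the flag bits defined above (a sketch, not code from the router):
+{% filter escape %}
+// Hypothetical helper: bit 7 is the MSB of the single flags byte.
+static byte buildFlags(boolean inboundGateway, boolean outboundEndpoint) {
+    int flags = 0;
+    if (inboundGateway)   flags |= 1 << 7;   // bit 7: allow messages from anyone (IBGW)
+    if (outboundEndpoint) flags |= 1 << 6;   // bit 6: allow messages to anyone (OBEP)
+    return (byte) flags;                     // bits 5-0 left at zero (undefined)
+}
+{% endfilter %}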

Request Encryption

+ +

That cleartext record is ElGamal 2048 encrypted with the hop's public encryption key and formatted into a 528 byte record:

-  bytes   0-15: SHA-256-128 of the current hop's router identity
+  bytes   0-15: First 16 bytes of the SHA-256 of the current hop's router identity
   bytes 16-527: ElGamal-2048 encrypted request record

Since the cleartext uses the full field, there is no need for additional padding beyond SHA256(cleartext) + cleartext.
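+A sketch of that record assembly (ours, under the layout above); elGamalEncrypt() is a
+placeholder for the router's ElGamal-2048 routine producing the 512-byte block, not a JDK
+or published I2P API call:
+{% filter escape %}
+import java.security.MessageDigest;
+
+// Sketch only: assemble one 528-byte encrypted request record.
+static byte[] buildRequestRecord(byte[] cleartextRecord, byte[] hopRouterIdentityBytes,
+                                 byte[] hopElGamalPublicKey) throws Exception {
+    byte[] record = new byte[528];
+    byte[] identHash = MessageDigest.getInstance("SHA-256").digest(hopRouterIdentityBytes);
+    System.arraycopy(identHash, 0, record, 0, 16);           // bytes 0-15: truncated hash
+    byte[] enc = elGamalEncrypt(cleartextRecord, hopElGamalPublicKey);
+    System.arraycopy(enc, 0, record, 16, 512);               // bytes 16-527: ElGamal block
+    return record;
+}
+
+// Placeholder standing in for the actual ElGamal-2048 encryption (512-byte output).
+static byte[] elGamalEncrypt(byte[] cleartext, byte[] publicKey) {
+    throw new UnsupportedOperationException("placeholder for I2P's ElGamal-2048 encryption");
+}
+{% endfilter %}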

-

1.2) Hop processing

+

Hop Processing and Encryption

-

When a hop receives a TunnelBuildMessage, it looks through the 8 +

When a hop receives a TunnelBuildMessage, it looks through the records contained within it for one starting with their own identity hash (trimmed to 16 bytes). It then decrypts the ElGamal block from that record and retrieves the protected cleartext. At that point, they make sure the tunnel request is not a duplicate by feeding the -AES-256 reply key into a bloom filter and making sure the request -time is within an hour of current. Duplicates or invalid requests +AES-256 reply key into a bloom filter. +Duplicates or invalid requests are dropped.

After deciding whether they will agree to participate in the tunnel or not, they replace the record that had contained the request with -an encrypted reply block. All other records are AES-256/CBC -encrypted with the included reply key and IV (though each is +an encrypted reply block. All other records are AES-256/CBC +encrypted with the included reply key and IV (though each is encrypted separately, rather than chained across records).
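+In other words, every record except the hop's own is encrypted independently with the same
+reply key and IV, with the CBC chain restarted for each record. A sketch assuming
+javax.crypto and AES/CBC/NoPadding (each 528-byte record is already a multiple of 16 bytes):
+{% filter escape %}
+import javax.crypto.Cipher;
+import javax.crypto.spec.IvParameterSpec;
+import javax.crypto.spec.SecretKeySpec;
+
+// Sketch only: a hop's symmetric pass over the records it is not answering itself.
+static void encryptOtherRecords(byte[][] records, int ownIndex,
+                                byte[] replyKey, byte[] replyIv) throws Exception {
+    Cipher aes = Cipher.getInstance("AES/CBC/NoPadding");
+    SecretKeySpec key = new SecretKeySpec(replyKey, "AES");  // 32-byte AES-256 reply key
+    for (int i = 0; i < records.length; i++) {
+        if (i == ownIndex) continue;                         // own slot now holds the reply block
+        aes.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(replyIv));
+        records[i] = aes.doFinal(records[i]);                // not chained across records
+    }
+}
+{% endfilter %}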

-

1.3) Tunnel creation reply record

+

Reply Record Specification

After the current hop reads their record, they replace it with a reply record stating whether or not they agree to participate in the tunnel, and if they do not, they classify their reason for rejection. This is simply a 1 byte value, with 0x0 meaning they agree to participate in the tunnel, and higher values meaning higher -levels of rejection. The reply is encrypted with the AES session -key delivered to it in the encrypted block, padded with random data -until it reaches the full record size:

-  AES-256-CBC(SHA-256(padding+status) + padding + status, key, IV)
+levels of rejection. +

+The following rejection codes are defined: +

+To hide other causes, such as router shutdown, from peers, the current implementation +uses TUNNEL_REJECT_BANDWIDTH for almost all rejections. -

1.4) Request preparation

+

+ The reply is encrypted with the AES session +key delivered to it in the encrypted block, padded with 495 bytes of random data +to reach the full record size. +The padding is placed before the status byte: +

+  AES-256-CBC(SHA-256(padding+status) + padding + status, key, IV)
+This is also described in the +I2NP spec. + +
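+Since the record is 528 bytes and the SHA-256 hash is 32 bytes, the random padding works out
+to 495 bytes. A sketch of building and encrypting one reply record under that reading (our
+code, not the router's):
+{% filter escape %}
+import java.security.MessageDigest;
+import java.security.SecureRandom;
+import javax.crypto.Cipher;
+import javax.crypto.spec.IvParameterSpec;
+import javax.crypto.spec.SecretKeySpec;
+
+// Sketch only: SHA-256(padding + status) | padding | status, then AES-256-CBC.
+static byte[] buildReplyRecord(byte status, byte[] replyKey, byte[] replyIv) throws Exception {
+    byte[] padding = new byte[495];                     // 528 - 32 (hash) - 1 (status)
+    new SecureRandom().nextBytes(padding);
+
+    MessageDigest sha = MessageDigest.getInstance("SHA-256");
+    sha.update(padding);
+    sha.update(status);
+    byte[] hash = sha.digest();                         // SHA-256(padding + status)
+
+    byte[] record = new byte[528];
+    System.arraycopy(hash, 0, record, 0, 32);
+    System.arraycopy(padding, 0, record, 32, 495);
+    record[527] = status;                               // 0x00 = agree, nonzero = reject
+
+    Cipher aes = Cipher.getInstance("AES/CBC/NoPadding");
+    aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(replyKey, "AES"), new IvParameterSpec(replyIv));
+    return aes.doFinal(record);                         // whole record, 528 = 33 * 16 bytes
+}
+{% endfilter %}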

Request Preparation

When building a new request, all of the records must first be built and asymmetrically encrypted. Each record should then be @@ -103,31 +152,49 @@ right hop after their predecessor encrypts it.

The excess records not needed for individual requests are simply filled with random data by the creator.

-

1.5) Request delivery

+

Request Delivery

For outbound tunnels, the delivery is done directly from the tunnel creator to the first hop, packaging up the TunnelBuildMessage as if the creator was just another hop in the tunnel. For inbound -tunnels, the delivery is done through an existing outbound tunnel -(and during startup, when no outbound tunnel exists yet, a fake 0 -hop outbound tunnel is used).

+tunnels, the delivery is done through an existing outbound tunnel. +The outbound tunnel is generally from the same pool as the new tunnel being built. +If no outbound tunnel is available in that pool, an outbound exploratory tunnel is used. +At startup, when no outbound exploratory tunnel exists yet, a fake 0-hop +outbound tunnel is used.

-

1.6) Endpoint handling

+

Endpoint Handling

-

When the request reaches an outbound endpoint (as determined by the +

+For creation of an outbound tunnel, +when the request reaches an outbound endpoint (as determined by the 'allow messages to anyone' flag), the hop is processed as usual, encrypting a reply in place of the record and encrypting all of the other records, but since there is no 'next hop' to forward the TunnelBuildMessage on to, it instead places the encrypted reply -records into a TunnelBuildReplyMessage and delivers it to the +records into a +TunnelBuildReplyMessage +or +VariableTunnelBuildReplyMessage +(the type of message and number of records must match that of the request) +and delivers it to the reply tunnel specified within the request record. That reply tunnel forwards the reply records down to the tunnel creator for processing, as below.

-

When the request reaches the inbound endpoint (also known as the -tunnel creator), the router processes each of the replies, as below.

+

The reply tunnel was specified by the creator as follows: +Generally it is an inbound tunnel from the same pool as the new outbound tunnel being built. +If no inbound tunnel is available in that pool, an inbound exploratory tunnel is used. +At startup, when no inbound exploratory tunnel exists yet, a fake 0-hop +inbound tunnel is used.

-

1.7) Reply processing

+

+For creation of an inbound tunnel, +when the request reaches the inbound endpoint (also known as the +tunnel creator), there is no need to generate an explicit Reply Message, and +the router processes each of the replies, as below.

+ +

Reply Processing by the Request Creator

To process the reply records, the creator simply has to AES decrypt each record individually, using the reply key and IV of each hop in @@ -137,18 +204,37 @@ why they refuse. If they all agree, the tunnel is considered created and may be used immediately, but if anyone refuses, the tunnel is discarded.
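+A trivial sketch of the final tally, assuming the records have already been decrypted as
+described (byte 527 of each record carries the status, per the I2NP spec):
+{% filter escape %}
+// Sketch only: a tunnel is usable only if every hop agreed.
+static boolean tunnelAccepted(byte[][] decryptedReplyRecords, int hops) {
+    for (int i = 0; i < hops; i++) {
+        if (decryptedReplyRecords[i][527] != 0)   // nonzero status = rejection
+            return false;                         // any refusal discards the tunnel
+    }
+    return true;                                  // all agreed: use the tunnel immediately
+}
+{% endfilter %}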

-

2) Notes

+

+The agreements and rejections are noted in each peer's +profile, to be used in future assessments +of peer tunnel capacity. + + +

History and Notes

+

+This strategy came about during a discussion on the I2P mailing list + between Michael Rogers, Matthew Toseland (toad), and jrandom regarding + the predecessor attack. See:

+It was introduced in release 0.6.1.10 on 2006-02-16, which was the last time +a non-backward-compatible change was made in I2P. +

+ +

+Notes:

+

References

+ + +

Future Work

+ + + {% endblock %}