{% extends "_layout.html" %} {% block title %}I2P Development Meeting 177{% endblock %} {% block content %}

I2P dev meeting, April 25, 2006

16:12 < jrandom> 0) hi

16:12 < jrandom> 1) Net status and 0.6.1.17

16:12 < jrandom> 2) I2Phex

16:13 < jrandom> 3) ???

16:13 < jrandom> 0) hi

16:13 * jrandom waves

16:13 <@cervantes> 'lo

16:13 < jrandom> weekly status notes posted up @ http://dev.i2p.net/pipermail/i2p/2006-April/001283.html

16:14 < jrandom> while y'all skim that, lets jump into 1) Net status

16:14 < jrandom> so, as most of y'all have seen, we've got a new release out, and so far, the results have been pretty positive

16:15 <@cervantes> (yay!)

16:15 < jrandom> still not where we need to be, but it pretty much sorts out the main issues we were seeing

16:15 < jrandom> aye, 'tis nice to have halfway decent tunnel build rates again, at 2+ hop tunnels :)

16:16 * jrandom has 50%+ success rates on another router w/ 1hop tunnels

16:17 < jrandom> I think the last few changes in 0.6.1.17 should help avoid this sort of congestion collapse in the future as well

16:17 < jrandom> the user-visible result though is that we'll occasionally see lease expirations, but rather than compounding itself, it'll back off

16:17 * cervantes sparks up azureus

16:18 <+Complication> This morning, I recorded client tunnel (length 2 +/- 1) success rates near 35%

16:18 <+Complication> Currently it's lower, since I tried making some modifications, and the latest of them wasn't so great :D

16:18 <@cervantes> jrandom: well done tracking that down - we were beginning to look like freenet for a bit :)

16:19 < jrandom> *cough* ;)

16:20 <+fox> <inkeystring> jrandom: would you mind briefly describing the backoff mechanism? i'm working on something like that for freenet 0.7 at the moment

16:21 < jrandom> inkeystring: we've had a transport layer backoff mechanism in place to cut down transmissions to a peer when the transport layer is overloaded, but that wasn't sufficient

16:21 <@cervantes> *cough* did I say freenet, I meant tor

16:21 <+fox> <inkeystring> :-)

16:22 < jrandom> inkeystring: the new change was to propagate that up to a higher level so that we stopped trying to build tunnels when our comm layer was saturated

16:22 < jrandom> (rather than sending more tunnel build attempts)
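
A minimal sketch of the change jrandom describes, with hypothetical names (the real logic lives in the router's tunnel-building code):

    // Sketch of the 0.6.1.17 change: the transport's overload signal is
    // propagated up, so the tunnel builder skips new build requests instead
    // of piling more onto a saturated comm layer.
    interface Transport {
        boolean isOverloaded(); // true when the comm layer is saturated
    }

    class TunnelBuilder {
        private final Transport transport;

        TunnelBuilder(Transport transport) { this.transport = transport; }

        /** Called before each build round; back off while saturated. */
        boolean allowBuildAttempt() {
            return !transport.isOverloaded();
        }
    }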

16:22 <+fox> <inkeystring> thanks - does the transport layer only back off when packets are lost, or is there some way for the receiver to control the flow?

16:23 * jrandom seems to recall discussing the impact of congestion *vs* routing w/ toad a few times (on irc and my old flog), though i don't recall any net-positive solution :/

16:23 < jrandom> the receiver can NACK, and we've got hooks for ECN, but they haven't been necessary

16:23 <+fox> <inkeystring> yeah the debate has resurfaced on freenet-dev :-) still no silver bullet

16:24 <+fox> <inkeystring> cool, thanks for the information

16:24 <+Complication> They're using UDP too these days, aren't they?

16:24 < jrandom> currently, the highly congested peers have trouble not with per-peer throttling, but with the breadth of the peer comm

16:24 <+Complication> (as the transport protocol)

16:24 <+fox> <inkeystring> breadth = number of peers?

16:24 < jrandom> yes

16:25 < jrandom> with the increased tunnel success rates, peers no longer need to talk to hundreds of peers just to get a tunnel built

16:25 < jrandom> so they can get by with just 20-30 peers

16:25 < jrandom> (directly connected peers, that is)

16:26 <+fox> <inkeystring> i guess that's good news for nat hole punching, keepalives etc?

16:26 < jrandom> otoh, w/ 200-300 active SSU connections, a 6KBps link is going to have trouble

16:26 < jrandom> aye

16:26 <+fox> <inkeystring> Complication: yes

16:27 <+fox> <inkeystring> (in the 0.7 alpha)

16:27 <+Complication> Aha, then they're likely facing some similar stuff

16:27 <+Complication> I hope someone finds the magic bullet :D

16:27 < jrandom> in a different way though. the transport layer is a relatively easy issue

16:27 <+fox> <inkeystring> i think they might have reused some of the SSU code... or at least they talked about it

16:27 < jrandom> (aka well studied for 30+ years)

16:28 < jrandom> but i2p (and freenet) load balancing works at a higher level than point to point links, and has different requirements

16:28 <+fox> <inkeystring> yeah it's the interaction with routing that's tricky

16:29 < jrandom> aye, though i2p's got it easy (we don't need to find specific peers with the data in question, just anyone with capacity to participate in our tunnels)

16:30 <+fox> <inkeystring> so there's no efficiency loss if you avoid an overloaded peer...

16:30 <+fox> <inkeystring> whereas in freenet, routing around an overloaded peer could increase the path length

16:30 <+fox> <inkeystring> anyway sorry this is OT

16:31 < jrandom> np, though explaining why the changes in 0.6.1.17 affect our congestion collapse was relevant :)

16:31 < jrandom> ok, anyone else have anything for 1) Net status?

16:32 <+Complication> Well, as mentioned before, while running pure .17, I observed a noticeable periodicity in bandwidth and active peers

16:32 <+Complication> And a few other people seem to experience it too, though I've got no clue about how common it is

16:33 <+Complication> I've been wondering about its primary causes, mostly from the perspective of tunnel throttling, but no solution yet

16:33 <+Complication> I managed to get my own graphs to look flatter, but only at the cost of some overall deterioration

16:33 <+Complication> Tried modifications like:

16:34 <+Complication> > _log.error("Allowed was " + allowed + ", but we were overloaded, so ended up allowing " + Math.min(allowed,1));

16:34 <+Complication> (this was to avoid it totally refraining from build attempts for its own tunnels)
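
Roughly, the tweak amounts to this (a sketch, assuming the surrounding throttle code previously allowed zero builds when overloaded):

    // Sketch of Complication's tweak: when overloaded, clamp permitted build
    // attempts to at most one instead of zero, so the router never entirely
    // stops rebuilding its own client tunnels.
    static int buildsAllowed(int allowed, boolean overloaded) {
        return overloaded ? Math.min(allowed, 1) : allowed;
    }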

16:35 < jrandom> ah right

16:35 <+Complication> (oh, and naturally the loglevel is wacky, since I changed those for testing)

16:35 < jrandom> we've got some code in there that tries to skew the periodicity a bit, but it isn't working quite right (obviously)

16:36 * perv just shot his system :(

16:36 <+Complication> But I tried some things like that, and tried reducing the growth factor for tunnel counts

16:36 < perv> is there an undelete for reiser4?

16:36 < jrandom> basically, if we just act as if tunnels expire (randomly) earlier than they actually do, it should help

16:36 <+Complication> Currently reading the big "countHowManyToBuild" function in TunnelPool.java :D

16:36 <+Complication> But I've not read it through yet

16:37 < jrandom> (though it'd obviously increase the tunnel build frequency, which prior to 0.6.1.17, wouldn't have been reasonable)

16:37 <+Complication> perv: there is something

16:37 < jrandom> hmm, putting a randomization in there would be tough Complication, as we call that function quite frequently

16:38 * perv considers salvaging and switching to gentoo

16:38 < jrandom> what i'd recommend would be to look at randomizing the expiration time of successfully built tunnels

16:38 <+Complication> perv: you're better off with reiser than ext3, certainly

16:38 <+Complication> perv: but I don't know it by heart

16:38 <+Complication> jrandom: true, sometimes it could overbuild this way

16:38 < jrandom> (so that the existing countHowManyToBuild thinks it needs them before it actually does)
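
As a sketch of that suggestion (the names and the one-minute skew cap are assumptions, not actual router code):

    import java.util.Random;

    // Sketch of jrandom's suggestion: report a randomly earlier expiration
    // to the pool, so countHowManyToBuild starts replacing tunnels at
    // staggered times and the build load loses its synchronized sawtooth.
    class SkewedExpiration {
        private static final Random RAND = new Random();
        private static final long MAX_EARLY_MS = 60 * 1000; // assumed skew cap

        /** Expiration used for replacement decisions only; the actual tunnel lifetime is unchanged. */
        static long effectiveExpiration(long actualExpirationMs) {
            return actualExpirationMs - (long) (RAND.nextDouble() * MAX_EARLY_MS);
        }
    }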

16:38 <+Complication> (and sometimes it inevitably overbuilds, when tunnels break and it gets hasty)

16:40 <+Complication> Hmm, a possibility I've not considered...

16:41 <+Complication> Either way, playing with it too, but no useful observations yet

16:42 < jrandom> cool, i've got some tweaks i've been playing with on that, perhaps we can get those together for the next build to see how it works on the reasonably-viable net ;)

16:43 < spinky> Is there a stat where you can see the amount of overhead the i2p network adds to the application data?

16:43 < jrandom> "overhead" is such a loaded term... ;)

16:43 < jrandom> we call it the cost of anonymity ;)

16:43 < spinky> hehe

16:45 < jrandom> (aka not really. application layer payload on a perfect net w/ 0 congestion & 1+1hops gets something like 70-80% efficiency for the endpoints)

16:45 < jrandom> ((last i measured))

16:45 < jrandom> but thats really lab conditions

16:45 < jrandom> live net is much more complicated

16:47 < spinky> Right, I meant just the amount of extra data used for setting up tunnels, keys, padding etc

16:47 < spinky> ...compared to the application data transferred

16:47 < jrandom> depends on the message framing, congestion, tunnel build success rates, etc

16:48 < jrandom> a 2 hop tunnel can be built by the network bearing 20KB
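
To put that figure in rough perspective (assuming the standard 10-minute tunnel lifetime and ignoring failed builds): a router maintaining six tunnels rebuilds each every 10 minutes, so the network as a whole bears about 6 × 20KB / 600s ≈ 0.2KBps of build overhead on that router's behalf.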

16:48 <+Complication> I've sometimes wanted to test that, primarily with the goal of estimating the "wastefulness" of mass transfer applications like BitTorrent and I2Phex

16:48 <+Complication> But I never got around to doing a clean measurement between my two nodes

16:48 <+Complication> Some day, I'll get back to that, though

16:49 < jrandom> Complication: its pretty tough with chatty apps, much simpler to measure wget :)

16:49 <+Complication> How very true

16:50 <+Complication> In what I managed to try, no semblance of precision was involved

16:54 < jrandom> ok, if there's nothing else on 1), lets slide on over to 2) I2Phex

16:55 < jrandom> Complication: whatcha upta? :)

16:55 <+Complication> Well, yesterday's commit was a fix to certain problems which some people experienced with my silly first-run detector

16:56 <+Complication> The first-run detector is now less silly, and bar reported that it seemed to start behaving normally

16:56 <+Complication> However, since I2Phex seems runnable already in current network conditions,

16:56 <+Complication> I'll try finding the rehash bug too.

16:57 <+Complication> If only I can

16:57 < jrandom> cool, i know that one has been haunting you for months now

16:57 <+Complication> What is interesting is that mainline Phex may also have it, and locating + reading their observations is something I'll try doing too

16:58 < jrandom> but nice to hear the startup fix is in there

16:58 < jrandom> ah word

16:58 <+Complication> I can't confirm currently if mainline Phex has it or not, though - never seen it personally there

16:59 < jrandom> (intermittent bugs)--

16:59 <+Complication> It's difficult to cause in a controlled fashion, and thus difficult to find

17:00 <+Complication> And on my side, that's about all currently

17:00 <+Complication> Later on, I was wondering if it would be worthwhile to limit the number of parallel peer contacting attempts I2Phex fires at a time

17:01 < jrandom> aye, likely

17:01 <+Complication> Since they'd create a whole bunch of NetDB lookups in a short time, and that could be potentially not-so-nice from an I2P router's perspective

17:02 < jrandom> and new destination contacts require elG instead of aes

17:02 <+Complication> But I've not read or written any actual code towards that goal yet
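
A sketch of what such a limit could look like (the cap of four and all names are assumptions, not I2Phex code):

    import java.util.concurrent.Semaphore;

    // Sketch: gate outbound peer-contact attempts behind a semaphore so only
    // a few NetDB-lookup-triggering (and ElGamal-handshaking) connects run
    // at any one time.
    class ContactThrottle {
        private final Semaphore slots = new Semaphore(4); // assumed cap

        void contact(Runnable connectTask) throws InterruptedException {
            slots.acquire();       // block until a slot frees up
            try {
                connectTask.run(); // performs the lookup + handshake
            } finally {
                slots.release();
            }
        }
    }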

17:04 < jrandom> k np. perhaps the mythical i2phex/phex merge'll bundle a solution :)

17:04 <+Complication> And on my part, that's about all the news from the I2Phex front...

17:04 < jrandom> cool, thanks for the update and the effort looking into things!

17:05 < jrandom> ok, lets jump on over to 3) ???

17:05 < jrandom> anyone have anything else to bring up for the meeting?

17:05 < lsmith> hello! i just want to commend the devs on the fantastic improvements with the latest release, my total bw reads 0.9/1.4 KBps and i remain connected to irc... it's...insanely cool :)

17:05 <+Complication> :D

17:06 < jrandom> thanks for your patience along the way - supporting low bw users is critical

17:06 <@cervantes> lsmith: that's really good to

17:06 <@cervantes> * Connection Reset

17:06 < jrandom> heh

17:07 < lsmith> :)

17:09 < jrandom> oh, one other thing of note is that zzz is back, and with 'im comes stats.i2p :)

17:09 < jrandom> [wewt]

17:11 <+Complication> A quite useful source of comparison data :)

17:11 < jrandom> mos' def'

17:11 < jrandom> ok, anyone have anything else for the meeting?

17:13 < jrandom> if not...

17:13 < jdot> i have a post-baf question or two

17:13 < jrandom> heh ok, then lets get the baffer rollin' :)

17:13 * jrandom winds up...

17:13 * jrandom *baf*s the meeting closed

{% endblock %}