* Only send netDb searches to the floodfill peers for the time being
* Add some proof-of-concept filters for tunnel participation. By default,
these skip peers with an advertised bandwidth of less than 32KBps or
an advertised uptime of less than 2 hours. If this proves sufficient, a
safer implementation of these filters will follow.
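  A minimal sketch of such a filter; the class name, field names, and method
  signature here are illustrative, not the actual I2P code:

    /** Illustrative participation filter: skip peers that advertise less
     *  than 32 KBps of bandwidth or less than 2 hours of uptime. */
    public class ParticipationFilter {
        private static final int MIN_BANDWIDTH_KBPS = 32;
        private static final long MIN_UPTIME_MS = 2 * 60 * 60 * 1000L;

        public boolean allowParticipation(int advertisedKBps, long advertisedUptimeMs) {
            if (advertisedKBps < MIN_BANDWIDTH_KBPS)
                return false;     // too little advertised bandwidth
            if (advertisedUptimeMs < MIN_UPTIME_MS)
                return false;     // not up long enough
            return true;
        }
    }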
* Fix some oversights in my previous changes:
adjust some log levels, make a few statements less wasteful, and
make one comparison less confusing and more likely to log unexpected values
* Separate growth factors for tunnel count and tunnel test time
* Reduce the growth factors so the probabilistic throttle will activate
* Square probAccept values to decelerate more strongly when far from the average
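  For illustration, a sketch of how squaring the acceptance probability
  steepens the falloff once the current value exceeds the average (the
  method shape and names are assumptions, not the real implementation):

    /** Illustrative probabilistic throttle: above the running average the
     *  acceptance probability falls off, and squaring it makes the falloff
     *  much steeper the further we are from the average. */
    static double probAccept(double current, double average) {
        if (current <= average)
            return 1.0;                   // at or below average: always accept
        double p = average / current;     // linear falloff above the average
        return p * p;                     // squared: decelerate more strongly
    }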
* Create a bandwidth stat with an approximately 15-second half life
* Make allowTunnel() check the 1-second bandwidth for overload
before doing allowance calculations using the 15-second bandwidth
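  A rough sketch of such a decaying bandwidth stat and the two-stage check,
  assuming one sample per second; the class and method names are
  illustrative rather than the actual code:

    /** Illustrative exponentially decaying bandwidth average with an
     *  approximately 15-second half life, sampled once per second. */
    public class DecayingBandwidth {
        private static final double HALF_LIFE_SECONDS = 15.0;
        // weight of each new 1-second sample so older data halves roughly every 15s
        private static final double ALPHA = 1.0 - Math.pow(0.5, 1.0 / HALF_LIFE_SECONDS);

        private double average;      // smoothed bytes per second
        private double lastSecond;   // most recent 1-second rate

        public void addSample(double bytesLastSecond) {
            lastSecond = bytesLastSecond;
            average += ALPHA * (bytesLastSecond - average);
        }

        /** Sketch of an allowTunnel()-style check: refuse outright if the
         *  instantaneous 1-second rate is already over the limit, otherwise
         *  do the allowance check against the smoother 15-second average. */
        public boolean allowTunnel(double bytesPerSecondLimit) {
            if (lastSecond > bytesPerSecondLimit)
                return false;                      // clearly overloaded right now
            return average < bytesPerSecondLimit;  // headroom on the smoothed rate
        }
    }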
* Tweak the overload detector in BuildExecutor to be more sensitive
to rising edges, and add the ability to initiate tunnel drops
* Add a function to seek and drop the highest-rate participating tunnel,
keeping a fixed-plus-random grace period between such drops.
It doesn't seem very effective, so it is disabled by default
(set "router.dropTunnelsOnOverload=true" to enable)
* Allow a single build attempt to proceed despite the 1-minute overload
only if the 1-second rate shows enough spare bandwidth
(i.e. the overload has already eased)
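  One way this exception could look, sketched with an assumed headroom
  margin and assumed parameter names (not the actual code):

    /** Illustrative exception to the 1-minute overload rule: permit a single
     *  build attempt anyway when the 1-second rate shows spare bandwidth,
     *  which suggests the overload has already eased. */
    static boolean allowSingleBuildDespiteOverload(double oneMinuteBps, double oneSecondBps,
                                                   double limitBps) {
        boolean overloadedLastMinute = oneMinuteBps >= limitBps;
        boolean spareRightNow = oneSecondBps < 0.8 * limitBps;  // 0.8 is an assumed margin
        return !overloadedLastMinute || spareRightNow;
    }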
* Correct a misnamed property in SummaryHelper.java
to avoid confusion
* Make the maximum allowance of our own concurrent
tunnel builds slightly adaptive: one concurrent build per 6 KB/s
within the fixed range 2..10
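  The adaptive cap reduces to a simple clamp; the parameter name below is
  an assumption:

    /** Illustrative adaptive cap on our own concurrent tunnel builds:
     *  one build per 6 KB/s, clamped to the fixed range 2..10. */
    static int maxConcurrentBuilds(int shareKBps) {
        int allowed = shareKBps / 6;                 // one build per 6 KB/s
        return Math.max(2, Math.min(10, allowed));   // clamp to the fixed range 2..10
    }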
* While overloaded, try to avoid completely choking our own build attempts,
instead preferring to limit them to 1
* Adjust the tunnel build timeouts to cut down on expirations, and
increase the SSU connection establishment retransmission rate to
something less glacial.
* For the first 5 minutes of uptime, be less aggressive with tunnel
exploration, opting for more reliable peers to start with.
* Adjust how we pick high capacity peers to allow the inclusion of fast
peers (the previous filter assumed an old usage pattern)
* New set of stats to help track per-packet-type bandwidth usage better
* Cut out the proactive tail drop from the SSU transport, for now
* Reduce the frequency of tunnel build attempts while we're saturated
* Don't drop tunnel requests as easily - prefer to explicitly reject them
* Adjust the proactive tunnel request dropping so that we reject what we
can instead of dropping so many (but still drop if we get too far
overloaded)
* Throttling improvements on SSU - throttle all transmissions to a peer
when we are retransmitting, not just the retransmissions. Also, if
we're already retransmitting to a peer, probabilistically tail drop new
messages targeting that peer, based on the estimated wait time before
transmission.
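  A sketch of the probabilistic tail drop, with an assumed maximum-wait
  parameter (not the actual SSU code):

    /** Illustrative SSU-style tail drop: once we are already retransmitting
     *  to a peer, drop new messages for that peer with a probability that
     *  grows with the estimated wait time before transmission. */
    static boolean shouldTailDrop(boolean retransmittingToPeer, long estimatedWaitMs,
                                  long maxWaitMs, java.util.Random rnd) {
        if (!retransmittingToPeer)
            return false;                  // only throttle while retransmitting
        double p = Math.min(1.0, (double) estimatedWaitMs / maxWaitMs);
        return rnd.nextDouble() < p;       // longer estimated wait, more likely to drop
    }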
* Fix the rounding error in the inbound tunnel drop probability.
* Include a combined send/receive graph (good idea cervantes!)
* Proactively drop inbound tunnel requests probabilistically as the
estimated queue time approaches our limit, rather than letting them all
through up to that limit.
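  Sketched under assumed names, the drop probability could simply ramp with
  the ratio of estimated queue time to the limit:

    /** Illustrative proactive drop: as the estimated queue time for an
     *  inbound tunnel request approaches the limit, drop it with increasing
     *  probability instead of letting everything through up to the limit. */
    static boolean dropRequest(long estimatedQueueMs, long queueLimitMs, java.util.Random rnd) {
        if (estimatedQueueMs >= queueLimitMs)
            return true;                                      // already past the limit
        double p = (double) estimatedQueueMs / queueLimitMs;  // note the floating-point division
        return rnd.nextDouble() < p;                          // ramps up toward 1.0 near the limit
    }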
* Process inbound tunnel requests more efficiently
* Proactively drop inbound tunnel requests if the queue they would have to
wait in before being processed is too long (the limit is dynamically
adjusted by CPU load)
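  A hypothetical sketch of the CPU-load-adjusted backlog gate; the base
  limit and the scaling are assumptions, not the actual values:

    /** Illustrative queue-length gate for inbound tunnel requests: the
     *  allowed backlog shrinks as CPU load rises. */
    static boolean dropForBacklog(int queuedRequests, double cpuLoad /* 0.0 .. 1.0 */) {
        int baseLimit = 500;                          // assumed base backlog limit
        int limit = (int) (baseLimit * (1.0 - 0.5 * Math.min(1.0, cpuLoad)));
        return queuedRequests > limit;                // drop before even queueing the request
    }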
* Adjust the tunnel rejection throttle to reject requests when we have to
proactively drop too many.
* Display the number of pending inbound tunnel join requests on the router
console (as the "handle backlog")
* Include a few more stats in the default set of graphs