one for peer selection and organization. reliability is kept around
for the moment and shown on the router console, but only to provide a
comparison (it is not used in any way)
* new stat in the TunnelHistory: failRate
* coalesce TunnelHistory stats (duh!)
* new ProfileOrganizer CLI ("ProfileOrganizer[ filename]*")
* implement reasonable 'failure' logic - if they are actively rejecting tunnels,
or tunnels they've agreed to join are failing, mark them as failing (see the sketch after this list)
* when choosing peers to test, exclude all fast ones
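To make that failure heuristic concrete, here's a minimal standalone sketch - the counter names, thresholds, and main() values are illustrative assumptions, not the actual ProfileOrganizer code:

```java
/**
 * Illustrative sketch of the "failing" heuristic described above (not the
 * actual ProfileOrganizer code; counters and thresholds are assumptions).
 * A peer is marked failing if it is actively rejecting tunnel requests,
 * or if tunnels it agreed to join keep failing.
 */
public class FailureHeuristicSketch {
    static boolean isFailing(long rejected, long agreed, long failed) {
        if (agreed + rejected >= 10 && rejected > agreed)
            return true;                  // actively rejecting our tunnel requests
        if (agreed >= 10 && failed * 2 >= agreed)
            return true;                  // tunnels it accepted keep failing
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isFailing(8, 2, 0));   // true: mostly rejections
        System.out.println(isFailing(2, 20, 15)); // true: accepted tunnels failing
        System.out.println(isFailing(1, 20, 2));  // false: healthy peer
    }
}
```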
Some of them think that it's ok for an inner class of a subclass to access protected data of the
outer class's parent when the parent is in another package.
Others do not.
Kaffe doesn't care (but that's because Kaffe doesn't do much for verification ;)
The JLS is apparently confusing, but it doesn't matter whether it's a bug in our code or in javac - we've got to change the code.
The simplest change would be to just make JobImpl._context public, but I loathe public data, so we make it private and add an accessor (sketched below)
(and change dozens of files)
whee
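Roughly, the accessor change looks like this - the stub types and the example subclass are placeholders for illustration, only the private-field-plus-accessor pattern is the point:

```java
// Minimal sketch of the change: keep the context private in the base class and
// expose it through an accessor, so subclasses (and their inner classes) never
// touch a protected field of a superclass from another package. The stub types
// here are placeholders, not the real I2P classes.
class RouterContext { /* stub */ }

abstract class JobImpl {
    private final RouterContext _context;   // private instead of protected

    protected JobImpl(RouterContext context) {
        _context = context;
    }

    // everyone goes through the accessor rather than the field
    public final RouterContext getContext() { return _context; }
}

class ExampleJob extends JobImpl {
    ExampleJob(RouterContext ctx) { super(ctx); }

    // an inner class uses the accessor, sidestepping the verifier disagreement
    class Runner {
        RouterContext ctx() { return getContext(); }
    }
}
```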
if more tokens become available while the first pending request is still blocked on
read/write (aka after allocation and before next .waitForAllocation()), give the tokens
to the next request
* refactor the satisfy{In,Out}boundRequests methods into smaller logical units
this will give a much smoother traffic pattern: instead of waiting 6 seconds to write a 32KB message under a 6KB/s rate, it'll write 6KB for each of the first 5 seconds, and 2KB the next (see the sketch below)
this also allows people to have small buckets (but again, bucket sizes smaller than the rate just don't make sense)
(this is what caused the runtime errors on sun jvms but not on kaffe)
((aka i slacked and didn't test sufficiently. off with my head))
this now builds and runs fine in sun 1.3-1.5 jvms, as well as kaffe
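Here's a toy model of that incremental allocation - not the real limiter code; the class, names, and per-second refill loop are assumptions made purely to show the 6,6,6,6,6,2 pattern:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Toy model of the incremental allocation described above. Each "second" the
 * bucket is refilled and pending requests are granted whatever is available,
 * in FIFO order, so a 32KB write under a 6KB/s rate goes out as 6,6,6,6,6,2KB
 * instead of stalling for 6 seconds and then bursting.
 */
class TokenBucketSketch {
    static class Request {
        final int wanted;
        int allocated;
        Request(int wanted) { this.wanted = wanted; }
        boolean satisfied() { return allocated >= wanted; }
    }

    private final Queue<Request> pending = new ArrayDeque<>();
    private int tokens;

    void queue(Request req) { pending.add(req); }

    /** Refill the bucket and hand out tokens; returns bytes granted this pass. */
    int refillAndSatisfy(int rate) {
        tokens += rate;
        int granted = 0;
        for (Request req : pending) {
            if (req.satisfied()) continue;      // already allocated, still writing: give tokens to the next one
            int grant = Math.min(tokens, req.wanted - req.allocated);
            req.allocated += grant;
            tokens -= grant;
            granted += grant;
            if (tokens == 0) break;
        }
        pending.removeIf(Request::satisfied);   // fully served requests leave the queue
        return granted;
    }

    public static void main(String[] args) {
        TokenBucketSketch bucket = new TokenBucketSketch();
        bucket.queue(new Request(32 * 1024));          // one 32KB message
        for (int second = 1; second <= 6; second++)    // 6KB/s rate
            System.out.println("second " + second + ": wrote "
                               + bucket.refillAndSatisfy(6 * 1024) + " bytes");
    }
}
```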
the simple RouterThrottleImpl bases its decision entirely on how congested the jobQueue is - if there are jobs that have been waiting 5+ seconds, reject everything and stop reading from the network
(each i2npMessageReader randomly waits 0.5-1s when throttled before rechecking the throttle)
minor adjustments in the stats published - removing a few useless ones and adding the router.throttleNetworkCause (which is the average ms lag in the jobQueue when an I2NP reader is throttled)
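A rough sketch of that throttle decision - illustrative only; the 5 second lag threshold and the 0.5-1s backoff come from the notes above, while the class and method names are assumptions:

```java
import java.util.Random;

/**
 * Sketch of the throttle decision described above, not the actual
 * RouterThrottleImpl internals.
 */
class RouterThrottleSketch {
    private static final long MAX_JOB_LAG_MS = 5 * 1000;

    /** Stand-in for whatever exposes how long queued jobs have been waiting. */
    interface JobQueueStats {
        long getMaxLagMs();
    }

    private final JobQueueStats jobQueue;

    RouterThrottleSketch(JobQueueStats jobQueue) { this.jobQueue = jobQueue; }

    /** Reject inbound network messages while the job queue is badly backed up. */
    boolean acceptNetworkMessage() {
        return jobQueue.getMaxLagMs() < MAX_JOB_LAG_MS;
    }

    /** When throttled, an I2NP reader pauses a random 0.5-1s before retrying. */
    static long throttledPauseMs(Random rnd) {
        return 500 + rnd.nextInt(501);
    }
}
```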
these had been broken out into separate jobs before to reduce thread and lock contention, but that isn't as serious an issue anymore (in these cases) and the non-contention-related delays of these mini-jobs are trivial
current 0.3.2 throws an NPE which causes the submitMessageHistory functionality to fail, which isn't really a loss since i send that data to /dev/null at the moment ;)
(but you'll want to set router.keepHistory=false and router.submitHistory=false)
this'll go into the next rev, whenever it comes out
(thanks ugha!)
don't drop the connection for having too many timed out messages (just time out the individual ones)
however, if any of the messages that time out have been there for a minute, kill the con (since it's hung)
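Sketched out, that expiry policy might look like the following - illustrative only; the per-message timeout value and the names are assumptions, only the "stuck for a minute means the con is hung" rule comes from the note above:

```java
import java.util.Iterator;
import java.util.List;

/**
 * Sketch of the expiry policy described above: individual messages that
 * exceed their timeout are dropped, but the connection itself is only killed
 * when some message has been stuck for a full minute (i.e. the con is hung).
 */
class PendingMessageExpirySketch {
    static final long MESSAGE_TIMEOUT_MS = 10 * 1000;  // assumed per-message timeout
    static final long HUNG_CON_MS = 60 * 1000;         // a minute stuck => hung connection

    static class Pending {
        final long queuedAt;
        Pending(long queuedAt) { this.queuedAt = queuedAt; }
    }

    /** Drop expired messages; return true if the connection should be closed. */
    static boolean expire(List<Pending> queue, long now) {
        boolean killCon = false;
        for (Iterator<Pending> it = queue.iterator(); it.hasNext(); ) {
            Pending msg = it.next();
            long age = now - msg.queuedAt;
            if (age >= HUNG_CON_MS)
                killCon = true;                         // something sat for a minute: hung
            if (age >= MESSAGE_TIMEOUT_MS)
                it.remove();                            // time out just this message
        }
        return killCon;
    }
}
```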
kaffe workaround for fast closing sockets
don't consider a connection valid until it has been up for 30 seconds (so people who are simply establishing connections but whose NATs are still messed up get the error)
when a message expires after being accepted, don't drop it unless it expired outside the fudge factor
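For example, that fudge-factor check could be as simple as the following (the 10 second allowance and the names are assumptions):

```java
/**
 * Sketch of the fudge-factor check described above: a message that expired
 * after we accepted it is only dropped when it is past its expiration by
 * more than a small allowance for clock skew and processing lag.
 */
class ExpirationFudgeSketch {
    static final long FUDGE_MS = 10 * 1000;   // assumed allowance

    static boolean shouldDrop(long expirationMs, long nowMs) {
        return nowMs > expirationMs + FUDGE_MS;
    }
}
```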
increase the default maxWaitingJobs to 100, since we can get lots at once (and we don't gobble as much memory as we used to)
also, don't wake up the jobQueueRunner in getNext once a second, instead just let the threads updating the queue notify it
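A minimal sketch of that wait/notify behavior replacing the once-a-second polling - a toy queue, not the actual JobQueue, with names assumed for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Toy model of the change described above: getNext() no longer wakes up every
 * second to re-check; it blocks until a producer enqueues a job and notifies.
 */
class NotifyingQueueSketch<T> {
    private final Queue<T> jobs = new ArrayDeque<>();

    synchronized void enqueue(T job) {
        jobs.add(job);
        notify();                       // wake a waiting runner, if any
    }

    /** Blocks until a job is available; no periodic 1s wakeups. */
    synchronized T getNext() throws InterruptedException {
        while (jobs.isEmpty())
            wait();                     // released only by enqueue()'s notify()
        return jobs.poll();
    }
}
```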