* removed SourceRouteBlock & SourceRouteReplyMessage, as they're a redundant concept
that 1) takes up bandwidth, 2) takes up CPU, and 3) smells funny.
now the TunnelCreateMessage includes a replyTag, replyKey, replyTunnel, and
replyGateway, through which (and with which) the participant garlic encrypts its
ACK/NACK (a rough sketch of these fields follows this list).
* the TunnelCreateMessage doesn't need a separate ACK - either we get a
TunnelCreateStatusMessage back or we don't.
* message structure mods for unique tunnel ID per hop (though currently all hops have
the same tunnel ID)
(making a searchReply message ~100 bytes, down from ~30KB, and the lookup message ~64 bytes, down from ~10KB)
* when we get a netDb searchReply or lookup message referencing someone we don't know,
we fire off a lookup for them
* reduced some excessive padding
* dropped the DbSearchReplyMessageHandler, since it shouldn't be used (all search replies
should be handled by a MessageSelector built by the original search message)
* removed some oddball constructors from the SendMessageDirectJob and SendTunnelMessageJob (always must specify a timeout)
* refactored SendTunnelMessageJob main handler method into smaller logical methods
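To make the reply path above concrete, here is a minimal sketch of the reply fields the TunnelCreateMessage carries. The field names come from the note above; the types, accessors, and class shape are assumptions for illustration, not the actual message class.

// Sketch only: the reply-path fields described in the TunnelCreateMessage note.
// Field names match the note; types and accessors are assumptions.
public class TunnelCreateReplySketch {
    private byte[] _replyTag;     // session tag to mark the garlic-wrapped status with
    private byte[] _replyKey;     // session key the ACK/NACK status is garlic encrypted with
    private long _replyTunnel;    // tunnel ID the wrapped status is sent down
    private byte[] _replyGateway; // hash of the gateway router of that reply tunnel

    public TunnelCreateReplySketch(byte[] tag, byte[] key, long tunnel, byte[] gateway) {
        _replyTag = tag;
        _replyKey = key;
        _replyTunnel = tunnel;
        _replyGateway = gateway;
    }

    public byte[] getReplyTag()     { return _replyTag; }
    public byte[] getReplyKey()     { return _replyKey; }
    public long   getReplyTunnel()  { return _replyTunnel; }
    public byte[] getReplyGateway() { return _replyGateway; }

    // The participant garlic encrypts its TunnelCreateStatusMessage with the
    // replyKey/replyTag and sends it to the replyGateway for delivery down the
    // replyTunnel - which is why no SourceRouteReplyMessage is needed anymore.
}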
cleaned up the rebuild / verification process so that the select*TunnelIds methods will always return what is necessary
for the moment, don't automatically kill all tunnels of a peer who fails just once (they can recover)
logging
a new piece of data exposed and maintained is the list of router contexts - shown as a singleton off RouterContext - allowing an app in the same JVM to find the routers running in it (and choose which one it wants)
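A minimal sketch of what that enables for an app co-located in the router's JVM; the static accessor name listContexts() and the raw-List return type are assumptions here, not a documented API.

import java.util.List;

// Hypothetical usage sketch: an app running in the same JVM enumerates the
// routers via the singleton list hung off RouterContext and picks one.
// The accessor name listContexts() is an assumption for illustration.
public class PickRouterSketch {
    public static void main(String[] args) {
        List contexts = RouterContext.listContexts();
        if (contexts == null || contexts.isEmpty()) {
            System.out.println("no routers running in this JVM");
            return;
        }
        System.out.println(contexts.size() + " router(s) available");
        // choose the first one; a real app could inspect each context before deciding
        RouterContext ctx = (RouterContext) contexts.get(0);
        // ... hand ctx to whatever client code needs a router ...
    }
}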
one for peer selection and organization. reliability is kept around
for the moment and shown on the router console, but only to provide a
comparison (it is not used in any way)
* new stat in the TunnelHistory: failRate
* coalesce TunnelHistory stats (duh!)
* new ProfileOrganizer CLI ("ProfileOrganizer [filename]*")
* implement reasonable 'failure' logic - if they are actively rejecting
tunnels, or tunnels they've agreed to join are failing, mark them as failing
(a rough sketch of this check follows this list)
* when choosing peers to test, exclude all fast ones
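The sketch referenced in the 'failure' bullet above; the thresholds and inputs are made up for illustration and are not the values or accessors the profile code actually uses.

// Sketch of the failure heuristic: a peer is marked failing if it actively
// rejects our tunnel requests, or if tunnels it agreed to join keep failing.
// Thresholds here are invented for illustration only.
public class FailingCheckSketch {
    /**
     * @param recentRejections     how many of our recent tunnel requests the peer rejected
     * @param agreedTunnelFailRate fraction of tunnels they agreed to join that later failed
     */
    static boolean isFailing(int recentRejections, double agreedTunnelFailRate) {
        if (recentRejections >= 3)        // actively rejecting tunnels
            return true;
        if (agreedTunnelFailRate > 0.5)   // tunnels they joined keep failing
            return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isFailing(0, 0.75)); // true: their agreed tunnels keep failing
        System.out.println(isFailing(1, 0.1));  // false: occasional rejection is tolerated
    }
}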
Some JVMs think that it's ok for an inner class of a subclass to access protected data of the
outer class's parent when the parent is in another package.
Others do not.
Kaffe doesn't care (but that's because Kaffe doesn't do much for verification ;)
The JLS is apparently confusing, but it doesn't matter whether it's a JVM or javac bug - we've got to change the code.
The simplest change would be to just make JobImpl._context public, but I loathe public data, so we make it private and add an accessor
(and change dozens of files)
whee
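Roughly what the accessor change looks like - a sketch rather than the actual diff: the getContext() name is an assumption, and the example subclass is made up.

// Sketch of the fix: _context stays private, and subclasses (including their
// anonymous inner classes, even when the subclass lives in another package)
// reach it through an ordinary public method call instead of a protected
// field access that some verifiers reject.
public abstract class JobImpl {
    private RouterContext _context;

    protected JobImpl(RouterContext context) {
        _context = context;
    }

    // accessor keeps the data private while still reachable from subclasses
    public RouterContext getContext() { return _context; }
}

// hypothetical subclass living in another package
class ExampleJob extends JobImpl {
    ExampleJob(RouterContext context) { super(context); }

    public void runJob() {
        Runnable inner = new Runnable() {
            public void run() {
                // was: reading the protected _context field directly - an inner
                // class of a subclass touching the parent's protected field across
                // packages, which Sun's verifier rejects; now it's just a method call
                RouterContext ctx = getContext();
                // ... use ctx ...
            }
        };
        inner.run();
    }
}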
if more tokens become available while the first pending request is still blocked on
read/write (aka after allocation and before the next .waitForAllocation()), give the tokens
to the next request
* refactor the satisfy{In,Out}boundRequests methods into smaller logical units
this will give a much smoother traffic pattern: instead of waiting 6 seconds to write a 32KB message under a 6KB/s rate, it'll write 6KB in each of the first 5 seconds and 2KB in the next
this also allows people to have small buckets (but again, bucket sizes smaller than the rate just don't make sense)
(this is what caused the runtime errors on Sun JVMs but not on Kaffe)
((aka I slacked and didn't test sufficiently. off with my head))
this now builds and runs fine in Sun 1.3-1.5 JVMs, as well as Kaffe
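A toy model of the smoothing described above (not the real limiter code; names and structure are made up): a 32KB write under a 6KB/s rate is satisfied in per-second partial allocations instead of one blocking 6-second wait.

// Toy model of the partial-allocation behaviour: 6KB written in each of the
// first five seconds, the final 2KB in the sixth.  Made up for illustration.
public class PartialAllocationSketch {
    public static void main(String[] args) {
        int rateBytesPerSecond = 6 * 1024;
        int pendingBytes = 32 * 1024;
        int bucket = 0;
        int second = 0;
        while (pendingBytes > 0) {
            second++;
            bucket += rateBytesPerSecond;             // tokens added each second
            int allocated = Math.min(bucket, pendingBytes);
            bucket -= allocated;                      // hand tokens to the request...
            pendingBytes -= allocated;                // ...which writes immediately
            System.out.println("second " + second + ": wrote " + (allocated / 1024)
                               + "KB, " + (pendingBytes / 1024) + "KB left");
        }
    }
}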