Commit Graph

87 Commits

Author SHA1 Message Date
bbf73f0937 enforce some sanity checks on the payload size. see the recent rant in the DatabaseSearchReplyMessage commit for why
this is necessary.
2004-06-19 23:58:24 +00:00
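
A minimal sketch of that kind of size check, assuming a hypothetical length-prefixed payload (the MAX_PAYLOAD_SIZE constant and the framing are illustrative, not the actual I2NP wire format):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class PayloadReader {
        // illustrative cap; the real limit depends on the message type
        private static final int MAX_PAYLOAD_SIZE = 64 * 1024;

        /** Read a length-prefixed payload, refusing absurd sizes before allocating. */
        public static byte[] readPayload(InputStream in) throws IOException {
            DataInputStream din = new DataInputStream(in);
            int len = din.readInt();
            if (len < 0 || len > MAX_PAYLOAD_SIZE)
                throw new IOException("Insane payload size: " + len);
            byte[] payload = new byte[len];
            din.readFully(payload);
            return payload;
        }
    }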
592519c45c somehow some people are getting situations where the payload doesn't decompress. wtf?
do i need to wrap the Input/Output streams we use to pipe data over the net with a verification wrapper for the messages?
e.g. prefix the serialization of all I2NPMessages sent on the wire with the SHA256 of that serialization and verify on read?
Ho hum, dunno.  maybe it's something else, but the ElG/AES+SessionTag already has integrity verification so the only thing I can
think of is a checksum error that got past TCP's checking and corrupted the AES stream.
2004-06-19 23:56:41 +00:00
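
A rough sketch of the verification wrapper idea floated above - prefix each serialized message with its SHA-256 digest and re-check it on read. Class and method names are made up; this is not the actual I2NPMessage framing:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class VerifiedFraming {
        /** Write: [32-byte SHA-256][4-byte length][payload]. */
        public static void writeVerified(DataOutputStream out, byte[] serialized)
                throws IOException, NoSuchAlgorithmException {
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(serialized);
            out.write(hash);
            out.writeInt(serialized.length);
            out.write(serialized);
            out.flush();
        }

        /** Read and verify; throws if the payload was corrupted in transit. */
        public static byte[] readVerified(DataInputStream in)
                throws IOException, NoSuchAlgorithmException {
            byte[] expected = new byte[32];
            in.readFully(expected);
            int len = in.readInt();
            byte[] payload = new byte[len];
            in.readFully(payload);
            byte[] actual = MessageDigest.getInstance("SHA-256").digest(payload);
            if (!MessageDigest.isEqual(expected, actual))
                throw new IOException("Checksum mismatch - corrupted message");
            return payload;
        }
    }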
deff14dfd8 lets see if this fixes bug 66 (http://dev.i2p.net/bugzilla/show_bug.cgi?id=66) 2004-06-13 20:27:44 +00:00
ba6a2e3fd2 unit test for the bandwidth limiting functionality (TrivialBandwidthLimiter, BandwidthLimiter, BandwidthLimited{In,Out}putStream) 2004-06-13 20:03:21 +00:00
a3136a19e9 big ol' rewrite, requiring new config settings and, er, it works pretty well.
see router.config.template mods and the new unit tests.
this implementation can cause starvation -
  e.g. lots of 1KB writes will go through before a 32KB write if the queue is low and the bwlimiter only replenishes, say, 16KBps
  another impl would enforce a FIFO through thread wait/notify, etc, but would have the related overhead.
i don't know whether starvation situations will be the norm or the exception.  i'm running this on a few routers, so we'll see.
2004-06-13 19:56:57 +00:00
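
To illustrate the starvation case mentioned above, a toy token-bucket limiter (names and the once-per-second refill are invented, not the real TrivialBandwidthLimiter): any waiter whose request fits may proceed, so a stream of 1KB writes can keep draining each 16KB refill before a 32KB write ever sees enough tokens.

    /** Toy token bucket: callers block until their full request can be satisfied. */
    public class ToyBandwidthLimiter {
        private long available;            // bytes currently allowed
        private final long maxBytes;       // bucket capacity
        private final long refillPerSecond;

        public ToyBandwidthLimiter(long maxBytes, long refillPerSecond) {
            this.maxBytes = maxBytes;
            this.refillPerSecond = refillPerSecond;
            // refill thread: add tokens once per second, capped at the bucket size
            Thread refiller = new Thread(() -> {
                while (true) {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                    synchronized (this) {
                        available = Math.min(maxBytes, available + refillPerSecond);
                        notifyAll();
                    }
                }
            });
            refiller.setDaemon(true);
            refiller.start();
        }

        /**
         * Block until 'bytes' can be allocated.  Because any waiter whose request
         * fits may proceed, many small writes can repeatedly drain the bucket
         * before one large write ever sees enough tokens (the starvation case).
         */
        public synchronized void requestAllocation(int bytes) throws InterruptedException {
            while (available < bytes)
                wait();
            available -= bytes;
        }
    }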
b631568003 deal with null routers 2004-06-13 19:48:23 +00:00
1d0c03eca4 use the router context's properties (which now include the config settings) 2004-06-13 19:47:44 +00:00
eb30525a26 deal with null routers (useful for testing) 2004-06-13 19:47:02 +00:00
3e66ea3f56 include the router's config in the property settings 2004-06-13 19:45:53 +00:00
9f1189e606 use a bandwidth limited stream instead of asking for the allocation of the entire buffer at once (since, uh, it's not likely that the bandwidth limiter will ever have hundreds of KBytes available for use) 2004-06-13 19:39:42 +00:00
698927bed4 logging 2004-06-13 19:37:18 +00:00
8fd02ee8dd allow the stream to optionally pull from the output stream's bandwidth limit queue (useful in very strange situations) 2004-06-13 19:36:41 +00:00
f3154e8f5e buffer between the bandwidth limiter and the raw stream 2004-06-13 19:34:46 +00:00
878af163a9 handle null boolean value (legal, but not in this context), fixes bug reported by nickster 2004-06-13 19:32:58 +00:00
ca6884dbca imports (sorry, includes alphabetizing, wee)
(shendaras)
2004-05-24 03:21:21 +00:00
1ebb0ac5fb 0.3.1.4 (backwards compatible)
i'll package & push later this evening
2004-05-23 16:54:29 +00:00
bf0e53f13b i'll swallow your soul!
er... make it queue up to 20 messages (in case of bursts), and do some more verbose logging
2004-05-23 16:52:56 +00:00
04be41aac5 if the send queue to the peer is too large, fail the message but also mark it as a comm error (since either their net con is insanely saturated or they're disconnected)
logging
2004-05-22 12:05:34 +00:00
fd1313d49f bugger the speed estimate - always use the measured values, and if there aren't enough measured values, use 0. 2004-05-22 12:03:38 +00:00
3599dba5c3 javadoc 2004-05-22 12:02:51 +00:00
67edc437d2 properly !LART on comm error, and initialize the log correctly 2004-05-22 12:02:18 +00:00
7a39d9240c logging 2004-05-22 12:00:08 +00:00
ddb6348bfd type safety? we dont need no stinkin' type safety!
(aka expiration vs timeoutMs, as shown by tunnel test messages that don't expire for another 30 years)
this'll leak memory for failed tunnels, so we'll push this in a new rev in the next day or two, or maybe later today if it gets really bad.
2004-05-20 13:43:09 +00:00
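
For the curious, the "30 years" comes from mixing an absolute expiration (epoch milliseconds) into a field expecting a relative timeoutMs, or vice versa; a hedged illustration with made-up names and a cheap sanity guard:

    public class ExpirationMixup {
        // roughly 10 years in ms; anything bigger was probably an absolute timestamp
        static final long SANITY_MAX_TIMEOUT = 10L * 365 * 24 * 60 * 60 * 1000;

        /** A cheap guard: relative timeouts should never look like epoch timestamps. */
        static long checkedTimeout(long timeoutMs) {
            if (timeoutMs < 0 || timeoutMs > SANITY_MAX_TIMEOUT)
                throw new IllegalArgumentException("timeoutMs looks like an absolute time: " + timeoutMs);
            return timeoutMs;
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            long timeoutMs = 60 * 1000;            // intended: "expire 60 seconds from now"
            long expiration = now + checkedTimeout(timeoutMs);

            // the bug: an absolute expiration handed to code expecting a relative timeout
            long bogus = now + expiration;          // now + (epoch ms) = decades in the future
            System.out.println("correct: expires in " + (expiration - now) + " ms");
            System.out.println("bogus:   expires in " + (bogus - now) + " ms (~30+ years)");
        }
    }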
d70c5df5a0 0.3.1.3 (not backwards compatible, yadda yadda yadda) 2004-05-20 11:32:32 +00:00
f2fa2038b1 * made dbStore use a pessimistic algorithm - requiring confirmation of a store, rather than optimistically considering all store messages successful (NOT BACKWARDS COMPATIBLE)
* when allocating tunnels for a client, make sure it has a good amount of time left in it (using default values, this means at least 7.5 minutes)
* allow overriding the profile organizer's thresholds so as to enforce a minimum number of fast and reliable peers, allowing a base level of tunnel diversification.  this is done through the "profileOrganizer.minFastPeers" router.config / context property (default minimum = 4 fast and reliable peers)
* don't be so harsh with the isFailing calculator regarding db lookup responses, since we've decreased the timeout.  however, include "participated in a failed tunnel" as part of the criteria
* more logging than god
* for dropped messages, if it is a DeliveryStatusMessage it's not an error, it's just lag / congestion (keep the average delay as the new stat "inNetPool.droppedDeliveryStatusDelay")
2004-05-20 11:06:25 +00:00
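
A hedged sketch of resolving an override like "profileOrganizer.minFastPeers" - check the router.config-style properties, then a -D system property, then fall back to the default of 4. Plain java.util.Properties here, not the actual RouterContext API:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class MinFastPeersConfig {
        static final String PROP = "profileOrganizer.minFastPeers";
        static final int DEFAULT_MIN_FAST_PEERS = 4;

        /** Read the override from router.config-style properties, else use the default. */
        public static int getMinFastPeers(Properties routerConfig) {
            String val = routerConfig.getProperty(PROP,
                    System.getProperty(PROP, Integer.toString(DEFAULT_MIN_FAST_PEERS)));
            try {
                return Integer.parseInt(val.trim());
            } catch (NumberFormatException nfe) {
                return DEFAULT_MIN_FAST_PEERS;
            }
        }

        public static void main(String[] args) throws IOException {
            Properties cfg = new Properties();
            if (args.length > 0)
                try (FileInputStream in = new FileInputStream(args[0])) { cfg.load(in); }
            System.out.println("minFastPeers = " + getMinFastPeers(cfg));
        }
    }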
bfd59e64ea refactored the cleanup job
logging
2004-05-20 10:53:31 +00:00
60e05e270a cleaned up slice processing
reduced max queued messages per connection to 10 (additional ones are immediately marked as failed)
update the I2P_FLAG byte to '*' making this NOT BACKWARDS COMPATIBLE
formatting
2004-05-20 10:51:22 +00:00
0e5d164a8a support shutting down the router from the web console:
specify a "router.shutdownPassword" value in the router.config (or in the environment [ala -D]),
  then specify that password on the shutdown form in the web console and hit submit.  after 30 seconds, it'll kill the router (and unless you're running the sim, it'll kill the JVM too, including clientApp.* started tunnels / etc)
if we had some sort of ACL for accessing the web console, we could avoid the password field altogether, but we don't, so we can't.
2004-05-19 22:00:32 +00:00
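
A minimal sketch of the flow described above, assuming the password arrives from a web form: compare it against "router.shutdownPassword" (config or -D) and, on a match, schedule the kill 30 seconds out. The servlet/timer plumbing is illustrative, not the actual console code:

    import java.util.Properties;
    import java.util.Timer;
    import java.util.TimerTask;

    public class ShutdownHandler {
        static final String PROP = "router.shutdownPassword";

        /** Returns true and schedules the shutdown only if the submitted password matches. */
        public static boolean handleShutdownForm(Properties routerConfig, String submitted) {
            String expected = routerConfig.getProperty(PROP, System.getProperty(PROP));
            if (expected == null || submitted == null || !expected.equals(submitted))
                return false; // no password configured, or mismatch: refuse
            new Timer(true).schedule(new TimerTask() {
                @Override public void run() {
                    System.out.println("shutting down the router (and the JVM)...");
                    System.exit(0); // illustrative; the real router tears down cleanly first
                }
            }, 30 * 1000);
            return true;
        }
    }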
7243963106 removed the insane explicit GC 2004-05-18 18:39:43 +00:00
292363eb65 imports (sorry, includes alphabetizing, wee)
(shendaras)
2004-05-17 03:38:53 +00:00
07e79ce61a * do a db store after a successful db search (healing the netDb)
* timeout each peer in a db search after 10 seconds, not 30
* logging
2004-05-17 00:59:29 +00:00
ff0023a889 big ol' memory, cpu usage, and shutdown handling update. main changes include:
* rather than have all jobs created hooked into the clock for offset updates, have the jobQueue stay hooked up and update any active jobs accordingly (killing a memory leak of JobTiming objects - one per job)
* don't go totally insane during shutdown and log like mad (though the clientApp things still log like mad, since they don't know the router is going down)
* adjust memory buffer sizes based on real world values so we don't have to expand/contract a lot
* don't display things that are completely useless (who cares what the first 32 bytes of a public key are?)
* reduce temporary object creation
* use more efficient collections at times
* on shutdown, log some state information (ready/timed jobs, pending messages, etc)
* explicit GC every 10 jobs.  yeah, not efficient, but just for now we'll keep 'er in there
* only reread the router config file if it changes (duh)
2004-05-16 04:54:50 +00:00
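
For the "only reread the router config file if it changes" item, a last-modified check is usually enough; a sketch assuming the config is a plain java.util.Properties file:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class ConfigReloader {
        private final File configFile;
        private long lastModified = -1;
        private Properties cached = new Properties();

        public ConfigReloader(String path) { this.configFile = new File(path); }

        /** Reload from disk only when the file's modification time has changed. */
        public synchronized Properties getConfig() throws IOException {
            long mod = configFile.lastModified();
            if (mod != lastModified) {
                Properties fresh = new Properties();
                try (FileInputStream in = new FileInputStream(configFile)) {
                    fresh.load(in);
                }
                cached = fresh;
                lastModified = mod;
            }
            return cached;
        }
    }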
61c97ab940 0.3.1.2 (backwards compatible, etc) 2004-05-13 23:49:08 +00:00
4c7af01edc allow dynamic update to the reliability threshold factor (e.g. rather than the top 2/3rds being considered "reliable", allow that to be the top 1/3, or 1/2, etc)
the router.config var "profileOrganizer.reliabilityThresholdFactor=0.75" (or environment property -DprofileOrganizer.reliabilityThresholdFactor=0.5) etc
2004-05-13 23:24:09 +00:00
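
A sketch of one way such a factor could be applied - keep the top fraction of peers by reliability score, with the fraction read from the property. The real ProfileOrganizer may interpret the factor differently; the semantics here are illustrative only:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class ReliabilityThreshold {
        static final String PROP = "profileOrganizer.reliabilityThresholdFactor";

        /** Keep the top 'factor' fraction of peers by score (illustrative semantics). */
        public static List<Double> reliablePeers(List<Double> scores) {
            double factor = Double.parseDouble(System.getProperty(PROP, "0.6667"));
            List<Double> sorted = new ArrayList<>(scores);
            sorted.sort(Collections.reverseOrder());
            int keep = (int) Math.ceil(sorted.size() * factor);
            return sorted.subList(0, Math.min(keep, sorted.size()));
        }

        public static void main(String[] args) {
            List<Double> scores = Arrays.asList(0.9, 0.4, 0.7, 0.2, 0.8, 0.5);
            System.out.println("reliable: " + reliablePeers(scores));
            // e.g. run with -DprofileOrganizer.reliabilityThresholdFactor=0.5 to keep half
        }
    }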
0d431213cd include the previous period in the measurements (since they're discrete, not rolling)
also include the other elements as necessary by default
2004-05-13 07:14:54 +00:00
ad9dd9a2e2 Lots of updates. I'm not calling this 0.3.1.2, still need to
"burn it it" some more, but its looking good.
* test all tunnels we manage every period or two. later we'll want to include some randomization to help fight traffic analysis (but that falls into the i2p 3.0 tunnel chaff / mixing / etc)
* test inbound tunnels correctly (use an outbound tunnel, not direct)
* only give the tunnels 30 seconds to succeed
* mark the tunnel as tested for both the inbound and outbound side and adjust the profiles for all participants accordingly
* keep track of the 'last test time' on a tunnel
* new tunnel test response time profile stat, as well as overall router stat (published in the netDb as "tunnel.testSuccessTime")
* rewrite of the speed calculator - the value it generates now is essentially "how many round trip messages can this router pass in a minute".
  it also allows a few runtime configurable options:
  = speedCalculator.eventThreshold:
    we use the smallest measurement period that has at least this many events in it (10m, 60m, 24h, lifetime)
  = speedCalculator.useInstantaneousRates:
    when we use the estimated round trip time, do we use instantaneous or period averages?
  = speedCalculator.useTunnelTestOnly:
    do we only use the tunnel test time (no db response or tunnel create time, or even estimated round trip times)?
* fix the reliability calculator to use the 10 minute tunnel create successes, not the (almost always 0) 1 minute rate.
* persist the tunnel create response time and send success time correctly (duh)
* add a new main() to PeerProfile - PeerProfile [filename]* will calculate the values of the peer profiles specified.  useful for tweaking the calculator, and/or the configurable options.  ala:
     java -DspeedCalculator.useInstantaneousRates PeerProfile peerProfiles/profile-*.dat
2004-05-13 04:32:26 +00:00
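
A sketch of the "smallest measurement period with at least speedCalculator.eventThreshold events" selection, over 10m / 60m / 24h / lifetime buckets (the bucket representation is invented for illustration):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PeriodSelector {
        static final String PROP = "speedCalculator.eventThreshold";

        /**
         * Pick the shortest period whose event count meets the threshold,
         * falling back to the lifetime bucket if none does.
         */
        public static String pickPeriod(Map<String, Integer> eventCounts) {
            int threshold = Integer.getInteger(PROP, 10);
            String last = null;
            for (Map.Entry<String, Integer> e : eventCounts.entrySet()) {
                last = e.getKey();
                if (e.getValue() >= threshold)
                    return e.getKey();
            }
            return last; // lifetime, or null if no buckets at all
        }

        public static void main(String[] args) {
            Map<String, Integer> counts = new LinkedHashMap<>(); // ordered shortest -> longest
            counts.put("10m", 3);
            counts.put("60m", 14);
            counts.put("24h", 250);
            counts.put("lifetime", 4000);
            System.out.println("using period: " + pickPeriod(counts)); // 60m with the default threshold
        }
    }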
57d7979d51 removed obsolete code
minor reorganization to help track down what's intermittently b0rking my kaffe instance
logging
2004-05-12 07:55:25 +00:00
af2f5cd2e1 kaffe shits a brick if you want the socket's address after .close() (grumble) 2004-05-11 01:55:22 +00:00
ea9b9fbf17 0.3.1.1 (tastes like chicken)
backwards compatible but some config changes necessary
2004-05-07 17:52:49 +00:00
303e257841 synchronization reduction and keep track of the 'last' job for each runner (to help debug something i see once a week on kaffe) 2004-05-07 17:51:28 +00:00
cd37c301d9 if (log.shouldLog())
yeah, aspect-oriented programming sure would be nice.
2004-05-07 04:07:14 +00:00
f5fa26639e minor html cleanup 2004-05-07 03:45:48 +00:00
13952ebd8b include the expiration and messageId in the toString (shown on msg drop due to "unknown tunnel") 2004-05-07 03:31:33 +00:00
2df0007a10 logging 2004-05-07 03:29:06 +00:00
89bc5db3e1 increase the bundle probability to yet another arbitrary value
add the jobId to log messages to simplify tracing individual parallel sends
logging cleanup
2004-05-07 03:28:22 +00:00
9a06a5758d check shouldBundle only when it's ready to be checked (duh) 2004-05-06 01:02:50 +00:00
e5a2a9644f *cough* [d'oh] 2004-05-05 23:01:36 +00:00
cdaeb4d176 track and publish two new stats:
* netDb.failedPeers (how many peers didn't reply to a lookup in time)
* netDb.searchCount (how many searches we send out in a 3 hour period)
probabilistically include the leaseSet of the sender in the garlic sent
to a peer if the client requests it to be included (aka if they want
replies).  By default, this is enabled (disable by setting the I2CP
prop "shouldBundleReplyInfo=false").  Also, by default the probability is
30% (w00 h00, arbitrary values!), which can be overridden via the I2CP
prop "bundleReplyInfoProbability=80" (or =10, or =100, etc).  The tradeoff
here is quicker replies in exchange for bandwidth (the dbStore).
Yeah, it'd be nice if there were something keeping track of which leaseSet
each client sent to each peer so that it could explicitly include it only
if it were necessary, but for now that's probably overkill.
2004-05-05 22:46:10 +00:00
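
A hedged sketch of that decision: honor "shouldBundleReplyInfo" (default true) and roll against "bundleReplyInfoProbability" (default 30, i.e. 30%). Property plumbing is plain java.util.Properties, not the actual I2CP options handling:

    import java.util.Properties;
    import java.util.Random;

    public class ReplyInfoBundler {
        static final String PROP_ENABLE = "shouldBundleReplyInfo";
        static final String PROP_PROBABILITY = "bundleReplyInfoProbability";
        private static final Random RANDOM = new Random();

        /** Decide whether to bundle the sender's leaseSet into this garlic message. */
        public static boolean shouldBundle(Properties i2cpOptions) {
            if ("false".equalsIgnoreCase(i2cpOptions.getProperty(PROP_ENABLE, "true")))
                return false;
            int probability;
            try {
                probability = Integer.parseInt(i2cpOptions.getProperty(PROP_PROBABILITY, "30"));
            } catch (NumberFormatException nfe) {
                probability = 30;
            }
            return RANDOM.nextInt(100) < probability;
        }
    }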
d9f0cc27ef formatting 2004-05-05 03:37:26 +00:00
cd82089d4d upgrade deprecated argument
fix ze german
(duck)
2004-05-04 17:17:10 +00:00