a6b5211fa7
congestion is only a warning, not an error
2004-06-29 22:30:14 +00:00
4e89b9c363
reliability threshold = median of the active, nonfailing peers (the inactive nonfailing set can include a large number of 0-reliability peers)
2004-06-29 20:32:36 +00:00
f3e267d2d0
active peer testing - every minute, grab two reliable peers, throw a db store at them, and measure their response time
...
the db store sent is their own, and we use tunnels both ways, so they won't know who we are. we also mark the
success/failure of the tunnels accordingly
2004-06-29 19:45:26 +00:00
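The periodic test described in the entry above could be sketched as follows; this is a hypothetical illustration, not the real I2P code, and the names (PeerTestJob, selectPeersToTest, timeStoreRoundTrip) are invented for the example:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Hypothetical sketch of the active peer test above: every minute, pick
 * two reliable peers, send each its own netDb store through a tunnel,
 * and measure the round-trip time. All names here are illustrative.
 */
public class PeerTestJob {
    static final int PEERS_PER_TEST = 2;

    /** Pick up to two distinct peers at random from the reliable set. */
    static List<String> selectPeersToTest(List<String> reliablePeers) {
        List<String> shuffled = new ArrayList<>(reliablePeers);
        Collections.shuffle(shuffled);
        return shuffled.subList(0, Math.min(PEERS_PER_TEST, shuffled.size()));
    }

    /** Time a (stubbed) db store round trip to a peer, in milliseconds. */
    static long timeStoreRoundTrip(String peer, Runnable sendStoreThroughTunnel) {
        long start = System.currentTimeMillis();
        sendStoreThroughTunnel.run(); // send the peer its own info via a tunnel
        return System.currentTimeMillis() - start;
    }
}
```

The measured time would then feed the tunnels' success/failure marking, as the entry notes.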
2fd87dc1f1
when we select peers to test, let's use all of the reliable peers, not just the well integrated ones
2004-06-29 19:41:30 +00:00
40b6b77cfa
use the median reliability value of nonfailing peers for the reliability threshold, and simplify determining the speed and integration thresholds
2004-06-29 19:40:08 +00:00
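The median-based threshold from the entry above could look like this minimal sketch (the class and method names are illustrative, not the actual profile calculator):

```java
import java.util.Arrays;

/**
 * Minimal sketch of the thresholding above: the reliability threshold is
 * the median reliability of the nonfailing peers, so a flood of
 * 0-reliability entries can't single-handedly drag the cutoff to zero.
 * Names are illustrative.
 */
public class ReliabilityThreshold {
    /** Median of the given values (average of the middle two when even). */
    static double medianReliability(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        if (sorted.length % 2 == 1)
            return sorted[mid];
        return (sorted[mid - 1] + sorted[mid]) / 2.0;
    }
}
```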
d0c61dbf4d
use the explicit max ID values (I2NPMessage.MAX_ID_VALUE and TunnelId.MAX_ID_VALUE)
...
logging
2004-06-29 19:32:46 +00:00
1cd5a3fcf7
include the "addedBy" if we're debugging the job, not if we're debugging JobImpl
2004-06-29 19:29:57 +00:00
af81cf2c50
explicitly define the max I2NP message ID value and validate against it
2004-06-29 19:28:40 +00:00
cbc6aea8b4
logging
2004-06-27 21:20:31 +00:00
5c1e001a73
logging
2004-06-27 19:39:45 +00:00
77a8a46d8e
let's try to reduce creating new objects during finalization
2004-06-26 21:15:51 +00:00
95c7cd55c2
logging
2004-06-26 21:15:16 +00:00
5f0ef5e0e8
let's not crap on the secondary tunnel too (even though doing so isn't wrong)
...
this helps avoid catastrophic failures, at least a little, since a single failure doesn't kill two sets of tunnels
2004-06-26 21:13:52 +00:00
1f26c603e0
for now, let's disable the tunnel pool persistence. this means that after a router crashes, tunnels it was participating in will fail even if the router comes back up before they expire.
...
disabling this saves us some IO contention (though this may only be relevant on my kaffe box... dunno)
2004-06-26 21:11:22 +00:00
7e2227ad42
let's keep track of how many messages die on our queue due to us being slow
2004-06-26 21:07:07 +00:00
9b4899da07
always use the cached host/port rather than grabbing the socket's InetAddress (in case it disconnects and throws NPEs)
...
use the NativeBigInteger as part of the session key negotiation (oops, forgot this one last time)
logging
2004-06-26 21:05:02 +00:00
a8ad8644c8
0.3.1.5 (backwards compatible)
...
lots of bugfixes. still no rate limiting, but, uh, lots of bugfixes
(release will be packaged and deployed later today)
2004-06-25 19:25:33 +00:00
4e91bb88a5
work around an aggressively up-to-spec kaffe implementation (the spec says Socket.getInetAddress() is null if not connected,
...
but sun lets getInetAddress() return a value if it had connected then disconnected, while kaffe buggers off and NPEs)
2004-06-25 19:21:11 +00:00
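The defensive pattern behind the two entries above (cache the host/port at connect time, never re-query the socket) could be sketched like this; the ConnectionInfo name is invented for the example:

```java
import java.net.Socket;

/**
 * Sketch of the caching described above: grab the remote host/port once
 * at connect time and use the cached values afterwards, since
 * Socket.getInetAddress() may return null once the socket has
 * disconnected (and kaffe NPEs where sun returns the old value).
 * The ConnectionInfo name is illustrative.
 */
public class ConnectionInfo {
    private final String host; // cached at connect time
    private final int port;

    ConnectionInfo(Socket socket) {
        // Cache immediately; never ask the socket again after this.
        this.host = (socket.getInetAddress() != null)
                ? socket.getInetAddress().getHostAddress() : "unknown";
        this.port = socket.getPort();
    }

    ConnectionInfo(String host, int port) { this.host = host; this.port = port; }

    /** Safe even after the underlying socket disconnects. */
    String getHost() { return host; }
    int getPort() { return port; }
}
```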
784dc0f6a7
boot up quicker
2004-06-25 18:42:27 +00:00
e80e627fba
more tests with the real TCP transport, not just the VM comm system (and for larger sims, dont keepHistory)
2004-06-25 18:41:50 +00:00
5ced441b17
don't fail the peer based on tunnel activity (it may not be their fault)
...
we *do* still penalize the peer for tunnel failures, but that's in the reliability calculator, not this one.
2004-06-25 18:15:32 +00:00
57801202fd
flush the protocol flag explicitly
...
make the tcp connection handler nonblocking by adding another (very short lived) thread - this prevents a very slow (or unconnectable) peer connecting to us from forcing other cons to time out
completely ripped out the fscking bandwidth limiter until i get it more reliable
gave threads more explicit names (for the sim)
logging
2004-06-25 18:14:12 +00:00
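The short-lived-thread trick from the entry above could be sketched as follows; this is an illustration under assumed names (ConnectionHandler, spawnHandshaker), not the actual transport code:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

/**
 * Sketch of the handler change above: the accept loop hands each inbound
 * socket to a short-lived, named thread, so one slow or unconnectable
 * peer can't stall everyone else's handshakes. Names are illustrative.
 */
public class ConnectionHandler {
    /** Spawn a short-lived, explicitly named thread for one peer's handshake. */
    static Thread spawnHandshaker(Runnable handshake, int port) {
        Thread t = new Thread(handshake, "Handshake " + port);
        t.setDaemon(true);
        t.start();
        return t;
    }

    /** Accept loop: never blocks on any single peer's handshake. */
    static void acceptLoop(ServerSocket server) throws IOException {
        while (!server.isClosed()) {
            Socket socket = server.accept();
            spawnHandshaker(() -> handshake(socket), socket.getPort());
        }
    }

    static void handshake(Socket socket) {
        // negotiate the protocol, then hand the con to its reader thread (omitted)
    }
}
```

The explicit thread names also line up with the "gave threads more explicit names (for the sim)" note in the same entry.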
a019399c3c
reduce synchronization on static (instead use per context objects, for large sims)
2004-06-25 17:21:41 +00:00
e6f610a86c
don't synchronize on statics; instead use a separate format object per context (so large sims don't get bogged down on synchronization)
2004-06-25 17:20:08 +00:00
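The per-context format object from the two entries above could be sketched like this (FormatContext and the pattern string are invented for the example; SimpleDateFormat itself is not thread-safe, which is why each context gets its own):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

/**
 * Sketch of the change above: instead of every caller synchronizing on
 * one static SimpleDateFormat (serializing all contexts in a large sim),
 * each context owns its own instance and only synchronizes locally.
 * The FormatContext name and pattern are illustrative.
 */
public class FormatContext {
    // One (non-thread-safe) formatter per context, not one static for all.
    private final SimpleDateFormat fmt;

    FormatContext() {
        fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
    }

    /** Only this context's threads contend here, not the whole sim. */
    synchronized String format(Date when) {
        return fmt.format(when);
    }
}
```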
7ef528bbde
add some minimal security to the admin console, requiring a passphrase to be entered when updating the clock offset
...
this works by a simple substring match of the URL - if router.config contains adminTimePassphrase=blah, the time update will only succeed if the URL contains "blah"
if router.config does NOT contain an adminTimePassphrase, the time update WILL BE REFUSED.
i.e. to use the timestamper, you MUST set adminTimePassphrase AND update the clientApp.0.args= line to include the passphrase in the URL!
e.g.
clientApp.0.args=http://localhost:7655/setTime?blah pool.ntp.org pool.ntp.org pool.ntp.org
2004-06-25 17:18:21 +00:00
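The check described in the entry above amounts to something like this sketch (AdminTimeAuth and allowTimeUpdate are illustrative names, not the real console code):

```java
/**
 * Sketch of the substring check above: a time update is allowed only
 * when a passphrase is configured AND the request URL contains it; with
 * no passphrase configured, updates are refused outright.
 * Names are illustrative.
 */
public class AdminTimeAuth {
    /**
     * @param passphrase value of adminTimePassphrase from router.config,
     *                   or null if the key is absent
     * @param url        the requested admin URL
     */
    static boolean allowTimeUpdate(String passphrase, String url) {
        if (passphrase == null || passphrase.length() == 0)
            return false; // no passphrase configured: always refuse
        return url.indexOf(passphrase) >= 0; // simple substring match
    }
}
```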
a351a29bf3
if it expired waiting on the queue for processing, kill 'er
2004-06-25 17:12:01 +00:00
983d258bce
logging
2004-06-25 17:09:55 +00:00
f6d38dd5e0
reduce SimpleDateFormat usage (implicit in Date.toString())
2004-06-25 17:03:13 +00:00
d51245aada
logging
2004-06-25 17:02:22 +00:00
94feb762ca
keep detailed info for the sim
2004-06-23 19:55:52 +00:00
40b59d5a5a
more valid display of bw usage (but not as fresh)
2004-06-23 19:54:12 +00:00
9ffd147470
handle writing the stats before the period has been reached
2004-06-23 19:53:20 +00:00
3fea4ad2ba
we don't need to apply this fudge in this fashion (it's done on the receiving end)
2004-06-23 19:51:58 +00:00
1ab5536879
la la la
...
(yeah, this is what broke cvs HEAD, causing transmission failures, disconnects, encryption errors, etc. oops)
2004-06-23 19:50:41 +00:00
9690a89a6d
slices are only too slow if there's something pending
...
logging mods
I really need to rewrite the tcp transport - the code is all functional, but the design sucks.
with the FIFO bandwidth limiter we could get away with a single 'send' thread rather than each TCPConnection having its own writer thread (but we'd still need the per-con reader thread, at least until nio is solid enough)
but maybe the rewrite can hold off until the AMOC implementation. we'll see
2004-06-23 19:48:25 +00:00
e8734ef1e7
more logging for shutdown info
2004-06-22 04:42:27 +00:00
14b9f9509f
* allow the client subsystem to tell the clientMessagePool that a message is definitely remote (since the client subsystem should know). this reduces the churn of the message pool asking all over again
...
* add a new ClientWriterRunner thread (1 per I2CP connection) so that a client application that hangs or otherwise doesn't read from its i2cp socket quickly doesn't hang the whole router (since we've previously used the jobQueue for pushing I2CP messages). This may or may not clear the intermittent eepsite bug, but I'm not counting on it to (yet).
* update various points to deal with the client writer's operation (aka doSend won't throw IOException)
* logging
* lots and lots of metrics (yeah i know some of them vary based on the compiler)
2004-06-22 04:41:31 +00:00
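The ClientWriterRunner idea from the entry above (one writer thread per I2CP connection, fed by a queue, so doSend never blocks and never throws IOException to the caller) could be sketched like this; it is a simplified illustration, not the real class:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch of the ClientWriterRunner pattern above: one writer thread per
 * I2CP connection drains a queue and does the blocking socket writes, so
 * a client that stops reading only stalls its own writer, not the
 * router's jobQueue. doSend() just enqueues, hence no IOException.
 * Simplified; the real class differs.
 */
public class ClientWriterRunner implements Runnable {
    private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<>();
    private volatile boolean alive = true;

    /** Called by the router: never blocks on the client socket. */
    void doSend(byte[] message) {
        pending.offer(message);
    }

    int pendingCount() { return pending.size(); }

    public void run() {
        while (alive) {
            try {
                byte[] msg = pending.take();
                writeToSocket(msg); // the only place that may block
            } catch (InterruptedException ie) {
                alive = false;
            }
        }
    }

    void writeToSocket(byte[] msg) { /* blocking OutputStream write, omitted */ }
}
```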
b1f973d304
during initial router startup, we may try to publish "my.info" before the netDb/ dir is created, so let's make sure it exists first
2004-06-22 04:31:25 +00:00
2f17bfd71c
minor refactoring. I hate how large that method is, but beyond the essential stuff, it's pretty much just logging and benchmarking.
...
plus, yeah, this method still takes too long in some situations. working on identifying why...
2004-06-22 04:29:28 +00:00
b6670ee23a
let's see how fast this can theoretically go (leaving simulated delays to other components)
2004-06-22 04:26:56 +00:00
f1036df1f6
new debugging data point
2004-06-22 04:25:24 +00:00
5166eab5ee
replaced double-checked locking ( http://www.javaworld.com/javaworld/jw-02-2001/jw-0209-double.html ) with the actual logic
...
- prepare the cache prior to use if you want to have the hash cache.
also fix the ejection policy to not clear the cache, but merely to remove sufficient values.
though maybe clearing the cache is the right thing to do so as to avoid ejection churn... hmm.
both of these fixes brought to you by the keen eyes of the one called mihi
2004-06-20 04:27:58 +00:00
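The two fixes in the entry above (prepare the cache eagerly rather than lazily via broken double-checked locking, and eject just enough entries instead of clearing) could be sketched like this; HashCache, its method names, and the entry limit are invented for the example:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * Sketch of the fix above: rather than lazily initializing the hash
 * cache with (pre-JDK5, broken) double-checked locking, the cache is
 * constructed before use, and ejection removes entries instead of
 * clearing everything. Names and the entry limit are illustrative.
 */
public class HashCache {
    private static final int MAX_ENTRIES = 4;
    // Eagerly constructed: no lazy init, no double-checked locking needed.
    private final Map<String, byte[]> cache = new HashMap<>();

    synchronized void put(String key, byte[] hash) {
        if (cache.size() >= MAX_ENTRIES)
            eject();
        cache.put(key, hash);
    }

    synchronized byte[] get(String key) { return cache.get(key); }

    /** Remove just enough entries, rather than clearing the whole cache. */
    private void eject() {
        Iterator<String> it = cache.keySet().iterator();
        if (it.hasNext()) { it.next(); it.remove(); }
    }

    synchronized int size() { return cache.size(); }
}
```

Whether clearing the whole cache would actually be better (to avoid ejection churn) is left open in the entry itself.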
26138e213f
new method - processingComplete(), which functions much like OutNetMessage's discardData()
...
so drop the data when called, updating the MessageStateMonitor (and also telling the monitor on finalization, just cuz)
2004-06-20 01:40:12 +00:00
d82796e3ad
note that we've successfully processed a message (and as such drop its payload) ASAP, and only use safely cached snippets of it afterwards
2004-06-20 01:37:01 +00:00
cdcb81c867
don't be so aggressive about waking up more jobs, since this just causes excess locking when we don't need it
2004-06-20 01:34:16 +00:00
5669e8f060
deal with discarded payloads and use the cached version
2004-06-20 01:31:23 +00:00
d84a40b4dc
add some randomization to the startup time, so we're not too synchronous
...
also don't shut down so quickly, as the routers may dump some useful stats when they die a horrible death
2004-06-20 01:29:00 +00:00
591be43763
default to building more tunnels, because tunnels r k00l
...
(and fix the arg parsing)
2004-06-20 01:26:59 +00:00
97d0686354
new method: discardData(), to be called as soon as we don't need the payload of a message anymore (but may still need the associated jobs/etc)
...
check in with the MessageStateMonitor, and cache some key attributes from the message (type, unique id, size, etc)
2004-06-20 01:21:24 +00:00
e2da05b197
more accurate (but less lively) bandwidth rate calculation (since we don't necessarily calculate exactly on the edge of a measurement period, we use the data from the last full period)
...
logging on OOM
2004-06-20 01:18:31 +00:00
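The last-full-period calculation from the entry above could be sketched as follows; BandwidthRate and its methods are illustrative names, not the real limiter:

```java
/**
 * Sketch of the rate calculation above: since we rarely sample exactly
 * on a period boundary, report the rate from the last *complete* period
 * instead of the partial, noisy current one - more accurate but less
 * lively. Names are illustrative.
 */
public class BandwidthRate {
    private final long periodMillis;
    private long lastPeriodBytes;    // completed period: what we report
    private long currentPeriodBytes; // still accumulating: not reported
    private long periodStart;

    BandwidthRate(long periodMillis, long now) {
        this.periodMillis = periodMillis;
        this.periodStart = now;
    }

    void received(long bytes, long now) {
        while (now - periodStart >= periodMillis) {
            lastPeriodBytes = currentPeriodBytes; // roll the period over
            currentPeriodBytes = 0;
            periodStart += periodMillis;
        }
        currentPeriodBytes += bytes;
    }

    /** Bytes per second over the last full period. */
    double rateBytesPerSecond() {
        return lastPeriodBytes * 1000.0 / periodMillis;
    }
}
```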