* Use a buffered PRNG, pulling the PRNG data off a larger precalculated
  buffer, rather than the underlying PRNG's (likely small) one, which in
  turn reduces how often that data must be recalculated (a sketch follows
  this list).
* More tuning to reduce temporary allocation churn
* Within the tunnel, use xor(IV, msg[0:16]) as the flag to detect duplicates,
  rather than the IV by itself, preventing an attack that would let
  colluding internal adversaries tag a message to determine that they are
  in the same tunnel (a second sketch follows this list). Thanks dvorak for
  the catch!
* Drop long inactive profiles on startup and shutdown
* /configstats.jsp: web interface to pick what stats to log
* Deliver more session tags to account for wider window sizes
* Cache some intermediate values in our HMACSHA256 and BC's HMAC
* Track the client send rate (stream.sendBps and client.sendBpsRaw)
* UrlLauncher: adjust the browser selection order
* I2PAppContext: hooks for dummy HMACSHA256 and a weak PRNG
* StreamSinkClient: add support for sending an unlimited amount of data
* Migrate the tests out of the default build jars
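  A minimal sketch of the buffered-PRNG idea above, assuming illustrative
  class and buffer-size names (not the actual I2P code): refill a large
  byte buffer in bulk from the underlying PRNG and serve callers from it,
  so the expensive recalculation happens far less often.

      import java.security.SecureRandom;

      /** Illustrative buffered PRNG: serve bytes from a large precalculated buffer. */
      public class BufferedPRNG {
          private final SecureRandom _underlying = new SecureRandom();
          private final byte[] _buf;
          private int _offset;

          public BufferedPRNG(int bufferSize) {
              _buf = new byte[bufferSize];   // e.g. 256KB, much larger than the PRNG's own pool
              refill();
          }

          private void refill() {
              _underlying.nextBytes(_buf);   // one expensive bulk recalculation
              _offset = 0;
          }

          /** Copy len random bytes into out, refilling the big buffer only when it runs dry. */
          public synchronized void nextBytes(byte[] out, int len) {
              int copied = 0;
              while (copied < len) {
                  if (_offset >= _buf.length)
                      refill();
                  int chunk = Math.min(len - copied, _buf.length - _offset);
                  System.arraycopy(_buf, _offset, out, copied, chunk);
                  _offset += chunk;
                  copied += chunk;
              }
          }
      }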
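  The duplicate-detection change above can be pictured as follows; the
  filter here is a plain Set standing in for whatever decaying filter the
  tunnel participant actually uses, so treat this as an illustration only.

      import java.nio.ByteBuffer;
      import java.util.HashSet;
      import java.util.Set;

      /** Illustrative duplicate check keyed on xor(IV, msg[0:16]) rather than the IV alone. */
      public class TunnelDupCheck {
          // stand-in for the real duplicate filter; a plain Set is used only for illustration
          private final Set<ByteBuffer> _seen = new HashSet<ByteBuffer>();

          /** @return true if this (IV, message) combination was already seen */
          public synchronized boolean isDuplicate(byte[] iv, byte[] message) {
              byte[] flag = new byte[16];
              for (int i = 0; i < 16; i++)
                  flag[i] = (byte) (iv[i] ^ message[i]);   // xor(IV, msg[0:16])
              return !_seen.add(ByteBuffer.wrap(flag));    // add() returns false if already present
          }
      }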
2005-06-22 Comwiz
* Migrate the core tests to junit
* Phase 1 of the unit test bounty completed. (The router build script was
  modified not to build the router tests because of a broken dependency on
  the core tests. This should be fixed in phase 3 of the unit test bounty.)
* Cleaned up the peers page a bit more.
more udp stuff:
* add new config option: i2np.udp.alwaysPreferred=true to adjust the bidding
so that UDP is picked first, even if a TCP connection exists
* fixed the initial clock skew problem (duh)
* reduced the MTU to 576 (the largest nearly-universally-safe value, and
  one that lets a tunnel message fit in 2 fragments)
* handle some races @ connection establishment (thanks duck!)
* if there are more ACKs than we can send in a packet, reschedule another
ACK immediately
* Added a small new page to the web console (/peers.jsp) which contains
  the peer connection information. This will be cleaned up a lot more
  before 0.6 is out, but it's a start.
* Reduced some SimpleTimer churn
* add hooks for per-peer choking in the outbound message queue - if/when a
  peer reaches their cwin, no further messages will enter the 'active' pool
  until there are more bytes available. other messages waiting (either later
  on in the same priority queue, or in the queues for other priorities) may
  take that slot (a rough sketch follows this list).
* when we have a message acked, release the acked size back to the
  congestion window (duh), rather than waiting for the current second to
  expire before refilling the capacity.
* send packets in a volley explicitly, waiting until we can allocate the full
cwin size for that message
* more stats, including per-peer KBps (updated every second)
* improved blocking/timeout situations on the send queue
* added drop simulation hook
* provide logical RTO limits
* Reduce the peer profile stat coalesce overhead by inlining it with the
  reorganization.
* Limit each transport to at most one address (any transport that requires
multiple entry points can include those alternatives in the address).
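  A rough sketch of the per-peer choking and ack-release items above,
  using hypothetical names rather than the actual transport classes: a
  message enters the 'active' pool only if its size still fits inside the
  peer's congestion window, and an ACK returns that allocation right away
  instead of waiting for the next second's refill.

      /** Hypothetical per-peer window accounting for the outbound message queue. */
      public class PeerWindow {
          private int _cwin;        // congestion window, in bytes
          private int _allocated;   // bytes currently allocated to the active pool

          public PeerWindow(int initialWindowBytes) {
              _cwin = initialWindowBytes;
          }

          /** Try to move a message into the active pool; choke (return false) if the window is full. */
          public synchronized boolean allocate(int messageSize) {
              if (_allocated + messageSize > _cwin)
                  return false;              // message stays queued until capacity frees up
              _allocated += messageSize;
              return true;
          }

          /** On ACK, release the acked size back to the window immediately. */
          public synchronized void messageAcked(int messageSize) {
              _allocated = Math.max(0, _allocated - messageSize);
          }
      }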
udp stuff:
* change the UDP transport's style from "udp" to "SSUv1"
* keep track of each peer's skew
* properly handle session reestablishment over an existing session, rather
than requiring both sides to expire first
* More fixes for the I2PTunnel "other" interface handling (thanks nelgin!)
* Add back the code to handle bids from multiple transports (though there
is still only one transport enabled by default)
* Adjust the router's queueing of outbound client messages when under
heavy load by running the preparatory job in the client's I2CP handler
thread, thereby blocking additional outbound messages when the router is
hosed.
* No need to validate or persist a netDb entry if we already have it
And for some udp stuff:
* only bid on what we know (duh)
* reduced the queue size in the UDPSender itself, so that ACKs go
through more quickly, leaving the payload messages to queue up in
the outbound fragment scheduler
* on congestion, cut the congestion window to 2/3 of its size rather than
  halving it (still AIMD, but less drastic; a sketch follows this list)
* adjust the fragment selector so a window-size throttle won't force extra
  volleys
* mark congestion when it occurs, not after the message has been
ACKed
* when doing a round robin over the active messages, move on to the
next after a full volley, not after each packet (causing less "fair"
performance but better latency)
* reduced the lock contention in the inboundMessageFragments by
moving the ack and complete queues to the ACKSender and
MessageReceiver respectively (each of which have their own
threads)
* prefer new and existing UDP sessions to new TCP sessions, but
prefer existing TCP sessions to new UDP sessions
* Added a pool of PRNGs using a different synchronization technique,
  hopefully sufficient to work around IBM's PRNG bugs until we get our
  own Fortuna (a sketch follows this list).
* In the streaming lib, don't jack up the RTT on NACK, and have the window
  size bound the not-yet-ready messages to the peer, not the unacked
  message count (not sure yet whether this is worthwhile).
* Many additions to the messageHistory log.
* Handle out of order tunnel fragment delivery (not an issue on the live
net with TCP, but critical with UDP).
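  The congestion response above ("cut to 2/3 rather than halving") works
  out to roughly the following; the names and the additive-increase step
  are illustrative only.

      /** Illustrative AIMD update: additive increase, multiplicative decrease to 2/3. */
      public class AimdWindow {
          private int _cwin;             // congestion window, in bytes
          private final int _minWindow;

          public AimdWindow(int initialBytes, int minBytes) {
              _cwin = initialBytes;
              _minWindow = minBytes;
          }

          public synchronized void onAck(int ackedBytes) {
              _cwin += ackedBytes;                           // additive increase (simplified)
          }

          public synchronized void onCongestion() {
              _cwin = Math.max(_minWindow, (_cwin * 2) / 3); // was _cwin / 2; 2/3 is less drastic
          }

          public synchronized int getWindow() { return _cwin; }
      }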
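  The PRNG pool mentioned above can be sketched as a fixed set of
  independently locked SecureRandom instances handed out round-robin, so
  no single instance's lock becomes the bottleneck; this is a stand-in for
  the workaround, not the eventual Fortuna implementation.

      import java.security.SecureRandom;
      import java.util.concurrent.atomic.AtomicInteger;

      /** Illustrative PRNG pool: spread callers across several independently locked instances. */
      public class PRNGPool {
          private final SecureRandom[] _pool;
          private final AtomicInteger _next = new AtomicInteger(0);

          public PRNGPool(int size) {
              _pool = new SecureRandom[size];
              for (int i = 0; i < size; i++)
                  _pool[i] = new SecureRandom();
          }

          public void nextBytes(byte[] out) {
              int idx = (_next.getAndIncrement() & 0x7fffffff) % _pool.length;
              SecureRandom rnd = _pool[idx];
              synchronized (rnd) {          // each instance is its own lock
                  rnd.nextBytes(out);
              }
          }
      }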
and for udp stuff:
* implemented TCP-esque RTO code in the UDP transport (sketched after this
  list)
* make sure we don't ACK too many messages at once
* transmit fragments in a simple (nonrandom) order so that we can more easily
adjust timeouts/etc.
* let the active outbound pool grow dynamically if there are outbound slots to
spare
* use a simple decaying bloom filter at the UDP level to drop duplicate resent
packets.
* In the SDK, we don't actually need to block when we're sending a message
as BestEffort (and these days, we're always sending BestEffort).
* Pass out client messages in fewer (larger) steps.
* Have the InNetMessagePool short circuit dispatch requests.
* Have the message validator take into account expiration to cut down on
false positives at high transfer rates.
* Allow configuration of the probabilistic window size growth rate in the
  streaming lib's slow start and congestion avoidance phases, and default
  them to a more conservative value (2), rather than the previous value
  (1). (One possible reading is sketched after this list.)
* Reduce the ack delay in the streaming lib to 500ms
* Honor choke requests in the streaming lib (only affects those getting
insanely high transfer rates)
* Let the user specify an interface besides 127.0.0.1 or 0.0.0.0 on the
I2PTunnel client page (thanks maestro^!)
(plus minor udp tweaks)
* Added the possibility for i2ptunnel client and httpclient instances to
have their own i2p session (and hence, destination and tunnels). By
default, tunnels are shared, but that can be changed on the web
interface or with the sharedClient config option in i2ptunnel.config.
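  The "TCP-esque RTO" item above follows the familiar smoothed-RTT
  calculation (in the spirit of the standard TCP retransmission timer);
  this is a generic sketch with illustrative bounds, not the transport's
  actual code.

      /** Illustrative TCP-style retransmission timeout tracking (SRTT / RTTVAR). */
      public class RtoEstimator {
          private float _srtt = -1;    // smoothed RTT, in ms
          private float _rttVar = 0;   // RTT variance, in ms
          private int _rto = 3000;     // current RTO, in ms

          public synchronized void addSample(int rttMs) {
              if (_srtt < 0) {                      // first measurement
                  _srtt = rttMs;
                  _rttVar = rttMs / 2.0f;
              } else {
                  _rttVar = 0.75f * _rttVar + 0.25f * Math.abs(_srtt - rttMs);
                  _srtt = 0.875f * _srtt + 0.125f * rttMs;
              }
              _rto = (int) (_srtt + 4 * _rttVar);
              _rto = Math.max(1000, Math.min(_rto, 60 * 1000));   // clamp to sane bounds
          }

          /** Back off (double) the RTO on a retransmission. */
          public synchronized void backoff() { _rto = Math.min(_rto * 2, 60 * 1000); }

          public synchronized int getRto() { return _rto; }
      }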
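  One possible reading of the "probabilistic window size growth rate"
  option above - an illustration of the idea, not the streaming lib's
  actual code: with a growth factor of N, each ACK grows the window with
  probability 1/N, so the new default of 2 grows roughly half as fast as
  the previous value of 1.

      import java.util.Random;

      /** Illustrative probabilistic window growth: factor 1 = always grow, factor 2 = ~half the time. */
      public class WindowGrowth {
          private final Random _rand = new Random();
          private final int _growthFactor;   // e.g. 2 (conservative) vs. the old 1
          private int _window = 1;           // window size, in messages

          public WindowGrowth(int growthFactor) {
              _growthFactor = growthFactor;
          }

          /** Called once per ACK during slow start / congestion avoidance. */
          public void onAck() {
              if (_growthFactor <= 1 || _rand.nextInt(_growthFactor) == 0)
                  _window++;
          }

          public int getWindow() { return _window; }
      }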
2005-04-17 jrandom
* Marked the net.i2p.i2ptunnel.TunnelManager as deprecated. Anyone use
this? If not, I want to drop it (lots of tiny details with lots of
duplicated semantics).
* Added new user-editable eepproxy error page templates.
2005-04-17 jrandom
* Revamp the tunnel building throttles, fixing a situation where the
rebuild may not recover, and defaulting it to unthrottled (users with
slow CPUs may want to set "router.tunnel.shouldThrottle=true" in their
advanced router config)
* use the new raw I2NP message format (the corruption seen previously was
  caused by the issue above)
* add a new test component (UDPFlooder) which floods all peers at the
  desired rate
* packet munging fix for highly fragmented messages
* include basic slow start code (sketched after this list)
* fixed the UDP peer rate refilling
* cleaned up some nextSend scheduling
* Make sure we don't get cached updates (thanks smeghead!)
* Clear out the callback for the TestJob after it passes (only affects the
job timing accounting)
* Retry I2PTunnel startup if we are unable to build a socketManager for a
client or httpclient tunnel.
* Add some basic sanity checking on the I2CP settings (thanks duck!)
* After a successful netDb search for a leaseSet, republish it to all of
the peers we have tried so far who did not give us the key (up to 10),
rather than the old K closest (which may include peers who had given us
the key)
* Don't wait 5 minutes to publish a leaseSet (duh!), and rather than
republish it every 5 minutes, republish it every 3. In addition, always
republish as soon as the leaseSet changes (duh^2).
* Minor fix for oddball startup race (thanks travis_bickle!)
* Minor AES update to allow in-place decryption.
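  The "basic slow start code" entry above refers to the classic pattern:
  grow the window by the acked amount until a threshold, then by roughly
  one segment per round trip. A generic sketch with illustrative numbers,
  not the UDP transport's actual code.

      /** Illustrative slow start / congestion avoidance window growth. */
      public class SlowStartWindow {
          private int _cwin;        // congestion window, in bytes
          private int _ssthresh;    // slow start threshold, in bytes

          public SlowStartWindow(int initialBytes, int ssthreshBytes) {
              _cwin = initialBytes;
              _ssthresh = ssthreshBytes;
          }

          public synchronized void onAck(int ackedBytes) {
              if (_cwin < _ssthresh)
                  _cwin += ackedBytes;                                     // slow start: exponential per RTT
              else
                  _cwin += Math.max(1, (ackedBytes * ackedBytes) / _cwin); // congestion avoidance: ~linear
          }

          public synchronized void onLoss() {
              _ssthresh = Math.max(2 * 1024, _cwin / 2);
              _cwin = _ssthresh;
          }

          public synchronized int getWindow() { return _cwin; }
      }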
2005-03-29 jrandom
* Decreased the initial RTT estimate to 10s to allow more retries.
* Increased the default netDb store replication factor from 2 to 6 to take
into consideration tunnel failures.
* Address some statistical anonymity attacks against the netDb that could
be mounted by an active internal adversary by only answering lookups for
leaseSets we received through an unsolicited store.
* Don't throttle lookup responses (we throttle enough elsewhere)
* Fix the NewsFetcher so that it doesn't incorrectly resume midway through
the file (thanks nickster!)
* Updated the I2PTunnel HTML (thanks postman!)
* Added support to the I2PTunnel pages for the URL parameter "passphrase",
which, if matched against the router.config "i2ptunnel.passphrase" value,
skips the nonce check. If the config prop doesn't exist or is blank, no
passphrase is accepted.
* Implemented HMAC-SHA256 (a reference snippet follows this list).
* Enable the tunnel batching with a 500ms delay by default
* Dropped compatibility with 0.5.0.3 and earlier releases
* Implemented the news fetch / update policy code, as configured on
  /configupdate.jsp. Defaults are to grab the news every 24h (or, if it
  doesn't exist yet, on startup). No action is taken automatically; if
  the news.xml specifies that a new release is available, an option to
  update will be shown on the router console.
* New initialNews.xml delivered with new installs, and moved news.xml out
of the i2pwww module and into the i2p module so that we can bundle it
within each update.
* New /configupdate.jsp page for controlling the update / notification
process, as well as various minor related updates. Note that not all
options are exposed yet, and the update detection code isn't in place
in this commit - it currently says there is always an update available.
* New EepGet component for reliable downloading, with a CLI exposed in
java -cp lib/i2p.jar net.i2p.util.EepGet url
* Added a default signing key to the TrustedUpdate component to be used
for verifying updates. This signing key can be authenticated via
gpg --verify i2p/core/java/src/net/i2p/crypto/TrustedUpdate.java
* New public domain SHA1 implementation for the DSA code so that we can
handle signing streams of arbitrary size without excess memory usage
(thanks P.Verdy!)
* Added some helpers to the TrustedUpdate to work off streams and to offer
a minimal CLI:
TrustedUpdate keygen pubKeyFile privKeyFile
TrustedUpdate sign origFile signedFile privKeyFile
TrustedUpdate verify signedFile
* Fixed the tunnel fragmentation handler to deal with multiple fragments
  in a single message properly (rather than releasing the buffer into the
  cache after processing the first one) (duh!)
* Added the batching preprocessor which will bundle together multiple
small messages inside a single tunnel message by delaying their delivery
up to .5s, or whenever the pending data will fill a full message,
whichever comes first. This is disabled at the moment, since without the
above bugfix widely deployed, lots and lots of messages would fail.
* Within each tunnel pool, stick with a randomly selected peer for up to
.5s before randomizing and selecting again, instead of randomizing the
pool each time a tunnel is needed.
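  For the HMAC-SHA256 entry above: the construction is the standard
  HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)) with SHA-256 as H.
  The snippet below shows the standard algorithm via the JCE for reference
  only; I2P ships its own implementation rather than relying on the JCE.

      import javax.crypto.Mac;
      import javax.crypto.spec.SecretKeySpec;

      /** Reference HMAC-SHA256 via the JCE, for comparison only. */
      public class HmacSha256Example {
          public static byte[] hmac(byte[] key, byte[] data) throws Exception {
              Mac mac = Mac.getInstance("HmacSHA256");
              mac.init(new SecretKeySpec(key, "HmacSHA256"));
              return mac.doFinal(data);
          }

          public static void main(String[] args) throws Exception {
              byte[] out = hmac("secret key".getBytes("UTF-8"), "hello".getBytes("UTF-8"));
              System.out.println("HMAC-SHA256 output is " + out.length + " bytes");   // 32
          }
      }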
2005-03-18 jrandom
* Minor tweak to the timestamper to help reduce small skews
* Adjust the stats published to include only the relevant ones
* Only show the currently used speed calculation on the profile page
* Allow the full max # resends to be sent, rather than piggybacking the
  RESET packet alongside the final resend (duh)
* Add irc.postman.i2p to the default list of IRC servers for new installs
* Drop support for routers running 0.5 or 0.5.0.1 while maintaining
  backwards compatibility for users running 0.5.0.2.
* Update the old speed calculator and associated profile data points to
  use a non-tiered moving average of the tunnel test time, avoiding the
  freshness issues of the old tiered speed stats (sketched after this
  list).
* Explicitly synchronize all of the methods on the PRNG, rather than just
  the feeder methods (Sun and Kaffe only need the feeder, but it seems IBM
  needs all of them synchronized).
* Properly use the tunnel tests as part of the profile stats.
* Don't flood the jobqueue with sequential persist profile tasks, but
instead, inject a brief scheduling delay between them.
* Reduce the TCP connection establishment timeout to 20s (which is still
absurdly excessive)
* Reduced the max resend delay to 30s so we can get some resends in when
dealing with client apps that hang up early (e.g. wget)
* Added more alternative socketManager factories (good call aum!)
* Adjust the old speed calculator to include end to end RTT data in its
estimates, and use that as the primary speed calculator again.
* Use the mean of the high capacity speeds to determine the fast
threshold, rather than the median. Perhaps we should use the mean of
all active non-failing peers?
* Updated the profile page to sort by tier, then alphabetically.
* Added some alternative socketManager factories (good call aum!)
* New strict speed calculator that goes off the actual number of messages
verifiably sent through the peer by way of tunnels. Initially, this only
contains the successful message count on inbound tunnels, but may be
augmented later to include verified outbound messages, peers queried in
the netDb, etc. The speed calculation decays quickly, but should give
a better differential than the previous stat (both values are shown on
the /profiles.jsp page)
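  The non-tiered moving average mentioned in the speed-calculator item
  above is just an exponentially weighted average of the tunnel test time;
  a minimal sketch with an illustrative smoothing factor:

      /** Illustrative exponentially weighted moving average of tunnel test times. */
      public class TunnelTestTimeAverage {
          private static final double ALPHA = 0.1;   // smoothing factor; illustrative value
          private double _average = -1;

          public synchronized void addSample(long testTimeMs) {
              if (_average < 0)
                  _average = testTimeMs;                                   // first sample
              else
                  _average = (1 - ALPHA) * _average + ALPHA * testTimeMs;  // no tiers or freshness buckets
          }

          public synchronized double getAverage() { return _average; }
      }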
2005-03-11 jrandom
* Rather than the fixed resend timeout floor (10s), use 10s+RTT as the
  minimum (increased on resends as before, of course; sketched after this
  list).
* Always prod the clock update listeners, even if just to tell them that
the time hasn't changed much.
* Added support for explicit peer selection for individual tunnel pools,
which will be useful in debugging but not recommended for use by normal
end users.
* More aggressively search for the next hop's routerInfo on tunnel join.
* Give messages received via inbound tunnels that are bound to remote
locations sufficient time (taking into account clock skew).
* Give alternate direct send messages sufficient time (10s min, not 5s)
* Always give the end to end data message the explicit timeout (though the
old default was sufficient before)
* No need to give end to end messages an insane expiration (+2m), as we
are already handling skew on the receiving side.
* Don't complain too loudly about expired TunnelCreateMessages (at least,
not until after all those 0.5 and 0.5.0.1 users upgrade ;)
* Properly keep the sendBps stat
* When running the router with router.keepHistory=true, log more data to
messageHistory.txt
* Logging updates
* Minor formatting updates
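  The resend-timeout change above amounts to letting the floor track the
  measured RTT; roughly (illustrative names, and the doubling-on-resend
  behavior is unchanged from before):

      /** Illustrative resend timeout: floor of 10s + RTT, doubled on each resend as before. */
      public class ResendTimeout {
          private static final long BASE_FLOOR_MS = 10 * 1000;

          /**
           * @param rttMs current round trip time estimate
           * @param resendCount how many times this message has already been resent
           */
          public static long timeoutFor(long rttMs, int resendCount) {
              long timeout = BASE_FLOOR_MS + rttMs;   // was a fixed 10s floor
              for (int i = 0; i < resendCount; i++)
                  timeout *= 2;                       // back off on each resend
              return timeout;
          }
      }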