Write profiles to disk more often
Delete old profiles on disk more often
Reduce max age of profiles
Limit age of profiles read in at startup based on downtime
Limit total profiles read in at startup
Change loaded profiles from a Set to a List for efficiency
Log tweaks
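
A minimal sketch of the bounded startup read described above, assuming hypothetical names (selectProfilesToLoad, MAX_PROFILES_TO_LOAD) and illustrative limits; the shipped constants and the exact downtime scaling may differ.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    class ProfileLoadSketch {
        // Illustrative limits, not the shipped values
        private static final long MAX_PROFILE_AGE = 2 * 60 * 60 * 1000L;
        private static final int MAX_PROFILES_TO_LOAD = 2000;

        /** Bound startup profile reads by downtime-scaled age and a hard count cap. */
        static List<File> selectProfilesToLoad(File[] files, long downtimeMs) {
            // The longer we were down, the older a profile may be and still be useful
            long maxAge = Math.min(MAX_PROFILE_AGE + downtimeMs, 7 * 24 * 60 * 60 * 1000L);
            long cutoff = System.currentTimeMillis() - maxAge;
            // A List, not a Set: the caller only iterates, never tests membership
            List<File> rv = new ArrayList<File>(MAX_PROFILES_TO_LOAD);
            for (File f : files) {
                if (rv.size() >= MAX_PROFILES_TO_LOAD)
                    break;                        // hard cap on total profiles
                if (f.lastModified() >= cutoff)
                    rv.add(f);                    // recent enough to keep
            }
            return rv;
        }
    }
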
Start expire job sooner if forced floodfill or downtime was short
Don't run the refresh routers job if forced floodfill, downtime was short, or VMCommSystem is in use
Increase expire probability
Don't expire routers close to us just before midnight
Don't start expire leases job until 11 minutes after startup
Compute probabilities out of 128 (a power of two) to reduce random number usage
Consolidate now() calls, as sketched below
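
A minimal sketch of both ideas, with hypothetical names (expireProbability, markExpired) and an illustrative probability curve: because 128 is a power of two, one masked random int replaces a nextInt(100) call, and the clock is read once per pass instead of once per profile.

    import java.util.List;
    import java.util.Random;

    class ExpireSketch {
        private final Random rnd = new Random();

        interface Profile {
            long lastHeardFrom();
            void markExpired();
        }

        /** Expire each profile with probability prob/128, reading the clock once. */
        void expireOld(List<Profile> profiles) {
            long now = System.currentTimeMillis();   // consolidated now(): one read per pass
            for (Profile p : profiles) {
                int prob = expireProbability(now - p.lastHeardFrom());  // 0..128
                // One random int, masked to 0..127: cheaper than nextInt(100)
                if ((rnd.nextInt() & 0x7F) < prob)
                    p.markExpired();
            }
        }

        /** Illustrative curve only: older profiles are more likely to expire. */
        private int expireProbability(long ageMs) {
            long hours = ageMs / (60 * 60 * 1000L);
            return (int) Math.min(128, hours * 8);
        }
    }
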
Set congestion caps based on job lag and share bandwidth
Add a 20 minute rate to tunnel.participatingMessageCountAvgPerTunnel for use by congestion caps
Don't set congestion cap if hidden
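
A hedged sketch of the cap decision described above; the thresholds and the exact inputs are illustrative, not the shipped values. 'D', 'E', and 'G' are the published congestion caps (medium, high, rejecting all tunnels), and a hidden router publishes none.

    class CongestionCapSketch {
        /**
         * Pick a congestion cap from job lag, share bandwidth, and the
         * 20-minute participating-traffic rate. Returns 0 (no cap) for a
         * hidden or uncongested router. Thresholds are illustrative only.
         */
        static char selectCap(boolean hidden, long jobLagMs, int shareKBps,
                              double msgsPerTunnel20m) {
            if (hidden)
                return 0;                    // don't set congestion cap if hidden
            if (jobLagMs > 2000)
                return 'G';                  // rejecting all tunnels
            if (jobLagMs > 500 || shareKBps < 16 || msgsPerTunnel20m > 500)
                return 'E';                  // high congestion
            if (jobLagMs > 100 || shareKBps < 64)
                return 'D';                  // medium congestion
            return 0;
        }
    }
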
Remove old non-required client.timeoutCongestion* stats in OCMOSJ.
Remove RouterThrottle.getInboundRateDelta(), used only for those stats.
Remove transport.sendMessageSize rates longer than 60s, used only for getInboundRateDelta().
Remove transport.receiveMessageSize rates longer than 60s, unused anywhere.
Remove transport.sendProcessingTime rates longer than 60s, unused anywhere.
getInboundRateDelta() was broken anyway as it was looking at send, not receive size.
All of this was untouched since 2004.
9 total rates for required stats removed.
Remove 30m rates to save a large amount of space:
they were almost always within 10% of the 60m rates,
so we clearly don't need both.
Keep 60m because that's what we publish in the netdb.
Adjust rate weighting in CapacityCalculator accordingly.
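
As an illustration of the trimming, using StatManager's createRateStat(); the "before" period list here is assumed, not taken from the old code, and ctx is the RouterContext.

    // In the transport's stat setup:

    // Before (assumed periods): rates at 60s, 5m, 30m, and 60m
    ctx.statManager().createRateStat("transport.sendMessageSize",
            "How large are messages we send?", "Transport",
            new long[] { 60*1000L, 5*60*1000L, 30*60*1000L, 60*60*1000L });

    // After: keep only the 60s rate; the longer periods had no remaining consumer
    ctx.statManager().createRateStat("transport.sendMessageSize",
            "How large are messages we send?", "Transport",
            new long[] { 60*1000L });
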
Count probabilistic and transient reject codes (previously not counted)
Reduce the 10 minute acceptance rate for these codes
(previously the averaged rate was adjusted)
Make rates final in TunnelHistory
Static rates array to reduce object churn
Prep for removing 30m rates
Reduce now() calls, pull out of loops
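
A sketch of the TunnelHistory shape after these changes, with assumed stat names and periods: one static immutable array replaces a per-instance long[] allocation, the RateStat fields are final, and dropping the 30m period later becomes a one-line edit.

    import net.i2p.stat.RateStat;

    class TunnelHistorySketch {
        // Shared by every instance; removing 30m later means editing one place
        private static final long[] PERIODS =
                { 10*60*1000L, 30*60*1000L, 60*60*1000L, 24*60*60*1000L };

        // Final: created once in the constructor, never reassigned
        private final RateStat _rejectRate;
        private final RateStat _failRate;

        TunnelHistorySketch() {
            _rejectRate = new RateStat("tunnelHistory.rejectRate",
                    "How often does this peer reject a tunnel request?",
                    "TunnelHistory", PERIODS);
            _failRate = new RateStat("tunnelHistory.failRate",
                    "How often do tunnels this peer participates in fail?",
                    "TunnelHistory", PERIODS);
        }
    }
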
Improve exploratory build success by not using the high capacity tier.
Congestion starts from the top: right now, the fastest routers
have the worst exploratory build success. This will help correct that.
Improve exclusion efficiency by checking whether a peer qualifies when adding it,
rather than iterating through the whole netdb to generate an exclusion list at the start.
That was very inefficient and generated needless lookup storms via lookupBeforeDropping().
Same idea for getClosestHopExclude()
Goal is to never iterate through all the known routers, profiles, or connected peers
to select peers for a single tunnel.
Add ExcluderBase and 4 classes for various cases:
Excluder, ClosestHopExcluder, IBGWExcluder, and OBEPExcluder.
Change CSFI.getEstablished() from a Set to a List for efficiency
Improve efficiency of selectActivePeersNotFailingPeers()
by iterating through connected list rather than profiles.
Do not add not-connected peers to exclude set,
which would become huge for hidden routers.
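
A sketch of the reworked selection loop, using String in place of net.i2p.data.Hash and hypothetical helpers so it stands alone; the point is that cost now scales with the connection count, not the profile count.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    class ActivePeerSketch {
        /**
         * Walk the connected-peer list rather than every profile, and test
         * exclusion per candidate instead of pre-building a huge exclude
         * set of all not-connected peers.
         */
        static List<String> selectActiveNotFailing(List<String> connected,
                                                   Set<String> exclude,
                                                   int howMany) {
            List<String> rv = new ArrayList<String>(howMany);
            for (String peer : connected) {
                if (rv.size() >= howMany)
                    break;
                if (exclude.contains(peer))   // may defer to shouldExclude(), see below
                    continue;
                if (!isFailing(peer))
                    rv.add(peer);
            }
            return rv;
        }

        private static boolean isFailing(String peer) {
            return false;   // placeholder; the real check consults the profile
        }
    }
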
Change getExclude() to shouldExclude()
The exclude set calls shouldExclude() in contains().
Pass the exclude set to ProfileOrganizer.
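
A sketch of the lazy exclude-set idea, written generically for self-containment: the "set" handed to ProfileOrganizer is never enumerated, only queried, and contains() defers to shouldExclude(), so no exclusion list is ever materialized.

    import java.util.AbstractSet;
    import java.util.Iterator;

    abstract class ExcluderBaseSketch<T> extends AbstractSet<T> {

        /** Subclasses (closest-hop, IBGW, OBEP, ...) decide per peer. */
        public abstract boolean shouldExclude(T peer);

        @Override
        @SuppressWarnings("unchecked")
        public boolean contains(Object o) {
            return shouldExclude((T) o);
        }

        // Only membership queries are supported; iteration would defeat the point
        @Override
        public Iterator<T> iterator() {
            throw new UnsupportedOperationException();
        }

        @Override
        public int size() {
            return 0;
        }
    }
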
For client tunnels, do OBEP and IBGW checks at peer selection time,
not afterwards in ConnectChecker, so builds don't fail at the end in checkTunnel().
Check closest hop when hidden.
Fail-fast for inbound when no connected peers and hidden, do not fall back to non-connected peers.
Should improve startup time for hidden routers.
Use ArraySet for matches to save space
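
A sketch of the inbound client-selection flow with hypothetical names; LinkedHashSet stands in for the space-saving ArraySet, and the hidden-router fail-fast happens before any candidate is considered.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.function.Predicate;

    class InboundSelectSketch {
        /**
         * Do the IBGW check while selecting, and fail fast for hidden
         * routers with no connections instead of falling back to
         * non-connected peers and failing later in checkTunnel().
         */
        static List<String> selectInboundGateways(boolean hidden,
                                                  List<String> connected,
                                                  Predicate<String> canBeIBGW,
                                                  int howMany) {
            if (hidden && connected.isEmpty())
                return Collections.emptyList();   // fail fast, no netdb fallback
            Set<String> matches = new LinkedHashSet<String>(howMany * 2);
            for (String peer : connected) {
                if (matches.size() >= howMany)
                    break;
                if (canBeIBGW.test(peer))         // gateway check at selection time
                    matches.add(peer);
            }
            return new ArrayList<String>(matches);
        }
    }
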
Remove unused selectPeersLocallyUnreachable() and selectPeersRecentlyRejecting()
No peer selection policy changes here.