[55562] in North American Network Operators' Group
Re: mSQL Attack/Peering/OBGP/Optical exchange
daemon@ATHENA.MIT.EDU (Iljitsch van Beijnum)
Fri Jan 31 19:08:12 2003
Date: Sat, 1 Feb 2003 01:08:15 +0100 (CET)
From: Iljitsch van Beijnum <iljitsch@muada.com>
To: Jack Bates <jbates@brightok.net>
Cc: <nanog@merit.edu>
In-Reply-To: <013001c2c950$711407e0$b66e1ece@brightok.net>
Errors-To: owner-nanog-outgoing@merit.edu
On Fri, 31 Jan 2003, Jack Bates wrote:
> If a proper rule-based system were implemented, wouldn't this account for the
> issues? For example, implementation of an increase is only allowed by peer E
> if the traffic has been a gradual increase and X throughput has been met for
> T amount of time. Peer E would also have specific caps allotted for peers S
> and T along with priority in granting the increases. In the case of the
> worm, it is important to have a good traffic analyzer to recognize that the
> increase in bandwidth has been too drastic to constitute a valid need.
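For concreteness, the gate being described might look roughly like the
sketch below (Python; the function name, thresholds and sampling interval
are all invented for illustration, not anything actually deployed):

def approve_increase(samples_mbps, current_cap_mbps, peer_allotted_cap_mbps,
                     threshold_fraction=0.8, min_sustained_samples=12,
                     max_growth_per_sample=0.10):
    # samples_mbps: recent 5-minute average throughput readings, oldest first
    if current_cap_mbps >= peer_allotted_cap_mbps:
        return False  # the peer has used up its allotment
    recent = samples_mbps[-min_sustained_samples:]
    if len(recent) < min_sustained_samples:
        return False  # not enough history to judge
    # "X throughput for T time": stay above a fraction of the current cap
    if any(s < threshold_fraction * current_cap_mbps for s in recent):
        return False
    # "gradual increase": no sample jumps more than 10% over the previous one
    for prev, cur in zip(recent, recent[1:]):
        if cur > prev * (1 + max_growth_per_sample):
            return False  # too drastic to be organic growth
    return True
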
If my regular Saturday morning traffic is 50 Mbps and a worm generates
another 100, then 150 Mbps is a valid need: being limited to my usual
50 Mbps would mean 67% packet loss, TCP sessions go into hibernation and
I end up with 49.9 Mbps of worm traffic.
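Spelling the arithmetic out (a back-of-the-envelope sketch using the
numbers above):

offered_legit = 50.0   # Mbps, the normal Saturday morning load
offered_worm = 100.0   # Mbps, added by the worm
cap = 50.0             # Mbps, the unchanged provisioned rate

loss = 1 - cap / (offered_legit + offered_worm)
print("packet loss: %.0f%%" % (loss * 100))   # prints 67%

# The worm's UDP flood never backs off, but TCP does: as the legitimate
# sessions throttle themselves toward nothing, nearly the whole 50 Mbps
# ends up carrying worm packets.
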
> Of course, traffic patterns do vary a bit in short periods of time, but the
> average sustained throughput and the average peak do not increase rapidly.
Sometimes they do: Starr report, Mars probe, that kind of thing...
> What was seen with Sapphire should never be confused with normal traffic and
> requests for bandwidth increments should be ignored by any automated system.
So you're proposing that the traffic is inspected very closely, and then
either it's rate-limited/priority-queued or more bandwidth is provisioned
automatically? That sure adds a lot of complexity, but I guess this is
the only way to do it right.
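The decision step might look, very roughly, like this (again Python,
with made-up names and thresholds; a real system would need a far better
traffic classifier than a simple baseline comparison):

def handle_demand(samples_mbps, current_cap_mbps, peer_allotted_cap_mbps):
    # samples_mbps: recent average throughput readings, oldest first
    latest = samples_mbps[-1]
    baseline = sum(samples_mbps[:-1]) / max(len(samples_mbps) - 1, 1)
    if latest > 3 * baseline:
        # too drastic to be a valid need: keep the cap, police the excess
        return "rate-limit at the current cap, excess to a low-priority queue"
    if latest > 0.8 * current_cap_mbps and current_cap_mbps < peer_allotted_cap_mbps:
        # sustained organic growth near the cap: buy capacity, but only up
        # to whatever this peer was allotted beforehand
        return "provision additional bandwidth up to the per-peer cap"
    return "leave things alone"
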
> Of course, I realize that to implement the necessary rules would add a
> complexity that could cost large sums of money due to mistakes.
Right.