[11260] in bugtraq
Re: FW-1 DOS attack: PART II
daemon@ATHENA.MIT.EDU (Darren Reed)
Thu Aug 5 05:48:17 1999
Content-Type: text
Message-Id: <199908040125.LAA29137@cheops.anu.edu.au>
Date: Wed, 4 Aug 1999 11:25:37 +1000
Reply-To: Darren Reed <avalon@COOMBS.ANU.EDU.AU>
From: Darren Reed <avalon@COOMBS.ANU.EDU.AU>
X-To: Sean_Boyle@MENTORG.COM
To: BUGTRAQ@SECURITYFOCUS.COM
In-Reply-To: <37A5C59B.35A364DA@MentorG.COM> from "Sean Boyle" at Aug 2,
99 09:21:47 am
As the author of a free NAT/firewalling tool for Unix (IP Filter), I know
there are DoS possibilities due to the size limitations of its tables, etc.
When I coded it I knew they were there - the design goal was for it to work,
and work well, not to withstand a situation that will always overwhelm it -
and very few people have hit those limits in real use.
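To make the failure mode concrete, here is a minimal sketch in C - not
IP Filter's actual code or data structures, the names and the hard limit
are invented for illustration - of why a finite state/NAT table is a DoS
target: once an attacker has created enough entries to fill it, new
sessions simply can't be tracked.

    #include <stdlib.h>

    struct nat_entry {
            unsigned long  in_addr, out_addr;  /* inside/outside addresses */
            unsigned short in_port, out_port;  /* mapped ports */
            long           expire;             /* when this mapping times out */
    };

    static int nat_count;                /* entries currently in use */
    static int nat_max = 16384;          /* hypothetical hard limit */

    /* Called when a packet needs a fresh mapping. */
    static struct nat_entry *
    nat_new(void)
    {
            struct nat_entry *np;

            if (nat_count >= nat_max)    /* table full: new flows get no  */
                    return NULL;         /* mapping until old ones expire */
            np = calloc(1, sizeof(*np)); /* a real kernel allocates differently */
            if (np != NULL)
                    nat_count++;
            return np;
    }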
One problem here is, unless you have *very* aggressive timeouts, there's not
much you can do in defence unless you work with an IDS that decides "too much
traffic of type X from Y", which results in you blocking even more packets.
Of course, with aggressive timeouts come performance problems over connections
with large RTTs and dropped packets (not to mention idle connections).
Or you open up holes (as FW-1 does) to allow arbitrary packets (ACKs)
through regardless of rules. _Maybe_ a firewall can decide for itself
when "too much is enough", but where do you draw the line? Start imposing
more arbitrary limits such as "no more than 50 concurrent connections to
the mail server"? (Tempting, mind you :)
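For what it's worth, the "no more than 50 concurrent connections" idea is
easy enough to sketch - the snippet below is purely illustrative (not
IP Filter or FW-1 code; the names, the linear scan and the fixed-size
array are all assumptions made for the example):

    #define MAX_PER_DEST 50              /* arbitrary per-destination cap */
    #define NDESTS       64              /* destinations we bother limiting */

    struct dest_limit {
            unsigned long dst;           /* destination address protected */
            int           count;         /* connections currently tracked */
    };

    static struct dest_limit limits[NDESTS];

    /* Return 1 if a new connection to 'dst' may be added, 0 if it should
     * be refused because the arbitrary cap has been reached. */
    static int
    dest_ok(unsigned long dst)
    {
            int i;

            for (i = 0; i < NDESTS; i++)
                    if (limits[i].dst == dst)
                            return (limits[i].count < MAX_PER_DEST);
            return 1;                    /* destination isn't limited */
    }

Of course, all a cap like that does is move the starvation point around -
an attacker can still eat the 50 slots and lock everyone else out of the
mail server.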
To give you some idea of the maximum size a table could reach, to support
NAT'ing a /24 address space you're looking at a maximum of about 16M entries.
In real life, I just can't see that happening.
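That figure is just a back-of-the-envelope upper bound - 256 addresses
times 65,536 possible ports each, assuming one table entry per
address/port pair (an assumption made for the sake of the arithmetic, not
a claim about how any particular product stores its mappings):

    #include <stdio.h>

    int
    main(void)
    {
            unsigned long hosts = 256;      /* addresses in a /24 */
            unsigned long ports = 65536;    /* possible ports per address */

            printf("%lu\n", hosts * ports); /* 16777216, i.e. roughly 16M */
            return 0;
    }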
The only real advantage of making the source available, in this instance, is
that people can recompile the code with bigger/smaller tables, taking up
more/less kernel memory.
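As a hypothetical example of what that looks like (the constant name below
is invented - IP Filter's real tunables may be spelled differently), a
compile-time table size is just a macro that someone with the source can
override and rebuild:

    #ifndef NAT_TABLE_SZ
    # define NAT_TABLE_SZ 2047              /* hypothetical default size */
    #endif

    struct nat_bucket;                      /* per-mapping entry (opaque here) */
    static struct nat_bucket *nat_table[NAT_TABLE_SZ];

Compiling with something like -DNAT_TABLE_SZ=16383 buys a bigger table at
the cost of more kernel memory, and a smaller value does the opposite.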
In reality, these DoS `revelations' are not new, except in how they affect
one particular product (FW-1). The Internet at large is still an easy
target for someone willing to deploy a DoS attack - stories of hosts being
blackholed using routing to sidestep large (200Mb/s) attacks, because the
routers `melt' when filtering is enabled, are not unheard of.
All very well for people to go and find these problems, but having made the
obvious more obvious, how about some suggestions on finding real solutions?
Tip: this isn't quite as easy as fixing root exploits.
Darren