Date:      Sun, 23 Jan 2000 00:33:27 -0600
From:      Jason Young <jyoung@accessus.net>
To:        'Brett Glass' <brett@lariat.org>, Matthew Dillon <dillon@apollo.backplane.com>, Dag-Erling Smorgrav <des@flood.ping.uio.no>
Cc:        Keith Stevenson <k.stevenson@louisville.edu>, freebsd-security@FreeBSD.ORG
Subject:   RE: Some observations on stream.c and streamnt.c
Message-ID:  <ABD44D466F85D311A69900A0C900DB6BC601@staff.accessus.net>


> In fact,
> using ipfw or ipfilter to impose policy (only the latter can be used
> in case of the stream.c exploit) is redundant, since ipfilter must
> retain tables of connections which duplicate information stored by
> the protocol stack already.

Both ipfilter and the kernel correctly drop the packet. The fact that
ipfilter is being successfully used to defend against it simply means that
it's more efficient at dropping these particular rogue packets. It's my
understanding from the list that the main reason the kernel is bogged down
by this attack is that it is computationally expensive to generate all the
return RSTs required by protocol. ipfilter doesn't send RSTs.

Matt Dillon's rate-limiting patch is a form of hardening the TCP/IP stack
while maintaining as much protocol compliance as possible. I agree with this
approach: in the usual case you should stick to the protocol, and in a
failure, overload, or attack situation you should do your best.
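The idea behind such rate limiting can be sketched as a simple fixed-window
counter (a minimal illustration of the concept only, not Matt Dillon's actual
patch; the limit and window size here are made-up numbers):

```python
import time

class RstRateLimiter:
    """Allow at most `limit` RSTs per `interval` seconds; drop the rest.

    Illustrative sketch only -- the real patch lives in the kernel's
    TCP input path, and these numbers are invented for the example.
    """

    def __init__(self, limit=200, interval=1.0, clock=time.monotonic):
        self.limit = limit
        self.interval = interval
        self.clock = clock
        self.window_start = clock()
        self.sent = 0

    def may_send_rst(self):
        now = self.clock()
        if now - self.window_start >= self.interval:
            # A new window begins: reset the counter.
            self.window_start = now
            self.sent = 0
        if self.sent < self.limit:
            self.sent += 1
            return True
        # Over the limit: stay silent rather than burn CPU on RSTs.
        return False
```

Under normal load the limit is never hit, so behavior stays fully
protocol-compliant; only under a flood does the stack quietly stop answering.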

Personally, I don't understand why people are so antsy about being
portscanned or having their OS identified. Yes, it's more information for an
attacker. But if you have a security hole, you have a security hole. It's
exploitable whether or not the intruder knows you're running XYZ unrelated
service. See other posts about people simply shotgunning exploits until one
works somewhere.

People seem to regard the fact that you can scan a machine for its open
ports as a fault in its TCP/IP stack. In fact, the stack is doing what it's
supposed to do. If you don't like what it's doing, then that comes back to
imposing local policy.

> >Drop packets to the ports you're not using and don't want scanned. 
> 
> That's fine, and for that reason there should be an option that 
> doesn't clobber all RSTs. But if you want to hinder scans, you 
> should also drop some packets going to ports you ARE using. The 
> overhead to do this in the stack is small. 
>
> Let the admin have his choice of policies. I don't think we should 
> penalize him or her for disagreeing with someone else.

ipfw allows me to make 65,534 choices about policy. :)
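For instance, a policy of silently dropping probes to unused ports, while
leaving the stack itself alone, is only a few rules (the rule numbers and
service ports here are examples, not a recommendation):

```sh
# Pass packets for established connections untouched.
ipfw add 1000 allow tcp from any to any established
# Allow new connections only to the services actually offered.
ipfw add 2000 allow tcp from any to any 21,22,25,80 setup
# Silently drop connection attempts to everything else -- no RSTs go out.
ipfw add 3000 deny tcp from any to any setup
```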

My whole point is that an admin should be able to do anything they want with
ipfw or ipfilter, without us having to meddle with the stack for all FreeBSD
users. If there's a general fault that hoses the machine and/or its TCP/IP
stack for anyone anywhere, then the stack is in need of help.

There are occasions when the stack can't get enough help, like in the case
of forged syn floods. I remember back when I was part of the administration
staff running the EFNet server irc.anet-stl.com under BSDI 4.0, we came up
with a nearly completely safe and effective (but very
security-through-obscurity) method for dropping synfloods before they
entered TCP processing. I love BSDI's bpf-language filtering. 

> >Envision a situation where somebody accidentally bumps the Big Red
> >Button on ftp.cdrom.com, and immediately brings it back up. If it
> >rate-limited its outgoing RSTs and hit this limit momentarily (and I
> >really think this would be unlikely in the extreme if the RST
> >rate-limiting threshold is reasonable),
> 
> Would it be? Let's suppose that ftp.cdrom.com was handling 5000
> connections when you hit the switch. Within a second, you'd get AT
> LEAST 5000 packets to which you'd need to respond with RSTs. Almost
> certainly more, due to windowing.

Any client/server interaction that has pending activity after the reboot
will be in some varying stage of exponential backoff or have timed out
depending on when the activity took place and how long the server took to
"come back". 5000 connections will certainly not result in 5000 packets in
one second worth of RST-able client traffic. Many of the clients will not
have pending activity. 
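As a rough illustration of why that traffic spreads out, here is the classic
doubling retransmission schedule (the initial timeout and cap are
illustrative; a real BSD stack derives the RTO from measured round-trip
times):

```python
def retransmit_times(initial_rto=1.5, max_rto=64.0, attempts=8):
    """Return the times (seconds after the first send) at which a TCP
    client retransmits, assuming a simple doubling of the RTO.

    Illustrative numbers only, not any particular stack's constants.
    """
    times = []
    t = 0.0
    rto = initial_rto
    for _ in range(attempts):
        t += rto
        times.append(t)
        rto = min(rto * 2, max_rto)  # exponential backoff, capped
    return times
```

A client whose server vanished mid-transfer retransmits at roughly t = 1.5,
4.5, 10.5, 22.5 seconds and so on, so in any given second after the reboot
only the slice of those 5000 clients whose timer happens to expire right
then will elicit an RST.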

Remember that a client will not be sending data on the ftp-data channel, and
that it doesn't expect or wait for a response to ACKs of server-sent data.
This means that the server isn't going to have any sort of massive data
queue piled up for it.

According to my TCP/IP Illustrated Vol. 1, my brief reading of the source,
and the default values of net.inet.tcp.keep*, keepalives are not terribly
worrisome (we have to be idle for a default of 4 hours). This is a cursory
examination of a subject I'm rusty on and I could very well be wrong.
Application level timeouts should kick in far sooner.

What I'm driving at is that there's almost no _normal_ situation where you
could expect to fire off more than a few dozen RSTs a second in a short
burst, and that burst is likely lost in the statistical noise as far as CPU
utilization is concerned.

> The DoS we're talking about here actually sends a volume of packets of
> the same order of magnitude.

Certainly not; the DoS attack variant sends tens of thousands of packets per
second, continuously.

Jason Young
accessUS Chief Network Engineer
 

