[17211] in bugtraq
Re: Shred 1.0 Bug Report
daemon@ATHENA.MIT.EDU (Mitchell Blank Jr)
Fri Oct 13 19:18:07 2000
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Message-ID: <20001012152018.B79122@sfgoth.com>
Date: Thu, 12 Oct 2000 15:20:18 -0700
Reply-To: Mitchell Blank Jr <mitch@SFGOTH.COM>
From: Mitchell Blank Jr <mitch@SFGOTH.COM>
X-To: Alfred Perlstein <bright@WINTELCOM.NET>
To: BUGTRAQ@SECURITYFOCUS.COM
In-Reply-To: <20001011162008.U272@fw.wintelcom.net>; from bright@WINTELCOM.NET
on Wed, Oct 11, 2000 at 04:20:08PM -0700
Alfred Perlstein wrote:
> Programs like shred are particularly bad; they offer a false sense
> of security, and this instance shows a complete lack of understanding
> of how most UNIX filesystems are implemented.
>
> Shred won't work reliably on:
>
> a) data logging filesystems
> b) transactional filesystems
> c) filesystems that perform online defrag (FreeBSD FFS + reallocblks)
> d) filesystems that offer snapshot capabilities.
> e) (well, I'm sure there are more)
>
> Programs like this offer a false sense of security; the proper way
> to do it is to implement some sort of 'scrub(2)' syscall.
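(For reference, the overwrite-in-place approach these tools take boils down
to something like the sketch below -- not GNU shred itself, which makes many
patterned passes, just the core idea. It only destroys the data if the
filesystem rewrites the same physical blocks, which is exactly the assumption
the filesystems above break.)

    /* Overwrite a file's bytes in place and fsync() -- the core of
     * the shred-style strategy.  A minimal single-pass sketch. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int overwrite_once(const char *path)
    {
        struct stat st;
        static char buf[8192];
        off_t left;
        int fd;

        fd = open(path, O_WRONLY);      /* starts at offset 0 */
        if (fd < 0 || fstat(fd, &st) < 0)
            return -1;
        memset(buf, 0xff, sizeof buf);  /* one pass of 0xff */
        for (left = st.st_size; left > 0; ) {
            size_t chunk = left < (off_t)sizeof buf
                               ? (size_t)left : sizeof buf;
            ssize_t n = write(fd, buf, chunk);
            if (n <= 0) {
                close(fd);
                return -1;
            }
            left -= n;
        }
        fsync(fd);                      /* flush to the device */
        return close(fd);
    }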
Even a scrub(2) syscall wouldn't work very well. In the case of online
defrag or log-structured filesystems you could scrub the blocks that
currently hold the file, but there may be other, now-free blocks that
still hold data from that same file. A better solution is something
like ext2's "secure removal" flag, which indicates that the data should
be scrubbed automatically upon removal or truncation (unfortunately the
current implementation of ext2 in Linux ignores this flag... more of
that false sense of security). The filesystem then knows to scrub blocks
that came from that file any time it makes a copy of the data.
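(Setting that flag is just an ioctl on the inode -- the same thing
"chattr +s file" does. A minimal sketch, assuming the EXT2_SECRM_FL and
EXT2_IOC_GETFLAGS/EXT2_IOC_SETFLAGS interface from <linux/ext2_fs.h>:)

    /* Mark a file for secure deletion on ext2, equivalent to
     * "chattr +s".  Note the kernel currently ignores the flag,
     * as mentioned above. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/ext2_fs.h>
    #include <unistd.h>

    int set_secrm(const char *path)
    {
        int fd, flags;

        fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        if (ioctl(fd, EXT2_IOC_GETFLAGS, &flags) < 0) { /* read inode flags */
            close(fd);
            return -1;
        }
        flags |= EXT2_SECRM_FL;                         /* 's' -- secure deletion */
        if (ioctl(fd, EXT2_IOC_SETFLAGS, &flags) < 0) { /* write them back */
            close(fd);
            return -1;
        }
        return close(fd);
    }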
Then there's the issue of what happens if some of the data in your
super-secret file got written out to disk just before a crash, but the
metadata never made it to disk (or the journal). Solvable, but a pain.
-Mitch