[17267] in bugtraq


Re: IIS %c1%1c remote command execution

daemon@ATHENA.MIT.EDU (Cris Bailiff)
Thu Oct 19 13:12:38 2000

Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-Id:  <39EEC7FD.6ACF1FAC@e-secure.com.au>
Date:         Thu, 19 Oct 2000 21:07:57 +1100
Reply-To: Cris Bailiff <c.bailiff@E-SECURE.COM.AU>
From: Cris Bailiff <c.bailiff@E-SECURE.COM.AU>
To: BUGTRAQ@SECURITYFOCUS.COM

> Florian Weimer <Florian.Weimer@RUS.UNI-STUTTGART.DE> writes:
>
> This is one of the vulnerabilities Bruce Schneier warned of in one of
> the past CRYPTO-GRAM issues.  The problem isn't the wrong timing of
> path checking alone, but also a poorly implemented UTF-8 decoder.
> RFC 2279 explicitly says that overlong sequences such as 0xC0 0xAF are
> invalid.

As someone often involved in reviewing and improving other people's web code, I
have been citing the Unicode security example from RFC 2279, almost since it was
written, as one good reason why web programmers must enforce 'anything not
explicitly allowed is denied'. In commercial situations I have argued myself blue
in the face that the equivalent of (Perl speak) s!../!!g is not good enough to
clean up filename form input parameters or other pathnames (in Perl, ASP, PHP
etc.). I always end up being proved right, but it takes a lot of effort. It
should prove a bit easier from now on :-(
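To make the filter-bypass concrete, here is a small sketch (in Python rather than Perl, purely for illustration): a toy decoder that, like many decoders of the era, skips the overlong-sequence check that RFC 2279 requires. Stripping "../" from the raw bytes does nothing when the '/' is carried in the overlong two-byte form 0xC0 0xAF.

```python
def lenient_utf8_decode(data: bytes) -> str:
    """Toy UTF-8 decoder with the classic bug: it accepts overlong
    two-byte sequences (lead bytes 0xC0/0xC1) instead of rejecting
    them as RFC 2279 demands."""
    out = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:                       # plain ASCII
            out.append(chr(b))
            i += 1
        elif 0xC0 <= b <= 0xDF and i + 1 < len(data):
            # Two-byte sequence; NO overlong check -- this is the bug.
            out.append(chr(((b & 0x1F) << 6) | (data[i + 1] & 0x3F)))
            i += 2
        else:
            raise ValueError("sequence not handled by this toy decoder")
    return "".join(out)

# '/' smuggled as the overlong form 0xC0 0xAF:
raw = b"..\xc0\xaf..\xc0\xafwinnt"
filtered = raw.replace(b"../", b"")        # the s!../!!g-style cleanup: no match
print(lenient_utf8_decode(filtered))       # -> ../../winnt  (filter bypassed)
```

The cleanup ran before decoding, so it never saw the '/' at all - exactly the ordering problem described above.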

>
> It's a pity that a lot of UTF-8 decoders in free software fail such
> tests as well, either by design or careless implementation.

The warning in RFC 2279 hasn't been heeded by a single Unicode decoder that I
have ever tested, commercial or free, including the Solaris 2.6 system libraries,
the Linux unicode_console driver, Netscape Communicator and now, obviously, IIS.
It's unclear to me whether the IIS/NT Unicode decoding is performed by a
system-wide library or is custom to IIS - either way, it can potentially affect
almost any Unicode-aware NT application.
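A decoder can be probed for this class of bug with a handful of known-bad inputs. The sketch below (Python, for illustration) feeds a decoder the overlong encodings of '/' and NUL that RFC 2279 explicitly forbids; a strict decoder must reject every one of them.

```python
# Overlong forms that RFC 2279 says a conforming decoder MUST reject:
# '/' (U+002F) padded out to 2, 3 and 4 bytes, plus the overlong NUL.
OVERLONGS = [b"\xc0\xaf", b"\xe0\x80\xaf", b"\xf0\x80\x80\xaf", b"\xc0\x80"]

def decoder_is_strict(decode) -> bool:
    """Return True only if `decode` (bytes -> str) rejects every
    overlong test sequence."""
    for seq in OVERLONGS:
        try:
            decode(seq)
            return False        # accepted an overlong form: lenient
        except (UnicodeDecodeError, ValueError):
            pass                # rejected, as required
    return True

# Modern Python's built-in decoder passes the probe:
print(decoder_is_strict(lambda b: b.decode("utf-8")))   # -> True
```

The same four byte strings can be thrown at any decoder under test - a system library, a console driver, a web server - to see which side of the RFC it falls on.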

I have resisted upgrading various CGI and mod_perl based systems to Perl 5.6
because it has built-in (default?) Unicode support, and I've no idea which
applications or Perl libraries might be affected. The problem is even harder than
it looks - which subsystem, out of the HTTP server, the Perl (or ASP or PHP...)
runtime, the standard C libraries and the kernel/OS, can I expect to be
performing the conversion? Which one will get it right? I think Bruce wildly
understated the problem, and I've no idea how to put the brakes on the crash
dive into a character encoding standard which seems to have no defined canonical
encoding and no obvious way of performing deterministic comparisons.
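One defensive pattern that follows from the above (sketched here in Python; the function name and allowlist are illustrative, not from any particular product): decode strictly first, normalize to one canonical form, and only then apply the 'anything not explicitly allowed is denied' check - so no later conversion stage can re-introduce a '/' or '..'.

```python
import re
import unicodedata

# Illustrative allowlist: plain filename characters, nothing else.
_ALLOWED = re.compile(r"\A[A-Za-z0-9_.-]+\Z")

def safe_path_component(raw: bytes) -> str:
    """Decode strictly, canonicalize, then allowlist.
    The security check runs AFTER every conversion, never before."""
    text = raw.decode("utf-8")                  # strict: overlongs rejected here
    text = unicodedata.normalize("NFC", text)   # one canonical form to compare
    if not _ALLOWED.match(text) or ".." in text:
        raise ValueError("disallowed path component: %r" % text)
    return text

print(safe_path_component(b"winnt"))            # -> winnt
# safe_path_component(b"..\xc0\xaf")  raises UnicodeDecodeError (overlong)
# safe_path_component(b"../winnt")    raises ValueError (denied by allowlist)
```

It doesn't answer which subsystem should own the conversion, but it does mean the check and the decode can no longer disagree about what the bytes say.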

I suppose as a security professional I should be happy, looking forward to a
booming business...

Cris Bailiff
c.bailiff@e-secure.com.au
