[45] in java-interest
Re: Re: Decompression Speed
daemon@ATHENA.MIT.EDU (Thom Hickey)
Thu May 4 12:40:29 1995
Date: Thu, 4 May 1995 09:19:23 +0500
From: hickey@oclc.org (Thom Hickey)
To: java-interest@java.Eng.Sun.COM
>From: eichin@mit.edu
>Date: Thu, 4 May 95 00:36:38 -0400
>Subject: Re: Decompression Speed
>Interesting. What do you think are the major bottlenecks? Is the Java
>array model part of the speed difference? (I'm interested in
>implementing some cryptographic algorithms directly in Java, and it
>would be useful to know what constructions to avoid...)
Sun's explanation is that it is array bounds and null pointer checking,
which sounds right. Add to that any inefficiencies introduced in the
original byte-code compilation, in the C code generation, and in any
macros/routines that support the resulting C code, and it's easy to see
how a factor of 5 could be introduced. All of this could probably be
improved on. I've seen papers on optimizing array bounds checking out
of loops, but it takes some pretty sophisticated analysis of the code
to do this safely. Java may need this if actual parity with C is
required (I'm not sure that it is).
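To make that concrete, here's a hypothetical sketch (the class and
method names are my own invention, and nothing in the current compiler
actually does this) of the kind of transformation such an optimizer
would make:

    class BoundsHoistSketch {
        // Naive loop: the runtime checks 0 <= i < buf.length on
        // every store, which is a big part of the per-element cost.
        static void clear(byte buf[], int n) {
            for (int i = 0; i < n; i++)
                buf[i] = 0;    // implicit bounds check each iteration
        }

        // Hoisted form: a single range test up front proves every
        // index in 0..n-1 is legal, so the loop body can run
        // check-free.
        static void clearHoisted(byte buf[], int n) {
            if (n > buf.length)
                throw new ArrayIndexOutOfBoundsException();
            for (int i = 0; i < n; i++)
                buf[i] = 0;    // already known to be in range
        }
    }

Note that the rewrite also changes when the exception is thrown (the
naive loop would have completed buf.length stores before failing),
which is part of why doing this safely takes careful analysis.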
I should mention that the Java code is really closer to an earlier
C++ version, which ran about half as fast as the C version. Inline
routines just weren't being compiled into code as efficient as what a
skilled programmer could get using some (fairly extensive) macros.
Looking at it that way, the Java code comes within a factor of about
3 of some pretty good C++ code.
--Th
-
Note to Sun employees: this is an EXTERNAL mailing list!
Info: send 'help' to java-interest-request@java.sun.com