[2534] in Kerberos_V5_Development


Re: Prototype hell

daemon@ATHENA.MIT.EDU (Ken Raeburn)
Thu Oct 9 19:57:13 1997

To: Tom Yu <tlyu@MIT.EDU>
Cc: Jeffrey Hutzelman <jhutz+@cmu.edu>, Ken Hornstein <kenh@cmf.nrl.navy.mil>,
        krbdev@MIT.EDU
From: Ken Raeburn <raeburn@cygnus.com>
Date: 09 Oct 1997 19:56:43 -0400
In-Reply-To: Tom Yu's message of "Thu, 9 Oct 1997 17:57:26 -0400"

Tom Yu <tlyu@MIT.EDU> writes:

> The following should *not* give an error with an ANSI compliant
> compiler:

I can't find our copies of the standard, but the ANSI draft I've got
should be substantially the same.

> int foo(int a, short b, char c);
> 
> int foo(a, b, c)
> 	int a;
> 	short b;
> 	char c;
> {
> 	/* ... */
> }

In draft section 3.5.4.3 (ISO 6.5.4.3), regarding compatible function
types: "If one type has a parameter type list and the other type is
specified by a function definition that contains a (possibly empty)
identifier list, ... the type of each prototype parameter shall be
compatible with the type that results from the application of the
default argument promotions to the type of the corresponding
identifier."

So the prototype and definition above do *not* describe compatible
types for "foo", and the code violates the constraints of section 3.5
(6.5).  The correct prototype should list "int"; the value gets passed
as an "int", and is "assigned" to the parameter which is of type
"short" or "char".

> So basically, Ken's problem is either a compiler bug (does the problem
> occur with gcc?), or an actual code bug that results from failing to
> declare the type of a argument in the definition of the function.

GNU C permits the above usage as an extension, precisely because of
this sort of problem with types like "uid_t".

> Nevermind that krb5_int32 should probably mean "integral type
> containing at least 32 bits"....

Actually, I think that's where the problem lies.  It's nice to not
have to use "long" if "int" is big enough and "long" is even bigger
(e.g., on Alpha, where "long" will bloat your structures).  But having
it be an unwidened type leads to trouble.  Maybe it should be "int" if
that has at least 32 bits, and "long" otherwise?  That may be the
simplest fix for Ken that still provides prototype-based type
checking.

Or go with Jeff's second suggestion -- eliminate the non-prototype forms.
Most new platforms ship with ANSI compilers, and most older ones will
run gcc.  I like that solution best.  Are there any platforms we care
about which don't have any ANSI C compiler, even gcc, available?


Jeff also suggested:
    #ifdef __STDC__
    int foo(int, short, int);
    #endif

    .
    .
    .

    #ifdef __STDC__
    int foo(int a, short b, int c)
    #else
    int foo(a, b, c)
    int a, c;
    short b;
    #endif
    {
      return a + b + c;
    }

The problem with this approach is that the compiler (or compiler
"ansi" option) you use when compiling the library dictates how the
function must be called.  Remember that "short" in a prototype means
the value does *not* have to be passed as an "int"; with the
prototype in scope, an implementation is free to pass a real "short".

Imagine a calling convention where arguments are arranged on the stack
as they would be in a structure, using the types that are supposed to
be passed.  For simplicity, let's assume it's otherwise like a typical
32-bit machine.

So, if you have a function:

	/* foo.h */
	#ifdef __STDC__
	int foo(int a, short b, char c, float d);
	#else
	int foo ();
	#endif
and
	/* foo.c, in library */
	#ifdef __STDC__
	int foo (int a, short b, char c, float d)
	#else
	int foo (a, b, c, d) int a; short b; char c; float d;
	#endif
	{ ... }

Then if the library is compiled with __STDC__ defined, the argument
list is arranged as
	struct { int a; short b; char c; float d; }
with "c" at offset 6, and "d" a 4-byte type at offset 8.  If you
compile it without __STDC__, you get
	struct { int a; int b; int c; double d; }
with "c" at offset 8, and "d" an 8-byte type (possibly with a
different layout from the "float" type) at offset 12 or 16.

Clearly, then, you must compile all applications using this library in
the same mode in which the library itself was compiled.  So long as
you do that, it's consistent, but if you might mix-and-match, you'll
lose.  Nothing guarantees that even the consistent case will work at
all, of course, but an implementation where it didn't would be
regarded as a poor one.

It's unlikely that you'll find a system where this doesn't work in
practice for integers, but I could certainly imagine it failing for
"float" arguments.


If you want to be compatible with non-ANSI compilers, you should use
promoted types in your prototypes, and old-style definitions or
new-style definitions using the promoted types.  If you don't want to
be compatible, don't conditionalize your prototypes.
