[1382] in Kerberos_V5_Development
Re: build system redesign ideas
daemon@ATHENA.MIT.EDU (E. Jay Berkenbilt)
Sat Jul 13 12:55:10 1996
Date: Sat, 13 Jul 1996 12:53:31 -0400
From: "E. Jay Berkenbilt" <qjb@netrail.net>
To: tlyu@MIT.EDU
Cc: krbdev@MIT.EDU
In-Reply-To: <199607130142.VAA15261@dragons-lair.MIT.EDU> (message from Tom Yu
on Fri, 12 Jul 1996 21:42:50 -0400)
Well, I've been out of touch with kerberos development for a while,
but I've done a lot with build systems. Without going into great
detail, I believe that you pretty much have to decide to standardize
on a particular version of make such as either pmake or gnu make. One
of the hardest parts about putting a good build system together is
handling the differences between different vendors' makes. Imake
helps somewhat, but really, all it does is shift the problem over to
dealing with differences between different vendors' postprocessors. I
personally think make itself is no longer a sufficient tool for doing
large builds because you really want something that not only has
native support for dependency graphs (like make basically does) but
also has functions, conditionals, lists, loops, etc. You can do most
of this with make by resorting to such tactics as using the shell for
conditionals and loops, invoking make recursively, or defining very
complex macros in terms of make variables. Gnu make helps a bit by
providing some support for some of these constructs, but even it is
not really adequate. However, gnu make is good enough for most cases,
and usually does quite well in instances where the structure of the
source tree is not too fluid and the number of different types of
things you need to build is somewhat limited. (We were pushing those
limits on a two-million-line system I designed a build system for: it
spanned several platforms, some of them embedded processors where we
had to be concerned with both host and target languages, and it was a
large integration job in which about half a million lines were
original code and the rest came from existing systems.)
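To make that concrete, here is a sketch (the variable and target
names are invented for illustration, not taken from any real system)
of the contortions portable make forces next to GNU make's native
support for the same ideas:

```make
SUBDIRS = lib clients servers

# Portable make: conditionals and loops have to be pushed into the
# shell, with all the quoting and error-propagation headaches that
# implies.
all:
	for d in $(SUBDIRS); do \
	    (cd $$d && $(MAKE) all) || exit 1; \
	done

# GNU make: the same ideas expressed with native conditionals, lists,
# and functions.
ifeq ($(DEBUG),1)
CFLAGS += -g
endif
OBJS = $(patsubst %.c,%.o,$(wildcard *.c))
```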
In any case, the basic model I use is to have a simple makefile in
each directory that basically just sets a few variables and then
includes another makefile. The path to the other makefile can be
relative to the current directory (which is what I do for things that
are relatively small or that I am going to release publicly) if you're
not going to change the shape of your build tree too often, or it can
be relative to some path specified by an environment variable (which
is what I do on real on-going development systems). The thought that
goes into designing the structure and contents of your included
makefiles is not significantly different from writing Imake
templates. I typically use .mk as the extension for user-includable
makefiles and .mki for things used internally. I'll typically have a
global.mki that defines global parameters, a config.mki that is
generated from config.mki.in using autoconf (and this is generally the
only makefile that autoconf touches), and a rules.mki that defines the
main workhorse rules including my automatic dependency generation
which I do in a manner based on, but fundamentally different from,
what is described in the gnu make manual. I'll then have simple
makefiles like library.mk, simple_prog.mk, or whatever, that set up
variables and include rules.mki which in turn includes global.mki and
config.mki.
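As a rough sketch of that model (the file names match the scheme
above, but PROG, SRCS, TOPDIR, and the klist example are invented for
illustration), a leaf directory's makefile might look like:

```make
# Makefile in a leaf source directory: set a few variables, then
# include the shared machinery.  Everything interesting lives in the
# .mk/.mki files.
PROG = klist
SRCS = klist.c

include $(TOPDIR)/mk/simple_prog.mk
```

simple_prog.mk would then derive OBJS from SRCS, define a link rule
for $(PROG), and end with an include of rules.mki, which in turn
includes global.mki and config.mki.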
If you want to see a baby example of this, you can look at the system
I use to build programs in my home directory. I've made a couple of
minor fixes to it since the last time I updated my Athena home
directory, but you can find a fully functional and pretty recent
version of it in /mit/qjb/source/tools/make. The stuff in
/mit/qjb/source/util/login uses it as do a few other things I don't
keep up-to-date in my Athena home directory.
The biggest drawbacks to this system are that perl 5 and gnu make are
required. It is probably safe to say that everyone who will be
beta-testing kerberos 5 will have or can easily obtain gnu make. Perl
5 is a little trickier with the variety of platforms you support. The
only thing it is used for is dependency generation, though, and that
could be replaced by something else.
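For reference, the GNU make manual's own scheme for automatic
dependency generation (which, as noted above, is a starting point
rather than what I actually do) looks roughly like this:

```make
# Generate a .d fragment per source file from the compiler's -MM
# output, rewriting the target line so the .d file itself is remade
# whenever the headers it records change.
%.d: %.c
	$(CC) -MM $(CPPFLAGS) $< | \
	    sed 's,\($*\)\.o[ :]*,\1.o $@ : ,g' > $@

include $(SRCS:.c=.d)
```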
Anyway, take all this for what it's worth, but I thought I'd chime in
a bit here since I've thought a lot about this problem for projects
ranging in size from a few thousand source lines to a few million and
including systems that are being released free on the net, proprietary
systems for internal use, little utilities in my home directory, and
large government systems....
--
E. Jay Berkenbilt (qjb@netrail.net) | Member, League for Programming Freedom
| lpf@uunet.uu.net, http://www.lpf.org
--formerly qjb@MIT.EDU