Requirements analysis
daemon@ATHENA.MIT.EDU (Greg Anderson)
Tue Aug 22 07:56:24 2000
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Message-Id: <v04020a00b5c81a976464@[18.18.1.143]>
Date: Tue, 22 Aug 2000 07:56:21 -0400
To: magellan@mit.edu
From: Greg Anderson <ganderso@MIT.EDU>
Good morning,
I found this Cutter Edge message on requirements analysis and
thought it fit well with Discovery - how much is enough?
Greg
---------
Welcome to The Cutter Edge, the weekly e-mail service for IT
professionals, provided free by Cutter Information Corp. and Cutter
Consortium.
If this article has been forwarded to you by a friend, you can
register for your own free weekly subscription to the Cutter Edge
at http://www.cutter.com/consortium/index_e-mail.html .
"JUST ENOUGH REQUIREMENTS ANALYSIS" (JERA)
During discussions of requirements analysis, the question of how
much is enough comes up repeatedly. Alistair Cockburn, author
of *Surviving Object-Oriented Projects* and a forthcoming book on
developing use cases, has some very interesting ideas about how
much is enough. He thinks developing software is somewhat akin to
writing epic poetry, as much a collaborative knowledge-sharing as
an engineering activity. "Software development is a craft, it is
an engineering discipline, it is mathematical, it is a mysterious
art. It is like getting a whole community to write poetry together.
There are temperamental geniuses, hard requirements, communication
needs, and, under it all, humans working together building something
they don't quite understand," he writes. Cockburn views software
development as "a cooperative, finite, goal-seeking, group game."
Cockburn thinks (as I do) that light methods are generally best --
specifically that "lighter is better, to a point." His philosophy
embraces "stronger on communications, lighter on deliverables." He
proposes a set of light methodologies he calls crystal methodologies
(for "crystal clear") based on an analysis of a project's criticality
(a 4-point scale from "comfortable" to "life threatening") and
project size (a 7-point scale, ranging from 1-6 team members to more
than a thousand members). For example, a "C6" methodology is needed
to support the smallest team (1-6 people) working on a project that
might impact the user's "comfort" but not threaten life or cause
significant monetary loss. Cockburn begins to take the dread out of
discussing "methodology" by emphasizing communications and
collaboration and by introducing a range of characteristics by
which to evaluate methodology: weight, size, and specific density;
criticality; staff size; problem size; and project size. These
characteristics are all discussed from the perspective of "less is
better." (Cockburn's Web site, http://members.aol.com/acockburn/ ,
is worth a visit.)
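Cockburn's criticality-by-size grid can be pictured as a simple
lookup. The following Python sketch is illustrative only: the
criticality letters match the article's "C6" example, but the size
bucket boundaries beyond the first are assumptions, not Cockburn's
exact published scales.

```python
# Hedged sketch of a Cockburn-style methodology grid.
# The size buckets past 6 are invented for illustration.

CRITICALITY = {
    "comfort": "C",             # loss of user comfort
    "discretionary_money": "D", # loss of discretionary money
    "essential_money": "E",     # loss of essential money
    "life": "L",                # life threatening
}

# Each bucket label records the upper bound on team size.
SIZE_BUCKETS = [6, 20, 40, 80, 200, 500, 1000]

def classify(criticality: str, team_size: int) -> str:
    """Return a grid cell label such as 'C6' for the lightest case."""
    letter = CRITICALITY[criticality]
    for bound in SIZE_BUCKETS:
        if team_size <= bound:
            return f"{letter}{bound}"
    return f"{letter}1000+"  # more than a thousand members

# A 1-6 person team on a comfort-level project gets the
# lightest methodology in the grid:
print(classify("comfort", 4))  # C6
```

The point of the grid, in code as in prose, is that the methodology
label falls out of two project facts, not out of organizational habit.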
Cockburn's approach supports the contention that rigorous
documentation doesn't necessarily lead to improved mutual
understanding. Documentation is often developed with the unspoken
objective of being able to assign blame for failure, rather than to
improve understanding. At the end of a project, whether it has been
a success or a failure, we want to look back and be able to assign
a cause for either success or failure. Projects with outstanding
requirements documentation have failed. Projects with absolutely
no requirements documentation have succeeded. However, projects
with shared understanding of requirements, documented or not,
succeed. It's dangerous to confuse the two issues.
A set of factors for evaluating how much is enough might be:
* Make goals drive practices.
* Balance tacit and explicit knowledge.
* Balance collaboration and engineering.
* Practice zero-base documentation.
* Contain change.
* Use basic tools.
Make goals drive practices, or, in a similar vein, make sure the
development organization understands "requirements for use." A
project's requirements should incorporate a prioritized set of
scope, schedule, resource, and defect-level goals. The word
prioritized is important, particularly in fast-moving, turbulent
environments. If the highest priority is meeting a very aggressive
schedule, then the cost may be higher, the scope may need to be
reduced, or the permissible defect levels may need to be relaxed.
Alternatively, if the software controls a potentially life-threatening
medical device, then scope, schedule, and costs may need to be
subordinated to thorough defect removal. Different requirements
practices, or at least different levels of rigor for those practices,
would be appropriate for each of these projects.
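The idea of a prioritized goal set can be made concrete with a small
data structure. This sketch uses invented goal names and priorities;
the article prescribes the ordering principle, not these values.

```python
# Illustrative sketch: a project's prioritized goals, where the
# ordering (not the individual values) decides which constraint
# gets relaxed first under pressure.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    priority: int  # 1 = highest; never relax the top goal

# Hypothetical project: an aggressive ship date outranks everything.
project_goals = [
    Goal("schedule", 1),
    Goal("scope", 2),
    Goal("defect_level", 3),
    Goal("resources", 4),
]

def first_to_relax(goals):
    """Under pressure, relax the lowest-priority goal first."""
    return max(goals, key=lambda g: g.priority).name

print(first_to_relax(project_goals))  # resources
```

For the life-critical medical device in the text, defect_level would
move to priority 1 and the other three would be relaxed ahead of it.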
Requirements for use means that every project team, for every
project, needs to customize its approach to requirements analysis
by asking who will use the requirements and what they will be used
for. I did some consulting work for a large software developer
several years ago. For one project, their requirements document,
specifically their set of API specifications, was being used
extensively by other project teams, both inside and outside the
company. By the time the product was delivered, these external
groups had been coding to APIs that were nearly two years out of
date. Ouch! A very poor matching of practices with users and uses.
Don't confuse the thought process with the recording process. At
the end of their book, Larry Constantine and Lucy Lockwood (*Software
for Use: A Practical Guide to the Models and Methods of Usage-
Centered Design* http://www.cutter.com/consortium/index_books.html )
make the point that the "task model" is the most important model in
their approach, yet they are not saying "don't think about the
content model." For example, a good user interface designer might
just think through a content model beforehand, rearranging
possibilities in the planning stages before starting to build an
actual interface. While a beginner may need to go through each step
and document each phase, an experienced designer would be slowed
down by such a documentation process.
An intense team discussion, the exchange of both tacit and some
explicit knowledge scratched on a whiteboard, can lead to high
levels of shared understanding. A group of individuals in separate
cubicles, each producing elaborate documents, may contribute little
to shared understanding. Balancing tacit and explicit knowledge,
collaboration, and engineering practices creates an effective,
efficient requirements process.
Two ways to achieve this balance? Practice zero-base documentation
and contain change. First, for the purposes of this discussion,
let's define "code" as a catchall for anything that delivers a
solution to the end user (code, stored procedures, etc.). If we
follow a rigorous engineering approach to requirements, we can have
virtually thousands of requirements artifacts, each with multiple
links to other requirements or code artifacts. Introduce high rates
of change, and the team can spend enormous amounts of time
maintaining documents and tracing relationships. A simpler solution
is to maintain only the code and change only the code. Now, this can
create a variety of other problems, but it sure eliminates a lot of
work. Paperwork, according to Capers Jones's statistics, can consume
25%-35% of a project's cost. What is the right percentage for your
project?
The first task in zero-base documentation is to differentiate working
papers from formal deliverables. Working papers may be scratches on
a napkin or UML models in Rational Rose, but they are not formally
maintained; they are temporary. The criterion for formal deliverables
is that they are maintained. Note that I did not say *should* be
maintained, but *are* maintained -- there is a big difference. For a
tester working on test cases or a maintenance programmer enhancing a
program, the only thing worse than no documentation is incorrect
documentation. Labeling something as a working paper at least lets
the downstream users know there is no pretense that it is up-to-date.
Containing change, as opposed to controlling change, speaks to
understanding the formality and rigor of a change process. For
example, if I had to record and maintain a record of every change
that was made to this journal article, it would get published about
once a quarter. However, once it enters the publication process, as
opposed to the initial writing process, changes are more closely
monitored. For low to moderate levels of change, recording,
tracking, and evaluating (i.e., a formal process) may work well.
However, in a seeming reversal of logic, high levels of change will
overwhelm traditional change-management processes if the development
process is also highly rigorous.
In turbulent, e-business environments, one of the assumptions we have
to discard is that change is bad. Karl Wiegers (author of *Creating
a Software Engineering Culture* and the just-published *Software
Requirements* http://www.cutter.com/consortium/index_books.html ),
for example, writes that "the most effective technique for
controlling requirements creep is the ability to say 'No.' "
Consider the analogy of a motocross race: 50 participants in a line
at the start and only room for 2 or 3 in the first curve several
hundred yards away. Those who don't slow down in time crash. Those
who slow down too soon get mired in the pack of 47 other bikes and
fall behind, and it is very difficult to catch up. I think our
response to change should be similar -- those who deny too many
changes lose out to competitors; those who make too many last-minute
changes crash.
Rather than view change from a negative perspective (our traditional
view is that change is a necessary evil, but it's still evil), we
often need to view it as an opportunity for competitive advantage.
If my development or project management practices can accommodate
change later in the project than yours can, I have an advantage --
unless I crash, of course. One of the biggest challenges we face
in today's world is the unknown, the uncertainty. The manifestation
of uncertainty is change. Rapid response to change is a competitive
advantage. Knowing when to respond, and when not to, requires a
sophisticated decision-making process and a revised outlook on change
management.
I also recommend using basic tools. The rise and subsequent crash
of the computer-aided software engineering (CASE) market was in part
a reaction to "over-tooling." It is interesting to note that some
of the simple, easy-to-use, less feature-laden tools lasted longer
in the market than others that were highly integrated, full-featured,
hard to learn, and by the way, very expensive. Obviously, the tool
should fit the usage, but as a guide, for example, I would opt for
QSS requireIT over DOORS until I knew the expanded features were
necessary.
--Jim Highsmith, Editor, *e-business Application Delivery* (formerly
*Application Development Strategies*)
Senior Consultant, Sourcing Advisory Service
http://www.cutter.com/ead/
http://www.cutter.com/consortium/consultants/jhbio.html
+++++++++++++++++++++++++++++++++++
Software Management and Measurement Conference is Coming to San Jose!
Join your colleagues in San Jose, California, on 6-10 March 2000 for
the SM/ASM 2000 Conference! Attend informative sessions taught by
software management and measurement experts. Discover the tools and
services you need to support your software development efforts at the
Management & Measurement EXPO on 8-9 March 2000. Your conference
registration allows you to attend sessions from either conference!
For more information, call 800-423-8378/904-278-0707, or visit the
Web site: http://www.sqe.com/smasm .
+++++++++++++++++++++++++++++++++++
If you'd like to comment on today's Cutter Edge, send e-mail to
theedge@cutter.com, or send a letter by fax to +1 781 648 8707 or by
mail to The Cutter Edge, 37 Broadway, Arlington, MA 02474-5552 USA
+++++++++++++++++++++++++++++++++++
To unsubscribe to The Cutter Edge, send e-mail to
majordomo@cutter.com and include the message "unsubscribe
edge_list" in the body of the message.
(c) 2000 Cutter Information Corp. All rights reserved.