[41038] in North American Network Operators' Group
RE: multi-homing fixes
daemon@ATHENA.MIT.EDU (David R. Conrad)
Tue Aug 28 16:10:46 2001
Message-Id: <5.0.2.1.2.20010828111648.030c4df0@localhost>
Date: Tue, 28 Aug 2001 13:09:58 -0700
To: "David Schwartz" <davids@webmaster.com>
From: "David R. Conrad" <david.conrad@nominum.com>
Cc: <nanog@merit.edu>
In-Reply-To: <NOEJJDACGOHCKNCOGFOMGEEHDIAA.davids@webmaster.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; format=flowed
Errors-To: owner-nanog-outgoing@merit.edu
David,
At 08:07 AM 8/28/2001 -0700, David Schwartz wrote:
> If someone were to argue that, someone could reply that unless people
>cheat, no IP address space is wasted because the registries still only
>allocate based upon demonstrated need.
While "demonstrated need" is easy to say, it is much more difficult to
actually verify, particularly when the demonstrated need is projected into
the future.
>One could even argue that a smaller
>allocation policy saves IP space because it stops people from cheating by
>asking for more IP space than they need.
Exactly. The RIRs are forced to balance conservation of the remaining free
pool of addresses (the only thing the RIRs really have any control over and
even that is pretty tenuous) with the number of route prefixes in the
default free zone (something the RIRs have no control over but which ISPs
do). Historically (since CIDR and RFC 2050), the balance has swung
towards limiting the number of prefixes in the DFZ, primarily by
restricting the number of new prefixes allocated (there were other
policies, e.g., APNIC's policy permitting the return of multiple prefixes
for a single prefix of the next largest CIDR block with no questions asked,
but most of the focus has been on preventing new prefixes from being
allocated).
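To illustrate the aggregation behind that sort of prefix exchange: a minimal
sketch using Python's standard `ipaddress` module, with hypothetical prefixes
(the specific /24s and the /22 are my example, not anything from the policy
itself).

```python
import ipaddress

# Hypothetical example: a holder has four contiguous /24s that could be
# returned in exchange for a single covering prefix of the next largest
# CIDR block, as in the APNIC policy mentioned above.
held = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
    ipaddress.ip_network("10.0.3.0/24"),
]

# collapse_addresses merges adjacent prefixes into the smallest set of
# covering CIDR blocks -- four routes in the DFZ become one.
aggregate = list(ipaddress.collapse_addresses(held))
print(aggregate)  # [IPv4Network('10.0.0.0/22')]
```

The point of the exchange is exactly this reduction: one announcement in the
default free zone instead of four, with no change in the address space covered.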
From my perspective, the whole point of micro-allocations is to try to
move the balance back towards neutral a bit. Address space would be
allocated for those applications that need to be announced in the DFZ but
which don't represent a large amount of address space. Of course, figuring
out exactly what those applications are will be a bit of a challenge for
the policy makers, but hey, that's what they get paid for (well, if they
got paid for doing it, of course).
> I'm not sure I believe that this tragedy of the commons exists
> where people
>route on allocation boundaries.
A tragedy of the commons exists because there is a limited resource, an
incentive to do the wrong thing, and disincentives to do the right
thing. Until there are disincentives to do the wrong thing (e.g.,
filtering routes, or applying a charge to routes in the DFZ to encourage
aggregation), incentives to do the right thing, and/or the limitations in
the DFZ are removed, you _will_ get a tragedy of the commons.
>A distinct route for a distinct network of at
>least some minimal value doesn't create a tragedy of the commons.
Of course it can.
>Where you
>do have a tragedy of the commons is where people place routes without
>technical justification.
Technical justification does not remove the limitations on a resource; it
merely allows triage as to who gets to use the resource.
Micro-allocations and filtering are treating symptoms. The underlying
disease (the lack of a rational route announcement policy) could
conceivably be treated by applying standard market economics to the
problem, but there hasn't yet been enough incentive to figure out how to
do it (and/or to get over the historical resistance to doing it).
Rgds,
-drc
Speaking only for myself