[12528] in North American Network Operators' Group


Re: Traffic Engineering (fwd)

daemon@ATHENA.MIT.EDU (Michael Shields)
Thu Sep 18 22:13:37 1997

From: shields@crosslink.net (Michael Shields)
Mail-Copies-To: never
To: "Sean M. Doran" <smd@clock.org>
Cc: <osborne@terra.net>, nanog@merit.edu
Date: 19 Sep 1997 01:59:14 +0000
In-Reply-To: "Sean M. Doran"'s message of "18 Sep 1997 16:31:36 -0400"

In article <ytafhal5xz.fsf@cesium.clock.org>,
"Sean M. Doran" <smd@clock.org> wrote:
> Why?  ping and traceroute are poor predictors of locality
> and available bandwidth between the originator and target
> of those tools.  

You can get an interesting set of data by sending a few rounds of
pings of different sizes.  Make a scatterplot of ping packet size
vs. delay and fit a line to the data; its slope is the inverse of
the available bandwidth, and the y-intercept is the raw latency.
The standard deviation gives you an idea of congestion.
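A minimal sketch of that fit, assuming you've already collected
(packet size, round-trip time) pairs -- the samples below are
synthetic, standing in for real ping measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical samples: (payload bytes, round-trip time in seconds),
# modeled as ~10 ms base latency over a ~125000 B/s path.
samples = [(64, 0.01051), (512, 0.01410), (1024, 0.01819), (1472, 0.02178)]

slope, intercept = fit_line([s for s, _ in samples],
                            [d for _, d in samples])
bandwidth = 1.0 / slope   # bytes/second: inverse of the slope
latency = intercept       # seconds of size-independent delay

print("bandwidth ~ %.0f B/s" % bandwidth)
print("latency   ~ %.1f ms" % (latency * 1000))
```

In practice you'd take many rounds per size and look at the spread
of the residuals for the congestion estimate.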

Is it crude?  Yes, but I don't know of anything better.  Is it ugly?
Yes, but no uglier than traceroute.

This isn't my idea; I saw it in "bing".

Once you have a metric coded up, you can have each potential data
source measure the quality of their connectivity to the endpoint.
After they have agreed amongst themselves (probably via an
intermediary server) which server is best, they can cause the
session to be moved to the server most likely to be optimal.
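The selection step itself is simple once each source has a metric; a
sketch of what the intermediary might do (server names and metric
values are illustrative, not from the post):

```python
def pick_best(reports):
    """reports maps server name -> measured metric, where lower is
    better (e.g. some combination of the latency and inverse-bandwidth
    estimates from the ping fit).  Returns the best server."""
    return min(reports, key=reports.get)

# Hypothetical measurements gathered by each potential data source.
reports = {
    "mirror-a.example.net": 42.0,
    "mirror-b.example.net": 17.5,
    "mirror-c.example.net": 88.1,
}

best = pick_best(reports)
print("move session to", best)
```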
-- 
Shields, CrossLink.
