

Re: Extreme congestion (was Re: inter-domain link recovery)


In-Reply-To: <20070817105704.GS29648@MrServer.telecomplete.net>
Cc: "Patrick W. Gilmore" <patrick@ianai.net>
From: "Patrick W. Gilmore" <patrick@ianai.net>
Date: Fri, 17 Aug 2007 07:28:43 -0400
To: nanog@merit.edu
Errors-To: owner-nanog@merit.edu


On Aug 17, 2007, at 6:57 AM, Stephen Wilcox wrote:
> On Thu, Aug 16, 2007 at 09:07:31AM -0700, Hex Star wrote:
>> How does Akamai handle traffic congestion so seamlessly? Perhaps
>> we should look at existing setups implemented by companies such as
>> Akamai for guidelines regarding how to resolve this kind of issue...
>
> and if you are a Content Delivery Network wishing to use a cache
> deployment architecture, you should do just that ... but for
> networks with big backbones, as per this discussion, we need to do
> something else

Ignoring "Akamai" and looking at just content providers (CDN or  
otherwise) in general, there is a huge difference between telling a  
web server "do not serve more than 900 Mbps on your GigE port", and a  
router which simply gets bits from random sources to be forwarded to  
random destinations.

IOW: Steve is right; those are two different topics.
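
To make the distinction concrete: the web-server side of that is just
a token bucket in front of the send path, which a content provider
can do because it controls the application. A minimal sketch in
Python (the TokenBucket class and send() wrapper are illustrative,
not any particular CDN's code):

    import time

    class TokenBucket:
        """Minimal token-bucket limiter: never send faster than rate_bps."""

        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps        # refill rate, bits per second
            self.capacity = burst_bits  # maximum burst size, bits
            self.tokens = burst_bits
            self.last = time.monotonic()

        def consume(self, nbits):
            """Block until nbits of credit are available, then spend them."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbits:
                    self.tokens -= nbits
                    return
                time.sleep((nbits - self.tokens) / self.rate)

    # Cap a GigE-attached server at 900 Mbps, leaving headroom on the port.
    bucket = TokenBucket(rate_bps=900_000_000, burst_bits=8 * 1500 * 100)

    def send(sock, chunk):
        bucket.consume(8 * len(chunk))  # bits, not bytes
        sock.sendall(chunk)

A router in the middle has no such hook: it cannot tell random
sources to slow down, it can only queue or drop what arrives.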

-- 
TTFN,
patrick

