[152190] in North American Network Operators' Group


Re: OpenFlow @ GOOG

daemon@ATHENA.MIT.EDU (Marshall Eubanks)
Tue Apr 17 13:10:29 2012

In-Reply-To: <20120417163725.GC28282@leitl.org>
Date: Tue, 17 Apr 2012 13:08:49 -0400
From: Marshall Eubanks <marshall.eubanks@gmail.com>
To: Eugen Leitl <eugen@leitl.org>
Cc: NANOG list <nanog@nanog.org>
Errors-To: nanog-bounces+nanog.discuss=bloom-picayune.mit.edu@nanog.org

I wonder if this will be contributed to the DC (DataCenter) work
currently gearing up in the IETF.

Regards
Marshall

On Tue, Apr 17, 2012 at 12:37 PM, Eugen Leitl <eugen@leitl.org> wrote:
>
> http://www.wired.com/wiredenterprise/2012/04/going-with-the-flow-google/all/1
>
> Going With The Flow: Google's Secret Switch To The Next Wave Of Networking
>
> By Steven Levy April 17, 2012 | 11:45 am |
>
> Categories: Data Centers, Networking
>
> In early 1999, an associate computer science professor at UC Santa Barbara
> climbed the steps to the second-floor headquarters of a small startup in Palo
> Alto, and wound up surprising himself by accepting a job offer. Even so, Urs
> Hölzle hedged his bet by not resigning from his university post, but taking a
> year-long leave.
>
> He would never return. Hölzle became a fixture in the company -- called
> Google. As its czar of infrastructure, Hölzle oversaw the growth of its
> network operations from a few cages in a San Jose co-location center to a
> massive internet power; a 2010 study by Arbor Networks concluded that if
> Google were an ISP it would be the second largest in the world (the largest
> is Level 3, which services over 2,700 major corporations in 450 markets over
> 100,000 fiber miles).
>
> Google treats its infrastructure like a state secret, so Hölzle rarely speaks
> about it in public. Today is one of those rare days: at the Open Networking
> Summit in Santa Clara, California, Hölzle is announcing that Google has
> essentially remade a major part of its massive internal network, providing
> the company a bonanza in savings and efficiency. Google has done this by
> brashly adopting a new and radical open-source technology called OpenFlow.
>
> Hölzle says that the idea behind this advance is the most significant change
> in networking in the entire lifetime of Google.
>
> In the course of his presentation Hölzle will also confirm for the first time
> that Google -- already famous for making its own servers -- has been
> designing and manufacturing much of its own networking equipment as well.
>
> "It's not hard to build networking hardware," says Hölzle, in an advance
> briefing provided exclusively to Wired. "What's hard is to build the software
> itself as well."
>
> In this case, Google has used its software expertise to overturn the current
> networking paradigm.
>
> If any company has potential to change the networking game, it is Google. The
> company has essentially two huge networks: the one that connects users to
> Google services (Search, Gmail, YouTube, etc.) and another that connects
> Google data centers to each other. It makes sense to bifurcate the
> information that way because the data flow in each case has different
> characteristics and demand. The user network has a smooth flow, generally
> adopting a diurnal pattern as users in a geographic region work and sleep.
> The performance of the user network also has higher standards, as users will
> get impatient (or leave!) if services are slow. In the user-facing network
> you also need every packet to arrive intact -- customers would be pretty
> unhappy if a key sentence in a document or e-mail was dropped.
>
> The internal backbone, in contrast, has wild swings in demand -- it is
> "bursty" rather than steady. Google is in control of scheduling internal
> traffic, but it faces difficulties in traffic engineering. Often Google has
> to move many petabytes of data (indexes of the entire web, millions of backup
> copies of user Gmail) from one place to another. When Google updates or
> creates a new service, it wants it available worldwide in a timely fashion --
> and it wants to be able to predict accurately how long the process will
> take.
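The prediction Hölzle is after is, at its core, simple arithmetic: data volume divided by effective link bandwidth. A back-of-the-envelope sketch (all figures here are illustrative stand-ins, not Google's actual numbers):

```python
def transfer_time_hours(data_petabytes, link_gbps, utilization=0.9):
    """Estimate bulk-transfer time for a petabyte-scale copy.

    Illustrative only: real transfers contend with protocol overhead,
    competing traffic, and multiple parallel paths.
    """
    bits = data_petabytes * 8e15            # 1 PB = 8e15 bits
    effective_bps = link_gbps * 1e9 * utilization
    return bits / effective_bps / 3600      # seconds -> hours

# e.g. moving 10 PB over a single 100 Gbps link at 90% utilization
print(round(transfer_time_hours(10, 100), 1))   # -> 246.9
```

At roughly ten days for 10 PB on one 100 Gbps link, it is easy to see why squeezing more utilization out of existing links matters so much for this kind of traffic.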
>
> "There's a lot of data center to data center traffic that has different
> business priorities," says Stephen Stuart, a Google distinguished engineer
> who specializes in infrastructure. "Figuring out the right thing to move out
> of the way so that more important traffic could go through was a challenge."
>
> But Google found an answer in OpenFlow, an open-source system jointly devised
> by scientists at Stanford and the University of California at Berkeley.
> Adopting an approach known as Software Defined Networking (SDN), OpenFlow
> gives network operators a dramatically increased level of control by
> separating the two functions of networking equipment: packet switching and
> management. OpenFlow moves the control functions to servers, allowing for
> more complexity, efficiency and flexibility.
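The split described above can be sketched in a few lines: the switch only matches packets against a flow table, while a controller running on an ordinary server decides what the rules are. Everything below (class names, match fields, action strings) is a simplified illustration, not the actual OpenFlow protocol:

```python
class Switch:
    """Data plane: a dumb, fast table lookup; no local routing decisions."""
    def __init__(self):
        self.flow_table = {}               # match fields -> action

    def install_rule(self, match, action):
        self.flow_table[match] = action    # invoked by the controller

    def forward(self, packet):
        # Unknown traffic is punted to the controller for a decision.
        return self.flow_table.get((packet["dst"],), "send_to_controller")

class Controller:
    """Control plane: global view, pushes rules down to switches."""
    def program(self, switch, routes):
        for dst, port in routes.items():
            switch.install_rule((dst,), f"output:{port}")

sw = Switch()
Controller().program(sw, {"10.0.0.2": 1, "10.0.0.3": 2})
print(sw.forward({"dst": "10.0.0.2"}))    # -> output:1
```

The point of the design is visible even in the toy: all the intelligence lives in `Controller`, which can be arbitrarily sophisticated software, while `Switch` stays simple and fast.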
>
> "We were already going down that path, working on an inferior way of doing
> software-defined networking," says Hölzle. "But once we looked at OpenFlow,
> it was clear that this was the way to go. Why invent your own if you don't
> have to?"
>
> Google became one of several organizations to sign on to the Open Networking
> Foundation, which is devoted to promoting OpenFlow. (Other members include
> Yahoo, Microsoft, Facebook, Verizon and Deutsche Telekom, and an innovative
> startup called Nicira.) But none of the partners so far have announced any
> implementation as extensive as Google's.
>
> Why is OpenFlow so advantageous to a company like Google? In the traditional
> model you can think of routers as akin to taxicabs getting passengers from
> one place to another. If a street is blocked, the taxi driver takes another
> route -- but the detour may be time-consuming. If the weather is lousy, the
> taxi driver has to go slower. In short, the taxi driver will get you there,
> but you don't want to bet the house on your exact arrival time.
>
> With the software-defined network Google has implemented, the taxi situation
> no longer resembles the decentralized model of drivers making their own
> decisions. Instead you have a system like the one envisioned when all cars
> are autonomous, and can report their whereabouts and plans to some central
> repository which also knows of weather conditions and aggregate traffic
> information. Such a system doesn't need independent taxi drivers, because the
> system knows where the quickest routes are and what streets are blocked, and
> can set an ideal route from the outset. The system knows all the conditions
> and can institute a more sophisticated set of rules that determines how the
> taxis proceed, and even figure out whether some taxis should stay in their
> garages while fire trucks pass.
>
> Therefore, operators can slate trips with confidence that everyone will get
> to their destinations in the shortest times, and precisely on schedule.
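In graph terms, the central scheduler can run an ordinary shortest-path computation over a global view of link costs, where a "blocked street" is simply a link with a prohibitive weight. A toy sketch with made-up links and costs:

```python
import heapq

def best_route(graph, src, dst):
    """Dijkstra's shortest path: with a global view of link costs
    (congestion, outages), pick the end-to-end cheapest path up front,
    instead of letting each hop decide greedily."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst                # walk predecessors back to src
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# The direct-looking route via B is "blocked" (cost 100), and the
# scheduler knows it before any traffic is sent.
links = {"A": {"B": 1, "C": 5}, "B": {"D": 100}, "C": {"D": 1}, "D": {}}
print(best_route(links, "A", "D"))        # -> ['A', 'C', 'D']
```

Real traffic engineering layers priorities and bandwidth reservations on top of this, but the advantage is the same: decisions are made once, centrally, with full information.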
>
>
> Making Google's entire internal network work with SDN thus provides all sorts
> of advantages. In planning big data moves, Google can simulate everything
> offline with pinpoint accuracy, without having to access a single networking
> switch. Products can be rolled out more quickly. And since "the control
> plane" is the element in routers that most often needs updating, networking
> equipment is simpler and more enduring, requiring less labor to service.
>
> Most important, the move makes network management much easier.
>
> "You have all those multiple devices on a network but you're not really
> interested in the devices -- you're interested in the fabric, and the
> functions the network performs for you," says Hölzle. "Now we don't have to
> worry about those devices -- we manage the network as an overall thing. The
> network just sort of understands."
>
> The routers Google built to accommodate OpenFlow on what it is calling "the
> G-Scale Network" probably did not mark the company's first effort in
> making such devices. (One former Google employee has told Wired's Cade Metz
> that the company was designing its own equipment as early as 2005. Google
> hasn't confirmed this, but its job postings in the field over the past few
> years have provided plenty of evidence of such activities.) With SDN, though,
> Google absolutely had to go its own way in that regard.
>
> "In 2010, when we were seriously starting the project, you could not buy any
> piece of equipment that was even remotely suitable for this task," says
> Hölzle. "It was not an option."
>
> The process was conducted, naturally, with stealth -- even the academics who
> were Google's closest collaborators in hammering out the OpenFlow standards
> weren't briefed on the extent of the implementation. In early 2010, Google
> established its first SDN links, among its triangle of data centers in North
> Carolina, South Carolina and Georgia. Then it began replacing the old
> internal network with G-Scale machines and software -- a tricky process since
> everything had to be done without disrupting normal business operations.
>
> As Hölzle explains in his speech, the method was to pre-deploy the equipment
> at a site, take down half the site's networking machines, and hook them up to
> the new system. After testing to see if the upgrade worked, Google's
> engineers would then repeat the process for the remaining 50 percent of the
> networking in the site. The process went briskly in Google's data centers
> around the world. By early this year, all of Google's internal network was
> running on OpenFlow.
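The half-at-a-time pattern Hölzle describes can be sketched as a simple loop: upgrade one half of a site's switches while the other half keeps carrying traffic, verify, then do the rest. The function names below are hypothetical placeholders, not Google tooling:

```python
def upgrade_site(switches, deploy, test_ok):
    """Upgrade a site in two halves so capacity never drops to zero.

    deploy  -- callable that cuts one switch over to the new system
                (the gear itself is assumed pre-deployed on site)
    test_ok -- callable returning True if a batch passes health checks
    """
    half = len(switches) // 2
    for batch in (switches[:half], switches[half:]):
        for sw in batch:
            deploy(sw)
        if not test_ok(batch):
            # Stop before touching the still-working half.
            raise RuntimeError(f"upgrade failed on batch {batch}")

upgraded = []
upgrade_site(
    ["sw1", "sw2", "sw3", "sw4"],
    deploy=upgraded.append,                # stand-in for the real cutover
    test_ok=lambda batch: True,            # stand-in health check
)
print(upgraded)                            # -> ['sw1', 'sw2', 'sw3', 'sw4']
```

The key property is that a failed health check halts the rollout while half the site is still on the old, known-good network.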
>
> Though Google says it's too soon to get a measurement of the benefits, Hölzle
> does confirm that they are considerable. "Soon we will be able to get very
> close to 100 percent utilization of our network," he says. In other words,
> all the lanes in Google's humongous internal data highway can be occupied,
> with information moving at top speed. The industry considers thirty or forty
> percent utilization a reasonable payload -- so this implementation is like
> boosting network capacity two or three times. (This doesn't apply to the
> user-facing network, of course.)
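The "two or three times" figure is just the ratio of the new utilization to the old. A quick check of the arithmetic (the 95 percent target below is an assumption standing in for "very close to 100 percent"):

```python
def capacity_gain(old_util, new_util=0.95):
    """Effective capacity multiplier from raising link utilization,
    with no new links added."""
    return new_util / old_util

print(round(capacity_gain(0.40), 1))   # -> 2.4  (from a 40% baseline)
print(round(capacity_gain(0.30), 1))   # -> 3.2  (from a 30% baseline)
```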
>
> Though Google has made a considerable investment in the transformation --
> hundreds of engineers were involved, and the equipment itself (when design
> and engineering expenses are considered) may cost more than buying vendor
> equipment -- Hölzle clearly thinks it's worth it.
>
> Hölzle doesn't want people to make too big a deal of the confirmation that
> Google is making its own networking switches -- and he emphatically says that
> it would be wrong to conclude that because of this announcement Google
> intends to compete with Cisco and Juniper. "Our general philosophy is that
> we'll only build something ourselves if there's an advantage to do it --
> which means that we're getting something we can't get elsewhere."
>
> To Hölzle, this news is all about the new paradigm. He does acknowledge that
> challenges still remain in the shift to SDN, but thinks they are all
> surmountable. If SDN is widely adopted across the industry, that's great for
> Google, because virtually anything that happens to make the internet run more
> efficiently is a boon for the company.
>
> As for Cisco and Juniper, he hopes that as more big operations seek to adopt
> OpenFlow, those networking manufacturers will design equipment that supports
> it. If so, Hölzle says, Google will probably be a customer.
>
> "That's actually part of the reason for giving the talk and being open," he
> says. "To encourage the industry -- hardware, software and ISPs -- to look
> down this path and say, 'I can benefit from this.'"
>
> For proof, big players in networking can now look to Google. The search giant
> claims that it's already reaping benefits from its bet on the new revolution
> in networking. Big time.
>
> Steven Levy
>
> Steven Levy's deep dive into Google, In The Plex: How Google Thinks, Works
> And Shapes Our Lives, was published in April 2011. Steven also blogs at
> StevenLevy.com.
>
>

