The Cloud and the Emergence of Data Center Interconnect Networks

By Stuart Elby, Ph.D.

Senior Vice President, Cloud Network Strategy and Technology

The Cloud comprises three systems: data centers, the networks that interconnect those data centers with each other, and the networks that connect end users (which may be machines) to the data centers. The intense concentration of content within a relatively small number of Internet content providers, the proliferation of social media, video and SaaS, and the rise of machine-to-machine communication have fueled the explosive growth of Cloud traffic. Until recently, the data center interconnect (DCI) networks and the end-user-to-data-center networks were logically carved out of a common network, most often the Internet. Now, however, differing networking requirements and capacity demands are leading many Cloud operators to separate the DCI network from the user access network. Designing networking equipment optimized for the unique needs of DCI is an embryonic opportunity, but one that Infinera expects to outpace all other WAN growth as it quickly matures into a multi-billion-dollar market.

The topology of the Cloud can be broken into a hierarchy consisting of user access to the local data centers within a metropolitan area (metro data centers) and the interconnection of those metro data centers with each other as well as with the more geographically sparse mega data centers. Metro data centers are proliferating so that they can be closer to end users. However, metropolitan constraints on physical space, electricity supply and cost limit metro data centers to medium scale (~50,000 servers). A metro or regional area houses multiple metro data centers for each Cloud operator and for multiple Cloud operators, and their number continues to grow quickly: ACG predicts that by 2019 there will be 60% more metro data centers than today1. All of these data centers are interconnected, either directly in the case of a single Cloud operator or via Cloud exchange data centers when interconnecting data centers between Cloud operators. We refer to these connections as Metro Cloud (<600 km), also known as Metro DCI.

End users connect to these metro Clouds via their ISP either directly or through a Cloud exchange; we have found the latter to be more typical for large enterprise customers.  We refer to these connections as Metro Aggregation.

At the other end of the data center spectrum are hyper-scale, or mega, data centers with 200,000+ servers. The economics of space and power drive these data centers to be built on rural campuses in remote locations around the globe (think Iowa, Iceland and Sweden). These data centers connect back to the metro data centers and Cloud exchanges, thereby requiring long haul DCI network connectivity. Figure 1 illustrates a simplified view of the Cloud topology.

[Figure 1: Simplified view of the Cloud topology]
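To make the taxonomy concrete, here is a minimal sketch (illustrative only, not Infinera code) that classifies a Cloud link as Metro Aggregation, Metro DCI or Long Haul DCI from its endpoints and distance. The 600 km threshold and the endpoint roles follow the definitions above; the labels and data structure are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class CloudLink:
    a_end: str           # "metro-dc", "mega-dc", "cloud-exchange" or "isp-edge"
    z_end: str
    distance_km: float

def classify(link: CloudLink) -> str:
    """Classify a link using the hierarchy described above."""
    dc_types = {"metro-dc", "mega-dc", "cloud-exchange"}
    if link.a_end in dc_types and link.z_end in dc_types:
        # data-center-to-data-center traffic is DCI; split by reach
        return "Metro DCI" if link.distance_km < 600 else "Long Haul DCI"
    return "Metro Aggregation"  # end users reaching the metro Cloud via their ISP

print(classify(CloudLink("metro-dc", "cloud-exchange", 40)))   # Metro DCI
print(classify(CloudLink("metro-dc", "mega-dc", 1800)))        # Long Haul DCI
print(classify(CloudLink("isp-edge", "metro-dc", 25)))         # Metro Aggregation
```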

Over the next four years, the bandwidth growth rates for Metro Aggregation, Metro DCI and long haul (LH) DCI networks are expected to be 30-35%2, 120%3 and 42%3 respectively. The DCI growth rates are outpacing the Metro Aggregation growth rate due to the bandwidth amplification effect, coupled with the growth in metro data centers. A good, though not unique, example of the bandwidth amplification effect is Facebook's traffic: a single HTTP request from an end user results in data center traffic >900 times the size of the original query4. Much of this traffic remains inside a single data center, but even if only 15% of it traverses the DCI network, the result is >100x bandwidth amplification relative to the traffic traversing the metro aggregation network to the Cloud.
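The arithmetic behind that claim is simple enough to show directly. A back-of-the-envelope sketch using only the figures cited above (>900x amplification, ~15% of internal traffic crossing the DCI network):

```python
# Normalize the inbound HTTP request to one unit of metro aggregation traffic.
user_request = 1.0
amplification = 900   # internal data center traffic per unit of request (cited above)
dci_fraction = 0.15   # share of internal traffic assumed to cross the DCI network

dci_traffic = user_request * amplification * dci_fraction
print(f"DCI traffic per unit of user traffic: {dci_traffic:.0f}x")  # -> 135x, i.e. >100x
```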

DCI Networking: DCI network design is shaped by the same factors that are paramount for the intra-data center infrastructure. Ignoring price, some key decision factors are:

(more…)

February 24, 2015 at 1:28 PM

It’s Not Just Your Customers That Are Hungry for Bandwidth

By Chris Champion

Senior Vice President, Sales, EMEA

Last year Google revealed that sharks had been caught on camera attacking its undersea cables. And it's not just sharks that pose a physical threat: fishing trawlers and natural disasters have the power to cause extensive, and expensive, damage too.

Network resiliency has long been the goal for network operators. However, as age-old physical challenges join forces with increasing demands from new bandwidth-hungry services, the hurdles are getting harder to jump. Laying thousands of kilometres of fibre cable on the ocean floor has long been a test for network operators, but so is meeting an insatiable appetite for non-stop data streaming.

We've witnessed an explosive rise in data traffic in recent years. Businesses and consumers increasingly demand the safe and speedy delivery of data-intensive content, for the likes of video streaming and gaming. Moreover, the move to cloud-based data centres and storage architectures places additional strain on existing networks.

(more…)

February 18, 2015 at 12:11 PM

Out of the Telco, Into the Datacenter

By Vinay Rathore

Sr. Director, Field and Segment Marketing

In December 2010, Google announced the purchase of 111 8th Avenue, New York at a hefty price tag of nearly $2B. Those of you familiar with the location may know that 111 8th Avenue is one of the premier carrier hotels in Manhattan, offering easy interconnectivity between different carriers as well as a central location for Manhattan data center users. While $2B is a lot to pay, the building's location and 2.9M square feet of space give Google a position that is the envy of many of its telco peers.

The acquisition of 111 8th Avenue is just one example of the value Internet content providers (ICPs) like Google are placing on the future of data centers.  Key questions that many are asking include: Why such a massive investment? What are they planning for the future?

111 8th Avenue

Over the last few years, ICPs and data center operators have made significant investments, including the purchase of real estate, the construction of massive data centers, and spending on optical networking equipment and fiber that rivals some of the world's largest operators. In fact, the rate of spend has accelerated so much that, according to ISI Research, the top seven ICPs will spend as much as AT&T and Verizon combined by 2017. That figure is staggering once you factor in that AT&T and Verizon spend enormous sums to support legacy infrastructure, something ICPs don't have to do.

Based on these trends, there seem to be more questions than answers:

(more…)

February 13, 2015 at 1:44 PM

NANOG 63 Beer ‘n Gear

By Vinay Rathore

Sr. Director, Field and Segment Marketing

I was a NANOG virgin. The last few days were my first initiation into the critical mass of people who, in many cases, are responsible for making the Internet run, and run well: from small companies that help optimize how things run inside the server and the data center (e.g., DNS optimization), to large companies here to learn better ways to design, build, troubleshoot and manage their next-generation cloud networks. It felt like a fairly tight-knit group, but there were plenty of pointed discussions that reminded me these are smart people with strong opinions they are willing to defend.

In any case, plenty of technical presentations and idea sharing took place, but the best part of the event was the social activities. The one that stands out is the Beer 'n Gear event. The name says it all, sort of: free food, an open bar and swag… grab something for yourself, your better half or even the kids. The better the swag, the more likely someone will come visit your booth. The standard fare seems to be T-shirts, which are very popular. But then there were beer glasses with colored lights on the bottom, stainless steel flasks with the Texas Longhorns logo on them, and Infinera gave out nice silver metal flashlight-USB charger packs. It wasn't all about the swag, though; it was about the conversations that took place while giving it away. Some people just came and grabbed the giveaway; others stopped and spent time asking questions.

Two unexpected surprises for me:

1) An Infinera customer, Telia Sonera International Carrier, also a sponsor at the Beer 'n Gear, had a simple black T-shirt promoting their bandwidth service and, more importantly, touting what you need to know about them…

[Photo: the Telia Sonera International Carrier T-shirt]

What they sell – 100G (their wholesale bandwidth offer), AS1299 (their Internet Autonomous System Number) and DTN-X (the Infinera platform that makes it happen).

2) After looking at our demo, a cloud operator showed me the current system he is using to carry 26x10G connections between two data centers: an entire rack of equipment from a popular European vendor. I asked him if he saw value in replacing all of that with a single 2RU box… that consumes only about 500W… and gives him 22 additional ports to upsell. After a lot of disbelief, and further explanation of the demo and the system's capabilities, he smiled, gave me his card, took a flashlight/USB charger and told me to have my sales guy call him.
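For the curious, the numbers quoted in that conversation are easy to sanity-check. A quick sketch using only the figures above (26 in-service 10G ports, 22 spare ports, roughly 500W); the per-Gb/s efficiency is a derived illustration, not a product specification:

```python
in_service_ports = 26    # 10G links between the two data centers today
spare_ports = 22         # headroom left over to upsell
port_rate_gbps = 10
box_power_w = 500        # approximate draw of the 2RU box, as quoted

used_gbps = in_service_ports * port_rate_gbps                    # 260 Gb/s in service
total_gbps = (in_service_ports + spare_ports) * port_rate_gbps   # 480 Gb/s capacity

print(f"In service: {used_gbps} Gb/s, headroom: {total_gbps - used_gbps} Gb/s")
print(f"Power at full fill: {box_power_w / total_gbps:.2f} W per Gb/s")
```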

All in all, it was a great event and I look forward to the next one.  Oh yeah, I think we gave away about 250 Infinera-branded flashlights with USB chargers built-in. So for those of you who got our swag, light up and power on.

To get up to speed with the Infinera Cloud Xpress platform, watch here. And click here for the latest information on the Infinera DTN-X platform, now enhanced with packet functionality.


February 5, 2015 at 1:28 PM

Customer Profile: Pacnet

By Andy Lumsden

Chief Technology Officer, Pacnet

Pacnet is Asia-Pacific's leading provider of managed data connectivity solutions to major telecommunications carriers, large multinational enterprises and government entities. We run the region's most extensive privately owned submarine cable systems, with connectivity to 19 interconnected data centers across 15 cities and 10 countries. Pacnet was the first to provide 100G connectivity across Asia and into the US, as well as the first to deliver and commercially offer an SDN- and NFV-based platform to customers. Infinera's technology and expertise have supported us in bringing these key innovations to market.

As the market continues its rapid adoption of the cloud, increased mobility and bandwidth-hungry applications, customers seek an alternative to traditional networks and bandwidth-provisioning models that struggle to keep pace. With the proliferation of the hybrid cloud model, they look for solutions that allow added burstability, as well as more agile systems that can move workloads from one location to another.

(more…)

January 27, 2015 at 11:01 AM

PTC 2015: New Paradigms in Networking: The Cloud Effect

By Steve Grubb

Infinera Fellow

Many trends were discussed here at the global PTC 2015 conference, but one stood out above the others and was discussed across a broad spectrum of networks and carriers, both subsea and terrestrial: what has been termed "the cloud effect." Financial analysts have noted that Internet content providers (ICPs) are moving into new markets very quickly and are projected to match the spending levels of several major Tier 1 carriers within two to three years. The ICPs generally have little to no legacy infrastructure and are therefore able to focus almost entirely on innovation, speed and performance. In response, traditional carriers are forced to at least try to stay on par with the aggressive growth of the ICPs and are starting to adopt more flexible, cloud-based strategies.

The aggressive, fast-moving carriers responding to this cloud-driven challenge are beginning to transform their existing networks into simplified, collapsed, software-driven networks, as shown in the figure below. We are beginning to see the convergence of Layers 0 through 2 into a single intelligent transport platform, enabling significant network savings and increased ease of operations. Higher layers are bundled together into services and virtualized in the cloud.

[Figure: Carriers collapsing Layers 0 through 2 into a single intelligent transport platform]

As Jim Fagan, President of Managed Services at Pacnet, succinctly summarized in an industry briefing session, the new breed of carriers and their customers are demanding "capacity where they want it, when they want it, for the duration they want it, and in flexible capacity increments."

(more…)

January 22, 2015 at 1:18 PM

Infinera’s Expanding FlexCoherent Toolkit Allows Subsea Operators to Maximize Fiber Capacity and Reach

By Emily Burmeister, Ph.D.

Principal Subsea Development Engineer

Infinera's subsea team has the exciting responsibility of traveling around the world to test and demonstrate Infinera's world-class technology on subsea networks. We recently had the opportunity to work with Telstra Global Enterprise Services to test our expanded toolset of FlexCoherent™ modulation formats and methods for improving fiber transmission performance. In this trial we focused on stressing our polarization-multiplexed PM-8QAM and PM-16QAM formats with our new, higher-gain SD-FEC to achieve higher maximum capacity and further stretch the value and lifetime of the wet plant for our customer. This trial complemented an earlier one we performed with Telstra that tested our novel PM-3QAM modulation. The elegance of the Infinera Intelligent Transport Network design is that all modulation types (PM-BPSK, PM-3QAM, PM-QPSK, PM-8QAM, PM-16QAM) can be supported on a single line card, based on the FlexCoherent Processor working in conjunction with our Photonic Integrated Circuit (PIC).

The objective of a trial is to work toward achieving the most bits per hertz of repeater bandwidth and the most bits per line card, delivering the most efficient solution. The subsea link provides the boundary conditions: the repeater gain shape and power, the fiber type, the dispersion map and the distances. In subsea, more than anywhere else, it is up to the transmission technology to adapt to the link, rather than designing the network to fit the achievable reach. Infinera's FlexCoherent technology performs this adaptation on the line card itself, allowing the user to software-configure the modulation format, and thus the reach and spectral efficiency.
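The capacity side of that trade-off follows directly from the bits each format carries per symbol. A simplified sketch: the bits-per-symbol values follow from the modulation names (PM-3QAM is a BPSK/QPSK time-hybrid at 1.5 bits per symbol per polarization), while the symbol rate is an assumed round number for illustration, not a DTN-X specification.

```python
bits_per_symbol = {   # per polarization
    "PM-BPSK":  1,
    "PM-3QAM":  1.5,  # time-hybrid of BPSK and QPSK
    "PM-QPSK":  2,
    "PM-8QAM":  3,
    "PM-16QAM": 4,
}

baud_gbd = 33  # assumed symbol rate (Gbaud), for illustration only

for fmt, bits in bits_per_symbol.items():
    # two polarizations double the bits per symbol; raw rate includes FEC overhead
    raw_rate = 2 * bits * baud_gbd
    print(f"{fmt:9s}: {raw_rate:5.1f} Gb/s raw per carrier")
```

Higher-order formats pack more bits into the same repeater bandwidth but require more OSNR, which is why reach shrinks as spectral efficiency rises, and why a software-selectable format lets the line card be tuned to each link.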

In December the team headed to Japan and South Korea to demonstrate the benefit of the highest-spectral-efficiency modulation formats over a 2,200 km dispersion-managed link between the two countries. Unlike many previous "hero" experiments, our test ran on conventional submarine fiber that is widely deployed, not on the large-area fiber just starting to be deployed in new submarine builds; we wanted to show what service providers can do with existing assets. From simulations and previous data, QPSK was expected to have plenty of margin with our new higher-gain SD-FEC. Extra margin means untapped potential capacity, and the goal was to demonstrate the usefulness of the new modulation formats in our toolbox to reap that potential.

(more…)

January 15, 2015 at 5:05 AM


