Sr. Director, Field and Segment Marketing
One of the most exciting shows in the world of optics is the Optical Fiber Communication Conference and Exposition (OFC). This year in Los Angeles, the show is expected to draw over 12,000 attendees and feature more than 550 exhibiting companies representing the entire networking ecosystem. This flagship show brings together optical communications companies and networking professionals to discuss industry trends, next-generation solutions and cutting-edge technology.
At both the Infinera Express and our booth #1859, visitors will be able to see demonstrations of the Infinera Cloud Xpress (CX) and Packet Switching Module (PXM), both announced in the third quarter of 2014. In addition, Infinera will show off the capabilities of its Fast Shared Mesh Protection (FastSMP™) technology and advances in photonics, and will provide a software-defined networking (SDN) demo.
Senior Vice President, Cloud Network Strategy and Technology
The Cloud comprises three systems: data centers, the networks that interconnect these data centers to each other, and the networks that connect end users (which may be machines) to these data centers. The intense concentration of content within a relatively small number of Internet content providers, the proliferation of social media, video and SaaS, and the rise of machine-to-machine communication have fueled the explosive growth of Cloud traffic. Until recently, the data center interconnect (DCI) networks and the end-user-to-data-center networks were logically carved out of a common network, most often the Internet. Now we are hearing that differing networking requirements and capacity demands are causing many Cloud operators to separate the DCI network from the user access network. Designing networking equipment optimized for the unique needs of DCI is an embryonic opportunity, but one that Infinera expects to outpace all other WAN growth as it quickly matures into a multi-billion-dollar market.
The topology of the Cloud can be broken into a hierarchy consisting of user access to the local data centers within a metropolitan area (metro data centers) and the interconnection of the metro data centers to each other as well as to the more geographically sparse mega data centers. Metro data centers are proliferating so that they can be closer to end users. However, metropolitan constraints on physical space, electricity supply and cost limit metro data centers to medium scale (~50,000 servers). A metro or regional area houses multiple metro data centers for each Cloud operator, and for multiple Cloud operators, and their number continues to grow fast: ACG predicts that by 2019 there will be 60% more metro data centers than today1. All of these data centers are interconnected, either directly in the case of a single Cloud operator or via Cloud exchange data centers when interconnecting data centers between Cloud operators. We refer to these connections as Metro Cloud (<600 km), aka Metro DCI.
End users connect to these metro Clouds via their ISP either directly or through a Cloud exchange; we have found the latter to be more typical for large enterprise customers. We refer to these connections as Metro Aggregation.
At the other end of the data center spectrum are hyper-scale or mega data centers with 200,000+ servers. Economics of space and power drive these data centers to typically be built on rural campuses in remote places around the globe (think Iowa, Iceland, and Sweden). These data centers are connected back to the metro data centers and Cloud exchanges, thereby requiring long haul DCI network connectivity. Figure 1 illustrates a simplified view of the Cloud topology.
Over the next four years, the bandwidth growth rates for Metro Aggregation, Metro DCI and long haul (LH) DCI networks are expected to be 30-35%2, 120%3 and 42%3 respectively. The DCI growth rates are outpacing the Metro Aggregation growth rate due to the bandwidth amplification effect coupled with the growth in metro data centers. A good, though not unique, example of the bandwidth amplification effect is seen in Facebook’s traffic: a single HTTP request from an end user results in data center traffic >900 times the size of the original query4. Much of this traffic remains inside a single data center, but even if only 15% of it traverses the DCI network, the result is >100X bandwidth amplification relative to the traffic traversing the metro aggregation network to the Cloud.
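The arithmetic behind that ">100X" figure can be sketched in a few lines, using only the numbers cited above (the >900X request fan-out and the illustrative 15% DCI share):

```python
# Bandwidth amplification arithmetic from the example above.
# Inputs are the figures cited in the post; the 15% DCI share is the
# post's own illustrative assumption, not a measured value.
request_fanout = 900   # intra-data-center traffic per user request (x original size)
dci_fraction = 0.15    # share of that traffic assumed to cross the DCI network

dci_amplification = request_fanout * dci_fraction
print(dci_amplification)  # 135.0, i.e. >100X relative to metro aggregation traffic
```

In other words, even a modest fraction of a large fan-out still leaves DCI traffic growing far faster than the user-facing metro aggregation traffic that triggers it.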
DCI Networking: DCI network design is driven by the same factors that are paramount for intra-data center infrastructure. Setting price aside, some key decision factors are:
Senior Vice President, Sales, EMEA
Last year Google revealed that sharks had been caught on camera attacking its undersea cables. And, it’s not just sharks that pose a physical threat. Fishing trawlers and natural disasters have the power to cause extensive and expensive damage too.
Network resiliency has long been the goal of network operators. However, as age-old physical challenges join forces with increasing demands from new bandwidth-hungry services, the hurdles are getting harder to jump. Laying thousands of kilometres of fibre cable on the ocean floor has long been a test for network operators, but so is meeting an insatiable appetite for non-stop data streaming.
We’ve witnessed an explosive rise in data traffic in recent years. Businesses and consumers increasingly demand the safe and speedy delivery of data-intensive content such as video streaming and gaming. Moreover, the move to cloud-based data centres and storage architectures adds strain to existing networks.
Sr. Director, Field and Segment Marketing
In December 2010, Google announced the purchase of 111 8th Avenue, New York for nearly $2B. For those of you familiar with the location, 111 8th Avenue is one of the premier carrier hotels in Manhattan, offering easy interconnectivity between different carriers as well as a central location for Manhattan data center users. While $2B is a hefty price tag, the location and the 2.9 million square feet of space place Google in a central position that is the envy of many of its telco peers.
The acquisition of 111 8th Avenue is just one example of the value Internet content providers (ICPs) like Google are placing on the future of data centers. Key questions that many are asking include: Why such a massive investment? What are they planning for the future?
Over the last few years, ICPs and data center operators have made significant investments, including the purchase of real estate, construction of massive data centers, and investment in optical networking equipment and fiber that rivals some of the world’s largest operators. In fact, the rate of spend has accelerated so much that, according to ISI Research, the top seven ICPs will spend as much as AT&T and Verizon combined by 2017. That figure is staggering once you factor in that AT&T and Verizon spend enormous sums of money to support legacy infrastructure, something ICPs don’t have to do.
Based on these trends, there seem to be more questions than answers:
Sr. Director, Field and Segment Marketing
I was a NANOG virgin. In fact, the last few days were my initiation into the critical mass of people who, in many cases, are responsible for the things that make the Internet run, and run well – from small companies that help optimize how things must run inside the server and inside a data center (e.g., DNS optimization) to large companies here to learn better ways to design, build, troubleshoot and manage their next-generation cloud networks. It felt like a fairly tight-knit group, but there were plenty of pointed discussions to remind me that these are smart people with strong opinions that they are willing to defend.
In any case, there were plenty of technical presentations and idea sharing, but the best part of the event was the social activities. The one that stands out is the Beer ‘n Gear event. The name says it all, sort of. Free food, open bar and swag…. Grab something for yourself, your better half or even the kids. The better the swag, the more likely someone will come visit your booth. The standard fare seems to be T-shirts, which are very popular. But then there were the beer glasses with colored lights on the bottom, the stainless steel flasks with the Texas Longhorns logo on them, and the nice silver metal flashlight-USB charger packs that Infinera gave out. But it wasn’t all about the swag; it was about the conversations that took place while giving away the swag. Some just came and grabbed the giveaway; others stopped and spent time asking questions.
Two surprises stood out for me:
1) An Infinera customer, TeliaSonera International Carrier, who was also a sponsor at the Beer ‘n Gear, had a simple black T-shirt promoting their bandwidth service, but more importantly touting what you need to know about them…
What they sell – 100G (their wholesale bandwidth offer), AS1299 (their Internet Autonomous System Number) and DTN-X (the Infinera platform that makes it happen).
2) After looking at our demo, a cloud operator showed me the current system he is using to connect 26x10G connections between two data centers. It was an entire rack of equipment from a popular European vendor. I asked him if he saw value in replacing all of that with a single 2RU box… that only consumes about 500W… and gives him an additional 22 extra ports to upsell. After a lot of disbelief, and further explanation of the demo and system capabilities, he smiled, gave me his card, took a flashlight/USB charger and told me to have my sales guy call him.
All in all, it was a great event and I look forward to the next one. Oh yeah, I think we gave away about 250 Infinera-branded flashlights with USB chargers built-in. So for those of you who got our swag, light up and power on.
Chief Technology Officer, Pacnet
Pacnet is Asia-Pacific’s leading provider of managed data connectivity solutions to major telecommunications carriers, large multinational enterprises and government entities. We run the region’s most extensive privately owned submarine cable systems, with connectivity to 19 interconnected data centers across 15 cities and 10 countries. Pacnet was the first to provide a 100G connection across Asia and into the US, as well as the first to deliver and commercially offer an SDN- and NFV-based platform to customers. Infinera’s technology and expertise have supported us in bringing these key innovations to market.
As the market continues its rapid adoption of cloud development, increased mobility and bandwidth-hungry applications, customers seek an alternative to traditional networks and bandwidth-provisioning models that struggle to keep pace. With the proliferation of the hybrid cloud model, they are looking for solutions that allow added burstability, as well as more agile systems that can move workloads from one location to another.
There were many trends discussed here at the global PTC 2015 conference, but one that stood out above the others, and that was discussed across a broad spectrum of networks and carriers, both subsea and terrestrial, is what has been termed “the cloud effect.” Financial analysts have noted that Internet Content Providers (ICPs) have been moving into new markets very quickly and are projected to match the spending levels of several major Tier 1 carriers within two to three years. The ICPs generally have little to no legacy infrastructure and are therefore able to focus almost entirely on innovation, speed and performance. In response, traditional carriers are forced to at least try to stay on par with the aggressive growth of ICPs and are starting to adopt more flexible, cloud-based strategies.
These aggressive, fast-moving carriers responding to the cloud-driven challenge are beginning to transform their existing networks into simplified, collapsed, software-driven networks, as shown in the figure below. We are beginning to see the convergence of Layers 0 through 2 into a single intelligent transport platform, enabling significant network savings and increased ease of operations. Higher layers are bundled together into services and virtualized in the cloud.
As Jim Fagan, president, Managed Services at Pacnet, succinctly summarized in an industry briefing session: the new breed of carriers and their customers are demanding, “Capacity where they want it, when they want it, for the duration they want it, and in flexible capacity increments.”