The Evolution of Next-Gen Optical Networks: Terabit Super-Channels and Flex-Grid ROADM Architectures
Director, CATV Marketing
I attended the SCTE Cable-Tec Expo four weeks ago in Denver, where I presented a paper I co-authored with Anuj Malik, also of Infinera. Our paper, “The Evolution of Next-Gen Optical Networks: Terabit Super-Channels and Flex-Grid ROADM Architectures,” was presented as part of the Super-Charging Fiber panel in the SCTE technical sessions.
Cable operators have made significant progress in deploying 100 Gb/s transport waves in their core networks. 100G waves allow 8 Tb/s or more of capacity on traditional fiber using the standard 50 GHz C-Band ITU-T grid. However, bandwidth growth projections already indicate that 8 Tb/s of capacity will be insufficient in the near future. To address the emerging capacity requirements and to achieve better spectral efficiency, next-generation optical networks will utilize a flexible grid channel plan (variable-width optical channels) and terabit super-channels implemented with higher order modulation.
The standard ITU grid has built-in guard bands between each optical channel to allow filtering and switching. These guard bands waste up to 25% of the fiber’s capacity. This capacity can be recovered by eliminating the ITU grid as we know it and migrating to wider super-channels, which can theoretically support up to 24 Tb/s of capacity per fiber using 16QAM modulation. And because coherent transponders support a range of modulation formats (BPSK, QPSK, M-ary QAM), the modulation format can be provisioned on a lambda-by-lambda basis, allowing operators to optimize their networks for reach versus total capacity.
With super-channels, multiple optical carriers are digitally combined to create an aggregate channel of a higher data rate. However, managing super-channels is a challenge as super-channels occupy variable spectrum on the ITU grid depending upon the modulation scheme (BPSK, QPSK, etc.). This requires the use of flexible-grid ROADMs which can switch any amount of optical spectrum in increments of 12.5 GHz.
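The spectrum arithmetic behind these claims can be sketched in a few lines of Python. The figures below are rough illustrations (an idealized C-band, round-number spectral efficiencies, no FEC overhead or implementation margin), not engineering numbers:

```python
# Illustrative fixed-grid vs. flex-grid capacity comparison.
# All figures are approximate and for intuition only -- real capacity
# depends on baud rate, FEC overhead, reach, and amplifier bandwidth.

C_BAND_GHZ = 4_000        # ~80 channels x 50 GHz on the fixed ITU grid
FLEX_SLICE_GHZ = 12.5     # flex-grid switching granularity

# Fixed 50 GHz grid: one 100G carrier per slot.
fixed_channels = int(C_BAND_GHZ // 50)
fixed_capacity_tbps = fixed_channels * 100 / 1000
print(f"Fixed grid: {fixed_channels} channels -> {fixed_capacity_tbps:.0f} Tb/s")

# Flex grid: reclaim the ~25% of spectrum lost to per-channel guard
# bands, then roughly double spectral efficiency with 16QAM vs. QPSK.
guard_band_recovery = 1 / 0.75
qam16_gain = 2.0
flex_capacity_tbps = fixed_capacity_tbps * guard_band_recovery * qam16_gain
print(f"Flex grid:  ~{flex_capacity_tbps:.0f} Tb/s with 16QAM super-channels")

# A 1 Tb/s super-channel's spectral width depends on modulation format:
for name, bits_per_hz in [("BPSK", 1.0), ("QPSK", 2.0), ("16QAM", 4.0)]:
    width_ghz = 1000 / bits_per_hz
    slices = -(-width_ghz // FLEX_SLICE_GHZ)   # round up to 12.5 GHz slices
    print(f"{name}: ~{width_ghz:.0f} GHz = {slices:.0f} flex-grid slices")
```

With these rough inputs the flex-grid result lands in the same ballpark as the ~24 Tb/s figure cited above, and the loop shows why a flexible-grid ROADM must switch spectrum in 12.5 GHz increments: the same 1 Tb/s super-channel occupies very different spectral widths depending on its modulation format.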
Traditionally, optical networks have used ROADMs to switch wavelengths optically. Since most end-user client services remain at 10 Gb/s, muxponders are used to aggregate these services in a point-to-point fashion onto wavelengths. But with super-channels, this muxponder-based architecture becomes highly inefficient if it lacks the capability to groom services into super-channels, leading to low utilization of deployed bandwidth.
Director, Corporate Marketing
Today Infinera announced a radically new way to build metro cloud networks with the Infinera Cloud Xpress. This is the industry’s first super-channel transport platform purpose-built for metro data-center interconnects. Growth in data-center infrastructure has been staggering and at this point I see no end in sight. According to ISI Research, the capex of the top seven Internet content providers (ICPs) will exceed the capex of AT&T and Verizon combined by 2017. This is because ICPs are building data centers in both urban and rural areas on a massive scale, not only for their own B2C services but also to create capacity to offer IaaS, PaaS and hybrid cloud services. According to Synergy Research, the embryonic market for cloud services will grow more than tenfold, from about $3B in 2009 to $35B in 2018.
For example, one rural data center (DC) being built in Iowa consists of three buildings with a cumulative area of 1.4 million square feet. With the rise in distributed compute, which stretches transactions across servers sitting in different buildings, the industry is seeing a dramatic increase in DC-to-DC bandwidth. The three buildings in this example must be connected together at the metro level, which in turn must be connected to the global network. If one does a simple calculation assuming 250K servers with 10G NICs per building, and only 10% of traffic exiting the buildings, we see the need for 2,500 100G ports per building.
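That back-of-the-envelope calculation is easy to verify; here it is spelled out, with the same assumptions stated in the text:

```python
# Back-of-the-envelope check of the DC interconnect math above.
servers_per_building = 250_000   # assumed server count per building
nic_gbps = 10                    # 10G NIC per server
exit_fraction = 0.10             # only 10% of traffic leaves the building

total_gbps = servers_per_building * nic_gbps      # 2,500,000 Gb/s inside
exit_gbps = total_gbps * exit_fraction            # 250,000 Gb/s leaving
ports_100g = exit_gbps / 100                      # 100G uplink ports needed

print(f"{ports_100g:.0f} x 100G ports per building")  # prints: 2500 x 100G ports per building
```

Even with 90% of traffic staying inside the building, the east-west traffic between servers drives thousands of 100G interconnect ports per site.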
DC operators need a new form factor and operational model for optical DWDM connectivity to manage this sort of scale. Storage and server technologies have already been adapted for the Cloud with a rack-and-stack model. Cloud Xpress is the first optical platform to deliver 1 Tb/s of input and output capacity in two rack units in a rack-and-stack data center form factor. It takes up only one third the space and half the power of the competition, and it provides a simple, server-like provisioning experience.
I’m excited about Cloud Xpress. The video below describes it visually, and you can learn even more at http://www.infinera.com/go/cloud
At the same time, Infinera introduced Simplified Packet for Transport. This uses the new Packet Switching Module (PXM) for the award-winning DTN-X platform. It offers MPLS and Ethernet functions and just the right amount of packet for providers to build a highly scalable, converged transport infrastructure. These functions are managed by the Infinera NMS platform or any industry-standard SDN controller. In fact, we demonstrated provisioning EVPL services on the PXM from the OpenDaylight controller (Hydrogen release) for attendees at the live event.
Infinera Product Evangelist
I think that Infinera’s DTN-X is the most powerful, scalable and configurable transport platform in the world today, and I was also fortunate enough to get first-hand experience of how robust this platform is during our Guinness World Record attempt last year.
The thing that struck me at the time was that the process of turning up fiber capacity as rapidly as possible is totally outside of the operational requirements of a transport network, so why would you expect the DTN-X to tolerate people like me slamming line cards into the chassis as fast as we could so we could beat the clock?
But it worked – and what’s more it worked three times in succession, which was a requirement of the Guinness World Records procedure for reproducibility. Even more amazing is that all three record attempts completed within one minute of each other (the fastest time was 19 minutes and 1 second), which is astonishingly consistent.
The Guinness World Record was established on the GÉANT network – operated by DANTE. You can read my other blog about our recent field trial, also on the GÉANT network, of a terabit super-channel line card and a comprehensive showcase of Infinera’s FlexCoherent Modulation technology.
As line cards scale to the terabit class, the DTN-X itself will have to scale too, because it has an integrated, non-blocking OTN switch that provides the deterministic switching and client-service grooming that is a fundamental advantage of the Intelligent Transport Network™.
Integrated OTN switching is now practically a mandatory part of the service provider core architecture, and it’s interesting to see how long it’s taken for some vendors to fully support even 100 Gb/s slot capacities in their switching platforms. With the DTN-X engineered to be upgradeable to 1.2 Tb/s per slot in the future, Infinera customers are well placed for investment protection and future network growth.
The thing is…how can the 46 DTN-X customers, who among them are running more than a petabit per second of super-channel capacity to feed the global internet, perform a chassis upgrade to more than double the capacity without bringing down the whole node?
Any of you reading this who work in Transport Networks will be scratching your head at that last statement. Surely, you say, that kind of upgrade capability is table stakes in the Transport Network world.
You would be correct – in fact the need for zero-disruption, in-service upgrades is just one of the things that sets the requirements of a Transport Network apart from the somewhat less “predictable” way that IP networks operate. Note that this is not a criticism of the IP equipment; it’s an inevitable consequence of this equipment being dependent on a forty-year-old protocol architecture whereby (for example) an accidentally misconfigured BGP entry at one service provider can effectively knock out internet services in completely different parts of the world [1, 2, 3, 4, 5, 6, 7].
To illustrate the “carrier grade” characteristics of upgrading the DTN-X switch fabric I headed off to Infinera’s London training facility and set up my camera. The entire process took about an hour, but in the video that I created I’ve fast-forwarded through the individual boot sequences for the switching modules. However, I kept my phone stopwatch running and in the frame so you can see the total elapsed time.
So what makes this process “carrier grade”?
Director, Corporate Marketing
It’s the season of new product announcements. Some, like the Apple Watch, have made a huge splash—even the fashion industry is involved. At Infinera we’re quietly and steadily building the next generation of transport solutions, one of which we’ve designed to address growing opportunities in the cloud.
Many enterprises (including us and very likely you) use the cloud for business functions like sales, operations, marketing and more. All these applications place new demands on the technology infrastructure, and providers are briskly building new data centers. Storage and server technologies have already been adapted for the cloud; now it’s time for networks to deliver.
Break free of constraints that prevent you from leveraging the expanding Cloud opportunity. Join us for Insight Infinera 2014 on September 18. Our solution is built on the same principles that have earned us top honors among customers for technology innovation, reliability, and service and support, as ranked by Infonetics Research earlier this year.
Contributing Deep Expertise to the OpenDaylight Project in the Areas of Enabling Abstraction and Programmability of Optical Transport Networks
Vice President, Corporate Marketing
On September 4, 2014 it was announced that Infinera joined the OpenDaylight Project in order to lend our expertise in the areas of enabling abstraction and programmability of optical transport networks. Our goal is to help shape how Software Defined Networks (SDN) and Network Function Virtualization (NFV) are designed and implemented across optical transport networks and ensure that our Intelligent Transport Network solutions easily integrate with any OpenDaylight based controllers.
The Infinera engineering team has participated in OpenDaylight-sponsored events, including multiple hackfests and meetings, ever since the Linux Foundation formed the organization to accelerate adoption, foster innovation and create a more open and transparent approach to SDN. We view joining the OpenDaylight Project as a Silver Member, and formalizing our participation in the organization, as an important step in the evolution of Infinera’s open SDN framework, and we look forward to continuing to collaborate with the other members of this community.
Network operators are looking for open, programmable solutions that enable them to speed the delivery of innovative new services. In addition, they want to simplify the provisioning of multi-layer, multi-vendor networks through an open, standards-based Application Programming Interface (API). One of the most important factors in the success of Infinera’s DTN-X Multi-Terabit Packet Optical Network Platform is that it is highly software controllable and, as a result, is SDN-ready today.

In combination with the DTN-X platform, our Open Transport Switch (OTS) can serve as an integral piece of a truly open SDN framework. It is a modern, lightweight Web 2.0 software construct that offers simple abstractions through a northbound REST API, which can be integrated into the overall control layer to enable application- and cloud-driven networking. OTS is designed to provide full virtualization of optical transport resources, facilitating true automation and the programming of bandwidth services at the Ethernet layer, the OTN layer and the DWDM optical layer. Our approach to SDN is intended to allow any provider’s SDN controller, including those based on OpenDaylight, to be easily integrated with Infinera’s Intelligent Transport Network solutions.
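To make the idea of REST-driven bandwidth provisioning concrete, here is a minimal sketch of the kind of request a controller integration might build. The endpoint URL, field names and port identifiers are invented for illustration; the actual OTS northbound API is not documented here and will differ:

```python
import json

# Hypothetical northbound REST payload for provisioning an OTN-layer
# bandwidth service via an SDN controller. All names below are invented
# for illustration -- the real OTS API schema is not reproduced here.
ENDPOINT = "https://controller.example.net/api/v1/transport-services"  # hypothetical

service_request = {
    "name": "dc-interconnect-odu4",
    "layer": "otn",              # could also be "ethernet" or "dwdm"
    "rate_gbps": 100,            # one ODU4's worth of capacity
    "endpoints": [
        {"node": "node-a", "port": "1-A-1-T1"},   # hypothetical port IDs
        {"node": "node-z", "port": "1-A-1-T1"},
    ],
}

# In a live integration this body would be POSTed to the controller:
#   POST {ENDPOINT}  with  Content-Type: application/json
body = json.dumps(service_request, indent=2)
print(body)
```

The point of the abstraction is that an application asks for a named service at a given layer and rate between two endpoints, and the controller translates that intent into device-level provisioning.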
Infinera Product Evangelist
Wait a minute…a terabit super-channel? Haven’t we seen that somewhere before? Of course we have – as we all know optical hero experiments tend to lead real products by many years, and in some cases (like all-optical regeneration or the DeLorean Time Machine) the technology never seems to make it past the “concept car” status.
In fact I saw a recent terabit super-channel press release that claimed, “1Tb/s with production hardware, with prototype software.” When I checked the details it turned out that the demonstration actually used ten standard 100G line cards, and the “prototype software” simply allowed them to be tuned much closer than the classic 50GHz fixed grid in order to create the super-channel.
That one made me chuckle, because the whole point of a super-channel is to implement all the optical carriers on a single line card and a single fiber. That means it can be brought into service in a single operational cycle, dramatically reducing fiber management and allowing the service provider to scale operational effort and keep OpEx low.
The terabit super-channel trial I was involved in recently was on the GÉANT network, operated by DANTE (Delivery of Advanced Network Technology to Europe), and it ran from Budapest in Hungary to Bratislava in the Slovak Republic.
Field trials can be a hoot – especially as we’re often at the mercy of logistics and import procedures, and operating against tight deadlines.
That’s where working with our customer DANTE is like a breath of fresh air. As soon as I brought up the idea with Mark Johnston, DANTE’s Chief Network Operations Officer, he was keen to test out the resilience built into the GÉANT network.
“We trust your reliability and engineering prowess, so we’ll let you test on a production link, but you have only four days,” he told me at the Global Telecom Business Awards, where Infinera and DANTE were honored for last year’s Guinness World Records achievement.
Four days…easy, I thought. There were five of us in the team, and four of us could fly into the beautiful city of Budapest, while one of us would head to Bratislava and set up the loopback before joining us for the actual data measurements.
When the freight forwarder told me, on the Thursday before I flew, that the gear would arrive the same day we did instead of a week before us, even then I wasn’t worried. Four whole days – that was plenty of time!
For the first day in the NIIF Institute in Budapest we made friends with some of the folks there, rerouted the existing connections, and drew up a few whiteboard diagrams of what we planned to do. Since NIIF hosts the Budapest Supercomputing Center that processes the data from the Large Hadron Collider, they were quite interested in high capacity network link technologies.
In between explanations to our hosts and fetching cups of coffee for our engineering lead Rene to stave off the hypothermia from sitting in a chilled comms room for hours on end, I spent quite a bit of time chatting to logistics people asking the same question – where’s our gear?
Meanwhile my colleague Jeff Rahn, whose team actually designed the line card, was sitting in the PoP in Bratislava waiting for his gear to arrive too. He claimed he was trapped inside, but the photos he sent me clearly show he managed to escape and do some sightseeing in the beautiful mediaeval city!

As it turned out, the logistics company was not able to deliver the gear on the day we arrived, but got it to us the next afternoon, shortening our timeframe to two and a half days.
Once the gear arrived we installed the equipment at both ends. Most impressive in my book is the prototype terabit super-channel line card that is the size of our standard 500G line card, part of the forward-scale design of the Infinera DTN-X. Using this technology we can more than double the capacity of a DTN-X chassis from 5 Tb/s to 12 Tb/s, allowing Infinera customers to scale their networks to keep ahead of internet demand without hiring an army of extra network engineers.