Stop by Infinera’s Booth at OFC to Slice Up Your Light

By Pravin Mahajan

Director, Corporate Marketing

OFC 2015 is in full swing. At our booth, Infinera is demonstrating an SDN-enabled packet-optical solution, showcasing packet Ethernet VLAN-based transport services running over a sliceable super-channel controlled by an SDN OpenDaylight (ODL) Controller.


Because our customers are looking to deploy a single network with multi-layer functions working in harmony, we are demonstrating these capabilities as a single solution and not individually. The technologies involved in this effort include:

(more…)

March 23, 2015 at 10:46 AM

The 8QAM Sweet Spot: The Right Way to Do Advanced Modulation

By Geoff Bennett

Infinera Technology Evangelist

Coherent transmission opens up a fascinating toolbox of modulation options, as we’ve seen with Infinera’s pioneering FlexCoherent™ modulation technology.

For those of you who are not aware of the issue, here’s a quick summary.

Phase modulation (part of modern coherent transmission) allows us to encode varying numbers of bits per symbol in order to increase spectral efficiency (and thereby optical fiber capacity).

The current “workhorse” modulation technology is Pol-muxed (PM) Quadrature Phase Shift Keying (QPSK), which carries four bits per modulation symbol.

In various trials, Infinera has demonstrated alternative modulations that have longer reach but lower capacity than QPSK (i.e. BPSK, Enhanced BPSK and 3QAM), as well as modulations that have greater spectral efficiency, but shorter reach than QPSK (i.e. 8QAM and 16QAM).

The key to FlexCoherent modulation is to make all of these modulations software selectable on a single line card, so that the service provider can choose the optimum balance between optical reach and fiber capacity. Historically, QPSK has been the most commonly used modulation precisely because it offers a great reach/capacity balance.

Looking at one of these newer modulations in particular, in two recent trials we’ve seen how PM-8QAM not only offers a 50% increase in fiber capacity versus QPSK, but also represents a “sweet spot” in terms of optical reach on both existing optical fiber types and new types of large-area/low-loss fiber, such as OFS Terawave™.

In the first trial with Telstra, we achieved a reach of 2,200 kilometers over an existing submarine fiber.

In a trial announced today, we achieved an astonishing increase in reach over OFS Terawave fiber – 7,400 kilometers, in fact – which would be enough to close trans-Atlantic submarine routes! Just to be clear, the specific Terawave fiber we used in this test is a type that’s optimized for terrestrial transmission, and we might have been able to do even better had we used the submarine-optimized Terawave SLA+ or ULA Ocean Fibers.

These numbers are important because optical reach has a direct impact on the total cost of ownership for a DWDM system.  Basically, the longer the reach, the lower the cost.  But since fiber capacity also affects cost of ownership, we can imagine that there are reach values that are “just enough” to close a given set of routes without resorting to regeneration, and yet by using one of the higher order modulation formats we could achieve much greater fiber capacity.
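To make the reach/cost trade-off concrete, here is a minimal back-of-the-envelope sketch in Python: every time a route is longer than the optical reach, one or more regenerators must be added, each adding to the total cost of ownership. The route lengths are invented for illustration, and the reach figures are round numbers roughly in line with those discussed below.

```python
import math

def regens_needed(route_km, reach_km):
    """Regeneration sites needed to close a route of route_km with a given
    optical reach; 0 means the route closes without regeneration."""
    return max(0, math.ceil(route_km / reach_km) - 1)

# Hypothetical route lengths, purely for illustration.
routes_km = [800, 1500, 2200, 3000]

for reach_km, label in [(1200, "16QAM-class reach"), (2200, "8QAM-class reach")]:
    counts = [regens_needed(r, reach_km) for r in routes_km]
    print(f"{label} ({reach_km} km): regens per route = {counts}")
```

With the longer reach, three of the four hypothetical routes close without any regeneration – exactly the “just enough” reach effect described above.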

In other words, we can imagine a set of sweet spots emerging because a given proportion of city pairs are a given distance apart.

These sweet spots are reasonably well understood.  We know that analysts generally break down markets on the basis of reach – so the ultra long haul market covers distances greater than 3,000 kilometers; long haul covers roughly 1,800 to 3,000 kilometers; and below 1,800 kilometers we have regional and metro networks.

So it’s clear that, with reach numbers of between 400 and 1,200 kilometers on conventional fiber depending on the specific type of fiber and amplification used, 16QAM is not a viable long haul modulation format.  But 8QAM, with reach of over 2,000 kilometers on conventional fiber, certainly could be.

[Figure 1: Ring and mesh protection topologies]

When we include optical protection techniques, the picture becomes even clearer.  Figure 1 shows two common topologies: a ring and a mesh.  Generally speaking, metro networks still tend to be laid out in rings.  Regional and long haul networks are more likely to be logical meshes, but historically the fiber may have been deployed in a physical ring.

In the ring example, we can see that Nodes A and B are connected by a relatively short working path, shown by the solid green line.  Let’s say that in this example, X = 500 kilometers – well within the reach of 16QAM modulation.  But if there is a fiber cut between A and B, the path length increases dramatically to three times that, or 1,500 kilometers.  This is currently beyond the vendor claims we’ve seen for 16QAM modulation, even using Raman amplification.

In the mesh example, the differences are generally less extreme, but the absolute distances are more likely to be longer because we’re now looking at a regional or long haul deployment.

So you could imagine that Y = 1,000 kilometers, which would be possible to close with a very high performance 16QAM solution, probably including Raman amplifiers.  But if the A-B link breaks and there is purely optical protection, then the path length increases to 2,000 kilometers, and once again 16QAM would not be able to cope.

But in both cases 8QAM could close the link, and so we can immediately appreciate there could be a reach sweet spot for this modulation technology.
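Here is a minimal sketch of that protection argument in Python, using the approximate reach figures quoted in this post (about 1,200 km for 16QAM and just over 2,000 km for 8QAM on conventional fiber) and the X = 500 km / Y = 1,000 km example distances; real reach depends on fiber type, amplification and implementation.

```python
# Approximate reaches on conventional fiber as quoted in this post; actual
# values depend on fiber type, amplification and vendor implementation.
REACH_KM = {"PM-16QAM": 1200, "PM-8QAM": 2200}

# Ring example: working path X = 500 km, protect path 3X = 1,500 km.
# Mesh example: working path Y = 1,000 km, protect path 2Y = 2,000 km.
paths_km = {
    "ring working (X)": 500,
    "ring protect (3X)": 1500,
    "mesh working (Y)": 1000,
    "mesh protect (2Y)": 2000,
}

for name, km in paths_km.items():
    verdict = {mod: km <= reach for mod, reach in REACH_KM.items()}
    print(f"{name:20s} {km:5d} km -> {verdict}")
```

16QAM closes only the short working paths; once the protect path is in play, 8QAM is the highest-order format that still closes the link.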

By the way, if you’re wondering why I’m emphasizing optical protection, it’s because the companies who are heavily promoting 16QAM technology are also companies who promote optical protection.

Let’s move on to implementation, because 8QAM is not an “easy” modulation technology for vendors who are using discrete optical components.

[Table 1: Line card data rates in BPSK, QPSK, 8QAM and 16QAM modes]

Table 1 shows the data rates for the “new breed” of line cards that some vendors are now announcing.  Each line card can run in several different modes – which are usually a simplified form of Infinera’s FlexCoherent capability.

In BPSK mode the card runs at 50 Gb/s, in QPSK mode it runs at 100 Gb/s, in 8QAM mode it runs at 150 Gb/s, and in 16QAM mode (which is the mode that is normally heavily marketed) it runs at 200 Gb/s.
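The pattern in those rates is simply the bits-per-symbol arithmetic, counting both polarizations and scaling from the nominal 100 Gb/s PM-QPSK baseline; a quick sketch (symbol rate and FEC overhead are deliberately left out of this simplification):

```python
# Bits per symbol counting both polarizations; line rate scaled from a
# nominal 100 Gb/s PM-QPSK carrier (symbol rate and FEC overhead omitted).
BITS_PER_SYMBOL = {"PM-BPSK": 2, "PM-QPSK": 4, "PM-8QAM": 6, "PM-16QAM": 8}
QPSK_RATE_GBPS = 100

for mod, bits in BITS_PER_SYMBOL.items():
    rate = QPSK_RATE_GBPS * bits // BITS_PER_SYMBOL["PM-QPSK"]
    print(f"{mod}: {bits} bits/symbol -> {rate} Gb/s")  # 50, 100, 150, 200
```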

I’ve already mentioned that 16QAM probably has too short a reach to be generally useful, and it’s not clear how many of these “200 Gb/s” line cards are being used in that mode today.

Oddly, none of the vendors who are promoting 200 Gb/s operation seem to be talking about 8QAM (apart from hero experiments), and this may be because it delivers an awkward data rate of 150 Gb/s for this type of discrete-component card design.

We know already that some DWDM systems struggle to provide a non-blocking switching capability across the backplane between line cards at even 100 Gb/s, and that means it could be difficult to amalgamate these separate 8QAM signals into a useable data rate – typically a multiple of 100 Gb/s.

Last August I took part in a field trial on the GÉANT network with a team of colleagues, and we showcased a prototype terabit PIC technology.  We showed BPSK, 3QAM, QPSK, 8QAM and 16QAM technology over this link, using the same line card with FlexCoherent modulation.

[Figure 3: A single PIC-based FlexCoherent line card delivering 1.2 Tb/s across QPSK, 8QAM and 16QAM]

But an important aspect of Infinera’s implementation, shown in Figure 3, is that the production version of this line card will deliver the full 1.2 Tb/s capacity for all of the “terrestrial” modulation types – which means QPSK, 8QAM and 16QAM.  That means that instead of having to implement anywhere between six and 12 line cards to achieve 1.2 Tb/s, an Infinera customer can simply plug in one card.
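The line-card arithmetic behind that claim is straightforward; here is a small sketch using the per-card rates from Table 1 and the 1.2 Tb/s super-channel figure above:

```python
import math

TARGET_GBPS = 1200  # 1.2 Tb/s of line capacity

cards_gbps = {
    "discrete card in 16QAM mode (200 Gb/s)": 200,
    "discrete card in QPSK mode (100 Gb/s)": 100,
    "PIC-based super-channel line card (1.2 Tb/s)": 1200,
}

for card, gbps in cards_gbps.items():
    print(f"{card}: {math.ceil(TARGET_GBPS / gbps)} card(s) needed")
```

The result is the six-to-twelve-cards-versus-one comparison quoted above.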

Moreover, the DTN-X platform that uses this line card will have a full 12 Tb/s non-blocking OTN switching capability to turn all of the link bandwidth into a virtual pool of digital capacity.  Any service – including 1GbE, 10GbE, 40GbE, 100GbE and future 400GbE – can be easily and efficiently supported using this OTN switching capability, including over 8QAM modulation. Thus it is not only important to have the line card with the right modulation options; the system must be designed with forward scale to accommodate those line cards when they become available.  Below is a video showing how any DTN-X ever deployed can be upgraded from 5 Tb/s to 12 Tb/s in service to accommodate 1.2 Tb/s line cards when they become generally available.

So we firmly believe that 8QAM will offer a “reach sweet spot” for high order modulation, but we also believe it’s essential to choose the right implementation to get the best out of this technology.

Legal Disclaimer

Terawave is a trademark of OFS Fitel, LLC.

March 17, 2015 at 5:00 AM

Moon Musings, OFC and the International Year of Light

By Pravin Mahajan

Director, Corporate Marketing

There’s a crater on the near side of the moon called Alhazen that is connected to the industry I work in. It is named after the Arab scholar Ibn al-Haytham, who was a pioneer in the field of optics way back in the 11th century. He authored the classic tome “Kitab al-Manazir,” or the Book of Optics. His work transformed the way in which light was understood, earning him the title of “the father of modern optics.”

Illustration from the Opticae Thesaurus, which included the first printed Latin translation of the Book of Optics. (“Thesaurus opticus Titelblatt,” public domain via Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Thesaurus_opticus_Titelblatt.jpg)

As I dug deeper into Ibn al-Haytham’s work I realized how the same physical phenomenon can have two very different interpretations (for an example, read about the emission theory and the competing intromission theory of vision and light rays). I was fascinated by how much optics has evolved since then and by its application today across all facets of life, including the Internet. Simply put, life as we know it would not exist without the practical application of optics technology (aka photonics).

(more…)

March 12, 2015 at 10:31 AM

Network Virtualization Finally Reaches the Optical Layer

By Chris Liou

Vice President, Network Strategy

Today, Pacnet announced the deployment of Infinera’s Open Transport Switch (OTS) product to enable a new SDN service that extends network virtualization to the optical transport layer.  For Infinera, a pioneer in the realm of transport SDN, this is truly an exciting milestone, and a remarkable demonstration of the benefits that DevOps processes combined with industry cross-pollination can yield.  By leveraging Infinera’s Web 2.0-empowered OTS and engaging jointly in the development and refinement of the service concept, Pacnet and Infinera were able to specify, develop, and deploy a new service capability to the market in a matter of months, much more quickly than is typically achievable using conventional processes and technologies.  With this new service capability, Pacnet’s customers will now be able to request on-demand or scheduled high-speed bandwidth services between key Pacnet data center locations, from 10 Gb/s up to 100 Gb/s (and beyond), and pay only for the duration the service is requested, all without manual intervention.
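To make the idea concrete, here is a purely hypothetical sketch of what a programmatic bandwidth-on-demand request of this kind might look like; the endpoint, field names and site identifiers are invented for illustration and are not the actual Pacnet or Infinera OTS API.

```python
import json
import urllib.request

# Hypothetical request only: the URL, fields and site names below are invented
# for illustration and do not represent the real Pacnet or Infinera OTS API.
service_request = {
    "a_end": "DC-A",                   # hypothetical data center identifiers
    "z_end": "DC-B",
    "bandwidth_gbps": 100,             # 10 Gb/s up to 100 Gb/s (and beyond)
    "start": "2015-03-10T14:00:00Z",   # on-demand or scheduled
    "duration_hours": 4,               # pay only for the time the service runs
}

req = urllib.request.Request(
    "https://sdn-controller.example.net/api/bandwidth-on-demand",  # placeholder
    data=json.dumps(service_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the request; the controller then
# provisions the optical path with no manual intervention.
```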

What makes this feasible is a unique mix of technologies that includes both software and hardware, each contributing a key piece of the overall solution.

(more…)

March 10, 2015 at 2:00 PM

The Latest and Greatest: What is Infinera Showcasing in Booth #1859 at OFC 2015?

By Vinay Rathore

Sr. Director, Field and Segment Marketing

One of the most exciting shows in the world of optics is the Optical Fiber Communication Conference and Exposition (OFC).  This year in Los Angeles, the show is expected to draw over 12,000 attendees and feature more than 550 companies representing the entire networking ecosystem. This flagship show brings together optical communications companies and networking professionals to discuss industry trends, next-generation solutions and cutting-edge technology.


Demonstrations:

At both the Infinera Express and our booth #1859, visitors will be able to see demonstrations of the Infinera Cloud Xpress (CX) and the Packet Switching Module (PXM), both announced in the third quarter of 2014. In addition, Infinera will show off the capabilities of its Fast Shared Mesh Protection (FastSMP™) technology and advances in photonics, and will provide a software-defined networking (SDN) demo.

(more…)

March 4, 2015 at 1:42 PM

The Cloud and the Emergence of Data Center Interconnect Networks

By Stuart Elby, Ph.D.

Senior Vice President, Cloud Network Strategy and Technology

The Cloud is composed of three systems: data centers, the networks that interconnect these data centers to each other, and the networks that connect end users (which may be machines) to these data centers. The intense concentration of content within a relatively small number of Internet content providers, the proliferation of social media, video and SaaS, and the rise of machine-to-machine communications have fueled the explosive growth of Cloud traffic. Until recently, the data center interconnect (DCI) networks and the end-user-to-data-center networks were logically carved out of a common network, most often the Internet.  Recently, we have been hearing that differing networking requirements and capacity demands are causing many Cloud operators to separate the DCI network from the user access network.  Designing networking equipment optimized for the unique needs of DCI is an embryonic opportunity, but one that Infinera expects to outpace all other WAN growth as it quickly matures into a multi-billion dollar market.

The topology of the Cloud can be broken into a hierarchy consisting of user access to the local data centers within a metropolitan area (metro data centers) and the interconnection of the metro data centers to each other as well as to the more geographically sparse mega data centers.   These metro data centers are proliferating so that they can be closer to end users. However, metropolitan constraints in terms of physical space, electricity supply and cost limit metro data centers to medium scale (~50,000 servers).  A metro or regional area houses multiple metro data centers for each Cloud operator and for multiple Cloud operators, and their number continues to grow fast: ACG predicts that by 2019 there will be 60% more metro data centers than today [1].  All of these data centers are interconnected, either directly in the case of a single Cloud operator or via Cloud exchange data centers in the case of interconnecting data centers between Cloud operators. We refer to these connections as Metro Cloud (<600 km), also known as Metro DCI.

End users connect to these metro Clouds via their ISP either directly or through a Cloud exchange; we have found the latter to be more typical for large enterprise customers.  We refer to these connections as Metro Aggregation.

At the other end of the data center spectrum are hyper-scale or mega data centers with 200,000+ servers. Economics of space and power drive these data centers to typically be built on rural campuses in remote places around the globe (think Iowa, Iceland, and Sweden).  These data centers are connected back to the metro data centers and Cloud exchanges, thereby requiring long haul DCI network connectivity. Figure 1 illustrates a simplified view of the Cloud topology.

[Figure 1: Simplified view of the Cloud topology]

Over the next four years, the bandwidth growth rates for Metro Aggregation, Metro DCI, and long haul DCI networks are expected to be 30-35% [2], 120% [3], and 42% [3], respectively. The DCI growth rates are outpacing the Metro Aggregation growth rate due to the bandwidth amplification effect coupled with the growth in metro data centers. A good, but not unique, example of the bandwidth amplification effect is seen in Facebook’s traffic.  A single HTTP request from an end user results in data center traffic more than 900 times the size of the original query [4]. Much of this traffic remains inside a single data center, but even with only 15% of this traffic traversing the DCI network, the result is >100X bandwidth amplification relative to the traffic traversing the metro aggregation network to the Cloud.
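The arithmetic behind that >100X figure is worth spelling out; here is a one-line check using the numbers quoted above:

```python
# One user HTTP request triggers >900x as much traffic inside the data center,
# and roughly 15% of that traffic crosses the DCI network (figures from above).
amplification_inside_dc = 900
fraction_crossing_dci = 0.15

dci_amplification = amplification_inside_dc * fraction_crossing_dci
print(f"DCI traffic is ~{dci_amplification:.0f}x the original user query")  # ~135x, i.e. >100X
```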

DCI Networking: DCI networking design is driven by the same factors that are paramount for the intra-data-center infrastructure.  Ignoring price, some key decision factors are:

(more…)

February 24, 2015 at 1:28 PM

It’s Not Just Your Customers That are Hungry for Bandwidth

By Chris Champion

Senior Vice President, Sales, EMEA

Last year Google revealed that sharks had been caught on camera attacking its undersea cables. And, it’s not just sharks that pose a physical threat. Fishing trawlers and natural disasters have the power to cause extensive and expensive damage too.

Network resiliency has long been the goal for network operators. However, as age-old physical challenges join forces with increasing demands from new bandwidth-hungry services, the hurdles are getting increasingly harder to jump. Laying thousands of kilometres of fibre cable on the ocean floor has long been a test for network operators.  But so is their ability to meet an insatiable appetite for non-stop data streaming.

We’ve witnessed an explosive rise in data traffic in recent years. Businesses and consumers are increasingly demanding the safe and speedy delivery of data-intensive content, for the likes of video streaming and gaming. Moreover, the move to cloud-based data centres and storage architectures adds additional strain to existing networks.

(more…)

February 18, 2015 at 12:11 PM
