Infinera Product Evangelist
It’s been 10 years since the commercial introduction of a transport network platform based on a Photonic Integrated Circuit (PIC). Thinking about this, I combed through some archival video footage and came across an internal video that included interviews with some of the very first people to come on board at Infinera in 2001 with the dream of creating a large-scale commercial PIC.
Amongst those interviewed was Dr. Chuck Joyner, a former Bell Labs guru and keen bonsai grower. Some of you may be aware that back in 1969, the folks at Bell Labs were the first to come up with the idea of a PIC, although they didn’t really capitalize on that innovation in terms of transforming it into a product.
Chuck recalled his initial discussion with Infinera co-founder Dave Welch on the prospects for developing a commercial PIC, and I’d like to share his comment with you word for word…
Systems Engineering Director
Hello from New Orleans. It’s cold down here. SCinet is up and running a cool 1.5 Tb/s of capacity.
The theme of this year’s Supercomputing conference is HPC Matters (High Performance Computing Matters). The technology shown throughout the show floor is all leading edge. Innovations from the Supercomputing community have far-reaching impact: to every corner of science, to investment banking, to the discovery of new drugs, to the precise prediction of the next major storm.
Last night I watched a demonstration by GÉANT in which an astrophysicist in Cambridge controlled a satellite array in Australia while collecting and processing real-time data at his terminal.
And it’s not just about geeky scientific things either. Supercomputing has produced a number of short videos showing how High Performance Computing is used by companies and research organizations to make better products and new discoveries. This video shows how Procter & Gamble uses HPC to make better shampoo and diapers. And as I said, it’s cold here this week. But fortunately that helps to add new data points to climate modeling, possibly one of the most difficult problems in the world. This video also shows why climate modeling is important.
Hopefully they can tell me when it’s going to warm up again.
Systems Engineering Director
November 17 begins this year’s Supercomputing Conference for high performance computing, networking, storage and analysis. This will be the 27th annual conference, and the seventh for which I have been a contributor.
Supercomputing has always been a unique conference. It brings together a diverse set of people and companies, researchers, educators, students, and collaborators dedicated to furthering science.
My part of Supercomputing has been to provide some of the communications facilities linking the many booths on the convention center floor with research and commercial networks around the world. Over the last seven years, I have seen the SCinet network grow from about 200 Gigabits of live capacity using 20 x 10 Gigabit client services to this year’s record of over 1400 Gigabits using 14 x 100 Gigabit client services. And this is on systems that have the ability to support a total of over 30 terabits of data carrying capacity.
The SCinet WAN team has always tried to use the most leading edge systems, often in pre-release versions from the WAN vendors. The last time the Supercomputing conference was held in New Orleans in 2010, we built the first ever long haul 100G facility from New Orleans to Chicago using specially built 100G LAN modules. Amazingly, it only took five days from shipment of the modules to bring up that 100G circuit.
Vice President, Network Strategy
Today, Telefónica announced the successful completion of a multi-domain, multi-vendor transport SDN interoperability proof-of-concept (PoC) trial with Infinera and three other optical vendors. Infinera’s DTN converged optical transport platforms, equipped with Open Transport Switch (OTS) software, were located in Telefónica’s labs in Madrid, while each of the other vendors had equipment located in their own labs. The PoC showed that Telefónica could remotely and rapidly provision services across all four vendors’ optical domains using an IETF-based Application-Based Network Operations (ABNO) orchestrator-controller and a hierarchical controller architecture that leveraged distributed control planes for greater scale, faster service restoration, and more efficient network resource utilization. Infinera played an important role in this activity, which enabled Telefónica to assess the state of the industry with regard to multi-domain control in an SDN environment and to evaluate different vendor approaches.
One might ask what the motivation is for participating in such a demonstration, and whether it makes all transport vendors appear the same. The answer is two-pronged.
- First, Infinera’s goal is to do what is right for our customers and the market. Customers today are asking for Transport SDN and standard open APIs that let them leverage their dynamic transport layer to deliver bandwidth services wherever and whenever needed, whether to enable rapid activation of new services across multi-domain networks or to optimize core multi-layer backbones and reduce extraneous transit router port expenses.
- Second, the spirit of Transport SDN is about enabling best-of-breed choices while also enabling a pragmatic level of programmability. We believe that Intelligent Transport Networks based on converged WDM and switching technologies offer the highest performance and ease of use as well as the most software control. Platforms such as Infinera’s DTN-X already provide customers with industry-leading flexibility and ease of use today. With OTS, this flexibility in the optical transport layer can be accessed programmatically.
In summary, we are making our already software-controllable platforms easily programmable using our lightweight, Web 2.0-centric OTS. By using OTS instead of a heavyweight EMS and/or controller-based solution, network operators can easily integrate Infinera’s Intelligent Transport Network with any SDN controller technology, facilitating rapid DevOps innovation of new features. In a multi-vendor solution under SDN control, we believe that choice of transport platform still matters, and factors such as flexibility, reliability, scale, ease of deployment and ease of use will continue to matter, along with providing network programmability APIs. While APIs can abstract the physical infrastructure and enable programmable services, what’s under the API matters more than ever.
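As a rough illustration of what this kind of programmability can look like, the sketch below assembles a JSON provisioning request of the sort an SDN orchestrator might send to a lightweight transport API. The field names, endpoint IDs, and service parameters are my own illustrative assumptions, not the actual OTS or ABNO interface:

```python
import json

def build_service_request(src, dst, gbps, service_id):
    """Assemble an illustrative JSON bandwidth-service request.

    All keys and values here are hypothetical -- they stand in for
    whatever schema a real transport API (such as OTS) would define.
    """
    return {
        "service": {
            "id": service_id,
            "endpoints": {"a": src, "z": dst},
            "bandwidth_gbps": gbps,
            # In the hierarchical-controller model described above,
            # restoration is delegated to the distributed control plane.
            "protection": "restorable",
        }
    }

# Hypothetical node names for a point-to-point 100G service request
req = build_service_request("madrid-dtn-1", "sevilla-dtn-3", 100, "svc-0042")
payload = json.dumps(req)
```

The point of such an abstraction is that the orchestrator only expresses intent (endpoints, bandwidth, protection level) while path computation and restoration stay with the transport layer’s own control plane.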
Earlier this year, we collaborated with Telefónica to demonstrate Network-as-a-Service (NaaS) and simplified IP link turn-up in a multi-layer, multi-vendor network. Key SDN components used in this demonstration included Telefónica’s ABNO multi-layer PCE controller and Infinera’s OTS. Here, OTS provided programmable bandwidth through a simple abstraction of the transport network, enabling dynamic transport-on-demand capabilities for the IP/MPLS layer, as seen in this video.
We were pleased to participate with Telefónica in both of these proof-of-concept trials to demonstrate how SDN can enable faster creation of new services as well as optimal path calculation to minimize network resource utilization, and to provide greater control plane scalability with faster, more efficient service restoration across multiple vendors, domains, and network layers.
By Andrew Schmitt
Principal Analyst – Optical, Infonetics Research
Infonetics conducts multiple surveys of service providers each year on a wide variety of subjects, but by far our most popular subject in the optical area has been the deployment of advanced WDM speeds and technologies. It was feedback from this survey that led to my opinion in 2011 that once it was widely available, 100G would obsolete 40G, a transition that is now essentially complete outside of China. (Click here to download Infonetics’ white paper, The Fast Approaching 100G Era.)
Deployment of 100G coherent WDM technology in the past two years has been nothing short of stunning. Let’s look at some quick facts:
- Total 100G coherent port shipments tripled in 2013, after quadrupling the year before.
- Coherent 100G port shipments are on track to double during 2014 and will represent nearly half of all worldwide deployed WDM bandwidth this year.
- By 2016 virtually all long haul WDM installed will be coherent 100G or faster, and metro deployments will begin a period of rapid expansion.
While metro 100G WDM is generating a big buzz today, the reality is that it accounts for less than a third of all installations to date. But service providers are now turning their attention to the metro, seeking to evaluate the improved generations of coherent technology on the horizon for deployment in shorter-reach, more cost-sensitive metro applications. I expect this will result in a cascade of metro 100G shipments starting late next year.
Our recent survey was designed to help settle some of the big metro 100G debates, such as how much 100G is being installed in the metro today and in 2016. We asked service providers what percentage of their coherent 100G installations were in the metro versus the core and observed a very interesting trend.
The data clearly shows there are two kinds of service providers using 100G today:
- Those following the traditional model where 100G is mostly used for long spans and sparingly for the metro (early core adopters)
- And another type of service provider that is already using coherent technology almost exclusively for links under 600 km (early metro adopters)
This is happening right now, which means equipment companies must make multiple products to address each market independently. It is unlikely that a single product can effectively serve the whole market.
We also wanted to understand service provider opinions on general market assumptions about the 100G metro coherent market. We asked respondents to agree or disagree with the following statements on a scale of 1 to 7, where 1 was ‘do not agree’, 4 was ‘somewhat agree’, and 7 was ‘strongly agree’. Here’s what we learned:
The Evolution of Next-Gen Optical Networks: Terabit Super-Channels and Flex-Grid ROADM Architectures
Director, CATV Marketing
I attended the SCTE Cable-Tec Expo four weeks ago in Denver, where I presented a paper I co-authored with Anuj Malik, also of Infinera. Our paper, “The Evolution of Next-Gen Optical Networks: Terabit Super-Channels and Flex-Grid ROADM Architectures,” was presented as part of the Super-Charging Fiber panel in the SCTE technical sessions.
Cable operators have made significant progress in deploying 100 Gb/s transport waves in their core networks. 100G waves allow 8 Tb/s or more of capacity on traditional fiber using the standard 50 GHz ITU-T C-band grid (80 channels × 100 Gb/s). However, bandwidth growth projections already indicate that 8 Tb/s of capacity will be insufficient in the near future. To address the emerging capacity requirements and to achieve better spectral efficiency, next-generation optical networks will utilize a flexible-grid channel plan (variable-width optical channels) and terabit super-channels implemented with higher-order modulation.
The standard ITU grid has built-in guard bands between each optical channel to allow filtering and switching. These guard bands waste up to 25% of the fiber’s capacity. This capacity can be recovered by eliminating the ITU grid as we know it and migrating to wider super-channels, which can theoretically support up to 24 Tb/s of capacity per fiber using 16QAM modulation. By choosing among modulation formats (BPSK, QPSK, M-ary QAM), it is possible to provision the modulation format on a lambda-by-lambda basis, allowing operators to optimize their networks in the future for reach versus total capacity.
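The figures above can be checked with some back-of-the-envelope arithmetic. The sketch below assumes roughly 4.8 THz of usable C-band spectrum and a net spectral efficiency of about 5 b/s/Hz for 16QAM super-channels; both values are my own working assumptions, not numbers from the paper:

```python
# Back-of-the-envelope check of the capacity figures in the text.
# Assumptions (mine): ~4.8 THz of usable C-band spectrum, and ~5 b/s/Hz
# net spectral efficiency for gridless 16QAM super-channels.

C_BAND_GHZ = 4800  # assumed usable C-band spectrum, in GHz

# Fixed 50 GHz grid: the 8 Tb/s figure implies 80 channels x 100 Gb/s
# (80 x 50 GHz = 4 THz of the band actually carrying channels).
fixed_capacity_tbps = 80 * 100 / 1000          # 8.0 Tb/s

# Guard-band waste on the fixed grid: roughly 37.5 GHz of each 50 GHz
# slot is usable signal, so ~12.5 GHz per slot is guard band.
guard_waste = (50 - 37.5) / 50                 # 0.25 -> "up to 25%"

# Gridless super-channels filling the whole band at ~5 b/s/Hz
flex_capacity_tbps = C_BAND_GHZ * 5 / 1000     # 24.0 Tb/s
```

Under these assumptions the arithmetic lands on the same 8 Tb/s, 25%, and 24 Tb/s figures quoted in the text.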
With super-channels, multiple optical carriers are digitally combined to create an aggregate channel of a higher data rate. However, managing super-channels is a challenge, because a super-channel occupies a varying amount of spectrum depending on its modulation scheme (BPSK, QPSK, etc.). This requires flexible-grid ROADMs, which can switch any amount of optical spectrum in increments of 12.5 GHz.
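The 12.5 GHz switching granularity can be sketched as a small sizing calculation: how many 12.5 GHz slices a flexible-grid ROADM must allocate for a given super-channel. The carrier count and carrier spacing below are illustrative assumptions, not values from the paper:

```python
import math

SLICE_GHZ = 12.5  # flexible-grid switching granularity mentioned in the text

def slices_needed(n_carriers, carrier_ghz):
    """Smallest number of 12.5 GHz slices that covers a super-channel.

    n_carriers  -- number of optical carriers in the super-channel
    carrier_ghz -- spectrum occupied by each carrier, in GHz (assumed value)
    """
    width_ghz = n_carriers * carrier_ghz
    return math.ceil(width_ghz / SLICE_GHZ)

# Example (assumed parameters): a 10-carrier super-channel with carriers
# on a 37.5 GHz spacing occupies 375 GHz, i.e. 30 slices.
slices = slices_needed(10, 37.5)
```

A fixed-grid ROADM, by contrast, could only switch such a channel in rigid 50 GHz units, which is exactly the guard-band waste the flexible grid avoids.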
Traditionally, optical networks have used ROADMs to switch wavelengths optically. Since most end-user client services remain at 10 Gb/s, muxponders are used to aggregate these services in a point-to-point fashion onto wavelengths. But with super-channels, this muxponder-based architecture can become highly inefficient if it lacks the capability to groom services into super-channels, leading to low utilization of deployed bandwidth.