Data centers entering the 40G age

Ovum

Last week Blade Network Technologies announced the industry’s first 40G Ethernet top-of-rack switch. We expect a slew of similar announcements to support servers moving to 10Gbps port speeds. As 10GbE becomes the common currency in the data center, 40GbE and 100GbE will be needed to support aggregated traffic from 10GbE server connections.

 
You think 40Gbps is old hat after all the 100Gbps announcements this year? That would be a mistake akin to thinking your SUV comes with the latest Formula One gear. Those 40G and 100G deployments are for the core network, on the biggest routers and long-distance transport equipment.
 
This 40G is 40G Ethernet, for the enterprise, and it sits on the smallest and most common box in a data center: the 1RU so-called “pizza box” (one rack unit, or 1.75 inches, is the standard height increment for rack-mounted equipment).
 
Blade’s announced product, the RackSwitch G8264, fits into the mainstream data center design: racks holding multiple pizza-box servers, topped with a couple of top-of-rack switches. (Don’t confuse the company name with the product: this is not a blade switch.)
 
The servers connect to the access switches, which then uplink to distribution and core switches higher in the classic three-tier switch hierarchy. Earlier 10GbE deployments served these uplinks and the upper tiers of both the data center and the desktop-facing corporate LAN.
 
Penetration of 10GbE down to the high-volume access-switch-to-server downlink is responsible for a notable acceleration in the 10GbE market. Until 2009, quarterly shipments of 10GbE switch ports crept up linearly, and annual volumes remained under 1 million ports.
 
 
Shipments began to rise at almost triple the original rate at the start of 2009, and the rate nearly tripled again a year later. Barriers to 10GbE will be lowered another notch when Intel introduces 1/10GbE LAN on motherboard, projected for 2011. In response, 40Gbps is starting to come in on the aggregated uplinks.
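
To see why the uplinks are the pinch point, here is a rough back-of-the-envelope sketch in Python. The figures are illustrative assumptions of our own, not vendor specifications: a hypothetical rack of forty 10GbE-attached servers is heavily oversubscribed on four 10GbE uplinks, but comes within a workable ratio on four 40GbE uplinks.

# Rough oversubscription arithmetic for a top-of-rack switch.
# All figures are illustrative assumptions, not vendor specifications.

def oversubscription(servers, server_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing (downlink) bandwidth to uplink bandwidth."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

# A hypothetical rack of 40 servers, each attached at 10GbE.
print(oversubscription(40, 10, 4, 10))   # 10.0 -- four 10GbE uplinks: badly oversubscribed
print(oversubscription(40, 10, 4, 40))   # 2.5  -- four 40GbE uplinks: a workable ratio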
 
“It’s not a bug, it’s a feature,” the old joke runs. Technologists were forced to use parallelism to achieve 40Gbps physical interfaces, even the optical ones. The good news is that these 4×10Gbps slots, which take QSFP form-factor copper or optical modules (“Q” stands for “quad”), can be used with breakout cables that connect to four separate 10G ports at the other end.
 
Blade’s product has 48 ports at 10Gbps, plus four ports that can each be run either as one 40Gbps port or as quad 10Gbps, for a net port count of 64 at 10Gbps each. This flexibility in data rate should ease migration from 10Gbps to 40Gbps infrastructure (a sketch of the port arithmetic follows below). However, the QSFP modules take ribbon cables of a dozen fibers bonded together instead of the standard single-fiber jumpers or Cat 6 cables.
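
The port arithmetic behind that flexibility can be sketched in a few lines of Python. The 48-port and four-slot counts come from Blade’s announcement; how many slots are broken out is a hypothetical choice for illustration.

# Port-count arithmetic for a 48x10GbE + 4xQSFP top-of-rack switch.
# 'breakout' = number of QSFP slots run as 4x10GbE via breakout cables;
# the remaining slots run as native 40GbE.

FIXED_10G_PORTS = 48
QSFP_SLOTS = 4

def port_mix(breakout):
    ten_gig = FIXED_10G_PORTS + 4 * breakout   # each broken-out slot yields four 10GbE ports
    forty_gig = QSFP_SLOTS - breakout
    return ten_gig, forty_gig

print(port_mix(4))  # (64, 0) -- all slots broken out: the 64-port 10GbE configuration
print(port_mix(0))  # (48, 4) -- no breakout: 48x10GbE downlinks plus 4x40GbE uplinks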
 
As for those ribbon cables, there is no history of data centers accepting them in a big way. The only market to use QSFP today is InfiniBand, a quite different application in which the cables are managed parts of a cluster supercomputer. But cabling is far from the only issue to be resolved.
 
It was a long and twisted trail through many transceiver form factors to get to mature 10GbE. 40GbE may seem straightforward, with a dense interface already shipping, but it was originally an interim step between 10GbE and 100GbE, and it remains entangled with 100GbE.
 
100GbE has just started its journey, with a few long-reach interfaces shipping into big routers. Controversy between competing roadmaps has already emerged. Blade’s announcement does not mark the achievement of the next data center speed increment; it marks the opening of a new chapter in that history.