FS DACs QSFP28 To 4x SFP28

FS QSFP28 to 4x SFP28 DAC

As a short weekend piece, we wanted to answer a question we sometimes get when we review 32x 100GbE QSFP28 switches: why do we use 100GbE port switches even for our 25GbE network in the lab? The reason is quite simple: most 100GbE switches allow the use of optics, DACs, and even breakout DACs, which lets us serve both 25GbE and 100GbE devices from 100GbE ports.

Why We Use 100GbE Switches and QSFP28 to 4x SFP28 DACs for 25GbE

A few months ago, we did our own piece on what a direct attach copper cable (DAC) is. One of the things we mentioned is that using DACs instead of optics is often less expensive and consumes less power. The trade-off is reach. Many switches are designed with this in mind. On the FS S5860-20SQ switch we reviewed nearly a year ago, for example, the QSFP+ ports are labeled “40G Breakout” ports, specifically to note that those ports can operate in 4x 10GbE mode. The Q in QSFP+ (the 10/40Gbps era) and QSFP28 (the 25/100Gbps era) stands for quad, or four.
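If it helps to see the quad-lane idea spelled out, here is a quick Python sketch of our own that models a 32-port QSFP28 switch where each port either runs as a single 100GbE port or is broken out into four 25GbE sub-ports. The swp1 / swp1s0 style port names are just one common naming convention we are assuming for illustration, not any particular switch OS.

# Illustrative sketch: how QSFP28 "quad" ports map to 100GbE or 4x 25GbE.
# Port names like "swp1" / "swp1s0" follow a common Linux-switch convention
# and are assumptions for this example, not any specific vendor's config.

PORTS = 32            # a typical 32x QSFP28 1U switch
LANE_GBPS = 25        # one SFP28 lane of usable bandwidth
LANES_PER_QSFP28 = 4  # the "Q" in QSFP28

def logical_ports(breakout_ports):
    """Return {logical_port_name: speed_gbps} for a given set of broken-out ports."""
    ports = {}
    for p in range(1, PORTS + 1):
        if p in breakout_ports:
            # 4x 25GbE sub-ports, e.g. swp1s0..swp1s3
            for lane in range(LANES_PER_QSFP28):
                ports[f"swp{p}s{lane}"] = LANE_GBPS
        else:
            # native 100GbE (all four lanes bonded)
            ports[f"swp{p}"] = LANE_GBPS * LANES_PER_QSFP28
    return ports

# Break out the first 8 ports for 25GbE nodes, keep the rest at 100GbE.
table = logical_ports(breakout_ports=set(range(1, 9)))
n25 = sum(1 for v in table.values() if v == 25)
n100 = sum(1 for v in table.values() if v == 100)
print(f"{n25}x 25GbE + {n100}x 100GbE logical ports, {sum(table.values())} Gbps total")
# Prints: 32x 25GbE + 24x 100GbE logical ports, 3200 Gbps total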

DAC and optics on the FS S5860-20SQ

We had some cables in the lab from recent FS.com pieces and so we figured we’d just show what some of these look like. Here is a QSFP28 to QSFP28 cable that can handle 100GbE between two 100GbE QSFP28 ports.

FS DAC QSFP28 to QSFP28

These DACs are somewhat the copper cable equivalent of MPO/MTP cables. When we made the recent FS QSFP28-100G-SR4 v. QSFP28-100G-IR4 Differences piece, we noted that the 100G-SR4 optic uses 8 of the 12 fibers in the MPO/MTP cables. Those 8 fibers carry four lanes in each direction. This contrasts with the 100G-IR4, which puts four channels on each of two fibers using CWDM optics. That 100G-SR4 model, with its four distinct lanes, is analogous to why we can break out to 25GbE links using QSFP28 DACs.
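As a rough way to keep the two layouts straight, here is a small Python summary of our own, based on the descriptions above rather than any datasheet: SR4 puts one 25G lane on each of eight fibers, while a CWDM-style optic like the IR4 muxes four wavelengths onto each of two fibers.

# Rough illustration of how four 25G lanes land on fibers (not a datasheet).

sr4 = {
    "fibers_in_connector": 12,  # MPO/MTP-12
    "fibers_used": 8,           # 4 transmit + 4 receive
    "lanes_per_direction": 4,   # one ~25G lane per fiber
    "wavelengths_per_fiber": 1,
}

cwdm_style = {                  # e.g. the IR4 / CWDM4-class optics
    "fibers_used": 2,           # duplex pair
    "lanes_per_direction": 4,   # four wavelengths muxed onto each fiber
    "wavelengths_per_fiber": 4,
}

for name, optic in (("100G-SR4", sr4), ("100G CWDM-style", cwdm_style)):
    per_direction = optic["lanes_per_direction"] * 25
    print(f"{name}: {optic['fibers_used']} fibers, "
          f"{optic['lanes_per_direction']} lanes x 25G = {per_direction}G each way")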

FS QSFP28 DAC with MTP-12 fiber on top

As the name suggests, the QSFP28 (remember, “Q” stands for quad) combines four SFP28 lanes. The 28 refers to the maximum per-lane signaling rate of roughly 28Gbps, which manifests as 25GbE of usable bandwidth. As such, you can think of this as 4 SFP28 connectors combined into one QSFP28 port, or 4x 25GbE fitting into 1x 100GbE. This is no coincidence, and we use these cables for many of the 25GbE (and 10GbE backward compatible) devices in the lab.
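For those who want the arithmetic, here is a quick check using the standard 25GbE lane numbers (nothing FS-specific): each lane signals at 25.78125Gbps, 64b/66b encoding brings that down to 25Gbps of data, and four lanes make 100GbE.

# Quick lane-rate arithmetic for SFP28/QSFP28 (standard 25GbE numbers).

SIGNALING_GBD = 25.78125   # per-lane line rate for 25GbE
ENCODING = 64 / 66         # 64b/66b encoding overhead
LANES = 4                  # the "quad" in QSFP28

per_lane_data = SIGNALING_GBD * ENCODING   # 25.0 Gbps usable per lane
total_data = per_lane_data * LANES         # 100.0 Gbps, i.e. 100GbE

print(f"Per lane: {per_lane_data:.1f} Gbps, total: {total_data:.1f} Gbps")
# The "28" in SFP28/QSFP28 refers to the roughly 28Gbps maximum lane rate
# the electrical interface is specified for, which is why it shows up as 25GbE.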

FS QSFP28 to 4x SFP28 DAC

The reason we usually have SFP+/SFP28 DACs, QSFP28 breakout DACs, and QSFP28 DACs on hand is that they tend to cost a lot less than fiber. Also, DACs tend to be a little more forgiving with vendor coding because vendors know both ends of a DAC are fixed, so they cannot be mixed and matched the way optics can. Between lower power, lower cost, and the ease of moving them between devices, we tend to use a lot of DACs in the lab.

FS SFP28 DAC, 4x SFP28 to QSFP28 breakout DAC, and QSFP28 DAC

The other key benefit is lower power consumption, which helps a bit with cooling in the racks. The downside, of course, is that we tend to only be able to use DACs inside racks due to their shorter reach. With the 100Gbps QSFP28 generation the DACs have gotten much thicker, and they are on track to get even thicker as we move to the 400Gbps generation and beyond.

Final words

There are many benefits to using optics for networking, and we still use optical networking, both active optical cables (AOCs) and traditional pluggable optics, for connections between racks. For our 100GbE generation, we have mainly been using 32x 100GbE switches in the lab since around 2018, with in-rack DACs to serve 100GbE or 25GbE devices. We have some weird 50GbE devices, but most are 25GbE or 100GbE these days.

One of the biggest issues for the 400GbE and 800GbE generations is that noise and signal processing on DACs become a bigger challenge, which means thicker cables. The advantage, of course, is the reduction in power and cooling needs. That is why we expect to use DACs until we can no longer do so. While we have already looked at a 400GbE switch, we are excited about the prospect of getting a 32x 400GbE switch in the lab and breaking out 100GbE links for each node.
