
Slightly blurry picture of our 5712s in our lab

Edgecore 5712 Switch

By now most people in IT, especially those involved in networking, have heard the term “whitebox,” mostly in relation to switches. Whitebox switches are essentially commodity network devices that come without an operating system. For various reasons it’s now possible to purchase a fast, relatively low-cost switch based on merchant silicon [1] and run any of several (usually Linux-based) network operating systems on it. This is quite powerful, both in terms of commoditization and in the access it provides to a large open source ecosystem (again, Linux).

My use case

My use case is fairly small and simple at this point. Basically, our two 5712s will be used to provide the underlying physical network for an OpenStack lab. While I hope to get into larger network designs, our basic plan is to run Cumulus Linux on the switches and use its CLAG feature, i.e. multi-chassis link aggregation (MLAG), to provide network high availability to our hosts. Nothing too fancy, and in fact keeping it simple is one of the “features” I like about whitebox switching. Sometimes less is more. Further, because Cumulus Linux is Linux, it lends itself well to automation. I’ll be able to manage all our OpenStack lab network infrastructure using Ansible, which is the same tool I use to manage OpenStack itself.
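As a minimal sketch of what that CLAG setup looks like (assuming Cumulus Linux’s ifupdown2 syntax; the port numbers, addresses, and MAC below are made up), the MLAG peer link and one dual-connected host bond come down to a few stanzas in /etc/network/interfaces:

    # /etc/network/interfaces (Cumulus Linux) -- MLAG peer link plus one host bond
    # Ports, addresses, and the clagd-sys-mac are examples only.

    auto peerlink
    iface peerlink
        bond-slaves swp49 swp50          # inter-switch peer link

    auto peerlink.4094
    iface peerlink.4094
        address 169.254.1.1/30           # clagd peering address
        clagd-enable yes
        clagd-peer-ip 169.254.1.2        # the other 5712
        clagd-sys-mac 44:38:39:ff:00:01  # must match on both peers

    auto bond1
    iface bond1
        bond-slaves swp1                 # link down to an OpenStack host
        clag-id 1                        # same clag-id on both switches

The second 5712 gets the mirror-image config (addresses swapped, same clagd-sys-mac and clag-id), and because these are just text files on a Linux system, they template cleanly from Ansible.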

Specs

Edgecore 5712

The unboxening

The Edgecore 5712 is a 48-port 10GbE switch with six 40GbE uplink ports.

The CPU is an Intel Rangeley, an Atom-class chip, and the switch comes with 8GB of memory.

Trident II

From my layperson’s perspective, the important part is the Broadcom Trident II chip. This is the “merchant silicon” that serves as the engine of the device, and it is a very powerful engine. A large number of switches use this silicon, so you will see the features it provides in many whitebox switches.


The same chip is used in Edgecore’s 32-port 40GbE switch, which I would have preferred to purchase because you can split each 40GbE port into four 10GbE ports, and thus get 128 10GbE ports from a single switch! Most organizations could get away with only two switches in their data center. You could run a lot of virtual machines with 2x128 10GbE ports. But I digress…
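For what it’s worth, the breakout itself looks simple on Cumulus Linux: as I understand it, you declare the split in /etc/cumulus/ports.conf and restart switchd. A minimal sketch, with illustrative port numbers:

    # /etc/cumulus/ports.conf -- split some 40G ports into 4x10G
    # Port numbers are examples; switchd must be restarted for this to apply.
    1=4x10G    # port 1 becomes swp1s0, swp1s1, swp1s2, swp1s3
    2=4x10G
    3=40G      # left as a single 40G port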

The VXLAN feature will also be useful, as I have deployed software-defined networking systems, such as Midokura’s MidoNet, that rely heavily on VXLAN overlays.
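As a taste of what that looks like from the OS side, here is a minimal sketch of a static VXLAN interface in Cumulus Linux’s /etc/network/interfaces; the VNI, tunnel address, and port are all made up:

    # /etc/network/interfaces -- a VXLAN interface bridged to a host port
    auto vxlan100
    iface vxlan100
        vxlan-id 100                    # VXLAN network identifier (VNI)
        vxlan-local-tunnelip 10.0.0.11  # this switch's VTEP address

    auto br100
    iface br100
        bridge-ports vxlan100 swp2      # stitch the overlay to a physical port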

Available network operating systems

Fortunately, the 5712 is one of the best-supported whitebox switches. That said, there are only a handful of network operating systems available right now. These are the ones I know of:

- Cumulus Linux
- OpenSwitch
- Pica8 PicOS

Right now I have only installed OpenSwitch and Cumulus Linux. OpenSwitch is open source and, I believe, mostly backed by HP, but as far as I know it’s not quite ready for production. I’ve yet to try out Pica8’s PicOS, so I can’t really comment on it. Both Cumulus and OpenSwitch installed perfectly on the 5712. [2] As of right now, I believe the only switch OpenSwitch supports is the 5712, whereas Cumulus supports several makes and models.
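In both cases installation goes through ONIE, the install environment that ships on the switch: you boot into ONIE’s install mode and point it at a NOS installer image. A rough sketch, with a hypothetical web server URL and installer filename (ONIE can also discover an installer automatically over DHCP):

    # From the switch console, at the ONIE shell in install mode.
    ONIE:/ # onie-nos-install http://172.16.0.10/cumulus-linux-installer-amd64.bin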

Purchasing

We fluked out: one of the resellers we deal with had two in stock. They didn’t even know what they were. It was weird. But we were like “hey, can we get these two switches off you?” and after a while they sold them to us. We also ordered a 1GbE Edgecore switch that was not in stock, and it still hasn’t come in yet (this was several weeks ago). Being in Canada doesn’t help, as everything has to be imported from the US.

This is part of why IaaS providers like AWS/Google/Azure are going to win a majority share: it’s hard to even order physical equipment. Don’t even get me started on the process of buying licenses for Cumulus. The money wasn’t a problem, as we are getting good value; rather, it’s the process…but that’s another blog post. A big part of open source software use is convenience: it’s only a download away. Certainly this convenience comes with its own set of issues, but, as with real estate and location, convenience is everything. I am on month three of waiting for hardware for my OpenStack/OPNFV lab, and still have another month or two to go.

While I try not to focus on how much hardware costs in terms of commoditization, as I feel that companies tend to focus on that too much, I can say that these switches are very cost-effective, even including a Cumulus Linux license. The 40GbE versions are an even better value.

Conclusion

In my opinion, merchant silicon has paved the way for disaggregating switch hardware from the operating system, which in turn has enabled commoditization. Whitebox switches plus an OS like Cumulus make for a powerful network platform. The trickle-down of technology from the “hyperscalers” has given people like me, and the organizations we work for, access to advanced network systems. This shift is very similar to what happened, or is still happening, with commodity x86 servers and Linux/*BSD. That said, we are still in the early days, but I expect this to unfold much like commodity x86 did. [3]

Further, I believe that the somewhat limited feature set of these network operating systems is actually a feature in itself: when buying network gear from the larger vendors, you often pay for features you don’t need. That said, if you know exactly what you need and the whitebox model doesn’t provide it, you’re still free to purchase from whatever vendor you like, and there will be many situations where that is the case. But maybe I don’t need a huge network infrastructure; maybe all I need is two 32-port 40GbE switches with MLAG to support 6,000 virtual machines, or a relatively large-but-simple ECMP-based Clos design. Choice is good.


1: Merchant silicon is a fancy phrase for “off-the-shelf chips,” i.e. the switch vendor doesn’t have to spend money developing and manufacturing its own network device chips. Cisco, for example, spends a considerable amount of money developing its own switching silicon.
2: Actually, I did have one problem. I had OpenSwitch installed on one of the two switches, then installed Cumulus Linux over top. Something happened during the install that broke Cumulus, though the install still completed and the switch booted. For some reason, the ability to boot back into ONIE from the Cumulus operating system was broken, so I had to use the console to boot into ONIE in order to continue testing my install automation setup. It was fairly unusual…as though the install had crashed partway through, just enough to boot the OS but break some functionality. I could not replicate the issue.
3: A recent Human Infrastructure Magazine entry briefly discussed merchant silicon, ECMP designs, etc.