
Lightbits offers NVMe-over-TCP at 5x less cost than NVMe-over-FC et al


Lightbits builds NVMe-over-TCP SAN clusters for Linux servers, using Intel cards that accelerate network processing to provide millions of IOPS, with storage aimed at public and private clouds

Published: 15 Aug 2022 15:15

NVMe-over-TCP at 5x cheaper than equivalent NVMe-over-Ethernet (RoCE) solutions: that's the promise of Lightbits LightOS, which enables customers to create flash-based SAN storage clusters on commodity hardware using Intel network cards.

During a press meeting attended by Computer Weekly's sister publication in France, LeMagIT, Lightbits demoed LightOS configured on a three-node cluster using Intel Ethernet 100Gbps E810-CQDA2 cards, showing performance equal to NVMe-over-Fibre Channel or RoCE/Ethernet, both much more costly solutions.

NVMe-over-TCP works on a standard Ethernet network with the usual switches and cards in servers. NVMe-over-Fibre Channel and NVMe-over-RoCE, meanwhile, need expensive hardware, but come with the guarantee of rapid transfer rates. Their performance comes from the absence of the TCP protocol, which can be a drag on transfer rates because processing its packets takes time and so slows access. The advantage of the Intel Ethernet cards is that they offload part of this protocol processing to mitigate that effect.
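To illustrate why "standard Ethernet" matters here, the sketch below shows how a Linux host typically discovers and attaches an NVMe-over-TCP target with the standard nvme-cli tool, wrapped in Python. The address, port and subsystem NQN are placeholders for illustration, not details from the Lightbits demo.

```python
# Minimal sketch: discover and connect to an NVMe/TCP target from a Linux host
# using the standard nvme-cli utility. Address, port and NQN are placeholders.
import subprocess

TARGET_IP = "192.0.2.10"          # example address (documentation range)
TARGET_PORT = "4420"              # default NVMe/TCP port
SUBSYSTEM_NQN = "nqn.2022-08.example:demo-subsystem"  # hypothetical NQN

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# List the NVMe subsystems the target exposes over plain TCP/Ethernet.
print(run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT]))

# Attach one subsystem; it then appears as a local /dev/nvmeXnY block device.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT, "-n", SUBSYSTEM_NQN])
```

No special host adapter is needed: the same commands work over any ordinary Ethernet NIC, which is the cost argument being made.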

"Our promise is that you can have a high-performance SAN on low-cost hardware," said Kam Eshghi, Lightbits strategy chief. "We don't sell proprietary appliances that require proprietary hardware around them. You get something that you install on your own available servers and that works on your own network."

Cheaper storage for private clouds

The Lightbits demo showed 24 Linux servers, each equipped with a dual-port 25Gbps Ethernet card. Each server accessed 10 shared volumes on the cluster. Observed performance at the storage cluster reached 14 million IOPS and 53GBps in reads, 6 million IOPS and 23GBps in writes, and 8.4 million IOPS and 32GBps in a mixed workload.
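For context, dividing the quoted throughput by the quoted IOPS gives the implied average I/O size, which lands near 4KiB in all three cases, i.e. typical small-block random I/O. The short calculation below is just that arithmetic, using only the figures reported above and taking GBps as 10^9 bytes per second.

```python
# Back-of-the-envelope check: throughput divided by IOPS gives average I/O size.
# Figures are the ones quoted in the article; GBps is taken as 10^9 bytes/s.
workloads = {
    "read":  (53e9, 14.0e6),   # bytes/s, IOPS
    "write": (23e9, 6.0e6),
    "mixed": (32e9, 8.4e6),
}

for name, (bytes_per_s, iops) in workloads.items():
    avg_io_kib = bytes_per_s / iops / 1024
    print(f"{name}: ~{avg_io_kib:.1f} KiB per I/O")
# All three work out to roughly 3.7 KiB per operation.
```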

According to Eshghi, these performance levels are like those of NVMe SSDs installed directly in servers, with longer latency being the only real drawback, but only 200 or 300 microseconds compared with 100 microseconds.

"At this scale, the difference is negligible," said Eshghi. "The key for an application is to have latency under a millisecond."

Besides cheap connectivity, LightOS offers functionality usually found in the products of mainstream storage array makers. These include managing SSDs as a pool of storage with hot-swappable drives, intelligent rebalancing of data to slow wear rates, and on-the-fly replication to avoid loss of data in the event of unplanned downtime.

"Lightbits allows up to 16 nodes to be included in a cluster, with up to 64,000 logical volumes for upstream servers," said Abel Gordon, chief systems architect at Lightbits. "To present our cluster as a SAN to servers, we have a vCenter plug-in, a Cinder driver for OpenStack and a CSI driver for Kubernetes."
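As a rough illustration of what consuming such a cluster through a CSI driver looks like on the Kubernetes side, the sketch below registers a StorageClass with the official Kubernetes Python client. The provisioner name and parameters are hypothetical placeholders, not Lightbits' actual driver identifiers.

```python
# Minimal sketch: register a StorageClass backed by a CSI driver, using the
# official Kubernetes Python client. Provisioner name and parameters are
# hypothetical placeholders, not the actual Lightbits driver identifiers.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig

storage_class = client.V1StorageClass(
    api_version="storage.k8s.io/v1",
    kind="StorageClass",
    metadata=client.V1ObjectMeta(name="nvme-tcp-example"),
    provisioner="csi.example-nvme-tcp.io",   # placeholder CSI driver name
    parameters={"replica-count": "2"},       # placeholder driver parameter
    reclaim_policy="Delete",
    volume_binding_mode="Immediate",
)

client.StorageV1Api().create_storage_class(body=storage_class)
# PersistentVolumeClaims referencing this class would then be provisioned as
# logical volumes on the NVMe/TCP cluster by whatever CSI driver is installed.
```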

"We don't support Windows servers yet," said Gordon. "Our goal is rather to be an alternative solution for public and private cloud operators who commercialise virtual machines or containers."

To that end, LightOS provides an admin console that can allot different performance and capacity limits to different users, or to different enterprise customers in a public cloud scenario. There is also monitoring based on Prometheus, with Grafana visualisation.
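As an illustration of what Prometheus-based monitoring of such a cluster might involve, the sketch below pulls a per-volume throughput figure over Prometheus's standard HTTP query API. The server URL and metric name are hypothetical examples, not taken from LightOS documentation.

```python
# Minimal sketch: query a per-volume throughput metric from a Prometheus server
# via its standard HTTP API. The URL and metric name are hypothetical examples.
import requests

PROMETHEUS_URL = "http://prometheus.example.local:9090"  # placeholder server
QUERY = 'sum(rate(example_volume_read_bytes_total[5m])) by (volume)'  # placeholder metric

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    volume = series["metric"].get("volume", "unknown")
    bytes_per_sec = float(series["value"][1])
    print(f"{volume}: {bytes_per_sec / 1e6:.1f} MB/s read")
```

Grafana dashboards would typically sit on top of the same queries.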

Close working with Intel

In another demo, an identical hardware cluster was shown, but running open source Ceph storage, which was not optimised for the Intel network cards.

In the demo, 12 Linux servers running eight containers in Kubernetes simultaneously accessed the storage cluster. With a mix of reads and writes, the Ceph deployment achieved a rate of around 4GBps, compared to around 20GBps on the Lightbits version with TLC (higher-performance flash) and 15GBps with capacity-heavy QLC drives. Ceph is Red Hat's recommended storage for building private clouds.

Lightbits' close relationship with Intel allows it to optimise LightOS for the latest versions of Intel products, said Gary McCulley of the Intel datacentre product group. In fact, if you install the system on servers of the latest generation, you automatically get better performance than with recent storage arrays that run on processors and chips of the previous generation.

Intel is promoting its latest components among integrators using turnkey server concepts. One of these is a 1U server with 10 hot-swappable NVMe SSDs, two latest-generation Xeon processors and one of its new 800 series Ethernet cards. To test interest in the design in the context of storage workloads, Intel chose to run it with LightOS.

Intel's 800 series Ethernet card doesn't completely integrate on-the-fly decoding of network protocols, unlike the SmartNIC 500X, which is FPGA-based, or its future Mount Evans network cards, which use DPU-type acceleration (which Intel calls an IPU).

On the 800 series, the controller only accelerates the sorting of packets to avoid bottlenecks between each server's accesses. Intel calls this pre-IPU processing ADQ (application device queues).

However, McCulley promised that integration between LightOS and IPU-equipped cards is in the offing. It will be more of a proof-of-concept than a fully developed product, though. Intel appears to want to commercialise its IPU-based network cards as NVMe-over-RoCE cards instead, and so for more costly solutions than those offered by Lightbits.
