
NVMe and its Impact on Enterprise Storage

AHEAD and NVMe Partners

As part of our Cloud Delivery Framework at AHEAD, we commonly assist our customers with the decision-making, integration, and deployment of enterprise storage infrastructure. Storage plays an integral part in the delivery of enterprise services in most businesses today, and AHEAD stays abreast of the current and upcoming technologies shaping that ecosystem. With partners like Pure Storage, which uses NVMe on the backend of its FlashArray//X system, we continue to enable and assist our customers with the newest industry capabilities.

NVMe Overview

NVMe (Non-Volatile Memory Express) is an optimized, high-performance host controller interface designed for systems that use PCI Express-based solid-state drives (SSDs). The protocol was built from the ground up for these devices, rather than adapted from the SCSI protocol that legacy SATA and SAS HDDs have traditionally used. The result is higher IOPS, lower latency, and more parallel data access across this new breed of storage devices.
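To make that "new breed of device" concrete: on Linux, NVMe drives sit outside the SCSI stack under their own controller class. Here is a minimal sketch of how you might enumerate them, assuming the typical sysfs layout (the path and helper name are ours, chosen for illustration; on machines without NVMe hardware the list is simply empty):

```python
from pathlib import Path

def list_nvme_controllers(sysfs_root: str = "/sys/class/nvme") -> list[str]:
    """Return the NVMe controller names (nvme0, nvme1, ...) that Linux exposes."""
    root = Path(sysfs_root)
    if not root.exists():
        return []  # no NVMe driver loaded, or no NVMe devices present
    return sorted(p.name for p in root.iterdir())

for name in list_nvme_controllers() or ["(no NVMe controllers found)"]:
    print(name)
```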

What changed for us to suddenly gain new capabilities in an already fast-moving storage industry, especially around SSDs?

First, PCIe itself is the enabler of this technology. Depending on the version, PCIe can deliver up to ~1 GB/s of throughput per lane, and some PCIe slots carry up to 16 lanes!
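A quick back-of-the-envelope sketch of slot bandwidth using those figures (the ~1 GB/s per-lane number is the rough value from the text, roughly PCIe 3.0 after encoding overhead; the function name is ours):

```python
PER_LANE_GBPS = 1.0  # ~1 GB/s per lane, per direction (rough figure from the text)

def pcie_bandwidth_gbps(lanes: int) -> float:
    """Approximate one-direction throughput of a PCIe slot."""
    return lanes * PER_LANE_GBPS

for lanes in (1, 4, 8, 16):
    print(f"x{lanes} slot: ~{pcie_bandwidth_gbps(lanes):.0f} GB/s")
# A x16 slot works out to roughly 16 GB/s in each direction.
```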

This tremendous bandwidth is nice on its own, but keep in mind that we also gain far more potential for parallel I/O requests, driving the speed of these devices even further. NVMe supports up to 64K commands per queue across up to 64K queues, in sharp contrast to SAS devices, which support only up to 256 commands in a single queue. I/O commands and their completions are handled on the same processor core, taking advantage of multi-core processors and keeping the full I/O stack parallel.
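The gap in those queue-depth numbers is worth spelling out. A small sketch of the arithmetic, using only the figures quoted above:

```python
# Queue-depth arithmetic from the NVMe spec figures quoted in the text.
NVME_QUEUES = 64 * 1024           # up to 64K I/O queues
NVME_CMDS_PER_QUEUE = 64 * 1024   # up to 64K commands per queue
SAS_CMDS = 256                    # SAS: a single queue of 256 commands

nvme_outstanding = NVME_QUEUES * NVME_CMDS_PER_QUEUE
print(f"NVMe: {nvme_outstanding:,} outstanding commands")  # 4,294,967,296
print(f"SAS:  {SAS_CMDS:,} outstanding commands")
print(f"Ratio: {nvme_outstanding // SAS_CMDS:,}x")         # 16,777,216x
```

The theoretical ceiling is over four billion outstanding commands, which is why multi-core hosts can keep so many NVMe devices saturated in parallel.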

As you can see in the graphs below, NVMe spends less total time in the software stack per operation, both as a percentage and in absolute terms, than traditional HDDs and SSDs can achieve. It also provides significantly more IOPS and bandwidth than either.


NVMe has a streamlined, simple command set that needs fewer than half the CPU instructions (13, to be exact) to process an I/O than either SAS or SATA, resulting in higher IOPS and lower latency. NVMe driver support is already available for nearly every popular operating system. The biggest limitation to date is that NVMe storage has to be local to the server using it; it is not yet used in the traditional shared storage array model.

NVMe in the Enterprise

Unlike the Fibre Channel and SAS protocols, NVMe gained its early popularity mostly in the consumer market, meeting PCs' needs for high-speed storage in small form factors. And while PCIe has been used in that market for graphics cards and other add-in components (like high-speed NICs and HBAs) for nearly fifteen years, NVMe devices have only recently seen widespread use.

Because NVMe requires PCIe (or an M.2 interface, which on many motherboards also carries PCIe lanes), the industry was initially limited to keeping these high-speed devices local to a single server. For Software-Defined Storage (SDS) products like Dell EMC's ScaleIO, this was just fine: the logical cluster is defined in software and can serve up NVMe storage like any other device.

As this iteration of SDS is still in its early adoption phase, where and how do we fit this local, high-speed storage into our current enterprises?

Fibre Channel (FC) is a well-proven transport protocol, mature and resilient enough to meet the highest-end storage needs in any environment. For most of that history, SCSI has been the predominant storage protocol carried over Fibre Channel networks.

What is stopping us from utilizing an already established technology to simply transport another storage protocol?

This is exactly what the NVMe over Fabrics standard attempts to do (alongside support for other transport protocols). NVMe over Fabrics aims to take advantage of current SAN investments and future fabric technologies. Most enterprise SAN environments are Fibre Channel based, and NVMe over Fabrics is designed to ride on that transport. Existing SAN investments can therefore be leveraged to reduce introductory costs while still using familiar skills, tools, and trusted technology.

What impact will this have in regards to latency and overall performance?

A local NVMe device responds in microseconds, so what impact should we expect in a fabric-based environment? Currently, NVMe over Fabrics is expected to add only about 10 microseconds of latency, which could give environments consistently sub-100-microsecond response times!
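A sketch of that latency budget: the 10 µs fabric overhead is the figure cited above, while the local response time below is purely an illustrative assumption, not a measured value.

```python
LOCAL_NVME_US = 80.0       # assumed local NVMe response time (illustrative only)
FABRIC_OVERHEAD_US = 10.0  # added latency cited for NVMe over Fabrics

total_us = LOCAL_NVME_US + FABRIC_OVERHEAD_US
print(f"Estimated fabric response time: {total_us:.0f} us")    # 90 us
print("sub-100 us" if total_us < 100.0 else "over 100 us")     # sub-100 us
```

As long as the local device stays under ~90 µs, the fabric round trip remains inside the sub-100-microsecond envelope.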

Pure Storage currently uses NVMe in its FlashArray//X storage array, but only for back-end connectivity to its NVMe SSDs. This means that today SCSI still comes into the front end of the array and is translated into back-end NVMe requests. Pure Storage has native NVMe over Fabrics access on its roadmap, adding to its existing story of free software upgrades that deliver new functionality to customers. Add its Evergreen program, and Pure Storage has provided a future-proof, up-to-date storage solution.


NVMe as a protocol has been maturing since 2011. NVMe over Fabrics, however, has only been available as a standard since late 2016 and has not been widely adopted, largely because few storage systems offer a native NVMe over Fabrics interface on the front end (although Brocade and Cisco already have the capability to carry the protocol over their FC transport layers).

So what does this mean for future adoption?

As with many other technologies, use cases are evolving that will make these devices necessary and will rapidly create new growth opportunities.

Data warehouses, Hadoop with local storage, highly transactional IoT environments, and other analytics workloads can all benefit from this infrastructure enhancement, especially workloads dominated by parallel requests, where jobs can finish in minutes instead of hours. Once storage providers like Pure Storage deliver NVMe over Fabrics front-end access, we will see early adopters begin to use this technology, especially when it becomes available in a simple-to-deploy converged platform like FlashStack.

We at AHEAD are excited to see that not only Pure Storage, but other vendors, such as Dell EMC, are investing resources in bringing this next generation of storage technology to fruition.

AHEAD expects this technology, like flash, virtualization, and converged infrastructure before it, to become standard across our enterprise customers' environments. While the technology matures toward enterprise readiness, AHEAD is an early adopter of NVMe capabilities, and we constantly work with forward-looking technologies such as this to assist our customers as best we can.
