Big guns join the chorus for computational storage

September 13, 2019
by Tim Stammers


Computational storage is a rapidly developing field that promises major gains in overall application performance, especially for databases and emerging data-intensive workloads such as analytics and machine learning. Implemented in a range of ways, computational storage offloads processing work from server CPUs into hardware accelerators, which often process data in situ, where it is stored. A record number of attendees turned out for sessions covering the topic at the annual Flash Memory Summit (FMS) held in Santa Clara, California, which saw presentations from organizations such as Alibaba, Arm, Intel, Marvell, Microchip, Samsung, Western Digital and Xilinx, alongside computational storage startups like Eideticom, NGD Systems and ScaleFlux. A strong theme of the presentations was a call for industry collaboration to accelerate adoption of computational storage, with NVMe taking a central role as the foundation for future standards.
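As a conceptual illustration of the offload idea (hypothetical classes, not any vendor's API), the difference comes down to where a filter runs: on a conventional drive every raw block must cross the host interface before the server CPU can discard most of it, whereas a computational storage device can run the filter in situ and return only the results.

```python
# Conceptual sketch of computational storage offload -- hypothetical classes
# for illustration, not any vendor's API. The point: push the filter down to
# the drive so only matching rows cross the host interface, instead of every
# raw block.

class PlainDrive:
    def __init__(self, rows):
        self.rows = rows

    def read_all(self):
        # Host must pull every row over the bus, then filter in the server CPU.
        return list(self.rows)


class ComputationalStorageDrive(PlainDrive):
    def query(self, predicate):
        # Filtering runs in situ, on the drive's own processor; only the
        # (usually much smaller) result set is transferred to the host.
        return [r for r in self.rows if predicate(r)]


rows = [{"id": i, "temp": i % 50} for i in range(10_000)]

# Conventional path: 10,000 rows moved, then filtered host-side.
host_side = [r for r in PlainDrive(rows).read_all() if r["temp"] > 48]

# Computational storage path: only the matches are moved.
in_situ = ComputationalStorageDrive(rows).query(lambda r: r["temp"] > 48)

assert host_side == in_situ  # same answer, far less data moved
```

The saving is proportional to the selectivity of the workload, which is why analytics-style scans are a recurring example in vendor pitches.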

The 451 Take

The message being delivered collectively by multiple vendors has reinforced 451 Research's confidence that computational storage will become an established architecture. To quote one of the firms at FMS, the goal shared by multiple vendors is to make computational storage as transparent as possible to potential users. As a form of distributed computing, the architecture requires cooperation between multiple infrastructure elements, and the transparency goal will ultimately require standards to grease the wheels of collaboration between suppliers of application software and providers of computational storage devices. Right now, the latter appear very confident that an ecosystem and standards will soon emerge. The planned central role of NVMe is encouraging, because NVMe itself has already set a good example of rapid and successful development of an infrastructure standard.

Wider messages  

At FMS, Intel was among the multiple vendors calling for industry-wide support for the computational storage products that have already been brought to market by a handful of players. As Intel argued, this will help establish an ecosystem for computational storage. Intel also echoed multiple other vendors when it said the ecosystem should be based on NVMe, with the aim of minimizing changes to application software. The Storage Networking Industry Association (SNIA) is already considering the use of NVMe to both advertise and manage services offered by computational storage devices. This is being done by a technical working group (TWG) within SNIA, which was set up to promote interoperability and create interface standards for computational storage. The group was formed only last year, but already numbers over 40 companies and 150 individuals. Alongside the suppliers named in this report, other large companies in the TWG include Dell, IBM, Inspur, Lenovo, Micron, NetApp, Oracle, Toshiba, SK Hynix and VMware.

Intel also said a 'critical' step will be to address computational storage in the context of data objects, allowing access to data via memory semantics regardless of what medium is storing the data. In another session, NGD said computational storage workloads such as compression or error correction may need very little modification to applications, but other workloads will require data schema changes to make the most of the data to be passed from applications to computational storage devices. The latter will require collaboration with application specialists and alignment with the rest of the storage ecosystem, which SNIA's TWG said it will handle.

In another presentation, Marvell touted the benefits of flash drives providing native or inbuilt object access. It described object-style key-value protocols as being bottom-heavy, and therefore needing to be handled closer to the data. 451 notes that SNIA recently launched version 1 of a key-value protocol to be used in what it calls 'object drives.' Marvell also highlighted the benefits of flash drives providing native support for the networked NVMe-oF variant of NVMe, which it said would allow the use of Ethernet instead of PCIe switches within storage systems, reducing costs and boosting performance while also allowing peer-to-peer (P2P) links between drives and computational storage devices.
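The key-value idea can be sketched as a contrast in semantics (toy classes for illustration, not the SNIA key-value API itself): a block device exposes fixed-size sectors and leaves object mapping to host software, while an object drive accepts variable-length values under arbitrary keys, absorbing work that a host-side key-value engine would otherwise do.

```python
# Toy contrast between block semantics and key-value ("object drive")
# semantics. Hypothetical classes for illustration -- not the SNIA KV API.

class BlockDevice:
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.blocks = {}

    def write(self, lba, data):
        # Host software must split objects across fixed-size blocks itself
        # and track which logical block addresses hold which object.
        assert len(data) <= self.block_size
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class KVDevice:
    def __init__(self):
        self.store = {}

    def put(self, key: bytes, value: bytes):
        # The drive tracks variable-length values under arbitrary keys,
        # moving the key-to-location mapping down next to the data.
        self.store[key] = value

    def get(self, key: bytes):
        return self.store.get(key)


kv = KVDevice()
kv.put(b"user:42", b'{"name": "Ada"}')
assert kv.get(b"user:42") == b'{"name": "Ada"}'
```

This is what makes the protocol "bottom-heavy" in Marvell's phrasing: the mapping and lookup work sits naturally below the interface, close to the data.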

Arm argued that the simplest and easiest way to implement computational storage is to deploy flash drives that can undertake processing work by hosting Linux instances within the drives themselves, allowing processing work to be based on existing third-party code that can be updated or modified at will. The company might be expected to promote this approach because its technology is well suited for that role, but this view is supported by 451 and others such as NGD. However, Arm acknowledged that an alternative approach deploying FPGAs suits very high-speed but less-flexible processing in other computational storage devices. Echoing Marvell, Arm also predicted that alongside NVMe, other interfaces employed in computational storage will include NVMe-oF and Ethernet. 451 notes that startup Eideticom is already shipping computational storage devices that support NVMe-oF and P2P data transfers.

Product developments

Samsung is currently the only large supplier that has declared intentions to ship a computational storage product. The flash giant says it has been working in the field for several years, and reports that large potential customers have completed trials of a Samsung flash drive that incorporates an FPGA processor that handles processing tasks ranging from analytics to storage services such as data compression and encryption. The FPGA will be exposed to 'level zero' customers such as hyperscalers with sufficient skills to tune or modify the processor to suit their environments. At FMS, Samsung repeated previous statements that production deployment of its devices by hyperscalers will begin later this year using a U.2 format device.

Samsung has chosen the descriptive brand name of SmartSSD for its device. More widely, flash drives that process data in situ are classified by SNIA as Computational Storage Drives (CSDs). During its FMS presentation, Alibaba said the fifth generation of the custom flash drives it uses in its cloud are CSDs. The cloud behemoth said it is open to industry collaboration and named Samsung, ScaleFlux, Xilinx and open channel SSD maker Shannon Systems as part of the 'ecosystem' for its flash drives, with other vendors joining the project.

We do not know what individual contributions those vendors are making, except that Alibaba is deploying CSDs developed by startup ScaleFlux to accelerate its PolarDB database. Further details of the deployment are under wraps, but Alibaba confirmed its existence by applying jointly with ScaleFlux for an FMS Best of Show Customer Implementation award – and subsequently winning the award. 451 believes the deployment will involve very large numbers of the ScaleFlux devices. PolarDB is a combined OLTP and analytics database, and ScaleFlux claims that its devices almost entirely eliminate the impact of analytics on OLTP performance. Like Samsung's device, ScaleFlux's CSD uses an FPGA for in situ data processing. At FMS, ScaleFlux demonstrated its drives accelerating a combination of MySQL and RocksDB.

Startup NGD has also developed a CSD, but unlike the Samsung and ScaleFlux devices that handle specific, fixed tasks, NGD's device features a Linux instance that can host a customer's choice of existing software. This is the approach that Arm endorsed, and the Linux instance runs in Arm cores on NGD's device, which recently became entirely ASIC-powered. NGD is targeting its drive at a range of datacenter and IoT or Edge applications, and at FMS it demonstrated the device hosting Microsoft's Azure Edge software stack and, separately, running image-recognition software with no involvement of host CPUs. As the company said, its drives are effectively micro-servers. NGD argued that more image recognition is completed at the edge than at any other location, and unveiled a version of its drive in EDSFF or ruler form-factor, which it said will suit edge deployments.

The third startup currently shipping computational storage devices is Eideticom, which has adopted yet another approach. Like Samsung and ScaleFlux, Eideticom is offering NVMe-mounted devices that incorporate FPGAs and are designed to take on processing workloads that would otherwise be completed in server CPUs. However, unlike the ScaleFlux and Samsung devices, Eideticom's devices do not store data; instead they include a mechanism developed by the company for P2P data transfers between its devices and NVMe flash drives, without consuming server CPU cycles or DRAM. The devices fit SNIA's classification of a Computational Storage Processor (CSP).

One of Eideticom's core arguments is that NVMe and NVMe-oF should be better exploited as data transports. NGD said it is also developing support for P2P data transfers in its devices. Echoing ScaleFlux, Eideticom cited RocksDB as an example workload, and claimed that its devices boosted transaction rates sixfold, improved QoS eightfold, and cut flash costs fourfold in a bench test comparing servers with and without its devices. Other workloads cited by Eideticom included Hadoop.

RocksDB is also one of the applications being targeted by startup Pliops, which has developed an NVMe-mounted, FPGA-powered card that also does not store data but is designed to accelerate emerging applications such as analytics and machine learning. Pliops is not pitching itself as a computational storage provider because it says it wants to stick to a message that its devices simply accelerate applications, but it is worth mentioning in this report because its device fits squarely with SNIA's definition of a CSP.