451 Alliance


Corporate Storage Trends

Resiliency and Adaptability Affect Infrastructure Plans

By Tracy Corbo

Cloud computing has become a major disruptive force that is reshaping the role of IT in the corporate environment. Cloud, along with virtualization, is moving computing out of the dark ages of fixed physical devices and toward the more dynamic, flexible design that will be the hallmark of digital IT architectures in the not-so-distant future.

With the rise of public cloud as an alternative, infrastructure professionals are under pressure to improve their delivery of IT resources across a number of different areas. In this survey, 35% of respondents claimed that their current infrastructures were not adequate to meet either current or future requirements.

A February survey of 537 members of the 451 Alliance focuses on how cloud and other factors such as data loss and recovery are affecting IT infrastructure and storage decision-making.

Report Highlights

  • Workloads. Changes in applications and workload requirements have traditionally driven the adoption of new technologies in on-premises environments to meet those needs. Infrastructure is not always keeping pace.
  • Recovery and Resilience. As data and the subsequent applications and services that depend on it grow more core to the operation of the business itself, downtime becomes an increasing liability. The ability to reduce or eliminate downtime and data loss will provide a competitive edge for many companies.
  • All Flash Arrays (AFAs). The long transition from primarily disk-based storage to all-flash arrays has been long and arduous, with many incumbent vendors dragging their heels along the way. Although most respondents still do not use AFAs, sales of AFA offerings among established storage players are robust.
  • Containers. While containers are still in the early stages of adoption, one potential catalyst for greater usage is their ability to cut down on application provisioning time. The jury is still out, but there is hope that this technology can help IT thwart this persistent pain point.

Workload Challenges

One of the greatest challenges IT departments face today is how to balance the demands of maintaining the existing core infrastructure while evaluating and integrating emerging technologies. While 66% of respondents feel that their infrastructure is right-sized for their current and future needs, another 29% say theirs is not future-proof.

  • Our infrastructure is right-sized for our current environment and architected to grow to match future demands
  • Our infrastructure can meet current demands but cannot scale to match future demands without significant changes
  • Our infrastructure is not meeting current demands


For very large organizations with more than 10,000 employees, that number jumps to 39% of respondents who feel that their infrastructure cannot scale to match their future needs.

Infrastructure Improvements. Speed and performance topped the list of attributes that need to be improved, cited by 46% of respondents, followed by scalability (44%) and cost (41%). Notably, nearly a third of respondents cited quality of service, a reminder not to lose sight of the customer experience when organizations refresh their architectures.


Recovery and Resiliency

As the ability to collect and analyze data to improve business operations grows, the value of that data increases. Consequently, requirements for resiliency and data loss prevention are going to become more stringent, and organizations must become more creative in how they meet these demands. It will not be enough to fortify on-premises environments; new options must be evaluated, such as leveraging the public cloud as a designated recovery site.

Two metrics used to determine data loss and downtime are:

  • Recovery Point Objective (RPO): This metric sets guidelines for the acceptable amount of data that can be lost after an outage. A workload with an RPO of 15 minutes can acceptably lose the last 15 minutes of data creation and modification while still meeting the Service Level Agreement (SLA).

  • Recovery Time Objective (RTO): This metric specifies the amount of time that is permitted for a recovery operation to complete and is used to determine how much downtime is acceptable for a workload or application. An application with an RTO of 1 hour, for example, must be recovered within that timeframe to meet the established SLA.
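Taken together, the two metrics amount to a simple pass/fail check on an outage against the SLA. As a minimal sketch of that logic (the `Slo` class and `meets_sla` helper are hypothetical illustrations, not tooling referenced in the survey):

```python
from dataclasses import dataclass

@dataclass
class Slo:
    """Recovery targets for a workload, in minutes."""
    rpo_minutes: float  # maximum acceptable data loss window
    rto_minutes: float  # maximum acceptable recovery duration

def meets_sla(slo: Slo, data_lost_minutes: float, recovery_minutes: float) -> bool:
    """Did an actual outage stay within both recovery objectives?"""
    return (data_lost_minutes <= slo.rpo_minutes
            and recovery_minutes <= slo.rto_minutes)

# The examples above: a 15-minute RPO and a 1-hour RTO.
workload = Slo(rpo_minutes=15, rto_minutes=60)

# An outage that lost 10 minutes of writes and took 45 minutes to recover
# is within the SLA.
print(meets_sla(workload, data_lost_minutes=10, recovery_minutes=45))  # True

# Same recovery time, but 20 minutes of writes lost: the RPO is breached.
print(meets_sla(workload, data_lost_minutes=20, recovery_minutes=45))  # False
```

Note that the two checks are independent: a recovery can finish quickly (meeting the RTO) yet still fail the SLA because too much data was lost.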

Respondents were asked to evaluate the RTO and RPO for each of the three application types: mission critical, business critical and non-critical.

Mission Critical. Overall, 97% of organizations are looking for RTOs and RPOs of less than 24 hours for their mission-critical workloads. 22% are seeking RPOs under one minute, while only 11% expect sub-minute RTOs, suggesting that although speed of recovery is important, even more organizations are focused on minimizing potential data loss for these workloads, which have the least tolerance for downtime.


Business Critical. 31% of respondents are targeting RTOs under one hour, and just over a quarter (27%) are targeting RTOs between two and six hours, which should be attainable using conventional snapshot and backup technologies. Meanwhile, 41% are seeking RPOs with less than one hour of potential data loss.


Non-Critical. More than half (54%) of respondents can tolerate RTOs of over a day. RPO requirements were less forgiving, however, with 53% unwilling to lose a full day of data in the event of an outage. Given these requirements, the traditional daily backup run at many organizations does not provide enough data loss protection.
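The point about daily backups follows directly from how backup frequency bounds the achievable RPO: in the worst case an outage strikes just before the next scheduled backup, so everything written since the previous one is lost. A sketch of that arithmetic (function names are illustrative only):

```python
def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    # If backups run every N hours, an outage just before the next run
    # loses up to N hours of data.
    return backup_interval_hours

def max_backup_interval_hours(rpo_hours: float) -> float:
    # To guarantee an RPO, the gap between recovery points can be
    # no longer than the RPO itself.
    return rpo_hours

# A traditional daily backup puts a full day of data at risk...
print(worst_case_data_loss_hours(24))  # 24

# ...so a workload with a 1-hour RPO needs recovery points at least hourly.
print(max_backup_interval_hours(1))  # 1
```

This is why organizations with sub-day RPO requirements turn to more frequent snapshots or continuous replication rather than a single nightly backup.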

All-Flash Arrays (AFAs)

Overall AFA adoption remains modest, with just over a third of respondents using AFAs. However, a closer look by company size shows that number jumps to more than 50% for respondents from larger organizations (1,000 or more employees).


Respondents are split down the middle when it comes to whether AFAs will completely replace traditional hard disk-drive-based approaches for block-based storage workloads. However, for those who have already made the move, 58% say AFAs are their “go to” for their new block-based deployments.



Containers

Container usage has been increasing gradually. Containers are still in a relatively early stage of adoption, with 19% of respondents currently using them and another 19% running proof-of-concept and pilot projects, typically in test/dev use cases.

For the most part, enterprise usage is being driven by those in DevOps roles rather than infrastructure specialists like storage or VM admins. Consequently, usage is slightly higher (over 25%) for organizations with more than 250 employees.


Impact on Provisioning. Provisioning speed for IT resources such as storage, compute and networking continues to be the “long pole in the tent” for infrastructure professionals, and many believe containers will alleviate this pain point.

One third of respondents believe containers will reduce provisioning time from a full day to the span of hours, while 21% expect deployments to accelerate from hours to minutes.


Only 6% believe container provisioning will be slower than the provisioning of traditional and virtual infrastructures.

Provisioning Process. More than half of the respondents describe their current provisioning process as either highly manual or manual with limited automation, making automation the exception rather than the rule. That is anticipated to change, however.


When respondents were asked to describe their expectations over the next two years, there is clearly hope of moving toward greater automation and policy-driven provisioning, especially among large organizations.



Storage Definitions

All-Flash Array (AFA)

SAN systems that exclusively use flash-based storage. AFAs include all-flash configurations of legacy storage systems, as well as newer ‘ground up’ systems designed specifically for flash.

Converged Infrastructure

The combination of storage, compute and/or networking hardware integrated with software, which includes both infrastructure software (hypervisor, operating system, etc.) and management components (orchestration, monitoring, self-service portal, etc.). This converged infrastructure may be provided by one or more vendors, but all cases are designed as integrated solutions and validated by the vendor(s).

Hyperconverged Infrastructure (HCI)

Clusters of virtualized x86 servers running dedicated software to create a single virtual appliance incorporating both compute and shared storage.

SDS on x86 Servers

Software-defined storage running on standard x86 servers. SDS is a dedicated scale-out storage platform designed to provide networked block and/or file storage based on a combination of software, commodity server hardware and onboard storage.