A new reality for IT supply chain risk: 'We're gonna need a bigger boat'
December 21, 2020
by Scott Crawford
In a recent report, we called out the tendency to 'blame and shame' those seen as responsible for apparent lapses that lead to a successful attack on an enterprise's security. Such a superficial response reflects an inadequate understanding of security strategy – and threatens its success – because it fails to comprehend why attackers target the objectives and victims they do, and why attacks can succeed no matter how adept or well-invested the victim may be in its security measures.
This comprehension is essential to understanding incidents such as the recent high-profile attacks against SolarWinds, FireEye, Microsoft – and possibly others yet to become evident as this quintessentially 2020 incident unfolds. Clearly, such an understanding will be essential, given the recent evidence of where attacks are headed.
In this report, we examine another – and arguably even more critical – element: the targeting of a vendor of widely adopted software on which the management of a significant aspect of IT across hundreds of organizations depends, and what it means for managing cyber-risk among third-party providers and the IT supply chain.
The 451 Take
In Steven Spielberg's 1975 blockbuster movie Jaws, a crew of three sets out to tackle the shark that has been preying on unsuspecting victims along the beaches of a seaside tourist village. The crew believes it is well equipped and prepared to take out the beast – until one of them catches a glimpse of just how potent the adversary is and, grasping the magnitude of their exposure, just how inadequately prepared they are.
As 2020 turns to 2021, the reality of IT's interconnectedness and dependencies on third parties is being driven home as never before. But this isn't the first time attackers have exploited the opportunity they gain from compromising a target whose products exist in a host of other organizations. Nearly a decade ago, the breach of RSA Security pointed in this same direction, when at least one objective of the attackers was to target the strong authentication systems often used by administrators to access high-sensitivity environments and privileges.
Today, what's different is the greatly expanded dependence on third-party and cloud-delivered IT services that increasingly displace, or replace, legacy centralized applications historically deployed on-premises. But therein lies the paradox of the SolarWinds incident, which targeted the sort of IT management systems that were in use long before the rise of the cloud. Organizations still depend on these systems, and this attack will force a rethink of the extent of IT supply chain exposure.
More disconcerting still – the SolarWinds attack may have extended to users who were doing exactly what the industry has been urging everyone to do for years: maintain good hygiene through regular and consistent software updates, delivered through processes that carry the legitimacy of accepted industry practice.
These are just two of the factors speaking to the levels of complexity inherent in getting a handle on third-party and IT supply chain risk. Like Chief Brody's startling realization on Quint's fishing boat, there is (if you will) a jaw-dropping challenge facing the enterprise going forward.
The Three Ds that challenge third parties: depth, detail and daunting scale
It's no secret that IT continues to expand its dependence on third-party technology providers. Those providers may offer hosted services, or they may offer software, hardware, or other products or services (proprietary as well as open source) that are deployed in an organization's own environment.
Even so, the various levels of indirection, and the dependencies of one stack on another in these models, add considerable murkiness to the concept of 'an organization's own environment.' The variety of deployment models is evident in enterprise computing plans over the next two years, as reflected in 451 Research's Cloud, Hosting and Managed Services, Workloads and Key Projects 2020 study.
Already, organizations feel the pain of trying to figure out the distinctions between a supplier's obligations and their own in so-called 'shared responsibility models' of compliance and risk, particularly in resources hosted by third-party service providers. But the difficulties only begin there.
One key aspect of the SolarWinds breach was the amplification the attackers intended by targeting a provider whose products are deployed by hundreds, if not thousands, of organizations worldwide. In such cases, not only does the user of these products face exposure when its suppliers are compromised, but its own customers, users and other relying parties may be exposed by its use of third-party offerings. And if those users, in turn, incorporate these same functionalities in their own offerings, they further risk having their own relying parties exposed – and so on. It's a web that can assume fractal proportions in an extreme case. Where does the responsibility of one party begin, and another end, in such a scenario?
Another aspect of this case is that the widely deployed products targeted are used to manage IT infrastructure. While not specifically the case in this particular incident, concern exists that this category of compromise may introduce the potential that the adversary gains direct visibility, if not direct control, over the environment of any of the affected parties in this extended web – depending on the nature of compromise, the vehicle used to extend its reach, and the value to the attacker of the environment it is deployed in.
One of the most troubling facets of the SolarWinds incident is that it exploited the legitimate process of software updates. This is a process the industry has encouraged organizations to follow, to keep software well maintained against (ironically) security threats, when, for example, new vulnerabilities are discovered. Technology providers have invested significantly in these processes, as a foundational aspect of keeping their products and customers secure.
Widening the aperture of zero trust
A theme that does not appear to be directly related, but is still relevant, is the broad interpretation of the cybersecurity concept of 'zero trust.' Organizations have come to trust the process of software updates – not because it is inherently trustworthy, but because providers take steps, such as code signing, that relying parties can use to verify authenticity. When compromise is upstream of these processes – say, when an attacker has obtained the credentials of software developers who can sign software updates – evidence of such compromise may be beyond the relying party's 'event horizon,' so to speak. Relying parties can only see the code signature. They may have little way of knowing whether the code validation and signing process itself has been breached.
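To illustrate the limits of what a relying party can check, consider a minimal integrity-verification sketch (the file path and published digest are hypothetical). It confirms that a downloaded update matches what the vendor published – but if the vendor's own build or signing pipeline was compromised upstream, this check passes anyway:

```python
import hashlib

def verify_update(path: str, published_sha256: str) -> bool:
    """Compare a downloaded update's SHA-256 digest against the digest
    the vendor publishes out of band (e.g., on its website).

    This only proves the file matches what the vendor released; it
    cannot reveal whether the vendor's build or signing process was
    itself compromised before release."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large update packages don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest().lower() == published_sha256.lower()
```

Code-signature verification works the same way in principle: the relying party validates what the provider emitted, not how it was produced.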
Or do they? In fact, if the content or behavior of compromised products or functionality delivered as a service were analyzed, it's possible that malicious activity could be detected. Currently, however, many, if not most, organizations may be unprepared to incorporate that level of visibility into their security diligence. Doing so would require making such measures a priority – which means diverting resources away from other priorities. How important will such security observability be going forward – assuming it is even feasible to attain consistently and at scale?
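As a toy illustration of the behavioral analysis described above (all hostnames are hypothetical), one could baseline the outbound destinations a product normally contacts and flag anything new – a crude version of what supply chain detections against beaconing implants look for:

```python
def new_destinations(baseline: set, observed: list) -> set:
    """Flag outbound hosts not seen in the product's known baseline.

    A deliberately simple sketch: real tooling would work from DNS or
    proxy logs, tolerate noisy data, and score rather than just diff."""
    return {host for host in observed if host not in baseline}

# Hypothetical example: a compromised update suddenly phones home
# to an unfamiliar domain.
baseline = {"updates.vendor.example", "telemetry.vendor.example"}
observed = ["updates.vendor.example", "c2.attacker.example"]
suspicious = new_destinations(baseline, observed)
```

The hard part, as the text notes, is not the diff itself but sustaining this level of instrumentation across every third-party product an organization runs.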
No slam dunk for third-party cyber risk management
This highlights some of the fundamental issues with risk visibility, particularly when it comes to risks introduced by third parties and in the IT supply chain:
The level of instrumentation and detailed analysis required in such cases is potentially extreme. Few seem ready to take it on to the extent needed for this degree of detective sensitivity.
When multiplied by the number of suppliers used by an organization, the problem becomes even greater.
Organizations already rely on processes whose embedded security measures can be compromised beyond the relying party's event horizon – which calls into question how well providers can mitigate risks to those processes.
Yet another complication is then introduced: How much visibility will providers be willing to extend to relying parties, to demonstrate the measures they take are sufficient and have not also been compromised? Asking organizations to submit to a highly observable standard of due care means exposing their internal operations to a degree many would find uncomfortable (to say the least).
Even if making determinations of risk in such cases were feasible, estimating impact in tangible terms like financial figures remains a challenge. With so many variables in play, and so many daunting aspects to assessment, how much progress can we expect in this area, and when and how will we see it unfold?
So far, much of third-party risk management revolves around what can be observed from the outside, and what the subject being assessed is willing – or forced – to disclose from within. Outside assessment techniques include those that security teams are already familiar with, such as network scanning or, in the case of applications, dynamic application security testing (DAST). Inside assessments range from voluntary questionnaires that the subject completes to document its adherence to accepted practices or regulatory mandates, to full audits that are in-depth and detailed.
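As a minimal sketch of the outside-in end of this spectrum, the probe below checks only whether a TCP service accepts connections – the most basic building block beneath network-scanning-based assessment (real tools layer service fingerprinting, vulnerability correlation and scoring on top; host and port here are illustrative inputs):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    This is the kind of externally observable signal third-party risk
    ratings aggregate: it requires no cooperation from the subject,
    but also reveals nothing about internal processes or controls."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, filtered or timed out: nothing reachable here.
        return False
```

The asymmetry is the point: outside-in checks scale across thousands of suppliers precisely because they are shallow, while the deep, inside assessments the text describes do not.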
But even an audit is only a snapshot of some aspect of the organization at a given point in time, and is limited by the scale and capabilities of the auditor. It is also limited by the auditor's knowledge of, and access to, the aspects of an organization that are being audited. How realistic is it to expect this visibility to be obtained continuously throughout an organization, and extended meaningfully to relying parties – or to expect that relying parties can utilize this glut of information from tens, hundreds or thousands of suppliers?
For many years, security has invested in going deep: observing and understanding in detail the specific tactics, techniques and behaviors of the adversary, as well as ways to connect evidence to specific threat actors. But it is becoming clear that security needs to go wide as well.
The number and varieties of products, services and providers in the IT supply chain have exploded, but so have the number and variety of IT consumers – from millions of digital users to a host of highly diverse endpoints in traditional IT, as well as in new arenas of operational technology, smart systems and IoT. As a result, awareness of the extent of the attack surface keeps expanding. In just the last few weeks, we have seen heightened interest in technologies that increase the aperture of exposure visibility, such as attack surface management. The growing interconnectedness of technology means that scale will only increase.
Ironically, the SolarWinds compromise makes it clear that the future of IT isn't all we need to worry about. The breadth of the challenge includes existing technologies and techniques we've relied upon for years, well before IT had its head in the clouds. Given the depth, detail and daunting scale facing third-party cyber and IT supply chain risk management, it's clear that this field will have to push much further.
As the extent of the SolarWinds incident continues to become evident, we expect there to be additional implications and more (perhaps many more) affected parties. We also expect to further assess the ramifications of this in later reports. Stay tuned, as we cover the expanding impact of this event.