High-profile cyber incidents and the security shame game
December 18, 2020
by Scott Crawford, Daniel Kennedy
In recent days, technology news has been dominated by a rash of related cybersecurity incidents involving SolarWinds, FireEye, Microsoft and many other organizations. Indeed, our 'Five things wrong with security' touches on a frustration security teams often face after a successful attack: the tendency to blame and shame those responsible for cyber defense when a penetration attempt succeeds.
The 451 Take
This tendency reveals a failure to understand how and why adversaries choose the objectives and victims they do. It neglects to consider that organizations with a high degree of skill and resources in technology and cybersecurity may actually be prime targets. They possess detailed insight into those they defend, which, if they provide products or services, may include many other organizations. They may also be valuable sources of tools and techniques that represent the state of the art in defense. Attackers may thus prize such organizations for reasons that go well beyond any single objective, no matter how valuable that objective may be. We'll examine how and why attackers select their targets, and why even the most able security teams and strategies may be compromised. Adversaries take on significant risk when they take aim at such organizations; discovery of an operation may mean 'game over' for a campaign. But it may take an attack against just such an organization to uncover new adversary tactics, hidden threats and the real extent of a threat actor's intent.
In a subsequent report, we'll also examine one of the most serious implications of the breach of SolarWinds and others: what it means for managing third-party cyber risk and the IT supply chain.
Of red and blue, myths and shame
Before taking on the implications of such a widespread attack that could reshape cybersecurity priorities, we must first consider a stumbling block that inhibits reaping the greatest benefit from an incident: the tendency to blame and shame defenders for failure when an attack succeeds.
This tendency is evident nearly everywhere attacks appear, and it is a recurring theme of penetration testing. Penetration testers bring the skills and craft of the adversary; in cybersecurity, they are often organized as 'red teams' that put those skills to work for the benefit of the 'blue teams' charged with securing the organization and its assets at risk.
This sounds like an eminently practical way to evaluate security readiness – and it is. But it has its pitfalls. For one, a successful penetration is too often seen as a failure of defense. There may be a post-exercise 'blamestorm' and demands to explain why one gap or another exists. An obvious problem with that approach is that blue teams rarely have a stated goal of preventing every possible penetration – a goal that would be unrealistic in any case. As the sophistication of an intrusion method increases, whether through greater time or more advanced expertise on the part of the attacker, the defensive measures that could prevent it grow correspondingly costly in money and staff time, assuming such countermeasures are even available. As part of its security strategy – and based on available budget, staff and time – an enterprise may depend on detection or incident response processes after an attack if it determines that the cost of preventing a sophisticated or unusual attack is prohibitive, unreasonable given the probability of such an attack, or beyond the capabilities of currently available tools.
There's another side to perceived failure in this arena, however, and that's when attacks in a red team/blue team exercise don't appear to have succeeded. In that case, defense may appear to be stronger than it really is. Penetration testing is often constrained when it's prohibited from interfering with critical functionality so as not to disrupt the business, or when access to sensitive data could potentially violate compliance requirements. When organizations set such limits, they also need to ask themselves: What won't we see as a result? False negatives can be even more misleading than false positives when they lead to an unfounded sense of security. Attackers won't be so scrupulous – especially when business-critical functionality or sensitive data is precisely what they're targeting.
These examples will be relevant in the days to come as the breaches of SolarWinds et al. are analyzed – namely, as illustrations of the wrong way to react to an incident. A more helpful approach is to review and evaluate findings with appropriate context, and to understand how current approaches may need to be rethought in light of such cases and how strategic security priorities may need to be reordered.
Organizations may prioritize one security investment over another for entirely sound reasons, such as applying limited resources to the most critical exposures. Compensating controls may protect exposures that were compromised in a test that excluded those controls. Findings may reveal gaps that have little to do with technology: if an organization lacks the staff to address concerns (an ongoing problem when security expertise is scarce and costly to find and retain), or is limited in its ability to adopt technology or automation that could alleviate demands on staff, those are organizational issues worth highlighting in an assessment. In red team/blue team penetration testing, this is the sort of cooperation implicit in the 'purple team' concept, which aligns red team findings with the knowledge of the organization and the context of incidents known best – if not only – to the blue team.
We will need to have this mindset as we assess the fallout from the widespread incidents currently in the news.
The risk calculus of the adversary
Security shaming also says a lot about how willing people are to believe another dangerous fallacy: the myth of the invincible security wizard. That myth makes for even greater and less productive blaming when a target recognized for its security skills and capabilities gets compromised. Equating a successful penetration with a failure of defense too often fails to recognize that an adversary armed with the following will usually find a way in – no matter what the target, how well secured it may be, or how adept the target's staff:
The adversary has the means. They not only have the skills and operational 'chops' to pull off a successful attack, they also have the resources to field the best capabilities in both people and technology. (This doesn't always mean attackers need to employ a high degree of sophistication. When a simple tactic will succeed, why risk exposing your best methods?)
The adversary has the motivation. Everything from achieving a strategic political or military objective to tangible gain to simply having an ax to grind can be enough to motivate an effort to compromise, depending on the threat actor. But what drives the attacker is only half of the motivation equation; the other is the value and richness of the target, and that value may lie in more than any single asset. Even those whose intellectual property may be seen as an ultimate objective may have additional value in access to sensitive business information, communications, tangible assets or relationships with other organizations. As more serious attacks against supply chains are beginning to make clear, the value of those downstream opportunities may be even more sought-after than the primary target itself.
The adversary has the time. Obtaining an initial foothold may not take very long. Sometimes it's merely a question of inducing an individual to open a malicious link or message attachment. But that may be only a foothold. An adversary with larger ambitions and the skills to stay unnoticed will take the time and effort to pursue a penetration and move toward greater objectives.
The adversary doesn't see an attack as cost-prohibitive. Cost is the most potent leverage the defender can bring to the fight. Increasing the level of effort the adversary must invest taxes its resources. Sustained effort across multiple defenses also increases the likelihood of detection. Adversaries must be convinced that they can sustain these risks successfully enough to achieve their objectives, even if they incur losses along the way such as the exposure of tactics or other evidence that may reduce their viability in later attempts. New techniques are always evolving, and if the adversary can remain competitive in the long game, it may be worth the lost element of surprise to expose, for example, previously unused tactics.
When observers are astonished that adversaries have compromised organizations characterized by highly skilled, well-resourced security efforts and expertise, they must keep these factors in mind. If the objective is capturing the equivalent of competitive intelligence or valuable tools that reduce the burden and cost of organic development, where else is the adversary to turn?
In these cases, the effort may be seen as worth it, in spite of the risks to the adversary. These risks are not inconsiderable: in the most recent case, the attack that involved SolarWinds also penetrated FireEye and its Mandiant teams, which include some of the most experienced security analysts, researchers and incident responders in the business. It was the detection of anomalous activity by FireEye's teams that led to the discovery of this attack, which might otherwise have remained undetected to this day, its full extent still unrealized.
That's not to minimize the impact of an attack on the provider of widely used products and services – but as a result of penetrating FireEye, this adversary's cover is now blown, at least for this operation. Is it still deeply embedded and hidden elsewhere? No one can yet say – but highly skilled, well-resourced organizations that detect their own compromise can benefit us all with what they learn.
When attackers target such organizations, the payoff must be worth the risk. Enterprises of scale with a reputation for technical sophistication may be attractive for far more than the value of a single target asset; they encompass a number of different business functions. A reputation for technical acuity may be built on the work of, say, a research or application development department, but most enterprises also house accounting, finance, human resources, sales and a host of other departments with varying degrees of technical knowledge appropriate to their work.
Looking at a past example: Operation Aurora was an attack disclosed in 2010 against Google and other companies by what was asserted to be an advanced persistent threat (APT). Among the potential goals identified by investigators at the time were modifying source code, viewing correspondence from parties of interest to the attacker and stealing intellectual property. The attack was considered sophisticated because of tactics such as exploiting a zero-day vulnerability in Internet Explorer, but the method of delivering it to initial targets was far more routine: a form of spear phishing using a link sent to targeted personnel. As with recent attacks, reactions at the time questioned how attackers could breach organizations with strong reputations for technology innovation. Such reactions must consider two things: a well-funded adversary may invest significant time in attempting to compromise a rich target, and such enterprises, despite their reputation, have both non-technical users and the same tradeoffs in defensive controls as other enterprises, commensurate with how they perceive the risks they face in terms of both probability and impact.
Taking it out on users
Security shaming doesn't end with the 'armchair quarterbacking' of attacks. The implied condemnation of users for failing security tests such as anti-phishing exercises (and the penalty-oriented tone of notices sometimes administered to users as a result) reveals a dirty little secret about cybersecurity: in spite of massive investment in security technology, organizations remain fundamentally dependent on their people to do the right thing when presented with a threat.
This, too, is implicated in these recent highly visible incidents, since it's common for even the most sophisticated attacks to achieve initial penetration by manipulating someone within an organization into unwittingly initiating a stealthy attack, often via techniques such as phishing. To some extent, that will never be avoidable: technology, after all, exists not only to serve people but to interact with them to provide value. Defenses that strengthen resilience to human risks continue to evolve, but given the value of exploiting human nature to gain a foothold within a target, the adversary has every incentive to keep investing in tactics that exploit behavior effectively.
While this likens cybersecurity to an arms race (which, in this sense, it is), security technologies and organizations alike must nevertheless accept that adversaries will always try to exploit the human being interacting with the technology. 'One and done' can never be the rule in defending against attacks on human behavior. It becomes incumbent on the security industry to relieve people – and their organizations – of this burden as much as possible, continually evolving techniques in step with the adversary to better defend organizations rather than depending as much on the entire workforce of a business to shoulder that burden.
For their part, organizations can't pay lip service to initiatives such as security awareness training while simultaneously inducing users to indulge in risky behavior just as an attacker might – for example, by including 'click here' links in messages that don't reliably demonstrate authenticity, or by allowing third parties to send unverified content under the organization's own logo or domain.
Coming soon to a supplier near you
That issue with third parties is not as cut and dried as it may sound. It has a daunting number of facets that challenge organizations of every size – and it's directly implicated in at least one of the two high-profile security incidents of recent days.