Small Wars Journal

A Response to “Cyber Proficient Force 2015 & Beyond”: Why We Will Continue to Lose the Cyber War

Tue, 09/13/2016 - 2:34am

Robert Zager and John Zager

The United States is losing the cyberwar. We are losing the cyberwar because cyber defenses apply the wrong philosophy to the wrong operating environment. In order to be effective, future cyber defenses must be viewed in the context of an engagement between human adversaries.[1]

There is strong evidence indicating the cyber intrusion of the DNC was the work of hackers working on behalf of Russian intelligence, US officials said this week.

- CNN[2]


Cyberattacks fill the news. The story is always the same. Something bad happens, and cybersecurity experts are brought in. After their investigation, an attribution is made.

The first wave of cyber security was focused on perimeter controls with tools such as firewalls, gateways and anti-virus protection. The second wave of security brought Security Information and Event Management (“SIEM”) to bear. The volume of SIEM information which must be processed is driving the third wave of cyber security, termed “cyber threat intelligence,” in which analytic tools are used to observe data in real time and report deviations from known patterns. IBM is now promoting the next wave of cyber security, which it dubs “cognitive security.”[3] According to IBM, “Whereas the current generation of systems are reactive—detecting and responding to anomalies or attacks—cognitive security is proactive. Forward focused and continuously multi-tasking, cognitive systems scour for vulnerabilities, connect dots, detect variances and sift through billions of events to build upon a base of actionable knowledge.”
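The “report deviations from known patterns” approach can be illustrated with a minimal sketch (the event names, window counts and threshold are hypothetical; real SIEM analytics are far more elaborate):

```python
from statistics import mean, stdev

def flag_deviations(history, current, z_threshold=3.0):
    """Flag event types whose latest count deviates sharply from their
    own historical pattern (a toy stand-in for SIEM anomaly detection).

    history: {event_type: [count per past window, ...]}
    current: {event_type: count in the latest window}
    """
    flagged = []
    for event, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            continue  # no baseline variation; skip rather than divide by zero
        if abs(current.get(event, 0) - mu) / sigma >= z_threshold:
            flagged.append(event)
    return flagged
```

A burst of failed logins stands out against a quiet baseline; steady traffic does not. The sketch also exposes the approach's inherent limit: an adversary whose activity stays within the learned pattern is never flagged.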

In this paper, we assert that cyberintelligence solutions, such as cognitive security and cyber threat intelligence, are fundamentally flawed approaches that cannot deliver what they promise. Cyberintelligence is an important, but insufficient, approach to cybersecurity. Cyberintelligence must be subsumed into the larger “Methodology for Adversary Obstruction.”

The Missing Parts of Cyberintelligence

Splunk provides a diagram of a typical cyberintelligence system which is set forth in Figure 1:[4]

Figure 1 - Source: Splunk

The Splunk model shows the numerous sources of system activity feeding data into a “Security Intelligence Platform.” The output of the security intelligence platform is provided to security personnel. The purpose of IBM’s cognitive security is to provide higher quality information to the analysts.

First Missing Element – Human Analysts. We immediately encounter the first missing element of the cyberintelligence model – the human beings who must process the output from the security intelligence platform. Assuming, for the sake of argument, that all of the information which is needed by the security personnel to make a timely threat assessment is inside the machine data stream, is the information presented to these people in a way that supports human decision making?

There is a scarcity of research on the effectiveness with which people process cyber security event information.[5] In an analysis of 130 papers on cyber security data visualization, researchers found inconsistency in nomenclature, test methodologies and evaluation methods.[6] The most startling finding of the analysis was the complete absence of longitudinal studies demonstrating the efficacy of data visualization tools. Discussing how information is provided to analysts avoids the core issue – does the system output enable the analysts to reach useful conclusions regarding heretofore unseen threats in a timely manner? There is no evidence that the answer to this question is, “Yes.”

Second Missing Element – Users. Some of the activities of users generate data processing artifacts which are included in the machine data which is the input of the cyberintelligence system. User activity which is not conducted within the data collection network is not available to the cyberintelligence system; the cyberintelligence system does not include the content of telephone calls, activity on third party systems, face-to-face conversations, paper correspondence, real world events or shadow IT.[7] Data monitoring functions cannot analyze the content of files that users encrypt.[8] Moreover, users have system privileges in order to perform job tasks. It is important to acknowledge that implementing a Zero Trust Network Architecture[9] is not the same thing as implementing a zero rights data processing system – in a Zero Trust Network Architecture, users are given the data processing rights that are needed to do their work. When cybersecurity restrictions interfere with users completing their job tasks, authorized users engage in counterproductive actions that reduce system security.[10] Admiral Rogers, NSA Director, observed, “[Y]ou can have the greatest technology and greatest defensive structure in the world, but in the end, never underestimate the impact of user behavior on defensive strategy.”[11]

Third Missing Element – Adversaries. In cyberwar, we are in a continuous engagement with the adversary.[12] In this engagement, the adversary brings deep technical knowledge of systems and defenses.[13] In addition to this technical knowledge, the adversary has a powerful tool in its fight against cyberintelligence systems. That tool is deception.

But what happens when suddenly our data is manipulated, and you no longer can believe what you’re physically seeing?

-- Admiral Michael Rogers, NSA Director[14]

Fourth Missing Element – Deception. Regrettably, the data manipulation about which Admiral Rogers warned is already here. Our adversaries bring a vast arsenal of deception to the engagement. Deception is deployed against the user, the human analysts and the machine data.

The user interface is controlled by the attacker.[15] By manipulating what the user sees, the adversary is able to deceive the user into compromising actions. An excellent example of this deception process was observed in the attack on the Ukrainian power grid.[16] The adversary sent an email that appeared, through deception, to be from the government. Driven by the deceptive prompts created by the adversary, the victim opened the email, opened the attachment and enabled scripts. These scripts then instantiated the attacker’s command and control system. This compromise methodology can operate without malware, such as in cases where the script launches native processes, such as PowerShell, in so-called “living off the land” compromises.[17] The victim can be compromised entirely through user deception, as in the Business Email Compromise, in which a deceptive email purporting to be from a person in authority is sent to an accounting clerk.[18] The email instructs the clerk to undertake a payment activity within the clerk’s scope of authority (such as sending a wire payment or changing a remit to address). When the fraudulent instructions generate a payment, the money goes to the adversary.
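One narrow counter to this style of email deception can be sketched as a display-name check: flag messages whose display name impersonates a known executive while the sending address sits outside the organization's domain. The names and domains below are hypothetical, and production anti-deception relies on far richer signals (SPF, DKIM, DMARC, lookalike-domain analysis):

```python
import re

def display_name_spoof(from_header, executive_names, org_domain):
    """Flag a From: header whose display name matches a known executive
    but whose address is outside the organization's domain, a common
    Business Email Compromise pattern."""
    m = re.match(r'\s*"?([^"<]*)"?\s*<([^>]+)>', from_header)
    if not m:
        return False
    display = m.group(1).strip()
    domain = m.group(2).rsplit("@", 1)[-1].strip().lower()
    return display in executive_names and domain != org_domain
```

Note what the check cannot see: a message sent from a genuinely compromised internal account passes it entirely, which is why the paper treats deception as a problem that machine data alone cannot solve.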

Much of the information about the attack is subject to obfuscation by the attacker.[19] The very machine data which feeds the intelligence platform and is reviewed by human analysts is subject to anti-forensic manipulation by the attacker.[20] Adversaries alter log files.[21] Adversaries intentionally trigger deceptive alarms.[22] Adversaries hide the true nature of software, compromising whitelisting services in order to mask malicious code.[23] Adversaries engage in denial and deception campaigns.[24] “Big Data” itself tends to generate spurious correlations.[25] Understanding the “connect the dots” process used by defenders, the adversaries apply the Maxims of Deception to distort the dots, thus undermining the work of cyberintelligence.[26]
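One Operations Security counter to log alteration is to make logs tamper-evident. A minimal hash-chain sketch (illustrative only; real deployments also ship logs in real time to a separate, write-once collector the attacker cannot reach):

```python
import hashlib

def chain_digests(entries):
    """Hash-chain log entries: each digest binds an entry to everything
    before it, so silently altering any entry breaks all later links."""
    digest = b"\x00" * 32  # fixed genesis value
    chain = []
    for entry in entries:
        digest = hashlib.sha256(digest + entry.encode()).digest()
        chain.append(digest.hex())
    return chain

def log_unaltered(entries, recorded_chain):
    """Recompute the chain and compare it to digests recorded off-host."""
    return chain_digests(entries) == recorded_chain
```

The chain does not prevent deletion or deception at the source; it only makes after-the-fact alteration detectable when the recorded digests are stored beyond the adversary's reach.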

Fifth Missing Element – Context. According to Verizon’s 2016 Data Breach Investigation Report, over 90% of cyber breaches are discovered by third parties.[27] At first blush, the rate of third party discovery seems alarming. But upon further reflection, the reason that compromises are discovered by third parties and not internal security analysts is clear – the machine data can only be understood by applying the context of subsequent events. For example, when money is misapplied, accounting personnel become aware of a problem. Subsequent investigation ultimately leads to the initial spearphishing email. The key dot required to connect the dots occurs outside of the machine data after the machine events are completed. The true nature of the machine data is emergent and can only be determined forensically. Thus, as Li and Clark discuss in their book, “Security Intelligence,” no Advanced Persistent Threat has ever been discovered before the damage was done. They conclude that cyber intelligence is purely a forensic tool in the Advanced Persistent Threat defensive landscape.[28]

Sixth Missing Element – Time. An unfortunate corollary of the context element is time. Unlike the promise of forward focused activity that will provide a base of actionable knowledge, the reality is that forensic activities are not predictive, but investigative. The first detective to arrive at the murder scene arrives AFTER the murder has occurred. Real time data is not predictive of the future.

Only by adding the six missing elements to the machine data can the entire operating environment be seen. Adding these six elements reveals the significance of the cognitive dimension in the cyber engagement.[29] Unlike chess, where observing the board provides perfect information about the current state of the engagement, in cybersecurity the observable data is incomplete and can be a deceptive misrepresentation of reality. Adversaries cheat and lie.[30]

Scientism - The Flawed Philosophy of Cyberintelligence

Osorno et al. describe the defensive identify and respond decision cycle as set forth in Figure 2.[31]

Figure 2- Identify and Respond Process

The premise of cyberintelligence is that by applying analytic processes to an incomplete stream of machine data which is salted with deceptive content, the compromising event will be detected in time for the identify and respond process to act preemptively.

Regrettably, understanding what happened before is not generally predictive of the future. Karl Popper, in works such as The Poverty of Historicism, established that, except in limited cases, the past does not predict the future. Popper observed that historical data has predictive power only in systems which are well-isolated, stationary, and recurrent – like the Sun, Earth, Moon system, where eclipses can be forecast. Cybersecurity, with its rapid changes in technology and the adaptive behavior of attackers, defenders and users, is not such a system. In systems that are not isolated, stationary, and recurrent, collecting more information does not equate to having more knowledge. “[N]o society can predict, scientifically, its own future states of knowledge.”[32]

NSA Methodology for Adversary Obstruction

The NSA is one of the world’s foremost practitioners of intelligence. In August of 2015, the NSA’s Information Assurance Directorate issued written cybersecurity guidance. Rather than offering the false hope of real time intelligence, the NSA took a different approach. The NSA analyzed the adversary’s tactics, techniques and procedures (TTP). The NSA’s guidance focuses on impeding the adversary at each phase of the adversary’s intrusion. This guidance is called “NSA Methodology for Adversary Obstruction.”[33] In the Methodology for Adversary Obstruction, the adversary’s progress is broken down into three phases. The first phase is the Access Phase, during which the adversary seeks system access. The second phase is the Persistence Phase, during which the adversary creates a presence in the network to support subsequent actions. The final phase is the Control Phase, during which the adversary uses its influence over the system to attain its objectives. Consistent with these phases, the NSA offers eleven “Targeted Mitigation Techniques.” These are:

  1. Protect Credentials.
  2. Segregate Networks and Functions.
  3. Implement Hosts Intrusion Prevention System (HIPS) Rules.
  4. Centralize logging of all events.
  5. Take Advantage of Software Improvement.
  6. Implement Application Whitelisting.
  7. Install and correctly use EMET.
  8. Public Services Utilization.
  9. Use a Standard Baseline.
  10. Data-at-Rest and Data-in-Transit Encryption.
  11. Use Anti-Virus Reputation Services.

Of these eleven recommendations, only two (numbers 4 and 11) address intelligence. The remaining nine recommendations are Operations Security,[34] setting forth specific practices and procedures that counter the adversary’s TTP and drive defensive strategies.

The concepts of the NSA Methodology for Adversary Obstruction are not dependent upon making precise predictions of new threats in real time. Rather, the focus is on adapting to the adversary’s known actions.
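To make one of these mitigations concrete, recommendation 6 (application whitelisting) amounts to allowing a binary to run only if its cryptographic digest appears on an approved list. A minimal sketch follows; the enforcement hook into the operating system is omitted, and real implementations use platform facilities such as Windows AppLocker:

```python
import hashlib

def file_sha256(path):
    """SHA-256 digest of a file, read in blocks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def is_whitelisted(path, approved_digests):
    """Permit execution only for binaries whose digest is pre-approved."""
    return file_sha256(path) in approved_digests
```

As the Bit9 incident cited earlier shows, the approved list itself becomes a target, so the whitelist must be protected at least as carefully as the hosts it governs.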

The OODA Loop 

The most important distinction between the cyberintelligence approach and the Methodology for Adversary Obstruction is time. In the cyberintelligence approach, the defenders are responding to the actions of the adversary. In the Methodology for Adversary Obstruction, the defenders implement measures that anticipate the adversary’s actions.

The interplay between adversaries through time was comprehensively described by Col. John Boyd in the Observe, Orient, Decide, Act Loop (OODA Loop) set forth in Figure 3:[35]

Figure 3

Boyd developed the OODA Loop to describe aerial dogfights. Time is everything in a dogfight. The first pilot to successfully complete the OODA Loop survives. A key element in attaining victory in a dogfight is controlling events to disrupt the adversary’s orientation. This is an attack on the adversary’s cognitive dimension.[36] Orientation is derived from the interaction between the adversaries. If an adversary can be deceived into an incorrect orientation, subsequent decisions and actions will be compromised. The pilot who first accomplishes a successful OODA Loop is “operating inside the loop” of the slower adversary. The same concept applies in cybersecurity. In “real time” cyberintelligence, the adversary is operating inside the defender’s loop. In the Methodology for Adversary Obstruction, the defender is operating inside the adversary’s loop.   

The Future

The current focus on data processing systems as the heart of cyber security drives a system analysis in which the most important components – the people – are excluded from the system.

When people are included in the system, a new perspective on the future direction for cyber security emerges.

Human cognition did not evolve to optimize the comprehension of computerized presentations of data. System designers tend to emphasize the technical challenges of the computer-based system with little understanding of the human role in the system. Systems which depend upon unrealistic expectations of human cognition can induce human error.[37] Therefore, future threat analysis tools used by human security analysts must be designed to support real human cognitive functioning.

The cyber security community must change its perspective on authorized users. Currently, cyber security personnel and users have diametrically opposed concepts of security.[38] The security community assumes that users will adopt and execute the mandates required to meet security objectives. On the other hand, users focus on their job tasks and treat non-job security requirements as costs to be avoided.[39] Making matters worse, the malicious nature of the user interface creates an operating environment ideally suited to deception operations by adversaries. The security community must adopt a new perspective of “Security by Design” in which the needs of security are integrated into the user’s job tasks. Security by Design starts with security personnel understanding the job tasks that must be accomplished.[40]

Efforts to predict the future should be replaced with the Methodology for Adversary Obstruction and its emphasis on countering the adversary’s TTP. By disrupting the adversary’s TTP, the Methodology for Adversary Obstruction can be applied to evolving threats without any knowledge of the specific data processing exploits. Take, for example, the Business Email Compromise engagement. The real time machine data cannot reveal the factual misconceptions that have been planted in the mind of the targeted person and entered into the accounting system. Despite the total absence of real time machine data to perform cyberintelligence analysis, the principles of the Methodology for Adversary Obstruction can be applied. The Access Phase can be countered with email anti-deception technology.[41] The Persistence Phase, which exists entirely in the mind of the victim, can be countered with Operations Security processes. For example, Wells Fargo Bank recommends payment confirmation processes that are not dependent upon the data processing systems.[42] The same Operations Security processes that obstruct the adversary’s persistence in the victim’s mind also obstruct the completion of the Control Phase by preventing the data processing entry and execution of false payment data. With no real time intelligence regarding the adversary, the infiltration email or the compromised payment data, the Methodology for Adversary Obstruction defends the integrity of the payment data processing system.
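The confirmation discipline Wells Fargo recommends can be sketched as a simple gate (the record fields are hypothetical; the essential point is that the confirmation travels over a channel the attacker's email does not control):

```python
def apply_remit_change(vendor_record, new_address, confirmed_by_known_phone):
    """Obstruct the Control Phase: a remit-to change takes effect only
    after confirmation through an out-of-band channel, e.g. a call placed
    to the phone number already on file for the vendor (never a number
    supplied in the requesting email)."""
    if not confirmed_by_known_phone:
        raise PermissionError("remit-to change requires out-of-band confirmation")
    updated = dict(vendor_record)  # leave the original record untouched
    updated["remit_to"] = new_address
    return updated
```

Because the gate does not inspect the email at all, it obstructs the adversary's TTP even when the deceptive message itself evades every machine-data control.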

Another example of applying the framework of the Methodology for Adversary Obstruction is the defense of credentials. A common adversary TTP is compromised credentials. In July of 2016, NIST issued draft comprehensive guidance on the defense of credentials.[43] This guidance is not in the nature of real time cyberintelligence, but is Operations Security that implements the principles of the Methodology for Adversary Obstruction.
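To illustrate the flavor of that guidance, the NIST draft favors screening memorized secrets against a blocklist of breached or commonly chosen values rather than relying on arbitrary composition rules. A simplified sketch, with an illustrative blocklist and length floor rather than the specification's exact parameters:

```python
def acceptable_secret(candidate, blocklist, min_length=8):
    """Screen a memorized secret in the spirit of the NIST draft:
    enforce a length floor and reject values found in a blocklist of
    breached or commonly chosen passwords."""
    if len(candidate) < min_length:
        return False
    return candidate.lower() not in blocklist
```

This is Operations Security in the paper's sense: it removes a known adversary TTP (guessing or replaying common credentials) in advance, without requiring any real time detection.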

The Social Security Administration provides an example of a data processing system that has been implemented without applying the framework of the Methodology for Adversary Obstruction. The Social Security Administration has deployed a system which uses deficient credential processes which permit adversaries to compromise Social Security accounts.[44] Cyberintelligence will not prevent adversaries from abusing this poorly conceived credential system.

In the face of compromises that are customized to specific targets, sharing cyberintelligence information has limited value. Knowing that the data breach at the Wellpoint Blue Cross affiliate could be traced to the domain “” was useless in the defense of the Premera Blue Cross affiliate, which was compromised using the domain “”[45] The valuable lesson from the Wellpoint compromise was the recurring spearphishing/credential compromise TTP, not the data processing artifacts of the engagement.


Popper cautioned:[46]

[N]o scientific predictor--whether a human scientist or a calculating machine--can possibly predict, by scientific methods, its own future results. Attempts to do so can attain their result only after the event, when it is too late for a prediction; they can attain their result only after the prediction has turned into a retrodiction.

Effective cybersecurity must start with a rejection of the siren call of fantastic machines that can predict the future. The forensic value to be derived from machine data must not be confused with predictive defensive strategies. Rapidly sharing the forensic results of defensive failures allows defenses to be updated by the entire target community. In order to be effective, defenders must operate inside the adversary’s loop. Security analysts and users need tools that enhance their orientation and encourage effective actions. The Methodology for Adversary Obstruction is a practical framework which can be applied to current and future cyber engagements to operate inside the adversary’s loop.

End Notes

[1] Frederick Chang, Ph.D., “Guest Editor’s Column,” The Next Wave, 2012, Web. 5 May 2014,

[2] Evan Perez, Jim Sciutto and Manu Raju, Feds probing Clinton campaign hacking, CNN, 30 July 2016. Web. 30 July 2016,

[3] Cognitive security white paper, IBM, 2016. Web. 25 July 2016,

[4] Dave Herrald, Building a Security Operations Center With Splunk, SlideShare, 5 May 2015. Web. 7 July 2016,

[5] Robert S. Gutzwiller, Sunny Fugate, Benjamin D. Sawyer, and P.A. Hancock, The Human Factors of Cyber Network Defense, Proceedings of the Human Factors and Ergonomics Society Annual Meeting, September 2015, vol. 59, no. 1, 322-326.

[6] Diane Staheli, Tamara Yu, R. Crouser, Suresh Damodaran, Kevin Nam, David O'Gwynn, Sean McKenna and Lane Harrison, (2014). Visualization evaluation for cyber security: Trends and future directions. VizSec, 49-56.

[7] 81% of line-of-business workers and 83% of IT staff admit to using non-approved SaaS resources.  See, Frost & Sullivan, The Hidden Truth Behind Shadow IT, November 2013, Web. 18 July 2014,

Companies are using 15 times more cloud services than CIO’s know about; users are placing substantial data processing activity outside of the view of the SIEM system. Nick Earle, Do You Know the Way to Ballylickey? Shadow IT and the CIO Dilemma, Cisco Blogs, 6 August 2015. Web. 14 July 2016,

[8] Steven Chan and Davin Stilson, Information Protection, DLP, Encryption Demo, proofpoint Webinar, 28 July 2016. Web. 28 July 2016. at 47:20

[9] Mark Western, Developing a Framework to Improve Critical Infrastructure Cybersecurity, In Response to RFI# 130208119-3119-01, The National Institute of Science and Technology, 8 April 2013.

[10] Iacovos Kirlappos, Adam Beautement, M. Angela Sasse; “Comply or Die” Is Dead: Long Live Security-Aware Principal Agents, Financial Cryptography and Data Security, Volume 7862 Lecture Notes in Computer Science pp 70-82.

At a 5 July 2016 press briefing, FBI Director Comey described a widespread and persistent pattern of unhelpful employee conduct by State Department personnel. Statement by FBI Director James B. Comey on the Investigation of Secretary Hillary Clinton’s Use of a Personal E-Mail System, FBI, 5 July 2016. Web. 4 August 2016,

[11] Admiral Michael Rogers, Briefing with Admiral Michael Rogers, Commander of U.S. Cyber Command, Wilson Center, 8 September 2015. Web. at 41:40

[12] Chris Strohm, U.S. Military’s Anti-Hacking Force Won’t Be Ready Until 2018, Bloomberg Technology, 14 April 2015. Web. 18 June 2016,

[13] Martin Libicki, Lillian Ablon, and Tim Webb, The Defender's Dilemma: Charting a Course Toward Cybersecurity, RAND, 2015.

[14] Berman, Dennis K, Adm. Michael Rogers on the Prospect of a Digital Pearl Harbor, Wall Street Journal, 26 October 2015. Web. 26 October 2015,

[15] Gregory Conti and Edward Sobiesk, Malicious Interfaces and Personalization's Uninviting Future. IEEE Security and Privacy, May/June 2009.

[16] Robert Lipovsky, BlackEnergy trojan strikes again: Attacks Ukrainian electric power industry, welivesecurity, 4 January 2016. Web. 30 July 2016,

[17] Dmitri Alperovitch, Bears in the Midst: Intrusion into the Democratic National Committee, Crowdstrike Blog, 15 June 2016. Web. 1 July 2016,

[18] Business E-Mail Compromise, An Emerging Global Threat, FBI, 28 August 2015. Web. 3 August 2016,

[19] Matthew Miller, Jon Brickey and Gregory Conti, Why Your Intuition About Cyber Warfare is Probably Wrong, Small Wars Journal, 29 November 2012. Web. 10 December 2015,

[20] U.S. v. Su Bin, Criminal Complaint, 27 June 2014, Web. 11 November 2015,

[21] Dmitri Alperovitch, n. 17.

[22] Ben Elgin, Dune Lawrence and Michael Riley, Neiman Marcus Hackers Set Off 60,000 Alerts While Bagging Credit Card Data, Business Week, 21 February 2014. Web. 25 March 2014,

[23] Dan Kaplan, Hackers hijack Bit9 to target its customers with malware, SC Magazine, 8 February 2013. Web. 2 August 2016,

[24] Toni Gidwani, Guccifer 2.0, the DNC hack and Fancy bears, oh my!, SC Vendor Webcast, 26 June 2016. Web. 26 June 2016. (registration required)

[25] Nassim N. Taleb, Beware the Big Errors of ‘Big Data’, WIRED, 8 February 2013. Web. 6 March 2016,

[26] Joint Publication 3-13.4, Military Deception, pages A-1 -- A-2

[27] Verizon 2016 Data Breach Investigations Report, page 5.

[28] Qing Li and Gregory Clark. Security Intelligence. Indianapolis: Wiley, 2015. Page 251.

[29] For a discussion of the cognitive dimension, see, Joint Publication 3-13, Information Operations; Joint Publication 2-01.3, Joint Intelligence Preparation of the Operational Environment and Joint Publication 3-12, Cyberspace Operations

[30] Gregory Conti and James Caroland, Embracing the Kobayashi Maru - Why You Should Teach Your Students to Cheat, IEEE Security and Privacy, July/August 2011.

[31] M. Osorno, T. Millar, D. Rager, “Coordinated Cybersecurity Incident Handling,” 16th ICCRTS, 2011.

[32] Karl Popper. The Poverty of Historicism. Boston: Beacon Press, 1957, p. xi

[34] See Joint Publication 3-13.3, Operations Security.

[35] John Boyd, The Essence of Winning and Losing, 28 June 1995, (Rev. January 1996)

[36] Information Operations and Cyberspace Operations, n. 29.

[37] P.A. Hancock, (2012). In Search of Vigilance, The Problem of Iatrogenically Created Psychological Phenomena, American Psychologist, Vol. 68, No. 2, 97-109.

[38] I. Kirlappos, A. Beautement and M. A. Sasse, n. 10.

[39] Adam Beautement, M. Angela Sasse and Mike Wonham, “The compliance budget: managing security behaviour in organisations,” Proceedings of the 2008 workshop on new security paradigms. ACM, 2009.

[40] Adam Beautement, Ingolf Becker, Simon Parkin, Kat Krol and M. Angela Sasse, “Productive Security: A scalable methodology for analyzing employee security behaviours,” Symposium on Usable Privacy and Security (SOUPS) 2016, June 22–24, 2016, Denver, Colorado.

[41] Robert Zager and John Zager, “Combat Identification in Cyberspace,” Small Wars Journal, 25 August 2013, Web. 1 August 2014,

[42] Impostor fraud: Do you know whom you’re paying?, Wells Fargo Bank, N.A., 2014, Web. 2 May 2016,

[43] Digital Authentication Guideline: Public Preview, National Institute of Standards and Technology. Web. 5 August 2016,

[44] Brian Krebs, Social Security Administration Now Requires Two-Factor Authentication, KrebsonSecurity, 1 August 2016. Web. 4 August 2016,

[45] Brian Krebs, Premera Blue Cross Breach Exposes Financial, Medical Records, KrebsonSecurity, 17 March 2015. Web. 1 August 2016,

[46] Popper, n. 32, pages ix-x.


Categories: Mad Scientist

About the Author(s)

Robert Zager is an inventor and entrepreneur. He has been granted twelve United States patents in the areas of computer networking and email. He holds a BA degree from the University of California, Berkeley and a JD degree from Santa Clara University. He is currently a security researcher at Iconix, Inc. in San Jose, California.

John Zager is a psychologist with a penchant for systems analysis. He holds a BA degree in Psychology and an MA degree in Industrial Organizational Psychology from Hofstra University. While completing his undergraduate studies he served as an intern for U.S. Senator Kirsten Gillibrand (D, NY). He is currently a People Analytics Manager within Walmart’s eCommerce division.


Outlaw 09

Sat, 09/17/2016 - 1:43am

We are failing for two major reasons:

1. we never respond in kind to attacks if the attacker is identified...hacking is a black art using the failures of coding and the IEEE and ego driven....if successful once the individual or group will continue until someone steps on their toes and shows them they are being followed closely...if they see no response they just keep on increasing their actions

2. we must realize in a hurry that we are no longer the rulers of the coding world and Silicon Valley is no longer the center of IT/internet creativity...there are increasingly other areas of the globe that more than match our current coding/IT abilities

I have a number of Ukrainian coder/analysts working with me that are head and shoulders above many, actually a lot of, US coders/analysts to start with.....and they think out of the box in ways we simply do not see at major universities producing IT personnel with MS degrees....

To them coding/analysis is simply an elevated art form and that is how they treat it....US types only look at the salary and ask for more...

The views expressed herein are the views of the authors and do not necessarily reflect the views of Iconix, Inc. or PepsiCo, Inc. Posted by the authors.