Knowing Your Battle Space - Part 2
As a refresher we covered the following topics in our first post:
The Blue Teamer Dilemma
- Vendor Pressure to buy vs educate
- Increasing attacks and sophistication level
- Lack of priority and focus
- Adoption of Common Framework
- MITRE ATT&CK Framework
- Changing from signatures to tactic based detection
- Pivot point for the relationship between the attacker and defender
- Common Knowledge
In this post we will cover how we built a maturity model from the MITRE ATT&CK framework and used it to help visualize our environment. Our topics for this discussion:
- Scoring based on multiple decision points
- Mapping Specific Security Controls to Tactics
- Visualizing results for decision makers
All of the code used for these visuals, dashboards and searches can be found in our GitHub:
Scoring based on multiple decision points
After adopting a framework to measure your maturity against adversarial tactics, the next most important thing is how to measure and score against that framework. Luckily we had created a previous maturity model and put a lot of thought into its scoring, which translated well to the ATT&CK framework. One aspect we wanted to maintain was measuring several factors for the same tactic:
Detection - Our ability to see a tactic being used and be alerted to it
No visual - No ability to see the use of a tactic
Local visual - Local event log or management console
Central visual - Logs and events loaded into Splunk every hour or faster
Active visual - Searches, thresholds and alerts established within Splunk
Response - Our ability to identify, respond or both to a tactic
No Response - No response to the tactic
Identification - Identification of the tactic with a control
Response - Blocking or redirecting the tactic through a control
Identification and Response - Identification and blocking or redirecting the tactic through a control
Sophistication - The skill, specific knowledge, special training, or expertise an attacker must have to perform the tactic. These levels are taken from the STIX definitions: http://stixproject.github.io/data-model/1.2/stixVocabs/ThreatActorSophisticationVocab-1.0/
Novice - Demonstrates a nascent capability. A novice has basic computer skills and likely requires the assistance of a Practitioner or higher to engage in hacking activity. He uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet and lacks the ability to conduct his own reconnaissance and targeting research.
Practitioner - Has a demonstrated, albeit low, capability. A practitioner possesses low sophistication capability. He does not have the ability to identify or exploit known vulnerabilities without the use of automated tools. He is proficient in the basic uses of publicly available hacking tools, but is unable to write or alter such programs on his own.
Expert - Demonstrates advanced capability. An actor possessing expert capability has the ability to modify existing programs or codes but does not have the capability to script sophisticated programs from scratch. The expert has a working knowledge of networks, operating systems, and possibly even defensive techniques and will typically exhibit some operational security.
Innovator - Demonstrates sophisticated capability. An innovator has the ability to create and script unique programs and codes targeting virtually any form of technology. At this level, this actor has a deep knowledge of networks, operating systems, programming languages, firmware, and infrastructure topologies and will demonstrate operational security when conducting his activities. Innovators are largely responsible for the discovery of 0-day vulnerabilities and the development of new attack techniques.
We use these three measurement points to make more granular decisions about where to prioritize effort and capital. They will also come in handy when we look for blind spots.
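To make the three-factor model concrete, here is a minimal sketch in Python. The class name, numeric level values, and the example tactic are our own illustrative assumptions, not the actual DarkFalcon schema; the level labels come straight from the lists above.

```python
from dataclasses import dataclass

# Ordered levels from the lists above: a higher index means a more mature
# defensive capability (or, for sophistication, a more skilled attacker).
DETECTION = ["No visual", "Local visual", "Central visual", "Active visual"]
RESPONSE = ["No Response", "Identification", "Response",
            "Identification and Response"]
SOPHISTICATION = ["Novice", "Practitioner", "Expert", "Innovator"]

@dataclass
class TacticScore:
    """One tactic scored against the three measurement points."""
    tactic: str
    detection: str
    response: str
    sophistication: str

    def maturity(self) -> int:
        # Combined defensive maturity: detection level plus response
        # level, ranging from 0 (blind, no response) to 6.
        return DETECTION.index(self.detection) + RESPONSE.index(self.response)

score = TacticScore("Credential Dumping", "Central visual",
                    "Identification", "Practitioner")
print(score.maturity())  # 2 (Central visual) + 1 (Identification) = 3
```

How the factors are weighted and combined is a judgment call for your environment; the point is simply that each tactic carries all three scores rather than a single pass/fail.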
Mapping Specific Security Controls to Tactics
As we began scoring the tactics in our environment, we kept finding ourselves mentally tracking which controls we found useful against each tactic. We developed a tool in Splunk to offload that mental bookkeeping and help us analyze which controls were actually useful.
There are a couple of steps needed for this, first adding your controls to the DarkFalcon data, then linking them to tactics.
Once this data is populated and linked there are some incredible visualizations you can build quickly.
Visualizing results for decision makers
With all of the effort put into populating this data, let's start leveraging it to help make better security decisions. We posed a common question from management: what are the top three priorities we could execute to improve our security posture? We took a threat-tactic approach to answering it, using our scoring and control linking to drive an answer.
Below are slides from a sample walkthrough we did with this exercise:
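One plausible way to rank tactics for that "top three" question is sketched below. This is our own assumed heuristic, not necessarily the authors' exact logic: tactics with the lowest defensive maturity that also require the least attacker sophistication bubble to the top, since they are the cheapest wins for an adversary. The tactic names and scores are invented for illustration.

```python
# Ordered attacker-skill levels (from the STIX vocabulary above).
SOPHISTICATION = ["Novice", "Practitioner", "Expert", "Innovator"]

# (tactic, detection level 0-3, response level 0-3, sophistication)
tactics = [
    ("Spearphishing Attachment", 1, 0, "Novice"),
    ("Credential Dumping",       2, 1, "Practitioner"),
    ("Process Injection",        0, 0, "Expert"),
    ("Scheduled Task",           0, 1, "Novice"),
]

def priority(t):
    name, detection, response, soph = t
    # Lower score = higher priority: weak detection/response combined
    # with a low bar for the attacker means the biggest exposed gap.
    return (detection + response) + SOPHISTICATION.index(soph)

top3 = sorted(tactics, key=priority)[:3]
print([name for name, *_ in top3])
# ['Spearphishing Attachment', 'Scheduled Task', 'Process Injection']
```

In practice you would also fold in the linked controls, so the recommendation names not just the tactic but the specific control to buy, tune, or deploy.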
We realize this is a considerable amount of material, but it builds a solid foundation for the upcoming posts. We also hope it saves you the head-scratching we went through so you can become effective more quickly.
Again, all of the source code is posted in our GitHub:
You will also see dashboards and code we will cover later in the series so hang in there with us.
In the next part of the series we will cover how we worked through automating the testing of MITRE ATT&CK Tactics with AttackIQ Firedrill and automatically score them in Splunk.
- Automated Adversary Testing
- Repeatable, objective scoring across all tactics
- Quick adoption of new tactics
- Automated scoring of Tactics in DarkFalcon