Several SANS CTI Summits ago I presented on the ROI of Threat Intelligence Sharing and wrote a complementary blog highlighting a few scenarios where resource-constrained organizations could actually save time and money in the acquisition and processing (analysis) of Threat Intelligence through the simple act of sharing. While that notion is still valid and some of those ideas are still applicable, the value of Threat Intelligence sharing continues to evolve.
In terms of Threat Intelligence sharing, perhaps our priorities are off. Perhaps we shouldn’t be placing as much value in created things (indicators & feeds) when the true value lies in the creator (the process by which indicators come about or are leveraged). Maybe sharing the business process (or tradecraft) should really be the priority. Maybe we can share what we know in a way that allows us (and others) to customize and scale Threat Intelligence processes.
The Pyramid of Pain & The Dichotomy Diamond
Consider for a moment that our adversaries have specific processes in which they leverage capabilities and infrastructure against their victims to carry out their evil deeds. One of my favorite models highlighting the "value" of indicators contrasted against processes in the context of "Threat Intelligence" is David Bianco's Pyramid of Pain (PoP). The TL;DR version of the PoP contains a variety of common indicator types layered by their relative value to the adversary, where value increases as you move up the Pyramid. Organizations with the ability to impact (read: disrupt) an adversary's Tactics, Techniques and Procedures (TTPs) will likely cause the adversary the most "pain," increasing their operational costs (time and/or money) in the hope that the adversary packs up and goes home, or spends their resources going after another, less mature target.
Assuming the premise of the Pyramid of Pain is true, I wanted to see if the inverse was also valid. If we simply mirror the Pyramid of Pain below that of the attacker, from the perspective of the defender, we get a "Dichotomy Diamond" which also seemingly reinforces the value of the process over atomic-level indicators. Hashes, IPs, domain names, and artifacts are simply the product of the process. The defender's tactics, techniques, and procedures (read: the defender's process) by which indicators are created, enriched, and/or operationalized are of far more value, thus likely having a much longer "shelf life" and the potential to carry much more impact for an organization in the long run.
Threat Intelligence Business Processes
Threat Intelligence consumers and practitioners will both agree that the cost of producing and acquiring quality, finished Threat Intelligence is very high. This is because you have a very scarce (and expensive) resource, specialized "Threat Intelligence talent" (practitioners), distilling an even scarcer resource: quality Threat Intelligence.
One of the best ways an organization can save time and money is with business processes. These business processes are often data-driven or measurable in a way that allows someone to validate what works and what doesn’t. The very last thing any company wants is for resources to be spent on something that ultimately wastes the organization’s time and money.
Unfortunately, when it comes to business processes, many newcomers to the Threat Intelligence scene do not know where to start. They have yet to define any processes; the processes they do have are highly manual; they often struggle to put these processes into action; and, worse, they fail to periodically test and measure these processes in order to refine and improve them. But can we really blame them? Most of us have pets that are older than the Threat Intelligence industry. However, if Threat Intelligence is going to become a key investment area for organizations and be taken seriously in the boardroom, practitioners and leaders will have to be able to demonstrate the value, return on investment, and/or compelling impact that Threat Intelligence brings to the organization.
Paleolithic Era of Threat Intelligence
Since early man, we have had various forms of tools and purpose-built solutions such as fire, stone wheels, arrowheads, etc. Many of these solutions were used independently at first, and eventually used together to create secondary purpose-built solutions.
For those of you geeks who are familiar with Minecraft, you should very well understand the concept of taking individual elements, “crafting” them together to create a new solution; for those of you who are not familiar (or too proud to admit it), please spare yourself the shame and lost hours of productivity.
The creation, use, and refinement of such primitive purpose-built solutions have been shared across generations, iteratively improved upon over the ages. The process by which fire was created, or a wheel was formed, was not hoarded away; rather, the knowledge of how these respective solutions were built and used proliferated globally. In the Paleolithic era of Threat Intelligence, I suspect we are placing emphasis on the wrong thing. We seemingly place a greater emphasis on selling or sharing the outputs (indicators), not the process that we use to create them.
Process, Process, Process
The ThreatConnect Research Team performs many of the same tasks that today’s Security Operations Centers (SOCs) and Threat Intelligence teams perform. The purpose is to ensure that ThreatConnect is meeting our users where they are, and delivering against the needs of the market in pragmatic ways.
The following are a few examples of expensive, highly manual tasks we initially performed to hunt for and analyze data of interest within the VirusTotal Malware Intelligence Service (VTMIS), and how we eventually optimized those processes to be automated and scalable within ThreatConnect.
NOTE: As you read below, the sum of these modular automated processes will come together to create a powerful capability which helps our analysts scale and work much more efficiently. Please do not lose sight of the forest for the trees: The purpose of the following section is to provide (give away) some of the very processes (aka secret sauce) we have been successful in implementing. Perhaps this will spark some ideas as to how to adopt and/or customize similar processes.
Signature Management Process: The following section relates to the process in which the ThreatConnect Research team planned and directed collection of data (malicious files) deemed of value from within the VTMIS data service.
- Before: When we first began a few years ago, our Signature Management process was very immature. It was primarily conducted by a single analyst, who kept categories of Yara signatures within a single text file on their laptop. The analyst would have to manually update the master signature file and periodically upload the updates to VTMIS, a process which was very time consuming and highly error prone. Aside from the continuity of that business process should that analyst ever be hit by a truck (or should the laptop ever be lost or stolen), it simply didn't scale as our team grew. We needed our Yara signatures to be centralized, visible, and accessible by all, so that many analysts could take part in the creation and editing process.
- After: Once we developed criteria for how we would organize signatures, we could extend that process to other team members, building in automation to reduce the manual steps as well as the likelihood of syntax errors. We incorporated Git as a revision control system to keep our rules organized. This allows all changes to be tracked to a single individual. It also allows those changes to be reverted precisely to a previous version if a mistake is found. Lastly, because we use a cloud Git provider, there are backups and redundancy built into the system. We also used ThreatConnect tags with a custom schema so that we could organize, find, and bundle specific categories of signatures. The process then uses build scripts to ingest the rules from the Git repository and organize them for deployment. Because some rules are duplicated across many rulesets, humans are prone to copy-and-paste and other typing errors. These build scripts remove the possibility of errors due to repetitive actions. Machines do a better job of repetitive tasks, so why not use them?
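The build step described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not ThreatConnect's actual tooling; the `build_ruleset` and `split_rules` helpers and the naive rule-splitting regex are assumptions for the sake of the example.

```python
import re

# Matches the name in a Yara rule declaration, e.g. "rule Win32_Foo {".
RULE_NAME = re.compile(r"^\s*rule\s+(\w+)", re.MULTILINE)

def split_rules(text):
    """Naively split a file into individual rule blocks on the 'rule' keyword."""
    starts = [m.start() for m in RULE_NAME.finditer(text)]
    return [text[a:b] for a, b in zip(starts, starts[1:] + [len(text)])]

def build_ruleset(rule_files):
    """Concatenate rule texts, keeping only the first copy of each rule name.

    `rule_files` is a list of strings, each the contents of one .yar file
    pulled from the Git repository. Rules duplicated across rulesets
    (exactly the copy-and-paste hazard described above) are emitted once.
    """
    seen, output = set(), []
    for text in rule_files:
        for chunk in split_rules(text):
            match = RULE_NAME.search(chunk)
            name = match.group(1) if match else None
            if name and name in seen:
                continue  # skip the duplicate a human might have pasted twice
            if name:
                seen.add(name)
            output.append(chunk.strip())
    return "\n\n".join(output)
```

A real build script would also validate each rule's syntax before deployment, but even this small deduplication pass removes a whole class of repetitive-action errors.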
Signature Prioritization Process: The following section relates to the process in which the ThreatConnect Research team prioritized how automated processes would select and process data (malicious files) that our team deemed of value from within the VTMIS data service.
- Before: Our initial prioritization process was highly subjective. Analysts would only work on the things that interested them, not necessarily what was of value (or interest) to the organization. There was no formal process defining which alerts needed to be worked immediately and which were of lesser importance, to be worked at a later date and time. We also lacked any way to account for whether a specific signature was effective – namely, whether it was prone to false positives.
- After: In addition to our Signature Management Process, we also established a signature prioritization process using the Yara metadata section. Because we had limited resources, we couldn't treat everything as a high-priority, five-alarm fire. We had to be selective about what we worked on, focusing resources only on items where we felt there would be maximum impact to the organization. This process included a custom prioritization schema that we used as a team to capture and memorialize our prioritization decisions; this helped the analysts focus their time and energy on high-payoff alerts that we were confident were not associated with false positives.
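One way to encode a prioritization schema is in the Yara `meta` section of each rule, then parse it during triage. The sketch below is illustrative only: the `priority` and `fp_prone` field names are hypothetical, not the schema ThreatConnect actually uses.

```python
import re

# One "key = value" line inside a rule's meta: section.
META_FIELD = re.compile(r'^\s*(\w+)\s*=\s*(".*?"|\d+|true|false)\s*$', re.MULTILINE)

def parse_meta(rule_text):
    """Extract key/value pairs from a Yara rule's meta: section."""
    meta = {}
    section = re.search(r"meta:(.*?)(?:strings:|condition:)", rule_text, re.DOTALL)
    if not section:
        return meta
    for key, value in META_FIELD.findall(section.group(1)):
        meta[key] = value.strip('"')
    return meta

# Hypothetical rule annotated with the custom prioritization fields.
rule = '''
rule Suspected_Backdoor {
    meta:
        author = "research team"
        priority = 1
        fp_prone = false
    condition:
        true
}
'''
meta = parse_meta(rule)
```

Downstream automation can then sort alerts by `meta["priority"]` and drop anything flagged `fp_prone`, so the schema written by analysts drives the machine's triage order.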
Notification Management Process: The Notification Management Process takes a very large, constant flow of alert notifications (on the order of 250-500 per day) and orders them using the Signature Prioritization Process. Only the freshest notifications, those we have already decided are the most important and are, with the highest degree of confidence, not associated with false positives, bubble to the top and are passed to the next stage of automation.
- Before: Like many others, our initial VTMIS notification management process was based on email. A huge volume of emails would come in around the clock telling us that something we cared about needed to be reviewed. High-, medium-, and low-priority alerts were intermixed, requiring analysts to manually triage the alerts, sorting and searching for the ones of significance. Some analysts would look at emails instantly; others would access notifications within a digest. The volume of reviewing hundreds of emails was too much for any one analyst to handle. In many cases valuable information would be dropped on the floor, which completely missed the point of having a notification process in the first place.
- After: With ThreatConnect we were able to create a custom source responsible for warehousing all of the VTMIS alerts that were of interest. The notification process would be responsible for digesting the queue of notifications that our Signature Management Process created, and that our Signature Prioritization Process put in order, for our human analysts to review. We also had the option to deprecate content from within that custom source over time, retaining only the items we cared the most about and slowly decaying content that we no longer wanted.
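The ordering-and-decay behavior described above can be approximated in a few lines. This is a sketch under assumed field names (`priority`, `received`, `sha256`); the real queue lives inside the platform, not in a script like this.

```python
from datetime import datetime, timedelta

def triage_queue(notifications, now, max_age_days=30, top_n=50):
    """Order a flood of notifications so the highest-priority, freshest
    items surface first, and decay anything past its shelf life.

    Each notification is a dict with 'priority' (1 = highest), 'received'
    (a datetime), and 'sha256'. Field names are illustrative.
    """
    cutoff = now - timedelta(days=max_age_days)
    # Decay: silently drop anything older than the retention window.
    fresh = [n for n in notifications if n["received"] >= cutoff]
    # Highest priority first; within a priority, newest first.
    fresh.sort(key=lambda n: (n["priority"], now - n["received"]))
    return fresh[:top_n]
```

With 250-500 notifications a day, a deterministic sort like this is what lets the important items "bubble to the top" instead of depending on which emails an analyst happened to open first.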
Download Management Process: The number of downloads or queries from a source like VTMIS is a finite resource (unless your organization has bottomless resources). For less resourced organizations, services billed per download or against query quotas can be costly. It can be especially difficult for organizational stakeholders to justify budget when the organization cannot predict how much of something it will use. This Download Management Process leans heavily into the optimization of a business process that ultimately maximizes the value of a certain security investment. If you pay for 1,000 queries a day, you'd better be leveraging that investment to the max and using all 1,000 queries.
- Before: Our initial Download Management Process was manual. Analysts had to "keep an eye on" download quotas and self-regulate their download and query "burn rate" so that they didn't consume too many, too quickly. Alternatively, if there were months when analysts were on vacation or work was light, that particular data service investment would not be fully leveraged, essentially wasting organizational resources.
- After: For organizations big and small, mature and immature, optimizing processes which maximize the use of a particular investment and ensure there is no waste of organizational resources should always be a goal. Our new process takes an agreed-upon daily allowance and maximizes this daily limit by rolling unused quota "credits" over to the next session. It is also aware of the day of the month and the time of day: on the last business day of the month, after business hours, it consumes the rest of the credits on the account, preventing waste. Anytime you can speak in these terms and demonstrate resourcefulness to your leadership, you will certainly build trust and confidence that your Threat Intelligence Program is on the right track.
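The rollover scheme might look something like the following. The pacing rule (spend today's allowance plus half the carried-over balance) is an assumption chosen for illustration; the actual product logic is not published here.

```python
def daily_budget(base_allowance, rollover, is_last_business_day):
    """Today's spendable query budget under a rollover scheme.

    `base_allowance` is the agreed daily limit; `rollover` is unused quota
    carried forward from previous sessions.
    """
    if is_last_business_day:
        # Burn the entire remaining balance so no paid quota is wasted
        # before the monthly reset.
        return base_allowance + rollover
    # Otherwise pace consumption: today's allowance plus a slice of the
    # carried-over balance, so the rollover drains gradually.
    return base_allowance + rollover // 2

def record_usage(budget, used):
    """Unused quota rolls over to the next session."""
    return max(budget - used, 0)
```

For example, with a 1,000-query allowance and 400 unused credits carried over, a mid-month day gets 1,200 queries, while the last business day gets the full 1,400.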
Automated Malware Analysis (AMA) Sandbox Triage Process: This process takes the downloaded malicious files and sends them to a controlled environment using free or premium AMA Sandboxing solutions. These solutions conduct an automated first-tier triage of malicious binaries on behalf of the analyst. This allows the analyst to get the gist of the type of malware, what infrastructure it interacts with, what files it may interact with, and how it executes upon a victim host.
- Before: Our initial AMA Sandbox triage process was 100% manual. An analyst had to interact with malicious files and upload them to a series of AMA solutions. Many of these AMA solutions also maintained submission queues which required analysts to keep an eye on “job” quotas. Once an analysis job was queued, an analyst would have to wait for the job to finish or work on something else until the Sandbox analysis was complete.
- After: We created a handful of ThreatConnect apps that integrated many of the leading premium and open source Sandboxes. This stage in the workflow is an interchangeable component. Any available type of AMA can be dropped into this point in the toolchain.
Sandboxing Results Organization Process: This process makes sure that when the sandbox sessions end, the results are gathered, each report is automatically processed and structured within ThreatConnect, and all indicators are extracted and subsequently associated with the specific signature that triggered the event, the notification that was sent, and the initial binary that caused the rule to trigger. This process runs day and night, allowing the team to rest assured that when the analyst comes into work the next day, the content has been triaged and packaged for an analyst to review, clean up, and disseminate to other people and processes.
- Before: If you are an analyst, you are probably detecting a theme here. Our original post-process sandbox reporting was – you guessed it – manual. After the Sandbox job finished, the analyst would have to manually transfer the output of the Sandbox report into ThreatConnect. It was a very time consuming process which often led to excruciating bouts of carpal tunnel. This process was also highly error prone, which made about as much business sense as paying expensive, highly educated lawyers to dig a hole in your backyard. (I don't know about you, but I want my analysts to analyze things, not do mindless data entry from one system to another.)
- After: We created a ThreatConnect App that would track the status of the AMA processing jobs. When it saw that a particular report was completed it would package the output and place the respective indicators and context into ThreatConnect attributes.
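The poll-then-ingest loop at the heart of such an App can be sketched generically. All four callables below (`fetch_report`, `extract_indicators`, `ingest`) stand in for real sandbox and platform integrations; their names and shapes are assumptions, not ThreatConnect's API.

```python
import time

def poll_and_ingest(jobs, fetch_report, extract_indicators, ingest, interval=60):
    """Poll sandbox jobs until each completes, then push results downstream.

    `fetch_report(job_id)` returns None while the job is still running and
    the report dict once finished; `extract_indicators` pulls the IPs,
    domains, and hashes out of a report; `ingest` writes them, with context,
    into the platform as attributes and associations.
    """
    pending = set(jobs)
    results = {}
    while pending:
        for job_id in list(pending):
            report = fetch_report(job_id)
            if report is None:
                continue  # still running; check again next pass
            ingest(job_id, extract_indicators(report))
            results[job_id] = report
            pending.discard(job_id)
        if pending:
            time.sleep(interval)  # back off so we don't hammer the sandbox API
    return results
```

Because the loop runs unattended, the day-and-night behavior described above falls out for free: analysts arrive to find completed reports already structured and associated, with no data entry in between.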
As a kid (pre-Google, pre-iPhone, and pre-RTFM), I would ask my mother how to spell something. She would tell me to grab a dictionary and look it up. She did this so that I would learn how to learn, versus just giving me the answer. Although spellcheck has undermined that maternal tough love, the spirit of "give a man a fish and he eats for a day; teach a man to fish and he eats every day of his life" clearly has applicability within our community too. We are not going to improve as individuals, organizations, or an industry if we continue to give others the "correct spelling" or just feed them "fish"; we can help them for the long term by giving our peers the processes by which they can help themselves.
The spirit behind sharing these automated processes is to transfer knowledge, and we have all certainly heard the adage that "knowledge is power." Our goal here at ThreatConnect is to transfer both knowledge and power to our customers so that they can take control of both their security investments and their data, giving them a platform that makes their data work for them, not just aggregate someone else's feed.
I didn’t grow up on a farm, but I have recently spent a good bit of time on a tractor. For those of you who aren’t as familiar, tractors have extremely powerful engines. A good-sized farm tractor will also have a power take-off (PTO) drive, which can transfer power directly from the engine to modular, purpose-built implements (such as mowers, tillers, and post hole diggers) depending on whatever task you need to do. Just back the tractor up to the implement, connect it, and you are ready to go to work.
ThreatConnect is delivering the same type of powerful engine, with "PTO" extensibility, through our SDKs and APIs. Now anyone can take their Threat Intelligence processes and create or share Apps, "Threat Intelligence implements," via our TC Exchange. This construct allows any organization to adopt, buy, or create purpose-built solutions for whatever Threat Intelligence business needs or processes they may have. Those purpose-built solutions can just as easily be given away so that others do not need to "reinvent the wheel"; they can simply take control of what they need and fine-tune it to their respective use case or business process. This feature set is available in our Cloud, Dedicated Cloud, and local On-premise deployment options.