Threat Intelligence


Webinar Background

Kinetic and Potential Energy Framework: Applying Thermodynamics to Threat Intelligence

ThreatConnect proposes a framework for evaluating and triaging indicators based on physical energy properties

Scientists of all varieties, from chemists to physicists to engineers, measure kinetic and potential energies to better understand how objects are acting or will act within a given situation or system. We posit that these energy concepts can be applied to threat intelligence as a framework to better understand and evaluate indicators and the intelligence associated with them.

Cyber threat intelligence consumers or producers can use this kinetic and potential energy framework to accomplish the following:

  • Scrutinize indicators for the relevant context that would ultimately constitute “intelligence.”
  • Evaluate and triage indicators, reported activity, and intelligence feeds or reports based on basic, inherent intelligence requirements.
  • Identify intelligence gaps and collection requirements to further enable a threat intelligence program.
  • Share the necessary context or calculated energies to facilitate a consumer’s integration of provided information.

We’ll start by describing some common issues with threat intelligence that we hope the application of this framework can mitigate or deter.

Issues with Cyber Threat Intelligence

Intelligence Requirements

At many organizations, incident responders or security operations center (SOC) personnel might be dual-hatted and also serve as threat intelligence analysts. Organizations with dedicated threat intelligence teams or individuals are uncommon, and even those organizations often have trouble integrating intelligence analysts with the typical incident response function, so they never realize the full potential of threat intelligence. Those shortcomings often manifest in specific problems like a lack of intelligence requirements.

If you’re asking what intelligence requirements are and why they matter, don’t worry, you’re not alone. In short, intelligence requirements identify what the intelligence analysts at a given organization focus on. In terms of the intelligence cycle, intelligence requirements are part of the planning and direction step.

The Intelligence Cycle


Let’s say you’re an organization operating in the healthcare sector. A very basic intelligence requirement for your organization might be to identify activity targeting the healthcare sector. That requirement would then dictate the sources of information that you collect or procure, how you would process and exploit that information, the specific intelligence analysis that you produce from exploiting that collection, and what and how you disseminate and integrate that analysis at your organization.

Oftentimes organizations don’t have any identified intelligence requirements, and threat intelligence research without intelligence requirements is just surfing the web. Conversely, some organizations will say that they want to know about everything, so “everything” becomes their intelligence requirement. If everything is your intelligence requirement, you’ll end up spreading your defensive resources inefficiently. Intelligence requirements also have to be relatively specific so that execution against them within the intelligence cycle can be tracked.


For organizations that are getting started with threat intelligence or don’t already have identified intelligence requirements, there are basic intelligence requirements that your organization can use. These might seem overly simplified – which they are – but they are still significantly more specific than “everything” and can give threat intelligence teams a general heading. Those basic intelligence requirements include the following:

  • Activity targeting my sector
  • Activity targeting my organization
  • Activity targeting specific data types that my organization secures (e.g., protected health information or PHI)
  • Activity emanating from my known adversaries

“Intelligence” Feeds or Reports

Indicators in and of themselves are not threat intelligence, but too often feeds and reports claim to be intelligence when really they are only indicators. Context maketh intelligence. Consider the Grizzly Steppe Joint Analysis Report from 2016. Hundreds of indicators were shared in that report, but the context shared with each of those indicators was insufficient to actually qualify them as intelligence. Ideally, cyber threat intelligence feeds and sources would answer all (or at least two) of the following, which generally correspond to the vertices and axes of the Diamond Model of Intrusion Analysis:

  • Who the bad guys are
  • What they are doing
  • How they are doing it
  • Who they are doing it against
  • Why they are doing it
  • What they will do next

Focus on Known Bad

The last issue worth noting is a general focus on known bad activity or indicators. Don’t get us wrong, this focus is necessary. But it fails to recognize that, if we employ threat intelligence to its fullest extent, we can proactively identify indicators that might be used in malicious activity in the future but aren’t yet known to be malicious. Otherwise, what you’re left with is playing whack-a-mole with indicators that possibly aren’t even being used in operations anymore by the time you hear about them.

By using this kinetic and potential energy framework, organizations can triage indicators and activity using basic intelligence requirements, scrutinize reports for relevant intelligence, evaluate their intelligence sources or reports, and include a more proactive approach to defense that incorporates suspicious indicators.

A Quick Thermodynamics Lesson

Kinetic and potential are different states of energy that describe the capability of an object to do work. Kinetic energy results from an object in motion, such as a moving car. Potential energy comes from an object’s position and may be converted into kinetic energy, such as a ball held above the ground or a compressed spring. To measure and understand these energies over time, scientists have to measure things like an object’s velocity, vector, height, and compression, while also taking into account energy-degrading factors like friction or gravity.

To better explain kinetic and potential energy, let’s consider a bow and arrow. A bow and arrow by themselves have no energy. When a bow is drawn to shoot the arrow, energy is put into the bow and arrow system. This energy is potential energy and is held in the drawn string of the bow. That potential energy can then be transferred into the arrow by releasing the string and shooting the arrow. At that point, the arrow that is flying through the air has kinetic energy while the potential energy in the bow is gone. This kinetic energy will then degrade as friction from the air and gravity act on the arrow until it hits its target or falls to the ground.

Let’s now consider that there is an arrow that we have to physically defend our organization against. Generally, this arrow has several characteristics that we want to understand to determine if and how we defend against it:

  • Whether the bow has been drawn
  • Whether the arrow has been shot
  • Where the arrow was shot from
  • What the arrow was shot at
  • How fast the arrow is traveling
  • Who shot the arrow

Correlation to Threat Intelligence

Those characteristics about the arrow that we want to understand are essentially threat intelligence and those arrows aren’t significantly dissimilar from indicators. In some cases there are indicators that we aren’t going to care about because they weren’t shot at our organization or any similar organizations.

Those things that we want to know about arrows relate to our intelligence requirements. Many of those intelligence requirements manifest in the physical energy properties – was the arrow shot, how fast and where is it traveling, is the bow drawn — so maybe indicators have relatable energies that we can measure to evaluate and better understand them.

Factors to Measure

When considering kinetic and potential energies for indicators, there are certain variables that we want to include in our equations to capture the necessary data points for the indicators we’re evaluating. These factors mimic those that scientists measure to calculate energies. For kinetic energy, we want to include velocity, vector (or direction), and its degradation over time:

  • Velocity is simply going to be binary — is it active or not.
  • Vector will be a combination of binary, relative factors. Depending on your frame of reference — the organization you’re in, your sector, the data you safeguard — that calculated vector will be different.
  • Degradation: just as gravity and friction ultimately reduce kinetic energy, time will reduce the kinetic energy of an indicator.

Potential energy is a bit more nebulous. The main variables we’re interested in are the compression or height and the degradation over time:

  • Compression/Height is where things might get sticky. This is going to be binary and relative to our frame of reference, like the vector for kinetic energy, but it is going to necessitate a better understanding of our adversary and their tactics.
  • Degradation is similar to what it is for kinetic energy with time ultimately reducing the potential energy of an indicator.

Our Equations

As we considered those factors that play into kinetic and potential energy, we ultimately generated the below equations to measure those energies. Keep in mind that these are the equations that we’ve developed to account for the aforementioned factors in the cyber world. The way that your organization views these factors and ultimately uses them to measure kinetic and potential energy may differ. More on that later.

Kinetic Energy

[Equation image: Kinetic Energy as a function of Velocity, Vector, and Degradation]

Kinetic energy for a given indicator is relative, meaning it is going to differ based on who is evaluating it and what organization they are a part of. Usually, any indicator with a kinetic energy greater than 0 deserves additional attention, and the higher the kinetic energy, the more pertinent the indicator is going to be to the individual or organization evaluating it. Let’s break down the different factors in the equation:

  • Velocity: To start off, if the indicator hasn’t actually been used in an operation, U is going to be 0 so the kinetic energy is going to be 0. In that case, we’d move to potential energy and evaluate that.
  • Vector: S+O+D+A really represents those distilled, basic, inherent intelligence requirements referenced earlier. For our equation, we’re treating all of these factors equally, but when doing this for your organization, you might choose to change it up a bit. This part of the equation represents essentially where that indicator is directed.
  • Degradation: The kinetic energy is going to decrease over time and ultimately approach 0 based on a deprecation period.
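As a rough illustration, here’s how those factors might combine in code. This is a sketch, not the exact equation from our charts: it assumes binary inputs, an equally weighted vector, and a simple linear decay over the deprecation period.

```python
def kinetic_energy(used, sector, org, data, adversary,
                   days_since_observed, deprecation_days):
    """Sketch of kinetic energy for an indicator.

    used (U) is binary: was the indicator actually used in an operation?
    sector (S), org (O), data (D), adversary (A) are the binary vector
    terms: activity targeting my sector, my organization, data types my
    organization secures, or emanating from a known adversary.
    Degradation is modeled as a linear decay to 0 over the deprecation
    period.
    """
    velocity = used  # binary: 0 means no kinetic energy at all
    vector = (sector + org + data + adversary) / 4
    degradation = max(0.0, 1 - days_since_observed / deprecation_days)
    return velocity * vector * degradation
```

An indicator never seen in operations scores 0 regardless of the other terms, which matches the rule above: if U is 0, move on and evaluate potential energy.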

Potential Energy

Potential energy should only be evaluated when an indicator is not known to have been used in an attack. Potential energy correlates with what might happen that is relevant to a given organization based on known adversaries. When indicators with potential energy greater than 0 are addressed, organizations are being proactive in defense. These are the factors in the potential energy equation:

  • Compression/Height: Potential energy necessitates an understanding of your adversaries and their tactics. When those things aren’t known, that can be considered an intelligence gap.
  • Degradation: Like with kinetic energy, potential energy will also degrade or deprecate over time. It should be noted however that the period over which you deprecate these suspicious indicators might be different than the period over which you deprecate known bad indicators.
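A matching sketch for the potential energy side, again under assumed simplifications: compression is reduced to a single binary judgment (do the indicator’s tactics match an adversary relevant to you?) with the same linear decay, applied over your suspicious-indicator deprecation period.

```python
def potential_energy(tactics_match_known_adversary,
                     days_since_identified, deprecation_days):
    """Sketch of potential energy for a suspicious indicator.

    tactics_match_known_adversary is binary: do the indicator's
    registration or hosting tactics match an adversary relevant to you?
    If you can't answer that question, you've found an intelligence gap.
    """
    compression = tactics_match_known_adversary
    degradation = max(0.0, 1 - days_since_identified / deprecation_days)
    return compression * degradation
```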

Applying the Equations

Now we’ll apply these equations and use these energies to better understand a group of indicators. We’ll evaluate these indicators from the perspective of five different organizations: a financial company specifically working with cryptocurrency, and pharmaceutical, media, sporting, and think tank organizations. The indicators we’ll evaluate include the following:

  • Arkouowi[.]com was identified in an Accenture report on 2018 Hogfish (aka APT10) operations targeting organizations in Japan; however, no context was given for the type of sector or data that was targeted. APT10 is known to have targeted financial and pharmaceutical organizations, among others.
  • Ikmtrust[.]com was identified in a 2018 Arbor Networks report on Fancy Bear Lojack operations, but no targeted sector or data type was included in the report. Fancy Bear is known to have targeted media, sport, and think tank organizations, among others.
  • 222.122.31[.]115 was identified in an Intezer report as part of a Hidden Cobra operation targeting the financial sector. Specifically they targeted data and organizations related to cryptocurrency. Hidden Cobra is known to have targeted financial and media organizations.
  • Fifacups[.]org was not identified in operations, but the domain was registered (Incident 20180326A: Domains Using Suspicious Name Servers and Hosted on Dedicated Servers) through a suspicious name server and as of July 24, 2018 is hosted on a dedicated server at 5.135.237[.]219. Those tactics are consistent with previously identified Fancy Bear tactics.
  • Atlanticouncil[.]org was not identified in operations, but the domain was registered (Incident 20180611A: Additional Patchwork Infrastructure) at essentially the same time and through the same registrar as domains identified in a Volexity report on Patchwork activity targeting US think tanks. As of July 23, 2018, this domain is also hosted on a dedicated server at 176.107.177[.]7. Patchwork is known to have targeted US think tanks and Chinese political and military organizations, among others.

Based on the above intelligence related to these indicators, we can calculate the kinetic and potential energy for each based on the organizations we previously mentioned. For the purposes of these calculations, we’ll assume that the financial cryptocurrency organization deprecates malicious indicators after 180 days, while all of the rest deprecate them after 360 days. We’ll also assume that all of the organizations deprecate suspicious indicators after 360 days. Here are examples for two of the indicators:



Since this indicator has not been identified in operations (our U variable is 0), the kinetic energy is 0, so we then proceed to evaluate potential energy.

[Equation image 2: potential energy calculation]
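To make the arithmetic concrete, here is a hypothetical worked example for fifacups[.]org from the sporting organization’s perspective, assuming a linear decay over the 360-day suspicious-indicator deprecation period. The indicator has not been used in operations, so its kinetic energy is 0; its registration tactics match Fancy Bear, a known adversary of sporting organizations, so compression is 1.

```python
from datetime import date

def degradation(first_seen, calc_date, deprecation_days):
    """Linear decay factor from 1 down to 0 over the deprecation period."""
    elapsed = (calc_date - first_seen).days
    return max(0.0, 1 - elapsed / deprecation_days)

calc_date = date(2018, 7, 24)    # calculation date used in this post
registered = date(2018, 3, 26)   # Incident 20180326A registration date
compression = 1                  # tactics match Fancy Bear

# 120 days of the 360-day period have elapsed, so 2/3 of the energy remains.
potential = compression * degradation(registered, calc_date, 360)
print(round(potential, 2))  # → 0.67
```

The exact numbers are illustrative; your deprecation periods and decay model may differ.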

Understanding Results

Based on a calculation date of July 24, 2018, we ultimately come up with the below measurements for these indicators’ kinetic and potential energies.

When we rack and stack the findings for each organization, we can see how organizations might prioritize the review of some indicators over others. For example, the 222.122.31[.]115 IP address would be a high priority for the financial cryptocurrency organization but a lower one for the media organization.

We also see that, within these results, there are no potential energy scores for the financial or pharmaceutical organizations. If we conduct this analysis for a number of our sources and don’t have any potential energy scores, that is something that can feed our collection requirements. In that case, we need to pursue different sources that focus on identifying suspicious indicators associated with our specific adversaries’ tactics.

Important Notes

There are several important notes to mention now that we’ve employed the framework and gone through the analysis. To start, potential and kinetic energy shouldn’t be directly compared because they aren’t a one-to-one comparison. How you treat each will most likely differ.

When you’re going through this analysis, every time you say to yourself “I don’t know,” that is an intelligence gap. The more you work through those intelligence gaps, the more you’ll build a baseline for who to follow and why. An important aspect of this framework is that it requires a general understanding of your adversaries or forces you to learn about them. It may be worthwhile to conduct a capability vs. intent assessment of adversaries prior to employing this framework to determine which adversaries are most pertinent to your organization.

Whenever you have a lack of or very low score of either type of energy, that is a collection gap. Procuring or acquiring additional sources may help mitigate those deficiencies and result in better intelligence for your organization.

From the intelligence publisher/creator perspective, this framework can be applied to improve the utility of what they share. If they find that they can’t identify the variables that go into these equations from their reports, there is some additional context there that they should investigate and share if possible. Additionally, if they were to provide calculated kinetic and potential energies for affected organizations along with their reports, then that might facilitate consumption and integration of their intelligence.

It’s also important to note that for some reports that are one instance of specific activity, you only have to calculate the scores for a single indicator. That same score would then be accurate for all other indicators in that report or directly related to its relevant activity. For example, the IP addresses 5.135.237[.]219 and 176.107.177[.]7, which respectively host fifacups[.]org and atlanticouncil[.]org, would have the same potential energy scores as the domains.


While we went over our specific equations for kinetic and potential energy, this idea and the equations are extensible. The main issue is capturing the velocity, vector, and degradation. But maybe you want to treat your assessment of that vector differently. Maybe you want to include other basic intelligence requirements, like the country targeted. If so, your equation might look like this, where L is whether your location/country was targeted in the activity:

Or maybe you want to exclude unknown variables to mitigate shortcomings in reporting. Using n, where n is the number of variables you’re actually including, instead of 4 could do that:

Or maybe you want to weight certain variables differently to reflect more important intelligence requirements. This is maybe a way that equation would look, where activity targeting your organization is more important than the other variables:
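A sketch combining those last two tweaks: unknown variables (passed as None) drop out of the calculation entirely, and the organization term is weighted more heavily. The weights here are purely illustrative.

```python
def vector(sector, org, data, adversary, weights=(1, 2, 1, 1)):
    """Weighted vector term that skips unknown (None) variables.

    Each input is 1, 0, or None (unknown / not reported). Unknown
    variables are excluded from both numerator and denominator, so
    gaps in a report don't drag the score down.
    """
    terms = (sector, org, data, adversary)
    known = [(w, t) for w, t in zip(weights, terms) if t is not None]
    if not known:
        return 0.0  # nothing assessable: treat this as an intelligence gap
    return sum(w * t for w, t in known) / sum(w for w, _ in known)
```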

Regardless of what intelligence requirements you want to include, the vector factor of the equation is where you can easily change things up based on your own organization’s needs.

Caveats and Conclusions

There are several caveats related to this idea and framework that we should also mention. First and foremost, this framework isn’t going to be for everyone, and its utility may hinge on your threat intelligence program’s maturity. Some organizations may discount it completely because they already have a different process in place to evaluate indicators against their intelligence requirements. Others might not have the resources to run through this framework, and others might simply not find it useful. We’re hoping, though, that some organizations find this useful as a thought experiment, as a way to audit their intelligence program and identify intelligence and collection gaps, or even as something to incorporate into their daily processes. Regardless of where you fall on that spectrum, we’d love to hear back from you on this idea and any thoughts you have on it.

Finally, it’s worth noting that at this point, this is a manual process and more of an analytic technique or framework; however, we are investigating ways to employ this at scale and include it in our own intelligence reports. A lack of standards in industry reports and feeds could ultimately complicate automation efforts, so that is something else we are taking into account.

We’ve also created a data sheet summarizing this framework, complete with a worksheet to employ it against indicators.

Intel’s in the way that you use it, Snoke don’t you know

It’s in the way that you fuse it
Intel comes and it goes
It’s in the way that you use it
Snoke don’t you know
                       – Eric Clapton (modified)

When I decided to join a cybersecurity startup, I had no idea fashion designer would become part of my job description. But I must say I’m really glad that it has (btw, google images for “cyber fashion” will give you great ideas for cyber casual Fridays). After the first offering in our Star Wars-themed clothing line was voted a show favorite at BlackHat, we knew it wouldn’t be our last. And the 2016 RSA Conference offered the perfect venue for unveiling our new spring line.

The release of Episode 7 conveniently provided us with a treasure trove of new material to choose from going into the ideation phase. I watched the movie 3 times and listened to the book once – strictly for professional research and inspiration, mind you. Our internal designers got together and came up with several good options, including a “Who is Rey?” attribution-style concept that will fit better in a blog post someday than on a t-shirt.



We initially shied away from doing another “Rebels Resistance blows up the Death Star Starkiller Base” design, but, like J.J., we simply couldn’t help ourselves. It just ain’t Star Wars unless you’re blowing up planet-sized guns in the face of overwhelming odds, right? Plus, the events leading up to the destruction of Starkiller Base conveniently parallel the “Aggregate – Analyze – Act” construct we often use to describe major functional categories of the ThreatConnect platform. So we yielded to the will of the Force and just went with it.


It’s not the size of your blaster…

Our 2015 BlackHat shirt began with the premise that if the Emperor had known about Luke’s womp rat targeting capabilities, he might have better protected the Death Star’s exhaust ports. The moral, of course, being that good threat intelligence should drive defensive actions.

For the new tee, we wanted to expand a bit on that theme. Exactly how does intel inform and drive action? Furthermore, what separates those who successfully leverage intelligence from those who, like the Empire, don’t? In the end, we landed on the premise that what really separates the Ren from the Boyegas in the Galaxy far, far away boils down to how you use your intel.

Following the initial demonstration of Starkiller Base’s destructive power against the Hosnian system, the Resistance was in a tight spot. The Galactic Senate was no more. A large part of the New Republic fleet was annihilated. The Resistance was now left largely on their own against the might of the First Order. It was as if millions of voices suddenly cried out in terror…no wait…wrong episode; my bad. But it’s probably applicable here and I’m pretty sure more than one of the Resistors (is that what we’re supposed to call them?) had a bad feeling about this development.

But it didn’t stop there; would this be how liberty dies…with thunderous applause? Demonstrating their counter intel prowess, the First Order tracked a Resistance reconnaissance ship back to the Ileenium system, where their main base was located on the planet D’Qar. General Hux was coordinating an effort to pinpoint the exact location, but Snoke was like “nah, it’s cool; our blaster is big enough to destroy the whole system.”

Aggregate – It’s the way that you fuse it

We’ve all been here. Our adversaries do indeed possess some pretty big guns (including an Ion Cannon!) and they know how to use them. Many of us have experienced them wiping out or invading multiple systems in one fell swoop. Thus, the “cyber battlefield” (I kinda hate myself for saying that) is often described as “asymmetric” due to the many disadvantages facing defenders.

One way to level the playing field is for defenders to gain information superiority over the adversary…or at least something that approaches information parity. And take my word for it – a bunch of disparate, disjointed, and disconnected intel silos is definitely NOT the way to achieve this.


Consider how the Resistance handled their present predicament. They had virtually zero intel on Starkiller Base other than Leia sensing an Alderaan-esque disturbance in the force. Though their intel sources were scattered across the galaxy, they gathered everyone together in the command station on D’Qar to develop an understanding of what they were up against. Han, Finn, and Rey all shared what they knew about the First Order’s super weapon. None of this intel was sufficient by itself, but the sum proved greater than the parts and gave the Resistance what they needed to begin formulating a plan.

And that’s basically what the aggregate component of ThreatConnect does. You bring everything you know about threats from everywhere you know it together into one place so your analysts have the best shot of making good, informed decisions to protect your business. Plus, it’s much less hassle than flying all stakeholders to a remote planet for a face-to-face. 

Analyze – It’s the way you peruse it

Having gathered every scrap of intel they could muster on Starkiller Base, the Resistance set about figuring out how to destroy it before it destroyed them. And with the weapon well into the recharge cycle, time was not on their side.

They knew it drained a star’s power to collect dark energy known as “quintessence.” They conjectured this energy must somehow be stored within the base’s core until the weapon was fully charged and ready to fire. And they had gleaned a rudimentary understanding of how the weapon’s beam of concentrated phantom energy traveled through sub-hyperspace to its target (don’t think too hard about it…this is not the science you’re looking for). But having all the right pieces doesn’t mean the puzzle is done.

Which is exactly what the “Analyze” component of ThreatConnect is all about. We’ve brought together a suite of capabilities, integrations, apps, and processes that enable analysts to peruse all the pieces of intel and fit them together to form an accurate picture of what they’re up against. Note: holographic projection of threats isn’t currently available but is on the roadmap.


Based on the aforementioned intelligence, Resistance scientists set about solving their own puzzle, reasoning that a planetary magnetic field would not be enough to store the massive amount of energy they had witnessed deployed against Hosnian Prime. Rather, it would necessitate some kind of oscillating field because “much less energy would be required to sustain it than if it was maintained at a steady state.” Further analysis led them to deduce that destroying the oscillator with the weapon fully charged would destabilize the planet and cause it to implode. But destroying the oscillator was easier said than done, and determining the best course of action to do that presented their next challenge.

Act – It’s the way that you use it

With a good understanding of the threat against them, Resistance tacticians set to work assessing how to defeat Starkiller Base’s formidable defensive measures. They presumed the entire planet would be shielded and that the First Order would have applied the hard-learned lessons from the Death Star debacles to harden protections around the oscillator itself (after all, Hux and Kylo had been spotted wearing our BlackHat t-shirts). They also surmised their attack would be detected quickly and that the response would include an aggressive deployment of military force.

Seem hopeless? Well, not to this small band of rebels struggling to restore freedom to the galaxy. Planetary shield? Please – Han concocted a never-attempted, physics-defying stunt to penetrate it by flying through the shields at lightspeed. Don’t even bother telling him the odds. Disabling the shield so Resistance fighters can attack? No problem; big, tough Captain Phasma will cave without even having to use enhanced interrogation techniques like Wookieboarding. Oscillator protections? Pffft – they’re soft and “Chewie” on the inside, baby! Certainly nothing a few well-placed thermal detonators can’t handle. Military forces? You must not have met the best pilot in the Resistance nor seen his bad@ss black X-Wing fighter. Good intelligence empowers them to stay on target and renders the neutralization of Starkiller Base mere padawan’s play (except, of course, for a certain death that I can’t elaborate on lest I cry).

Join us, and together we will rule the galaxy

Bringing all this back to ThreatConnect, I’m not going to insult your intelligence (pun intended) by suggesting threat intel becomes padawan’s play in our platform. But what it will do is give intel padawans the tools they need to become jedi and help turn jedi into jedi masters. It even has something for those of you who are a little too short to be a Stormtrooper and can’t seem to hit your target. So, don a bathrobe, grab a flashlight, sign up for a free ThreatConnect account, and become a guardian of peace in the galaxy!

Oh – and I’m not sure how to tell you to get a shirt. Maybe contact your local ThreatConnect rep, drop by our booth at a conference, or try hitting us up @ThreatConnect on the Twitters.

Threat Intelligence in 3rd Party Risk Assessment


We’ve finally arrived at the fourth and final installment in this series exploring the relationship between threat intelligence and risk management. If you’re just joining us, previous posts are listed at the bottom of the page (and I do encourage you to start with those before diving into this one). We’ve covered a fair amount of ground in this series, but there’s one topic I think deserves attention before we close things out – how threat intelligence can help (save) 3rd party risk assessment.

For those not familiar with the process of assessing and managing information risk related to 3rd parties (vendors, suppliers, contractors, partners, etc.) – you’re honestly better off. Stop reading and stay as far away as you can. If you do have responsibilities in this painful-yet-important area of information risk management, I want you to know that I feel your pain and want to help. I consider 3rd party risk assessment **AS IT’S TYPICALLY DONE** to be one of the most wrecked and wasteful practices in all of information security. I realize that’s an extreme statement – and I truly don’t make it to cause offense – but I think it’s a valid one. A huge amount of organizational time, talent, and treasure is continually poured into this sinkhole, and I see little evidence that we get much risk-reducing value out of it other than a compliance checkbox.

Looking for monsters under the bed

Please note I am not minimizing the risk here; my research over the last 10+ years shows that 3rd parties cause or contribute to somewhere between 1 in 4 and 1 in 3 data breaches. I’m saying the way we assess and manage 3rd party risk is broken.

But don’t just take my word for it; Calvin understands my point and illustrates it well in this strip below (see what I did there?). He’s concerned about monsters under the bed and uses the tried-and-false method of self-attestation to assess the risk. Even though the assessment results show “no” risk exposure, you can see from his face that he’s not very comforted by them.

Clipped from The Essential Calvin & Hobbes, pg 155.


Now, you might be thinking “that doesn’t have much to do with 3rd party risk and, even if it did, it didn’t cost him much to do that assessment.” And you’d have a point. But as he has a knack for doing, Calvin helps us examine our grown-up issues through his six-year-old thoughts and experiences. His monster test might be trivial, but what happens when hundreds of potential “monsters” must be evaluated? And how about asking each of them hundreds of detailed questions that all require non-trivial answers? What if all of this cost real money and distracted good people from doing good things to actually reduce risk? To top it all off, what if all that effort netted results that held no more truth or usefulness than the monster’s reply above and didn’t change anything anyway? You don’t have to be a boy genius to see that something under the 3rd party risk assessment bed is drooling.

Hints of intelligence in 3rd party risk management

After all that, I should clarify that I am not writing this blog to fix 3rd party risk management. Some things even my lucky rocketship underpants can’t help. Thankfully, others are tackling various aspects of that issue. For example, BitSight rates 3rd parties using externally observable evidence of risk (like drool under the bed) instead of endless questionnaires, and RiskLens focuses on their capacity to manage risk as a key factor in dealing with 3rd parties. What I’d like to do instead with our remaining time is show how threat intelligence and, more importantly, intelligence sharing can help assess the presence and size of 3rd party monsters and facilitate actions other than shouting for Mom.

In an earlier post, we used NIST’s Special Publication 800-39 as the basis of our discussion on how threat intelligence fits into the risk management process. It’s not the only risk framework out there by any stretch (we also looked at ISO/IEC 27005), but it serves as a reasonable prototype. We’ll go with it again here.


NIST SP 800-39 Risk Management Process with annotations to highlight role of threat intelligence

Not surprisingly, SP 800-39 stresses the importance of considering 3rd party risk relationships. It doesn’t specifically call out “use intel here,” but it does give a strong hint: “For organizations dealing with advanced persistent threats (i.e., a long-term pattern of targeted, sophisticated attacks) the risk posed by external partners (especially suppliers in the supply chain) may become more pronounced” (pg 8). From that, I conclude that 3rd party relationships must be considered during the intelligence direction phase and collection efforts should be adapted to include them.

One quick win here is to use threat intelligence to focus 3rd party risk assessments on what matters. Based on who the third party is, what they do, where they sit, and how they’re interacting with you, some threats will be more relevant than others. Threat intelligence can help these assessments be more meaningful and less costly. This could be a whole blog post on its own, but I’m going to move on to the point I’d really like to make about 3rd party risk management, which emphasizes collaboration over assessments.

I’m very glad to see this little nugget in SP 800-39: “Establish practices for sharing risk-related information (e.g., threat and vulnerability information) with external entities, including those with which the organizations have a risk relationship as well as those which could supply or receive risk-related information (e.g., Information Sharing and Analysis Centers [ISAC], Computer Emergency Response Teams [CERT])” (pg 8). Thus, intelligence ops should identify 3rd parties to include in the dissemination phase and collaborate to determine what should be shared to meet the needs of all parties. The feedback phase can help adjust and improve this over time for all involved.

Hang with me a while more as I expound upon the “supply chain” and “information sharing” notions mentioned in the text above. It’s the most important part of this post and where I see threat intelligence having the biggest impact on 3rd party risk management.

A way forward: Intelligence collaboration within supply chains

There’s nothing in SP 800-39 that prompts me to mention intelligence processing and analysis in the context of 3rd party risk management, but I’m going to bring it up anyway. The notion of “crowd-sourcing” or “peer-sourcing” intelligence is not new. Many benefit from leveraging the knowledge, tools, and talents of others for their research and operational purposes. But what about “chain-sourcing?” Rather than building walls of worthless questionnaires between partners, why not proactively collaborate to process and analyze intelligence to better manage shared risks?

A supply chain – or value chain, as they’re sometimes called – is essentially a group of organizations working together to bring goods and services to market. A highly-collaborative group of 3rd parties, if you will. The field of supply chain management understands well that risk to one represents risk to the whole. A very rich body of research and practice recognizes the value of collaboration and information sharing to reduce traditional supply chain risk by getting the right products to the right place at the right time. I think it’s high time we realize that intelligence collaboration can do the same for cybersecurity risk.

Lemme use some data to make my case. The figure below comes from a source near and dear to my heart: Verizon’s 2015 Data Breach Investigations Report. It uses a clustering algorithm to place industries with similar threat profiles in proximity to one another. The DBIR explains it well:

“Each dot represents an industry “subsector” (we chose to use the three-digit NAICS codes—rather than the first two only—to illustrate more specificity in industry groupings). The size of the dot relates to the number of incidents recorded for that subsector over the last three years (larger = more). The distance between the dots shows how incidents in one subsector compare to that of another. If dots are close together, it means incidents in those subsectors share similar VERIS characteristics such as threat actors, actions, compromised assets, etc. If far away, it means the opposite. In other words, subsectors with similar threat profiles appear closer together.”
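To make the “distance between threat profiles” idea concrete, here’s a minimal sketch of the kind of comparison underlying that clustering. The subsector profiles and frequency values below are invented for illustration – they are not DBIR data – and a real analysis would use the full set of VERIS characteristics, not three actor varieties.

```python
# Illustrative only: measure how "far apart" two industry threat profiles are.
# Profile values are made-up frequencies of VERIS threat-actor varieties.
from math import sqrt

profiles = {
    "manufacturing_324": {"state_affiliated": 0.60, "organized_crime": 0.15, "activist": 0.10},
    "transportation_486": {"state_affiliated": 0.35, "organized_crime": 0.35, "activist": 0.05},
    "retail_445":         {"state_affiliated": 0.05, "organized_crime": 0.70, "activist": 0.05},
}

def distance(a, b):
    """Euclidean distance between two threat-profile vectors."""
    keys = set(a) | set(b)
    return sqrt(sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys))

# Subsectors with similar profiles yield small distances and cluster together.
for x in profiles:
    for y in profiles:
        if x < y:
            print(f"{x} <-> {y}: {distance(profiles[x], profiles[y]):.2f}")
```

With numbers like these, Retail and Manufacturing land far apart while Transportation sits between them – the same triangle effect the figure shows.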



I’ve added the orange arrows to demonstrate what happens when you join different types of organizations into a supply chain that must be highly collaborative and coordinated to remain competitive. A very basic supply chain will have a manufacturer, a distributor (some type of transportation), and a retailer. Notice in the figure how the Manufacturing subsector 324, Transportation subsector 486, and Retail subsector 445 basically form a triangle of points about as far from each other as you can get. That means their threat profiles are very, very different. When those organizations connect systems and processes to form a supply chain, you can imagine that they’re exposing each other to threats the other party has never seen before. Kinda like sharing toothbrushes. Questionnaires won’t fix that. A collaborative approach to threat intelligence and defense is the only way forward I see to efficiently and effectively manage risk for all parties.

To provide another view on the same concept, I used my “phone a friend” card to request some additional data from the Verizon DBIR team. They graciously agreed, and @gdbassett whipped up some R code to generate the stats behind the chart below (don’t blame him for the chart itself; that’s my fault). It shows the percentage of breaches attributed to several common varieties of threat actors for the three industries highlighted above. You’ll notice stark differences among them with respect to threat actor profile. Retail is hammered by organized criminal groups, while state-affiliated actors plague Manufacturing. Transportation suffers fairly equally from both. This reinforces the point from above – if organizations from these sectors are sharing threats, so to speak, as a supply chain, then they need to be sharing what they know about those threats and helping each other defend against them.


As the concept of Information Sharing and Analysis Organizations (ISAOs) evolves, I hope to see info/intel sharing groups form around supply chains rather than industries. I hope I’ve made the point that such relationships arguably have stronger incentives to share intelligence than non-collaborating industry peers, and I can see this building momentum for cooperative defense. Plus, supply chains already have a need to share and a basis for trust established, which will ease many intel sharing pain points. I think this concept holds a lot of promise, and I look forward to helping a supply chain deploy ThreatConnect to help its members coordinate threat intelligence operations.

Aaaaannnnnd we’re done here

Thanks to all of you who have hung with me over the 6 months it took me to complete this four-post series. I was glad for the opportunity to organize some of my thoughts on “paper,” and I hope they’re of some practical use to you as you seek to use threat intelligence to inform and improve risk management in your organization.


All posts in this series:

Growing a Threat Intelligence Program is like Growing a Beard

*Disclaimer: Limitations in beard growth do not correlate to actual ability to implement a threat intelligence program.

It was just after Thanksgiving dinner, and my two-year-old daughter was sitting on my lap while I drowsily watched the Bears and Packers game. As she sat there patting my face and pinching my chin whiskers, she said, “Daddy, I like your zebra.” Confused at first, I finally realized what she was trying to say, and I explained to her that my greying beard, while impressive, was not an exotic African species. Maybe I was having one of those L-tryptophan overdoses, but the conversation triggered a synapse where a deep subliminal connection was made.

You see, I have been spending quite a bit of time looking at #BeardsofTI and thinking about how to help organizations mature their Threat Intelligence programs and get the most value out of their security investments by applying “process and platform.” Some of our customers are asking – Where do I get started with Threat Intelligence? Can you help us setup a Threat Intelligence Program? How can I take my Threat Intelligence Program to the next level?

This is where things start to sound strange; please bear with me and settle into your seat for this ride to “crazy town.” I thought to myself that setting up and maturing a Threat Intelligence program is really no different than growing a sweet beard. Here is why:


Coming of Age:

I see the Threat Intelligence community as coming of age: “threat intel” is the snarky industry whippersnapper stepping up to the plate while the old school “rainbow series” collects dust in the corner. Like all coming of age stories, there’s a young hero or heroine who has been called to action and who, like all those before them, lacks experience and understanding. Though they are not yet fully capable (perceived or realized), it is through the journey, the process, and the adventure that they become more capable as they go. It is no different from when many young “shavers” look at themselves in the mirror and, for the first time, begin to see that the youthful peach fuzz has become darker and more coarse – it is here that the first visible signal of an important transitional point in their life is finally seen. In conventional terms, this call to action may come in the form of a breach or some other motivator that signals to an organization or industry that they need to take the next step in establishing a Threat Intelligence Program – a transition that will thrust them forward to take decisive action to counter specific risks in a more mature and efficient way.


Decide & Commit:

You will notice in the last sentence I used the term “decisive” to describe the type of action. Like growing a beard, establishing or maturing a Threat Intelligence program is going to require a decision, and that decision will require commitment to ensure that the investment of time, talent and treasure is well spent. Challenge yourself not to think about where you want your Threat Intelligence program to be in the short term, play the long game and think about the outyears. Where do we want to take this program? What are the investments that we need to make to ensure that we are not taking one step forward and two steps back? Are we building our program on rock or sand?

The higher up the corporate “food chain” you go, the more you need to be prepared to speak in these terms and timelines to reinforce buy-in and obtain top-level commitment. Be prepared to use process and routine metrics so that you can continue to promote the value of the Threat Intelligence Program and see where you can make process refinements as needed.

Just like all things in life, there is a right way and a wrong way to do something. There is no room for half-heartedness with Threat Intelligence or a beard; if you don’t decide you are going to do it the right way, you and others are going to be able to tell – and you are going to look stupid.


If it Hurts it’s Probably Worth It.

So you have made a decision – you are going to do this thing. Whether you are setting up a Threat Intelligence Program or growing a beard, you have to accept that things aren’t going to be perfect at first; in fact, there is a point where things become uncomfortable and painful. When growing a beard, there is that week 2-3 mark where things are scratchy and itchy – breakouts may even happen – but you have to work through the discomfort because you have something awesome waiting for you on the other side. When establishing and growing your Threat Intelligence program, understand that you may be in this season of pain and discomfort for some time as you begin to understand what your organizational needs are. It’s important not to rush through this phase; you must work through the crucible of pain and discomfort because this is where you are going to learn the most. Know that this is coming – your willingness and ability to embrace it will determine the velocity at which you work through it. Your Threat Intelligence program is going to involve many opinions and stakeholders, and the processes you establish will require you to work cross-functionally and support other teams. One key principle to understand is that Intelligence always supports Operations, not the other way around.


The Awkward Stage:

Congratulations, you have made it through the gauntlet: you have endured the discomfort and pain and made it out to the other side. The good news is that things don’t feel too bad anymore; the bad news is that they look…awkward. You wake up in the morning, look at your beard in the mirror, and see a few things that could be tightened up – much like when you get to work and see how your Threat Intelligence Program also has things out of place: processes are uneven, or some people just aren’t fitting in.

This newfound awareness is an indicator of maturity. The fact that you can see things that would not have been so obvious a few months ago should give you encouragement. You may even have an idea of how to correct many of the observed shortcomings. As you navigate the road of maturity, be mindful that you do not become complacent and settle in to the point that you are not challenging yourself or your Threat Intelligence Program. You may find yourself asking – do I keep things simple, short, and clean, or do I go all in where things may get complicated? By challenging the status quo, one can usually achieve greater things; by keeping things “simple,” you may not be achieving your full potential or delivering full value to your organization.


Maintaining Mature Decisions

Either route you decide to go (easy or complicated), you are going to have to do some sort of maintenance to keep things in order for the long term. Like snowflakes, all beards (and enterprises) are different – made of the same “stuff,” but with unique shapes, sizes, structures, and use cases. Beard maintenance and grooming (cutting, trimming, combing, oiling, waxing) requires work; it is a new creation, after all. Like it or not, your Threat Intelligence Program is going to require a similar process and regimen if it is going to be a long-term success.

It is also important to remember that, just like beards, Threat Intelligence Programs are not “one size fits all” – they are unique and customized to the organization they support. So be very wary when you are told that Threat Intelligence is just aggregating post-processed indicator feeds. A mature Threat Intelligence Program will know the futility of spamming your SIEM, understanding how this complicates processes, creates more work, and ultimately distracts the organization. Organizations that cut corners and seek what they perceive to be the easy button will ultimately learn things the hard way.

The future success of your Threat Intelligence Program will be wholly dependent on the maturity of the decisions that you make moving forward. Over time you will find that through process, structure and organization things actually become easier and more efficient.

Share What You Know


Whenever I see a beard or Threat Intelligence Program for that matter – I can appreciate either for what they are. One can quickly study (or admire) the fruit of the time and effort that was placed into creating either one.

Individuals both from within and external to your organization are going to look at you. Some may be inspired to achieve similar successes. In doing so, they may seek insights into how certain things were done, at what time, what choices were made along the way and why, and what worked and what didn’t. All of these are forms of a higher order of information sharing. You have been there and done that; now that you are a Jedi Master, you are in a position to help others out, so share your insights and experiences. Give others the necessary tools and feedback that they can leverage in pruning and maintaining the growth of their own legacies and works of art.

If you are looking at setting up and maturing your Threat Intelligence Program, register for a FREE ThreatConnect account and a follow-up discussion. We would love to help you wherever you are with your Threat Intelligence Program. Looking to show off your beard? Check out our Beards of Threat Intelligence contest. Whether you are growing a beard or a Threat Intelligence Program, connect with us, and you will find that we will grow with you…and on you.

Best Practices: Indicator Rating and Confidence

ThreatConnect enables users to assign a Threat Rating and Confidence to every single indicator… but what do those numbers really represent?  In order to enable your organization to make the best decisions, it’s important to standardize on the connotation attached to these ratings.  When your analysts, defensive integrations, and leadership all speak the same language regarding indicator impact, you can make more timely and accurate decisions.


Understanding Threat Rating

ThreatConnect allows you to assign each indicator a Threat Rating, measured as 0-5 Skulls.  Within the scope of your organization, you can define the difference between a 1 Skull indicator and a 5 Skull indicator.  If you’re having trouble making such decisions, or want your indicator ratings to match those across the ThreatConnect Cloud, it may be helpful to look at the Skull level definitions implemented by the ThreatConnect Intelligence Research Team:


  • Unknown (0 Skulls): There is not enough information to assess the Threat Level.
    Example: “I’m still working on the indicators in this Email’s header; I don’t know anything about that SMTP server yet.”

  • Suspicious (1 Skull): There has been no confirmed malicious activity, but suspicious or questionable activity has been observed from an unknown threat.
    Example: “I’m not sure why our users’ laptops keep visiting this URL, but so far I can’t see anything wrong with it.”

  • Low Threat (2 Skulls): This indicator represents an unsophisticated adversary – it may be purely opportunistic and ephemeral, or indicate pre-compromise activity.
    Example: “We see scans on that port from IPs in that netblock all day.”

  • Moderate Threat (3 Skulls): This indicator may represent a capable adversary – their actions are moderately directed and determined, and the indicator corresponds to the delivery/exploitation/installation phase.
    Example: “That file hash represents a document pretending to be a Corporate Memo specifically targeting our company’s HR Department.”

  • High Threat (4 Skulls): This indicator can be attributed to an advanced adversary and represents that targeted and persistent activity has already taken place.
    Example: “The callback address from that targeted ‘Corporate Memo’ masquerade is all over our access logs…”

  • Critical Threat (5 Skulls): This indicator represents a highly skilled and resourced adversary – it should be reserved for those adversaries with unlimited capability and is critical at any phase of the intrusion.
    Example: “Start ripping servers out of racks; we’re bleeding customer data to that man-in-the-middle host!”

Using a standard Threat Rating will enable decision making across your organization, both at a human and machine level. If your Threat Intel analysts decide that an indicator is 5 Skulls, your Incident Response analysts can respond accordingly when it’s discovered. The knowledge transfer of context surrounding indicators is essential to making sure you’re putting your best foot forward.
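As a sketch of what that machine-level decision making could look like, here’s a minimal mapping from a 0-5 Skull Threat Rating to a response tier. The tier names and thresholds are assumptions for illustration, not ThreatConnect defaults – each organization would define its own.

```python
# Hypothetical sketch: a standardized Threat Rating lets humans and machines
# agree on a response. Tier names/thresholds are invented for illustration.
SKULL_LABELS = {
    0: "Unknown", 1: "Suspicious", 2: "Low Threat",
    3: "Moderate Threat", 4: "High Threat", 5: "Critical Threat",
}

def response_for(skulls: int) -> str:
    """Map a 0-5 Skull Threat Rating to an illustrative response tier."""
    if skulls >= 5:
        return "page-on-call"           # drop everything
    if skulls >= 4:
        return "block-and-investigate"  # targeted, persistent activity
    if skulls >= 3:
        return "alert"                  # capable adversary, worth a look
    if skulls >= 1:
        return "log-and-watch"          # suspicious or opportunistic
    return "triage"                     # not enough info yet

print(SKULL_LABELS[5], "->", response_for(5))  # Critical Threat -> page-on-call
```

The point is less the specific tiers than that, once the rating means the same thing everywhere, a lookup like this can drive consistent action.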

Understanding Indicator Confidence

Of course, Threat Ratings only capture one dimension of context surrounding an indicator. Analysts rarely see such an attribution as a black and white problem. To address this, ThreatConnect allows you to model the confidence in your assessment as an integer between 0 and 100.


Analyst-Derived Confidence

Confidence can be set manually — perhaps an analyst has only found the tip of the iceberg in C2 redirects, and isn’t ready to commit to their assessment of that entry point. Likewise, your confidence in your Threat Rating assessment may vary based on the timeliness of the available data, or knowledge about your adversary’s tactics and techniques.

ThreatConnect assigns ratings on the following scale to denote separate levels of confidence:

  • Confirmed (90-100): The assessment has been confirmed by other independent sources and/or through direct analysis. It is logical and consistent with other information on the subject.
    Example: “That executable is definitely dropping a known malware variant.”

  • Probable (70-89): Though this assessment is not directly confirmed, it is logical and consistent with other information on the subject.
    Example: “That URL has the same nonsensical 15-character path at the end as other known bad URLs, but is on another host.”

  • Possible (50-69): The assessment is not confirmed and is somewhat logical, but only agrees with some information on the subject.
    Example: “That email address has the same username as the My Documents path we saw when we reverse engineered this malware…but it’s a pretty common name.”

  • Doubtful (30-49): This assessment is possible, but not the most logical deduction, and cannot be corroborated or refuted by other information on the subject.
    Example: “The scans came from an IP address rented from this VPS provider…we’ll have to dig deeper to see if it’s actually bad.”

  • Improbable (2-29): This assessment is possible, but not the most logical deduction, and is directly refuted by other information on the subject.
    Example: “The file calls back to a host which appears to have been taken down; maybe that C2 host has since been rotated.”

  • Discredited (1): This assessment is confirmed to be inaccurate.
    Example: “That’s not malware; that’s just a poorly-written PowerPoint presentation.”

  • Unassessed (0): No confidence has been assigned to this indicator.
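The bands above translate directly into code. Here’s a minimal sketch encoding that scale, so an integer confidence score can be mapped to its named assessment level (the function name is mine, not a ThreatConnect API):

```python
# Minimal sketch of the confidence scale described above.
def confidence_band(score: int) -> str:
    """Return the named confidence band for a 0-100 confidence score."""
    if not 0 <= score <= 100:
        raise ValueError("confidence must be between 0 and 100")
    if score >= 90:
        return "Confirmed"
    if score >= 70:
        return "Probable"
    if score >= 50:
        return "Possible"
    if score >= 30:
        return "Doubtful"
    if score >= 2:
        return "Improbable"
    if score == 1:
        return "Discredited"
    return "Unassessed"

print(confidence_band(85))  # Probable
```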

Automated Confidence

As time goes by, your analysis may become less relevant as indicators go stale. ThreatConnect can actually decay the confidence of indicators over time if they’re not being touched. This allows you to “age out” indicators that you saw years ago… they may have warranted a high Threat Rating at one point, but your ability to say so may decrease over time.

This rate of confidence deprecation is configurable within each Organization, Source, or Community. Every day that an indicator goes untouched, that indicator’s confidence will deprecate by the configured amount. ThreatConnect can even delete the indicator if its confidence reaches zero.
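The decay behavior described above can be sketched as simple linear deprecation. The data model and rate here are invented for illustration; only the rule itself (untouched indicators lose confidence daily and can be deleted at zero) comes from the text:

```python
# Sketch of confidence deprecation, assuming a linear daily decay.
# The Indicator model and rates are illustrative, not ThreatConnect's.
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str
    confidence: int  # 0-100

def age_out(indicators, days_untouched: int, daily_decay: int):
    """Decay each untouched indicator's confidence; drop any that reach zero."""
    survivors = []
    for ind in indicators:
        ind.confidence = max(0, ind.confidence - daily_decay * days_untouched)
        if ind.confidence > 0:
            survivors.append(ind)  # confidence of 0 -> indicator deleted
    return survivors

stale = [Indicator("198.51.100.7", 80), Indicator("203.0.113.9", 10)]
print([i.value for i in age_out(stale, days_untouched=30, daily_decay=1)])
# ['198.51.100.7']  (80 - 30 = 50 survives; 10 - 30 floors at 0 and is deleted)
```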

Putting Threat Rating and Confidence to Work

Threat Rating and Confidence are great measures of two separate dimensions of an indicator’s relevance. An adversary that aggressively rotates C2 infrastructure may result in a slew of 5 Skull, 0 Confidence indicators. A script kiddie launching attacks from his attributable hacker domain may result in a handful of 2 Skull, 100 Confidence indicators.

The important thing about Threat Rating and Confidence is that you use them to drive decision-making. By implementing the above best practices, you can begin to leverage the analysis that you’ve modeled in each indicator’s respective ratings. You can write a TC Exchange application to extract all high-confidence 5 Skull indicators to initiate scans within your network. Alternatively, you could leverage an existing TC Exchange application written in conjunction with one of our partners to automatically block or alert on indicators that meet such parameters.
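A real implementation of the “extract all high-confidence 5 Skull indicators” idea would use the ThreatConnect API through a TC Exchange application; as a hedged sketch of just the decision logic, filtering an in-memory list might look like this (data and field names are invented):

```python
# Illustrative filter for "high-confidence, high-threat" indicators.
# A real TC Exchange app would pull these from the ThreatConnect API.
indicators = [
    {"value": "evil.example.com", "skulls": 5, "confidence": 95},
    {"value": "203.0.113.50",     "skulls": 5, "confidence": 20},
    {"value": "kiddie.example",   "skulls": 2, "confidence": 100},
]

def high_priority(inds, min_skulls=5, min_confidence=90):
    """Select indicators worth scanning or blocking on, per the thresholds."""
    return [i["value"] for i in inds
            if i["skulls"] >= min_skulls and i["confidence"] >= min_confidence]

print(high_priority(indicators))  # ['evil.example.com']
```

The same two-threshold filter could just as easily feed a blocklist export or an alerting integration.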

Standardizing on the meaning of Threat Rating and Confidence allows you to take action within the scope of your organization or contribute to the greater community.   You worked hard to find and triage all those indicators; now make them work for you!

For more information on ThreatConnect’s Threat Rating and Confidence, please download our “Evilness Rating” tool here.


Hunting Adversaries w/ Diamond Dashboard for Splunk

Action – the other half of the battle

As a kid, I ranked G.I. Joe somewhere below Star Wars, Legos, and Transformers in terms of toy box volume and hours of entertainment. Maybe a little above Masters of the Universe, depending on when you asked me. One thing I still remember very clearly (beyond the annoyance of loose torso bands) is the public service announcement that concluded every television episode – “Knowing is half the Battle.” I don’t think they ever explicitly told us what the other half was, but if I may be so bold as to put words in Flint/Duke/Scarlett’s mouths, I always assumed that “Action wins the battle.” After all – that’s why they’re called “action figures,” right?

We recently released a report, Project CAMERASHY, which investigates cyber espionage activity against nations and entities in the South China Sea. By correlating malicious infrastructure on the Internet with the social media habits of a suspect associated with that infrastructure, we were able to conclusively tie this campaign to a PLA staffer in unit 78020, also known as Naikon. As explained in a related blog post, one of our goals in producing the report was to demonstrate the value of a fuller approach to threat intelligence that not only seeks to collect a bunch of indicators, but also to understand the context surrounding them, the associations among them, and, perhaps most importantly, the adversary behind them. IPs are fleeting; Adversaries are forever.

Important as it is, however, G.I. Joe would like us to remember that knowing your adversary is only half the battle. Winning the battle requires doing something about them. The Camerashy report shared a lot of knowledge about the Naikon threat, specifically Unit 78020 operative Ge Xing, but one might argue it doesn’t offer much for the “doing” half. I’d like to tell you how a new update to ThreatConnect’s Splunk app helps “do” just that.

“But wait…adversary intelligence isn’t actionable”

Before we get to the app, though, let’s talk a bit about adversary intelligence. I’ve heard on more than one occasion that adversary intelligence might aid ‘knowing’ but doesn’t help much in the ‘acting’ half of the battle (and may even be counterproductive). In other words, it’s not actionable. Before you introduce me to Terry Tate, lemme explain. Apart from any aversions you might have to the phrase “actionable intelligence,” pretty much everyone agrees that they’d like to be able to actually *do something* based on the intelligence they receive or produce. Don’t get me wrong – I’m the kind of guy who’s perfectly happy just reading (good) threat research, but most infosec execs/practitioners want more than analyst pr0n. They want something useful that makes their jobs easier and their organizations more secure. Good threat intelligence should contribute to that end, and I think that’s the main point Adam and Rick make in the posts linked above. I don’t disagree with that sentiment.

But I do disagree with those that claim adversary intelligence isn’t actiona…er…useful. It is true that adversary intel often focuses a lot on attribution rather than action, and some don’t have much use for the former in their daily grind. I’m not writing this post to defend attribution, so let’s just agree for now that attribution eyes a different goal than tactical blocking and tackling in your local network environment.

Just to be sure we’re on the same page – adversary intel is not synonymous with attribution. Good adversary intel seeks to describe who’s attacking you (or might attack you), why they’re doing it, how they’re doing it and, ideally, what signs you can look for to know if/where they’re doing it to you. There’s a lot in there that’s actionable (bring it, Terry).

Go get more info to better prepare for them…
We’ll prepare differently if they’re determined…
Assess existing controls against their TTPs…
Scour our network for those indicators…

Introducing the Diamond Dashboard for ThreatConnect’s Splunk app

In addition to knowing your adversary better by reading the Camerashy report, it was very important to us to enable our users to act on that knowledge to protect their organizations. For starters, all indicators associated with the adversary persona Ge Xing as well as the broader Naikon threat were shared into the ThreatConnect Common Community prior to the release of the report. If you don’t have access to that, you can register for a free account here. But we didn’t want to stop there.

One of the last things I was involved in before joining ThreatConnect was helping to design a Splunk app companion to Verizon’s annual Data Breach Investigations Report. The basic idea was to give readers a way to search their network environment for evidence of the various threat patterns analyzed in the report. I really liked the way it bridged the knowledge-action chasm, and was excited about the opportunity to do something similar at ThreatConnect with the Camerashy report. We spoke to the good folks at Splunk about it, and they were happy to lend their expertise towards the goal of operationalizing adversary intelligence.

For those unfamiliar with v1 of our Splunk app, it allowed one to use Splunk to search/alert on indicators stored in ThreatConnect. That’s obviously an oversimplification, but it’s good enough for now. What it did not do was allow you to “search for anything associated with Adversary X.” Because Camerashy was very adversary-centric and heavily leveraged the Diamond Model of Intrusion Analysis, we knew an update was needed to support the kind of acting on intelligence we wanted the report to enable. Enter the new Diamond Dashboard.



I’m going to spare you a complete walkthrough of the app in this post (you can get that here), but I do want to hit some highlights. In the upper left, you can select from any intelligence source or sharing community you have access to in ThreatConnect. You can search for threat groups like Naikon or a specific adversary like Ge Xing. Upon submission, the dashboard will populate an intelligence profile for the threat/adversary based on the Diamond Model (see here and here for examples of Diamond-driven analysis). The Capability table shows any malicious file hashes and vulnerability exploits associated with the selected threat/adversary, while the Infrastructure table lists IP, email, host, and domain indicators. Any other related threats, incidents, adversaries, etc. are shown in the Associations table.

That’s all pretty slick, but the really useful info is found in the Matched Events timeline and table. That’s where Splunk uses the compiled indicator sets to search for any sign of Naikon/Ge Xing/Whomever/Whatever in your environment, when it was seen, where it was seen, and other info pertinent to the ensuing investigation. Nobody likes to see activity spikes as shown in the figure, but awareness is always better than ignorance.
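The core matching logic is conceptually simple. Here’s a minimal Python sketch of the idea; the indicator values, event fields, and schema are all illustrative, and the actual app builds its searches against Splunk’s CIM rather than doing this in code:

```python
# Minimal sketch: match an adversary's compiled indicator sets against
# log events. Indicator values and event fields are illustrative only.
indicators = {
    "infrastructure": {"greensky27.vicp[.]net", "198.51.100.7"},
    "capability": {"d41d8cd98f00b204e9800998ecf8427e"},  # a file hash
}

events = [
    {"time": "2015-09-01T12:00:00", "host": "ws-12", "dest": "greensky27.vicp[.]net"},
    {"time": "2015-09-01T12:05:00", "host": "ws-12", "dest": "example.org"},
]

def matched_events(events, indicators):
    """Return events whose destination appears in any indicator set."""
    all_iocs = set().union(*indicators.values())
    return [e for e in events if e["dest"] in all_iocs]

hits = matched_events(events, indicators)
```

The dashboard's timeline and table views are essentially richer presentations of this kind of result set: what matched, when, and where.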


From the main dashboard in the app, you can bounce back to ThreatConnect to get more information on indicator observations, do additional analysis, collaborate with peers, etc. There are some other nifty aspects to this update that take better advantage of the power of Splunk. For instance, Diamond Dashboard searches now leverage Splunk’s Common Information Model (CIM). But I’ll leave all that to the user’s guide.

It’s no secret that the cybersecurity terrain often favors the adversary, so it’s critical that defenders take advantage of every opportunity to help turn the tide. Better ways of operationalizing everything we know about the adversary is one such advantage that, in my opinion, deserves our collective effort as an industry. That’s why I’m pretty excited about working with Splunk on this update, and I hope it helps you in the other half of the battle. Yo Joe!


Threat Intelligence-Driven Risk Analysis

Way, way too long ago, we started a series exploring the relationship between threat intelligence and risk management. I’m not sure if a 3+ month gap disqualifies it as a series, but I’ll claim we’re taking a page from the George R.R. Martin school of sequel timing. To refresh your memory, the last post examined how threat intelligence fits within the risk management process. This one focuses on how intelligence drives risk assessment and analysis – a critical phase within the overall risk management process.

If there’s one thing I’ve learned about assessing risk over the years, it’s this: creativity will always fill the void of uncertainty. A second, related lesson is that data *is* the plural form of anecdote to most people most of the time.

In other words – people are great at making $#@!% up. And let’s be honest – hopping a Trolley ride through Mister Rogers’ Neighborhood of Make-Believe Risks is a lot more fun than dealing with the realities of uncertainty and ambiguity.

The unfortunate outcome of these tendencies is that many risk assessments become a session of arbitrarily assigning frequency and impact colors to all sorts of bad things conceived by an interdepartmental focus group rather than a rational information-driven exercise. Aside from the entertainment value of watching people argue about whether yellow*yellow equals orange or red, this isn’t a great recipe for success. And thus, we all-too-often underestimate the important risks and overestimate the unimportant ones. See Doug Hubbard’s The Failure of Risk Management for more on this topic.

Risky questions deserving intelligent answers

Clearly a more “intelligent” approach is needed for analyzing information risk. When tackling various issues or problems, I almost always try to start with a set of interesting questions. This probably harkens back to my scientific background, where simple questions pave the way for more formal hypotheses, experimental design, data collection, etc. Thought experiments like the one we’re conducting here are less rigorous than those done in a lab, but formulating questions is still a useful exercise. In that spirit, here’s a (not exhaustive) list of questions risk assessors/analysts have that I think threat intelligence can help answer.

– What types of threats exist?
– Which threats have occurred?
– How often do they occur?
– How is this changing over time?
– What threats affect my peers?
– Which threats could affect us?
– Are we already a victim?
– Who’s behind these attacks?
– Would/could they attack us?
– Why would they attack us?
– Are we a target of choice?
– How would they attack us?
– Could we detect those attacks?
– Are we vulnerable to those attacks?
– Do our controls mitigate that vulnerability?
– Are we sure controls are properly configured?
– What happens if controls do fail?
– Would we know if controls failed?
– How would those failures impact the business?
– Are we prepared to mitigate those impacts?
– What’s the best course of action?
– Were these actions effective?
– Will these actions remain effective?

But how, exactly, can threat intelligence help answer these questions? What frameworks or processes are available? Where does the relevant intelligence come from and in what form does it exist? How do threat intel and risk management teams collaborate to produce meaningful results that drive better decisions? These are the kinds of questions we’ll explore during the rest of this post (and series).

Feeling RANDy, Baby?


There is surprisingly little information I’ve found in the public domain on the topic of using threat intelligence to drive the risk analysis process. There is, however, a paper from the RAND Corporation that goes the opposite way – Using Risk Analysis to Inform Intelligence Analysis. It concludes that “risk analysis can be used to sharpen intelligence products…[and]…prioritize resources for intelligence collection.” I found this diagram especially useful for explaining the interplay between the two processes. It’s well worth reading regardless of which direction you’re traveling on the risk-intelligence continuum.

Other recommended quick reads that touch on threat intel and risk analysis include this article from Dark Reading and this one from TechTarget. But neither of those ventures into the realm of frameworks or methodologies. If you know of others, feel free to engage @wadebaker or @threatconnect on Twitter. I’ll update this post for the benefit of future readers.

Let’s be FAIR about this

We’ve already reviewed NIST SP 800-39 and ISO/IEC 27005 in this series as prototypical examples of the risk management process. While both of these frameworks (and most others) “cover” risk analysis, Factor Analysis of Information Risk (FAIR) reverse-engineers it and builds it into a practical, yet effective, methodology. Note – Neither I nor ThreatConnect have any stake whatsoever in FAIR. I’ve chosen to reference FAIR because a) it’s open, b) it’s a sound analytical approach, c) it plays well with threat intelligence, and d) it plays well with ISO 27005. Other frameworks could be used, but I don’t think the process would be as intuitive or comprehensive. But your mileage may vary. More info on FAIR is available here, here, here, and here.



The diagram above represents how FAIR breaks down the broad concept of risk into more discrete factors that can be assessed individually and then modeled in aggregate. The lowest tier will be our focus for infusing intelligence into the risk analysis process. Before we go there, though, it will be helpful to discuss a similar decomposition model for threat intelligence.
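To make the decomposition concrete, here is a toy Monte Carlo sketch over FAIR’s lowest-tier factors. All distributions and parameter ranges below are invented placeholders, not calibrated estimates; a real FAIR analysis elicits these ranges from evidence and expert judgment:

```python
# Toy Monte Carlo over FAIR's lowest-tier factors. Every range here is
# an illustrative placeholder, not a calibrated estimate.
import random

random.seed(42)

def simulate_annual_loss():
    contact_freq = random.uniform(10, 50)         # contacts per year
    prob_action  = random.uniform(0.05, 0.25)     # P(act | contact)
    tef = contact_freq * prob_action              # threat event frequency

    tcap = random.uniform(0, 100)                 # threat capability
    rs   = random.uniform(40, 90)                 # resistance strength
    vuln = 1.0 if tcap > rs else 0.0              # per-trial vulnerability

    lef = tef * vuln                              # loss event frequency
    primary_lm = random.uniform(10_000, 250_000)  # loss per event ($)
    return lef * primary_lm

trials = [simulate_annual_loss() for _ in range(10_000)]
expected_annual_loss = sum(trials) / len(trials)
```

The point of the sketch is the structure: each lowest-tier factor is assessed on its own and then aggregated upward, which is exactly where threat intelligence can sharpen the individual estimates.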

Pickin’ up STIX

I’ve long maintained that one of the primary challenges to managing information risk is the dearth of accessible and reliable data to inform better decisions. Correcting this was the primary driver behind Verizon’s Data Breach Investigations Report (DBIR) series. As we studied and reported on more security incidents, we realized that the lack of a common language was one of the key impediments to creating a public repository of risk-relevant data. This led to the creation of the Vocabulary for Event Recording and Incident Sharing (VERIS) and the launch of the VERIS Community Database (VCDB). If you’re looking to bridge the worlds of incident response and risk management/analysis, I suggest reviewing those resources.

But this post is about bridging the chasm between threat intelligence and risk analysis. While IR and intel share many commonalities, they also differ in many ways. Similarly, VERIS contains elements that are relevant to the intelligence process, but was never optimized for that discipline. That goal was taken up by the Structured Threat Information eXpression (STIX™), a community effort led by DHS and MITRE to define and develop a language to represent structured threat information. The use of STIX has grown a lot over the last several years, and it has now transitioned to OASIS for future oversight and development. It’s also worth noting that a good portion of the STIX incident schema was derived from VERIS, which is now a recognized (often default) vocabulary within STIX. There’s also a script for translating between the two schemas, but I can’t seem to locate it (help me out, STIX peeps!).

The point in bringing this up is that if you’re looking for threat intelligence to drive risk analysis, learning to speak STIX is probably a good idea.
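To give a feel for the vocabulary, here is a hand-rolled sketch of a STIX 1.x-style Threat Actor record as a plain Python dict. The field names follow the constructs discussed later in this post; the values are invented for illustration, and the real schema is XML-based rather than a dict:

```python
# A hand-rolled sketch of a STIX 1.x-style Threat Actor record.
# Field names mirror the constructs discussed in this post; the
# values are illustrative only.
threat_actor = {
    "Identity": "Naikon",
    "Type": "Nation-State Actor",
    "Motivation": "Political/Military Advantage",
    "Sophistication": "Expert",
    "Intended_Effect": "Advantage - Political",
    "Observed_TTPs": ["spearphishing delivery", "dynamic DNS C2"],
}

# A risk analyst consuming this record might first verify that the
# fields needed for their FAIR factors are present at all.
required = {"Identity", "Type", "Motivation", "Intended_Effect"}
missing = required - threat_actor.keys()
```

Empty `missing` means the record carries enough context to start informing the risk factors mapped below; a non-empty set is itself a collection requirement.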

What about the Diamond?

Thanks for asking. Yes, the Diamond Model of Intrusion Analysis, which we talk about a lot here at ThreatConnect, is definitely a threat intelligence model. But it is a process for doing threat intelligence rather than a language or schema. Because of this, the Diamond Model and STIX are complementary rather than competitive. The STIX data model maps quite well into the Diamond, a subject we’ll explore another time. For now, suffice it to say that using FAIR, STIX, VERIS, VCDB, DBIR, and the Diamond might sound like crazy talk, but it’s perfectly sane. Ingenious even.

A FAIR-ly intelligent approach

With all of that background out of the way, we’re at the point where the rubber finally hits the road. The first thing I’d like to do is identify risk factors in FAIR that can be informed by threat intelligence. To do that, I’ll use a modified version of the FAIR diagram shown earlier. Orange stars mark risk factors where intelligence plays a key role in the analysis process; grey represents a minor or indirect relationship.



Next, I’ll attempt to create a mapping between these FAIR factors and STIX data model constructs, which lays the groundwork for intelligence-driven risk analysis. Before I do that, though, I’d like to mention a few things. First off, I apologize for the rigid and rather dry structure; I couldn’t think of a better way of presenting the necessary information. You’ll notice a lot of redundancy. This is because the relationships between the models are not mutually exclusive; a STIX field can inform multiple FAIR risk factors in different ways. Furthermore, the STIX schema inherently contains many redundant field names across its nine constructs. One final note is that I have not listed every conceivable relevant STIX field for each risk factor, but rather tried to focus on the more direct/important ones. I may well have missed some that should be included.

Enough of that – let’s get to it.

Contact Frequency

FAIR Definition:
The frequency, within a given timeframe (typically annualized), that contact with threat actors is expected to occur.
Relevant STIX fields:
  • Threat Actor
    • Identity: Identifies the subject of the analysis. This would, for instance, differentiate an external threat actor from a full-time employee or remote contractor.
    • Type: While not a specific identity, generic types (e.g., outsider vs insider) still help in determining the likelihood of contact.
  • Incident
    • Victim: Profiling prior victims may help determine the threat actor’s likelihood of coming into contact with your organization.
  • Indicator
    • Sightings: Evidence of prior contact with a threat informs assessments of current/future contact.

Probability of Action

FAIR Definition:
The probability that a threat agent will act once contact occurs.
Relevant STIX fields:
  • Threat Actor
    • Motivation: Understanding a threat agent’s motivation helps assess how likely they are to act against your organization.
    • Intended_Effect: A threat actor’s typical intent/goals further informs assessments of the likelihood, persistence, and intensity of actions against your organization.
  • Incident
    • Attributed_Threat_Actors: Useful when searching for intelligence on particular threat actors or groups.
    • Victim:  Profiling prior victims helps assess a threat actor’s likelihood of targeting your organization.
    • Intended_Effect: A threat actor’s intent/goals in prior incidents further informs assessments of the likelihood, persistence, and intensity of actions against your organization.
  • Campaign
    • Intended_Effect: A threat actor’s intent/goals in prior campaigns further informs assessments of the likelihood, persistence, and intensity of actions against your organization.
    • Attribution: Useful when searching for intelligence on particular threat actors or groups.
  • Exploit Target
    • Vulnerability: Exploitable vulnerabilities may attract malicious actions against your organization from opportunistic threat actors.
    • Weakness: Exploitable security weaknesses may attract malicious actions against your organization from opportunistic threat actors.
    • Configuration: Exploitable asset configurations may attract malicious actions against your organization from opportunistic threat actors.
  • Indicator
    • Sightings: Evidence of prior malicious actions informs assessments of the probability of current/future actions.

Threat Capability

FAIR Definition: 
Level of force a threat agent is able to apply, generally comprising skills (knowledge and experience) and resources (time and materials).
Relevant STIX fields:
  • Threat Actor
    • Type: The type of threat actor (e.g., a nation-state vs an individual) grants insight into a threat actor’s possible skills and resources.
    • Sophistication: Informs assessments of a threat actor’s skill-based capabilities.
    • Planning_And_Operational_Support: Informs assessments of a threat actor’s resource-based capabilities.
    • Observed_TTPs: The tactics, techniques, and procedures utilized by a threat actor reveal a great deal about their capabilities.
    • Intended_Effect: Certain intentions/goals may enable a threat actor to apply more force against a target. For instance, if concealment isn’t necessary, more overt and forceful actions can be taken.
  • Incident
    • Generally applicable; Studying prior incidents associated with a threat actor informs multiple aspects of capability assessments.
  • Campaign
    • Generally applicable; Studying campaigns associated with a threat actor informs multiple aspects of capability assessments.
  • TTP
    • Behavior: The attack patterns, malware, or exploits leveraged by a threat actor directly demonstrate their capabilities.
    • Resources: Informs assessments of a threat actor’s resource-based capabilities.
    • Exploit_Targets: Identifies vulnerabilities, weaknesses, and configurations a threat actor is capable of exploiting.
    • Kill_Chain_Phases: A threat actor’s TTPs for each phase of the Kill Chain offers another lens through which to understand their capabilities. For instance, do they develop their own custom malware for the exploitation phase or reuse commodity kits?
  • Exploit Target
    • Vulnerability: Identifies specific vulnerabilities a threat actor is capable of exploiting.
    • Weakness: Identifies specific security weaknesses a threat actor is capable of exploiting.
    • Configuration: Identifies specific asset configurations a threat actor is capable of exploiting.

Resistance Strength

FAIR Definition: 
Measure of an asset’s ability to resist the actions of a threat agent.
Relevant STIX fields:
  • Threat Actor
    • Intended_Effect: Certain intentions/goals may render controls ineffective. For instance, if destruction or disruption is the desired effect, disclosure-based controls will offer little resistance.
  • TTP
    • Kill_Chain_Phases: The phase in the kill chain can inform assessments of resistance strength against various TTPs. For instance, AV software offers little value after the exploitation phase.
  • Incident
    • Affected_Assets: The compromise of certain assets may affect the strength of COAs. For instance, knowing the Active Directory server was compromised lessens the effectiveness of authentication mechanisms. Can also highlight recurring security failures involving particular assets or groups of assets.
    • COA_Taken: Knowing what has already been done informs assessments of the incremental value of additional COAs.
    • Intended_Effect: Certain intentions/goals may render controls ineffective. For instance, if destruction or disruption is the desired effect, disclosure-based controls will offer little resistance.
  • Exploit Target
    • Vulnerability: Unpatched vulnerabilities can erase or erode the strength of security controls against threats capable of exploiting them.
    • Weakness: Unmitigated security weaknesses can erase or erode the strength of security controls against threats capable of exploiting them.
    • Configuration: Poorly configured assets can erase or erode the strength of security controls against threats capable of exploiting them.
    • Potential_COAs: May identify previously successful COAs against a threat, thus informing assessments of resistance strength.
  • Course of Action
    • Type: Different types of COAs can have significantly different effects and strengths.
    • Stage: The stage at which COAs occur informs assessment of effort and efficacy. For instance, it’s much harder to resist or remove a threat actor who is deeply entrenched throughout the victim’s environment.
    • Objective: Objectives for COAs have a significant effect on resistance strength. For instance, some controls are better able to detect malicious actions than prevent them.
    • Impact: Understanding the impact of a COA informs future assessments of resistance strength for that COA as well as other complementary or compensating COAs.
    • Efficacy: Understanding how well a COA met its objective(s) informs future assessments of resistance strength for that COA as well as other complementary or compensating COAs.

Primary Loss Magnitude

FAIR Definition: 
Loss that occurs directly as a result of the threat acting against the asset.
Relevant STIX fields:
  • Incident
    • Security_Compromise: Distinguishing unsuccessful attempts vs network intrusions vs data disclosures informs impact assessments.
    • Affected_Assets: The assets affected in an incident have a direct bearing on impact.
    • Impact_Assessment: May contain information or values directly useful for assessing loss magnitude.

Secondary Loss Event Frequency

FAIR Definition: 
Percentage of time that loss events are likely to affect secondary stakeholders (e.g., customers) in a manner that may cause an adverse reaction on their part.
Relevant STIX fields:
  • Threat Actor
    • Motivation: Understanding a threat actor’s motives may hint at possible secondary losses. For instance, disgruntled employees may desire to release embarrassing data over time.
    • Intended_Effect: Understanding a threat actor’s goals may hint at possible secondary losses. For instance, some threat actors seek to embarrass victims by releasing stolen data publicly, while others may provide that information to other threat actors for a fee.
  • Incident
    • Intended_Effect: Understanding a threat actor’s goals may hint at possible secondary losses. For instance, some threat actors seek to embarrass victims by releasing stolen data publicly, while others may provide that information to other threat actors for a fee.
    • Impact_Assessment: May contain information or values directly useful for assessing secondary loss event frequency.
  • Course of Action
    • Generally applicable; knowing prior COAs informs assessments of future/secondary loss events.
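The mapping above lends itself to being encoded as a simple lookup table that a team could use to drive collection planning or gap analysis. This Python sketch abbreviates the mapping to two factors; the full lists are in the tables above:

```python
# The FAIR-to-STIX mapping above, encoded as a lookup table.
# Abbreviated to two factors for brevity.
FAIR_TO_STIX = {
    "Contact Frequency": {
        "Threat Actor": ["Identity", "Type"],
        "Incident": ["Victim"],
        "Indicator": ["Sightings"],
    },
    "Probability of Action": {
        "Threat Actor": ["Motivation", "Intended_Effect"],
        "Incident": ["Attributed_Threat_Actors", "Victim", "Intended_Effect"],
        "Campaign": ["Intended_Effect", "Attribution"],
        "Exploit Target": ["Vulnerability", "Weakness", "Configuration"],
        "Indicator": ["Sightings"],
    },
}

def stix_fields(factor):
    """Flatten the construct/field pairs that inform a FAIR factor."""
    return sorted(
        f"{construct}.{field}"
        for construct, fields in FAIR_TO_STIX.get(factor, {}).items()
        for field in fields
    )
```

Given intelligence holdings tagged with STIX constructs, `stix_fields("Contact Frequency")` tells an analyst exactly which fields to pull (or to flag as collection gaps) for that factor.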

Wrapping it up

I’d like to reiterate that I don’t view this as a done deal – much the opposite, in fact. One of the things I hope this post prompts is further discussion and refinement on this topic (generally) and this mapping (specifically) by the FAIR and STIX user communities.

I was going to provide some thoughts on how threat intelligence and risk analysis teams can begin to implement this “in the real world,” but I think I’ve used enough of your time for now. I promise to visit that topic in a follow-up post. And thanks for sticking with this series through its lengthy pauses and course corrections. It’s been enjoyable for me and I hope worthwhile for you. Until next time –


Camerashy on You Crazy Diamond

Yesterday ThreatConnect and DGI released a report titled CameraShy, which investigates Chinese cyber espionage activity against nations in the South China Sea. The report combines a very data-driven statistical analysis of malicious infrastructure on the Internet with a very human-focused view into the social media activities of the adversary to arrive at its conclusions. This combo offers a unique and compelling twist on the Chinese APT report genre. Here’s a quick summary of major findings and the original Wall Street Journal article.

There are many aspects to this report we could (and eventually will) discuss, but I’d like to focus on the underlying methodology in this post. One of the things readers will notice immediately is that the whole report is structured around the Diamond Model of Intrusion Analysis. Every chapter features a different facet or vertex of the Diamond, and this wasn’t just window dressing. It was an intentional effort to guide the reader through our own analytical process and also make a case that threat intelligence must understand relationships between adversaries, their target victims, and the capabilities and infrastructure used against those victims.

I got some pretty good feedback on my last Diamond Model post, Luke in the Sky with Diamonds, so I’ve stuck with that formula and adapted a song title for this post too (if this keeps up, I’ll have to extend my musical horizons to find more “diamonds” in the rough). I’m sorry to disappoint those wondering about the connection between Pink Floyd and cyber espionage – it goes no deeper than the title. Though I will say that this stanza is more than a little suspicious given our context:

“You reached for the secret too soon, you cried for the moon. [ASEAN state secrets]
Shine on you crazy diamond.
Threatened by shadows at night, and exposed in the light. [espionage revealed via OSINT]
Shine on you crazy diamond.
Well you wore out your welcome with random precision, [persistence, deceptive targeting]
Rode on the steel breeze.” [ephemeral C2 infrastructure]

Makes you wonder, doesn’t it? Ah well; another investigation for another day. Back to the topic at hand.

Quick review: The Diamond Model is an approach to conducting intelligence on network intrusion events. The model gets its name (and shape) from the four core interconnected elements that comprise any event – adversary, infrastructure, capability, and victim. Thus, analyzing security incidents (or intrusions/activity threads/campaigns/etc) essentially involves piecing together “the Diamond” using bits of information collected about these four facets to understand the threat in its full and proper context. Reading on, you’ll find a summary of each chapter’s contribution to filling out the Diamond.
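As a concrete illustration, the four vertices can be modeled as a trivial data structure whose empty vertices double as intelligence gaps. This is only a sketch of the concept, not an implementation of the model:

```python
# A toy representation of a Diamond Model event: four vertices, where
# any vertex without collected information is an intelligence gap.
from dataclasses import dataclass, field

@dataclass
class DiamondEvent:
    adversary: set = field(default_factory=set)
    capability: set = field(default_factory=set)
    infrastructure: set = field(default_factory=set)
    victim: set = field(default_factory=set)

    def gaps(self):
        """Vertices with no collected information, i.e. intel gaps."""
        return [v for v in ("adversary", "capability", "infrastructure", "victim")
                if not getattr(self, v)]

# Partially filled Diamond early in an investigation (illustrative).
event = DiamondEvent(
    adversary={"Ge Xing / GreenSky27"},
    infrastructure={"greensky27.vicp[.]net"},
)
```

Pivoting, in these terms, is using what you know at one populated vertex to fill in a neighboring empty one.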

The first chapter of CameraShy provides some background on tensions in the South China Sea and shows how network intrusions are used to further China’s interests in the region. In terms of the Diamond Model, this obviously hits on the upper Adversary apex (though, in my opinion, we should have used a biker helmet and sunglasses rather than the obligatory cybervillain fedora and mask). It focuses on a particular threat group known by some as “Naikon,” which we identify as unit 78020 within a Technical Reconnaissance Bureau located in Kunming. At this point in the report, we don’t yet have a specific adversary persona, but hold your horses; we’ll get there. Victims compromised by Naikon are not identified, but we reference reports that have done so and discuss sustained targeting of nations and entities in Southeast Asia since 2010.


The main thrust of chapter 1 is the socio-political axis, which concerns the aspirations, needs, and intentions of the adversary in relation to the victim. The horizontal axis of the Diamond is also in play here, since we discuss the technical means (or TTPs) leveraged by Naikon to target their victims.

Chapter 2 takes a cross-section of the larger Naikon threat and slides activity associated with a particular domain (greensky27.vicp[.]net) under a microscope. The idea was to analyze several years of DNS records to profile the infrastructure in a purely objective and data-driven manner. We learned that Kunming is the central node, the domain is highly dynamic, regional roles and patterns exist, algorithms render it as slightly Death Star-esque, and there’s a temporal element to the campaign. We geeked out a bit in this chapter and I think readers will really enjoy some of the data visualizations it includes.


In terms of the Diamond Model, Chapter 2 is heavy on the infrastructure side. But we also discuss malware associated with this infrastructure, so it hits on capability and the technical vertex between the two as well. In fact, this chapter is a good example of the concept of pivoting, which is core to both the Diamond Model and the ThreatConnect platform. By researching Naikon malware using regionally-themed delivery vectors, we identified the personified greensky27.vicp[.]net domain as a common C2 callback. Our analysis pivoted to the infrastructure undergirding it and then through another pivot to the adversary persona behind it all.

Chapter 3 gives Camerashy its title and takes the phrase “adversary attribution” to a whole new level where the adversary actually participates in the process. Hundreds of self-posted photos, social media activities, research publications, and some help from the Internet of Things then enabled us to conclusively tie greensky27.vicp[.]net to a specific person, Ge Xing, and place him within the compound of military unit 78020 in Kunming, China. That’s Diamond-speak for an infrastructure to adversary pivot. The report’s moniker is obviously a bit tongue-in-cheek, as our subject clearly isn’t scopophobic.


TC Exchange partner DGI Inc was “pivotal” to this chapter, providing key HUMINT and Chinese language translation components. I really can’t do it further justice via summary – you’ll need to check out the report to fully appreciate the self-attributing mosaic pieced together in chapter 3.

Chapter 4 carries the title “No Room for Coincidence – Evidence Ge Xing and Unit 78020’s Involvement in Naikon Activities.” Lengthy, but spot on. It spikes the attribution ball, correlating the patterns of infrastructure activity from Chapter 2 with the pattern of life activities in Chapter 3. For instance, when he announces the birth of his son or posts travel pics, the infrastructure goes silent at the exact same time. It’s true that correlation doesn’t imply causation, but the number of correlations here and weight of the overall evidence make this about as close to certain as one can get without access to God’s DVR.
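The underlying technique, checking whether the infrastructure goes quiet during the operator’s known personal-event windows, can be sketched in a few lines of Python. The dates below are invented for illustration; the report’s actual analysis spans five years of DNS data:

```python
# Toy version of the Chapter 4 correlation: does C2 infrastructure go
# quiet during the operator's personal-event windows? Dates invented.
from datetime import date, timedelta

def daterange(start, end):
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

# Days the infrastructure was observed active (illustrative).
active_days = {date(2014, 5, d) for d in range(1, 10)} | \
              {date(2014, 5, d) for d in range(20, 28)}

# A personal-event window, e.g. travel posted on social media (illustrative).
travel = (date(2014, 5, 12), date(2014, 5, 17))

def quiet_during(window, active_days):
    """True if no infrastructure activity falls inside the window."""
    return all(d not in active_days for d in daterange(*window))

overlap_supports_attribution = quiet_during(travel, active_days)
```

A single quiet window proves little; as the post notes, it is the accumulation of many such correlations that carries the evidentiary weight.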


Similar to Chapter 3, you’ll need to read it to really appreciate it. It spotlights the relationship between man and machine in a way that I’ve never seen before, and it was thrilling to watch the pieces fall into place.

Piecing it all together, here’s the complete Camerashy diamond. It represents a multi-sourced, multi-faceted approach to threat intelligence that isn’t achievable only by the likes of ThreatConnect and DGI. ThreatConnect’s primary job is enabling our customers to experience the value of “Full Diamond” intelligence for themselves. Toward that end, all indicators associated with this report have been shared to the Common Community in ThreatConnect. Jump in and help us continue to expand and enrich what we collectively know about this threat.


That brings up one final point I’d like to make before closing this one out. It’s worth mentioning that we began this investigation with research shared by others and also worked with others to extend that research. In addition to the knowledge Camerashy imparts about the adversary, we hope it also demonstrates the merits of intelligence sharing and collaborative research. It may sound trite, but we truly are smarter and stronger together.


Project CAMERASHY: Closing the Aperture on China’s Unit 78020


ThreatConnect® and Defense Group Inc. (DGI) have collaborated to share threat intelligence pertaining to the Advanced Persistent Threat (APT) group commonly known as “Naikon” within the information security industry. Our partnership facilitates unprecedented depth of coverage of the organization behind the Naikon APT by fusing technical analysis with Chinese language research and expertise. The result is a meticulously documented case against the Chinese entity targeting governments and commercial interests in South Asia, Southeast Asia, and the South China Sea. This report applies the Department of Defense-derived Diamond Model of Intrusion Analysis to a body of technical and non-technical evidence to understand relationships across complex data points spanning nearly five years of exploitation activity.

Key Findings

  •   The Advanced Persistent Threat (APT) Group commonly known within the information security industry as “Naikon” is associated with the People’s Liberation Army (PLA) Chengdu Military Region (MR) Second Technical Reconnaissance Bureau (TRB) Military Unit Cover Designator (MUCD) 78020.
  •   The PLA’s Chengdu MR Second TRB MUCD 78020 (78020部队) operates primarily out of Kunming, China with an area of responsibility that encompasses border regions, Southeast Asia, and the South China Sea.
  •   Naikon APT supports Unit 78020’s mandate to perform regional computer network operations, signals intelligence, and political analysis of the Southeast Asian border nations, particularly those claiming disputed areas of the energy-rich South China Sea.
  •   Analysis of historic command and control (C2) infrastructure used consistently within Naikon malware for espionage operations against Southeast Asian targets has revealed a strong nexus to the city of Kunming, capital of Yunnan Province in southwestern China.
  •   The C2 domain “greensky27.vicp[.]net” consistently appeared within unique Naikon malware, where the moniker “greensky27” is the personification of the entity who owns and operates the malicious domain. Further research shows many social media accounts with the “greensky27” username are maintained by a People’s Republic of China (PRC) national named Ge Xing (葛星), who is physically located in Kunming.
  •   In eight individual cases, Ge Xing’s pattern-of-life activities notably overlapped with patterns identified within five years of greensky27.vicp[.]net infrastructure activity.
  •   Ge Xing, aka “GreenSky27”, has been identified as a member of the PLA specializing in Southeast Asian politics, specifically Thailand. His employment by Unit 78020 is most notably evidenced by his public academic publications and routine physical access to the PLA compound.

In addition to this report, ThreatConnect has released technical indicators of the Naikon Threat within the ThreatConnect Common Community, which is accessible to current users or by registering for a free account. It is important to note we are not claiming this is a comprehensive listing of all malware and infrastructure leveraged by Naikon globally for nearly half a decade. Rather, it forms one chapter of a larger story, where we look forward to enriching and expanding future collaborative research within our community of users and partners.


Why Build Apps in ThreatConnect

Why Build Apps and Share them in ThreatConnect’s TC Exchange™ – Collaborate to Strengthen Your Threat Intelligence Practice
If you’ve spoken with anyone here at ThreatConnect, you may have noticed that we, and many of our customers, are all pretty excited about the launch of ThreatConnect’s TC Exchange™.



I thought it would be a good idea to explain why we are buzzing about TC Exchange and its ability to strengthen your threat intelligence practice by building custom applications for data ingestion and processing, workflows, and analysis.



WHY ARE WE EXCITED ABOUT TC EXCHANGE?

TC Exchange empowers our users to customize ThreatConnect in a variety of ways, allowing them to build a stronger threat intelligence practice. We’ve built an application runtime environment and released associated SDKs that allow our users to install, schedule, and run applications that integrate with our already powerful API.


So what’s new here? Before TC Exchange, integrations were either hard-coded in the platform and not easily configurable by users or, if using the API, they always had to be run and configured on a separate server running a programming language such as Python. Now installed applications can be configured easily from the ThreatConnect UI without having to tweak the integration code. This is a very powerful capability that allows integrations to be ‘plug ‘n play.’


A commercial Threat Intelligence Platform should be completely customizable to specific intelligence needs and processes, and we’re giving our customers the ability to do just that. However, what excites us the most is what this underlying technology will allow us to do for our customers with the TC Exchange.


HOW CAN TC EXCHANGE HELP MY THREAT INTELLIGENCE PRACTICE?

The TC Exchange gives our users the ability to download applications, whether built by our team, a partner, or another user, and run them in their own instance of ThreatConnect. In the public cloud instance, subscribing customers can run jobs for available apps in the TC Exchange. This means that users can build customized apps for a variety of purposes and can choose to share those apps in our cloud based exchange for others to use. Beyond simply sharing indicators or tailored intelligence, TC Exchange provides the ability to community-source applications and tools, allowing users to share efficient processes with each other.


“That’s great,” you say. “But what does this mean to me?”


Let me flesh this out a bit more by describing what applications in ThreatConnect can do.




Apps in ThreatConnect can be used for traditional integrations such as ingesting structured and unstructured intelligence and integrating with defensive products. For those with more mature processes, you can also enrich indicators from various reputation services, push malware to one of our many automated malware analysis partners, or create customized ratings based on intelligence you’re receiving or on your own past incidents. These apps can be chained together to complete an entire workflow from discovery to detection.



Allow me to give you an example of an end-to-end workflow we can enable. Let’s say a customer is using ThreatConnect to store malicious files associated with known threats that they have dealt with in the past. Using an application running in ThreatConnect, the malware is automatically queued for analysis in ThreatGrid. The analysis is returned and parsed in ThreatConnect, including related callout domains and IP addresses. A second application now kicks in based on the thresholds set for the indicators derived from the analysis. If there is enough confidence that the new indicators are evil, they are sent to the SIEM for alerting or even to a firewall or OpenDNS Umbrella for blocking. The configuration of the malware analysis returned and the thresholds for sending derived indicators for alerting or blocking are all set by the user from within ThreatConnect. Many of these apps will be available from us or our partners in TC Exchange for you to leverage in the public cloud or in your private instance.
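To make the threshold step in that workflow concrete, here is a minimal Python sketch of how the second application might triage indicators derived from malware analysis. Everything here is illustrative, not the actual ThreatConnect SDK: the indicator fields, the threshold value, and the dispatch step are assumptions, and a real app would read its configuration from the ThreatConnect UI and push results through the platform API.

```python
# Illustrative sketch only -- not real ThreatConnect SDK calls.
# A deployed app would pull its threshold from the job configuration
# set in the ThreatConnect UI and send results via the platform API.

CONFIDENCE_THRESHOLD = 75  # hypothetical user-configurable threshold

def triage_indicators(indicators, threshold=CONFIDENCE_THRESHOLD):
    """Split indicators derived from malware analysis into those
    confident enough to act on and those needing analyst review."""
    actionable, review = [], []
    for ind in indicators:
        if ind["confidence"] >= threshold:
            actionable.append(ind)   # candidates for SIEM alerting / blocking
        else:
            review.append(ind)       # stays in ThreatConnect for review
    return actionable, review

# Hypothetical indicators parsed from a ThreatGrid analysis report
derived = [
    {"value": "bad-domain.example", "type": "Host", "confidence": 90},
    {"value": "203.0.113.7", "type": "Address", "confidence": 40},
]

actionable, review = triage_indicators(derived)
```

Only the high-confidence bucket would be forwarded for alerting or blocking; the rest remain in the platform for an analyst to evaluate.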




However, you might wish to build your own or tweak existing applications. There are several reasons why you may want to do this. For instance:

  • You have a custom, closed source of intelligence you want to ingest into ThreatConnect.
  • You track adversary relevance in ThreatConnect based on a set of custom attributes and wish to automatically update those attributes based on newly associated incidents.
  • You want to chain multiple existing apps together and create custom triggers between them.

For processing intelligence with our API, what you can do with an app is limited only by your creativity. We support you with a growing library of open-source applications for you to tweak to your needs. With our fully supported and documented Java and Python SDKs, a sandbox instance of ThreatConnect to build and test against, and support from our team for questions and troubleshooting as you go, we ensure you can easily customize ThreatConnect to your needs for ingestion, processing, analysis, and action on threat intelligence.
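As one example of the first use case above, a custom ingestion app mostly amounts to normalizing a closed feed into indicator records before writing them to the platform. The sketch below assumes a simple CSV feed format and a hypothetical save step; the real persistence call would go through the ThreatConnect Python SDK rather than the placeholder shown here.

```python
# Minimal sketch of a custom ingestion app for a closed intelligence
# source. The feed format and the save step are hypothetical; a real
# app would write indicators via the ThreatConnect API / Python SDK.

import csv
import io

SAMPLE_FEED = """indicator,type,rating
evil.example,Host,4
198.51.100.9,Address,3
"""

def parse_feed(text):
    """Normalize a simple CSV feed into indicator records."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "summary": row["indicator"],
            "type": row["type"],
            "rating": int(row["rating"]),
        }
        for row in reader
    ]

for record in parse_feed(SAMPLE_FEED):
    # save_indicator(record)  # hypothetical: persist via the TC API
    print(record["summary"], record["type"], record["rating"])
```

Once the records are normalized like this, the same app could be chained to an enrichment or ratings app, as described above.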

If you’ve built an application for processing intelligence, chances are others can benefit from it too. Developers are encouraged to share their code within the TC Exchange, get some kudos from their peers, and help others make intelligence-informed decisions about their security posture. All apps can be submitted by users to ThreatConnect. Once the app makes it through our thorough code and security review, we’ll make it available for download to other users and, if applicable, to run in the ThreatConnect Public Cloud.


Not yet a customer you say?


No problem! We are opening up Developer Accounts for the purpose of developing custom applications for the ThreatConnect platform. If you have a great idea for an app, and are serious about building it, contact us at


We’re holding a contest for you to put our app development team to the test. Our Best App Idea Contest runs from August 24th through October 16th. Read more here to learn more about the contest and rules. Users can send their ideas for the best app; the winner will get a free year’s subscription to ThreatConnect and, best of all, we’ll build that application for you!


There should no longer be a question of whether or not a commercial TIP can do what you need it to do. We’ve built ThreatConnect to be extensible to the needs of mature Fortune 500 & government security teams, as well as those just getting started with utilizing threat intelligence. The TC Exchange and application runtime environment are just the evolution of the trail we began blazing two years ago. We have not put our machetes away yet.


Learn More About the App Idea Contest!

Learn More About TC Exchange!