ThreatConnect 4.1 Incorporates Return on Investment for Threat Intelligence
If you have not seen it, Wade Baker, ThreatConnect’s VP Strategy and Risk Analytics, wrote a series of blog posts with the great folks at the RSA Conference discussing the evolution of InfoSec by looking at their conference topics over the past 25 years. One thing we have noticed at ThreatConnect over the past few years is the rise of the topics “threat” and “intelligence” (not that we’re biased or anything). Wade’s analysis supports this anecdotal observation, and in Part 2 he highlights some of the good things about that trend. But I think there is a darker, unfortunate side to this new world we’ve found ourselves in...
With the popularity of threat intelligence (TI) as a topic, the hype around TI these days is strong. Not all intelligence is created equal, though. Some of it is, frankly, better classified as threat data than threat intelligence. The problem is a persistent misconception in the industry that throwing a bunch of unvetted feeds into your SIEM is a sufficient “check the box” solution for threat intelligence.
I hypothesize that we’ve been on a steady climb up the TI hype cycle for a few years, and the industry is soon due for a fun ride down the trough of disillusionment. There are some clues to back up this idea, most notably the recent story from Mr. Krebs on a certain Nordic-named company’s implosion due to overhyped capabilities (can’t say I didn’t see that coming ◔_◔). As Robert M. Lee pointed out, this event does not mean the sky is falling for the industry, but it is a needed correction. It’s also critical to clarify here that the problem does not lie with one snake-oil-selling TI company.
Effective use of TI requires a process-oriented approach. Every organization’s environment, risk tolerance, and business processes differ, so its use of TI to inform decisions on these matters will differ too. Because of this, applying external TI has to involve refinement to make it fit your own set of processes. If more organizations measured the value of the TI they bring in, there would be far less Loki-like trickery peddled in the industry, fewer frustrated analysts chasing meaningless alerts, and many more networks better protected by TI. The problem until now was that the best way to do this was neither apparent nor accessible. Analysts need a better way to prioritize alerts from external sources dynamically, based on relevance and confidence. Executives and other decision makers need a way to see which sources are returning the most value (and which are creating negative value) for the organization.
Measuring the Return On Intel
ThreatConnect is providing just that. With the release of Episode IV Act 1 (ThreatConnect 4.1) last week, users now have the ability to make informed decisions about the intelligence they are currently using (or thinking about using). These tools can measure external sources such as feeds and premium intel providers, as well as communities and your own intelligence created from incident response (IR) engagements, threat research, and hunting.
Seeing is believing (…well, sort of).
Two threat intelligence qualities discussed in our eBook are relevance and accuracy. With our new release, ThreatConnect users can collect metrics from their defensive integrations to provide some simple measures of these qualities with observations and false positives.
Here’s how: We’ll define an observation as an actual event where an indicator of compromise (IOC) was detected in your environment, perhaps by a firewall, a host-based agent, or another sensor. The observation itself is a useful data point for validating that the intelligence from a source has at least some relevance in your environment, because it is detecting activity. Observation metrics can now be sent back to ThreatConnect to tally the observations per indicator and per source. You’ll be able to see those metrics over time for every source you have access to within ThreatConnect, and soon you’ll be able to measure them with all of our supported integrations.
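To make the idea concrete, the per-indicator, per-source tallying described above can be sketched as a simple aggregation. This is an illustrative model only — the event shape and function names are my assumptions, not ThreatConnect’s actual API:

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

# Hypothetical observation event: which intel source supplied the
# indicator, which indicator fired, and how many times a sensor saw it.
Event = Tuple[str, str, int]  # (source, indicator, count)

def tally_observations(events: Iterable[Event]) -> Dict[str, Dict[str, int]]:
    """Aggregate observation counts per source, then per indicator."""
    tallies: Dict[str, Dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for source, indicator, count in events:
        tallies[source][indicator] += count
    # Convert nested defaultdicts to plain dicts for readability.
    return {src: dict(inds) for src, inds in tallies.items()}

events = [
    ("FeedA", "198.51.100.7", 3),       # firewall hits on a feed-supplied IP
    ("FeedA", "198.51.100.7", 2),       # later hits on the same indicator
    ("FeedA", "evil.example.com", 1),   # host agent hit on a domain
    ("FeedB", "203.0.113.9", 4),
]
per_source = tally_observations(events)
# per_source["FeedA"]["198.51.100.7"] == 5
```

In practice an integration would report each hit back through the API as it happens; the aggregation above just shows what the resulting per-source view contains.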
“But, wait!” you may say. “Just because I see something in my environment doesn’t mean the intelligence is accurate.” And if you were to say this, you would be absolutely correct. There is a counterbalancing measure to observations: false positive (FP) reporting. Users can now report indicators as false positives via integrations using our API, or directly within the ThreatConnect UI. Like observations, these metrics are shown for each indicator and in aggregate for each source over time. A “negative ROI” source with a high number of reported FPs can now be spotted at a glance and filtered out or throttled with a low confidence weighting, allowing you to focus your monitoring and response time on hits from legitimately valuable sources of intelligence.
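As a back-of-the-envelope illustration of weighting sources by FP rate — the formula and threshold here are assumptions for the sketch, not how ThreatConnect scores sources — you could down-weight any feed whose false positives swamp its useful hits:

```python
from typing import Dict, List, Tuple

def source_confidence(observations: int, false_positives: int) -> float:
    """Naive confidence weight: fraction of reported hits that were
    NOT flagged as false positives. A source with no activity gets 0.0."""
    total = observations + false_positives
    if total == 0:
        return 0.0
    return observations / total

def usable_sources(stats: Dict[str, Tuple[int, int]],
                   threshold: float = 0.5) -> List[str]:
    """Filter out 'negative ROI' sources below a confidence threshold."""
    return [name for name, (obs, fps) in stats.items()
            if source_confidence(obs, fps) >= threshold]

# Hypothetical per-source totals: (observations, reported false positives)
stats = {
    "FeedA": (95, 5),    # mostly real hits
    "FeedB": (10, 90),   # mostly false positives -- negative ROI
}
# usable_sources(stats) -> ["FeedA"]
```

The same weight could also be used to sort or throttle alerts rather than drop a source outright, which is closer to the “low confidence weighting” idea above.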
There’s more. Measuring the return on investment for intel is a major theme that we’ll be providing more capability around. In upcoming releases, we’ll provide other measures of relevance and qualities of intelligence. Each release will further focus on allowing you to make the most of your threat intelligence investments and your team’s time spent creating and using it.
Last year, I wrote about the positive and negative returns on investment possible with threat intelligence. Carelessly deployed, intel feeds are a quick way to make your sensors light up like a Christmas tree. But done right, threat intelligence lets you sharpen your defenses, save time, and make better strategic security decisions. Our focus is to make sure you’ve got an easy path to a net-positive return on your intelligence.