Providing insight into how certain feeds are performing within ThreatConnect
As part of our latest release, we’ve introduced a new feature to help users better understand the intelligence they’re pumping into their systems. Intelligence can be a fickle thing: indicators are by their very nature ephemeral, and part of our job is to curate them well. We find patterns not only in the intelligence itself, but also in its sources. As analysts, we frequently find ourselves asking a simple question: “Who’s telling me this, and how much do I care?” We set out to tackle this problem on a few fronts in ThreatConnect with Report Cards, which give you insight into how certain feeds are performing across the ThreatConnect ecosystem.
First and foremost, we wanted to leverage insights gleaned from our vast user base. We have users spanning dozens of industries across a global footprint. If a customer in Europe is seeing a lot of false positives come from a set of indicators, we want the rest of ThreatConnect’s users to learn from that. This is where ThreatConnect’s CAL™ (Collective Analytics Layer) comes in. All participating instances send anonymized, aggregated telemetry back to CAL. This gives us centralized insight that we can distribute to our customers. The telemetry includes automated tallies, such as how often an indicator is being observed in networks, as well as human-curated data, such as how often false positives are being reported.
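As a rough illustration of what that aggregated telemetry looks like, here is a minimal sketch of one record. The field names and shape below are our own simplification for this post, not CAL’s actual schema:

```python
# Hypothetical example only: field names and shape are illustrative, not CAL's real schema.
# Each record rolls up what participating instances have reported about one indicator.
telemetry_record = {
    "indicator": "203.0.113.42",              # the indicator being described
    "observations": 1_482,                     # automated tally: times seen across participating networks
    "false_positive_reports": 6,               # human-curated: analyst false positive votes
    "reporting_feeds": ["Feed A", "Feed B"],   # which feeds CAL knows report this indicator
}
```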
By combining and standardizing these metrics, CAL can start to paint a picture of various intelligence feeds. CAL knows which feeds are reporting on which indicators, and can overlay this information at scale with the above telemetry. This matters at the strategic level, when you’re deciding which feeds to enable in your instance. We’re all familiar with the “garbage in, garbage out” problem: simply turning on every feed may not be productive for your environment and team. High-volume feeds that yield a lot of false positives, report on indicators outside your areas of interest, or are simply duplicated elsewhere may not be worth your time. Now system administrators can make an informed decision about which feeds to enable in their instance, and with a single button click can get months of historical data. These feeds are curated by the ThreatConnect team, which prunes and automatically deprecates older data to keep them relevant.
The Report Card view goes into more depth on a particular feed. For each feed CAL knows about, it will give you a bullet chart showing the feed’s performance on a few key dimensions, as determined by the ThreatConnect analytics team. In short, a bullet chart marks ranges of performance (red, yellow, and green here) to give you a quick read on the groupings we’ve established for a particular metric. A vertical red line indicates what we consider to be a successful “target” number for that metric, and the gray line shows the selected feed’s actual performance on that metric. We’ve identified a few key metrics that we think will help our users make decisions:
- Reliability Rating is a measure of false positive reports on indicators reported by this feed. It’s more than just a count of how many votes have been tallied by users. We also consider things like how egregious a false positive is, since alerting on something like google.com in your SIEM is an especially grave offense in our book. We give this a letter grade, from A to F, to help you identify how likely this feed is to waste your time.
- Unique Indicators is the percentage of indicators in this feed that aren’t found anywhere else. If a feed’s indicators are often found elsewhere, then some organizations may prefer not to duplicate data by adding them again. There may be reasons for this, as we’ll see below with ThreatAssess. Nonetheless, this metric is a good way to answer a simple question: “How much novelty does this feed add?”
- First Reported measures the percentage of indicators which, when identified in other feeds, were found in this feed first. Even if a feed’s indicators are often found elsewhere, this feed may have value if it’s reporting those indicators significantly earlier. This metric helps you understand how timely a feed is relative to other feeds.
- Scoring Disposition is a measure of the score that CAL assigns to indicators, on a 0-1000 scale. This score can be factored into the ThreatAssess score (alongside your tailored, local analysis). The Scoring Disposition is not an average of those scores, but a weighted selection based on the indicators we know our users care about. This metric helps answer the question: “How bad are the things in this feed, according to CAL?” A rough sketch of the arithmetic behind a few of these metrics follows this list.
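To make the arithmetic concrete, here is a minimal sketch using made-up data. The records, field names, and the plain averaging used for Scoring Disposition are our own simplifications for illustration, not CAL’s actual implementation:

```python
# Illustrative only: toy records and simplified math, not CAL's actual logic.
# Each tuple: (indicator, also_in_other_feeds, this_feed_reported_it_first, cal_score)
feed_indicators = [
    ("evil.example.com", True,  True,  720),
    ("198.51.100.7",     True,  False, 430),
    ("bad.example.net",  False, False, 880),
    ("203.0.113.9",      True,  True,  610),
]

total = len(feed_indicators)

# Unique Indicators: share of this feed's indicators found nowhere else
unique_pct = 100 * sum(1 for _, shared, _, _ in feed_indicators if not shared) / total

# First Reported: of the indicators that also appear elsewhere, the share this feed reported first
shared = [rec for rec in feed_indicators if rec[1]]
first_reported_pct = 100 * sum(1 for _, _, first, _ in shared if first) / len(shared)

# Scoring Disposition: shown here as a plain average of CAL scores (0-1000);
# the real metric is a weighted selection emphasizing indicators users care about
scoring_disposition = sum(score for *_, score in feed_indicators) / total

print(f"Unique Indicators:   {unique_pct:.0f}%")
print(f"First Reported:      {first_reported_pct:.0f}%")
print(f"Scoring Disposition: {scoring_disposition:.0f}")
```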
The Report Card also contains a few other key fields, namely the Daily Indicators chart and the Common Classifiers box. The Daily Indicators chart shows you the indicator volume coming from a source over time, to help you understand the ebbs and flows of a particular feed. The Common Classifiers box shows which Classifiers are most common on indicators in the selected feed. Combined, these help answer two questions: “How many indicators am I signing up for, and what flavors are they?”
All of these insights are designed to help you make better decisions throughout your security lifecycle. Ultimately, the decision to add a feed should be a calculated one. When an analyst sees that an indicator was found in a particular feed, they may choose to use that information based on the Reliability Rating of that feed. You can leverage these insights as trust levels via ThreatAssess, allowing you to make such choices for every indicator in your instance.