
Watch ThreatConnect Demo: Beyond a Threat Intelligence Platform

Unveiling the Power of a Threat Intelligence Platform: A Dive into the ThreatConnect Demo

Transcript

Arpine Babloyan:
And today's demo is hosted by Matt Brash, who is fantastic. If you have attended previous events that he hosted, you know that yourself. Matt, over to you.

Matthew Brash:
Yeah, thank you very much, Arpine. It's a pleasure to present to everyone today. As Arpine said, I'm one of the sales engineers here at ThreatConnect; I sit within the EMEA region. In terms of today's topic, we're going to focus specifically on the CTI lifecycle and threat intelligence operations. Now, as with any webinar or presentation, I felt compelled to include some type of insightful quote that would make you all think deeply about the cybersecurity issues you may be facing. But instead of taking the traditional approach of quoting an industry analyst, I tried to fit in with the Gen Alphas on the call and used an AI assistant to help me on this journey. I posed a very simple question: why is cyber threat intelligence difficult? And, both to my frustration and to my amusement, the answer I got back in my AI summary is really the same set of answers I probably would have gotten if I had asked that question ten years ago. It's really about too much data, the need to get through that data quickly in order to establish relevancy for an organization, and then, ultimately, the whole interoperability challenge we always hear about. How can we make CTI, threat intelligence data, actionable? How do we get it to teams so they can make the right decisions at the point of investigation and triage? Now, at ThreatConnect, we've always had the ambition, since our founding, to intertwine intelligence into the cyber defense organization, and the goal of that is to build a much more cyber-resilient posture for that entity. Over time, we've developed our portfolio of offerings aligned to this mission, and it now boils down to three core capabilities. The first is our traditional bread and butter, our TIP capability, where we focus on aggregation, contextualization, and prioritization of threat intelligence. We want to help organizations very quickly sift through thousands of data points and identify what is really relevant to them based on where they operate and the technologies they leverage, and then, most importantly, make that data accessible to other teams outside of the CTI function so they can make better-informed decisions at the point of triage and investigation. With that, we realized that by having access to huge amounts of threat intelligence data, we could also enable more strategic decisions around cybersecurity. That includes understanding the current risk posture and making investment decisions around cyber risk. Our quantification platform, Risk Quantifier, is about taking threat data, combining it with an understanding of your own landscape, and putting financial metrics against cyber risk scenarios. By bringing that into our portfolio, we're now enabling business leaders to make sensible decisions around cyber risk: if I invest in a control, what is the financial risk mitigation? How much financial risk am I removing from the business? Or equally, if we were to start operating in a new region, what does that cyber risk posture look like? How does it increase our overall financial risk landscape? And then finally, the third part of our portfolio is all around federated search and enabling SOC and IR teams to drive down the time to respond.
And we do this through our Polarity client, a desktop capability that can integrate across all of your threat sources and all of your internal controls, presenting searches in a single place for an analyst, in the pane of glass where they like to live. So we really believe that by taking these three core offerings, we're going to move organizations to a much more robust, cyber-resilient posture and, importantly, move away from that exposure gap into more of a defender's advantage. In terms of today's webinar, I'm going to focus predominantly on the first and third points: operationalizing intelligence and accelerating investigations. Throughout my time working with cyber threat intelligence teams, some of which sit in the largest organizations globally, I've found there is one common aspect to those I would call mature and efficient in the way they work: they always put the consumer of CTI at the heart of all of their discussions. Now, why do this? Well, when you focus on the consumer, it helps the CTI team with the direction and planning they put in place and with understanding what types of data sources they need to collect in the platform. Consumers are very diverse in nature. If you think about the SOC, they may require a technical output from the CTI team. Maybe their requirement is simply, "I want high-fidelity indicators in my SIEM, and I want to be able to do detection correlation with those indicators." That's a very technical type of consumer. Whereas the incident response team or the threat hunting team are also a consumer audience, but the type of output they require could be vastly different. They may be focused more at the tactical level, looking at TTPs and how they can use those TTPs to scope out their IR plans and their threat hunting cycles. And one more example: we talked a minute ago about cyber risk, which is much more of a leadership conversation, a strategic conversation. The outputs the threat intelligence team generates there could be a simple threat landscape report covering the actors, and the motivations of those actors, in a particular region or sector. As a platform, ThreatConnect provides capabilities and functionality to serve all of these consumer teams, but it also enables threat intelligence teams to work more efficiently. We do that by integrating automation throughout the solution, whether that's automating the collection of data feeds, automating the enrichment and processing of CTI data, or providing world-class analysis capabilities through our Threat Graph and our workflow case management module. We really are a CTI-analyst-driven platform, but again, we flip the narrative and think about the consumers first. So with that in mind, I'm now going to jump into the ThreatConnect solution and talk about that lifecycle and how we enable some of those consumer teams. No matter where organizations are in their cybersecurity maturity, when it comes to threat intelligence everyone starts at the same point, and that's collection. As a platform, ThreatConnect prides itself on being as agnostic as possible to the mechanisms and the sources of intelligence that organizations need to use from a collection perspective. We support email-based ingest, STIX/TAXII, MISP import, and then, for the majority of our feed integrations, we leverage an API-first approach.
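To make the STIX/TAXII path concrete, here is a minimal sketch of polling indicators from a TAXII 2.1 collection with the open-source taxii2-client library. The collection URL and credentials are hypothetical placeholders, and in practice ThreatConnect's feed integrations handle this polling for you; this only illustrates the ingest mechanism being described.

```python
# Minimal sketch of a STIX/TAXII pull, assuming a TAXII 2.1 collection URL and
# credentials (both placeholders). ThreatConnect's feed integrations do this
# polling for you; this only illustrates the ingest mechanism.
from taxii2client.v21 import Collection

COLLECTION_URL = "https://taxii.example-feed.com/api/v21/collections/indicators/"

collection = Collection(COLLECTION_URL, user="demo-user", password="demo-pass")
envelope = collection.get_objects()  # STIX 2.1 envelope returned as a dict

for obj in envelope.get("objects", []):
    if obj.get("type") == "indicator":
        # Each STIX indicator carries a pattern such as
        # "[ipv4-addr:value = '203.0.113.7']" plus validity timestamps.
        print(obj["pattern"], obj.get("valid_from"))
```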
We do recognize that organizations start at different levels. Whereas some of our customers rely solely on the open source feeds we provide out of the box, we also have premium source integrations for many of the reputable providers in the CTI space. That could be the likes of Flashpoint, Humaniance, and even Recorded Future and CrowdStrike. Now, in addition to ingesting data from feeds, we have other means of bringing intelligence into the platform. You may notice this Communities section on the right-hand side of my screen. Communities represent the fact that you, as an organization, may want to share intelligence in an anonymized and selective manner with other entities, perhaps people who work in the same sector as you. ThreatConnect can not only stand up communities that you publish intelligence into, but you can also receive data into a community from an external entity, and we provide full RBAC controls to make sure those external entities only see the data you want them to have access to. And then finally, perhaps the most important source of intelligence is internal intelligence. My fictitious organization here is Brash Brothers, named after myself, and in this instance I might want to ingest intelligence from my own environment: maybe my vulnerability reports, my incident data, or even things like phishing campaigns we see targeting our employees. Ultimately, all of that data is centralized into ThreatConnect regardless of source and regardless of mechanism. Once we have the intelligence in the platform, one critical point to discuss is the process of normalization. Why would you invest in a platform such as ThreatConnect? One of the main goals is to give you insights from across all your different feed providers and have those insights in a single place. But in order to do that, we need to normalize intelligence into a structured, consistent format. ThreatConnect's data model is built from principles defined in the Diamond Model of Intrusion Analysis. If you have ever read about ThreatConnect, you'll know that one of our founders, our EVP of product management, was one of the original contributors to the Diamond Model research, and the principles laid out in that research are reflected in our data model. This includes being able to define associations between pieces of intelligence and pivot across those relationships. Most of the time, customers will be working at the indicator and group level, indicators being very much your technical, IOC-level information. With technical IOCs, our main goal is to help you score those indicators from a fidelity perspective, manage that score dynamically over time, ensure appropriate deprecation is in place, and then disseminate those indicators to target controls, be they preventative or detection. You might notice some unusual indicators in my instance here, and I want to call this out because, although we normalize data, we still have flexibility where it makes sense for you and your use cases.
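To make the indicator-and-group idea concrete, here is a conceptual sketch, not ThreatConnect's actual schema, of a normalized data model in which technical IOCs carry a fidelity score and are associated with higher-level groups such as campaigns or adversaries, so analysts can pivot across those relationships.

```python
# Conceptual sketch of a normalized indicator/group model with associations.
# Field names are illustrative only, not ThreatConnect's actual data model.
from dataclasses import dataclass, field

@dataclass
class Group:
    group_type: str          # e.g. "Campaign", "Adversary", "Email"
    name: str

@dataclass
class Indicator:
    indicator_type: str      # e.g. "Address", "Host", "File"
    value: str
    confidence: int          # normalized fidelity score, 0-100
    sources: set[str] = field(default_factory=set)     # which feeds reported it
    associations: list[Group] = field(default_factory=list)

campaign = Group("Campaign", "Example Phishing Wave")
ioc = Indicator("Address", "203.0.113.7", confidence=72,
                sources={"Open Source Feed", "Premium Feed A"},
                associations=[campaign])

# Pivoting across relationships: which campaigns does this IOC belong to,
# and which feeds corroborate it?
print([g.name for g in ioc.associations], ioc.sources)
```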
ThreatConnect works with many financial services organizations, and one of the common questions we're asked there is whether we can track fraud-related indicators, things like crypto addresses and crypto wallets. And yes, you can, because ThreatConnect can create custom indicator types and apply import validation for those types of IOCs. So just be aware that you can expand the data model where you need to and where it makes sense. Now, those indicators are often linked to some sort of common intelligence point, and that's where groups come into play. A very simple example of a group is an email. I mentioned ingesting internal intelligence earlier; one of those sources could be a phishing email, and that phishing email may have a URL in its body. It might have a sending host, and it might have a sending IP. Because we've normalized the intelligence, we can tell you pretty much straight away whether any of the IOCs from that email are also known to your feed providers. Could it be that the sending IP referenced in that email is also linked to a campaign that Mandiant is tracking? Groups can also be associated together to complete the story: that email could be linked to a campaign, or even to something like an adversary group. This normalization and association of intelligence is critical to all the downstream processes we do in ThreatConnect, whether that's filtering for dissemination or identifying new IOCs to begin a threat hunt from. Now, one area we very much pride ourselves on is our data visualization tools, being able to present data in a meaningful way to our clients, and dashboards drive a lot of that inside ThreatConnect. Dashboards are built on very simple queries defined using our query language, and this is one area we have focused on intensely to help our customers achieve value quickly from the TIP. We don't rely on you to understand our underlying query language; we leverage AI and our TQL generator to translate natural English statements into the underlying queries that can be saved inside dashboards. Here's a very simple example of a natural language input: I'm asking ThreatConnect to show me all IP addresses linked to botnet activity and found in the last six months. There are three logical filters being applied in that search. You'll notice the English statement is translated into a TQL query, TQL being the name of our query language, and if I want to, I can preview it inside the platform. Here I'm running a very simple search to see the kinds of results I get back, and assuming that aligns to your specific use case, you can save these queries and reuse them across the platform. Why am I spending two minutes of your lives talking about TQL? Because TQL underpins, as I said, a lot of the capability in our platform. TQL queries can be used for data visualization in dashboards, they can be used in the reports you generate, and, most importantly for many of our clients, they are also used to filter indicators when you build your downstream dissemination logic.
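As a rough illustration of how a saved TQL filter can be reused programmatically, here is a sketch that sends the botnet-IP query from the demo to a ThreatConnect v3-style REST endpoint. The endpoint path, parameter names, auth header, and the TQL syntax itself are approximations rather than verified API details, so treat this as a shape to adapt against your instance's API documentation.

```python
# Sketch: reusing a TQL-style filter over the REST API. The endpoint path,
# auth header, and exact TQL field names are assumptions; verify them against
# your ThreatConnect instance's API documentation before use.
import requests

TC_HOST = "https://app.threatconnect.example"   # placeholder instance URL
API_TOKEN = "REPLACE_WITH_API_TOKEN"            # placeholder credential

# Approximation of "all IP addresses linked to botnet activity in the last six months"
tql = 'typeName = "Address" and tag like "%botnet%" and dateAdded > "NOW() - 180 DAYS"'

resp = requests.get(
    f"{TC_HOST}/api/v3/indicators",
    params={"tql": tql, "resultLimit": 100},
    headers={"Authorization": API_TOKEN},
    timeout=30,
)
resp.raise_for_status()
for indicator in resp.json().get("data", []):
    print(indicator.get("summary"), indicator.get("confidence"))
```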
From a dashboard perspective, we're typically focused on specific topics, and the dashboards I most often see customers build start around things like the threat actors most prevalent to the organization or those currently being tracked by it. This is one way you can build something like a threat actor profile and visualize new updates related to that profile. Here's a very simple example for a threat group called OilRig, also known as APT34. My dashboard splits the data across technical, tactical, and strategic levels, where we look at things like motivations and origins. We're also quite often asked to build dashboards around vulnerability intelligence, and there we partner with many industry players, including the likes of VulnCheck. ThreatConnect can take external vulnerability data and overlay it with your internal scan data, helping answer which vulnerabilities you really need to worry about based on things like exploitation data and attack data from the other premium sources of intelligence we aggregate into the platform. It's worth saying that the dashboards you build are dynamic in nature, so at any point an analyst can drill into the original source and further interrogate the dataset. For this vulnerability that we've ingested from VulnCheck, you'll notice we have all of the CVSS drivers, the description of the vulnerability, and then information such as whether it is linked to named ransomware activity and what types of products are affected. All of these are filterable elements that can be searched on and, as I said, correlated with your internal data. So we can help guide the vulnerability team to prioritize which CVEs are most impactful to the organization and should be patched first. We've talked quite a lot about getting data into the platform, and we've mentioned that you can build very granular filters and search through that intel. What about the analysis stage? Assuming we have the data inside the tool, how can we actually perform analysis activities in a consistent manner? Here I have a view of IOCs that were observed in my environment, and I might want to build an analysis around this. This specific dashboard is built on telemetry insights shared back to ThreatConnect, an important concept to be aware of, because not only do we disseminate information to controls, we also have controls report back to us when certain observations are seen in the environment. Here I have the breakdown of all of my observed indicators, and we can pivot on those observed indicators to look for second-level relationships across all of the integrated feeds. So I can very quickly ask: is the indicator I saw in my EDR linked to a known intrusion set or to a known malware family? If we open up an indicator in the platform, you can deep dive into the contextual aspects we aggregate on the indicator itself. I can see straight away that this file hash is known to multiple feed providers, and that's one aspect many CTI teams struggle with: multiple contextual points, multiple scores. How do we get one view of this information? In ThreatConnect, we deduplicate at the score level, so inside the tool you have a single master fidelity score that can be used for your downstream filtering. Most of our customers use this score, which we call ThreatAssess. ThreatAssess is computed from a range of inputs: the information we get from all the different feed providers; telemetry from your own environment, such as whether you've told us an indicator was a false positive or whether it led to a legitimate observation; and finally, global telemetry that exists within ThreatConnect's analytics dataset.
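As a purely conceptual sketch, and not the actual ThreatAssess calculation, which ThreatConnect computes for you, the idea of folding feed scores, local telemetry, and global telemetry into one master fidelity score might look something like this:

```python
# Conceptual sketch of combining several inputs into one master fidelity score.
# This is NOT the ThreatAssess algorithm; weights and fields are illustrative.
def master_score(feed_scores: list[int],
                 local_false_positive: bool,
                 local_true_positive: bool,
                 global_prevalence: float) -> int:
    """Return a 0-100 fidelity score from feed scores and telemetry signals."""
    base = sum(feed_scores) / len(feed_scores) if feed_scores else 0
    if local_true_positive:          # confirmed in your own environment
        base += 20
    if local_false_positive:         # analyst marked it as a false positive
        base -= 40
    base += 15 * global_prevalence   # 0.0-1.0 signal from global telemetry
    return max(0, min(100, round(base)))

# Example: two feeds rate the hash 60 and 85, it was confirmed locally,
# and global telemetry sees it moderately often.
print(master_score([60, 85], local_false_positive=False,
                   local_true_positive=True, global_prevalence=0.4))
```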
That analytics dataset is referred to as CAL. It's an acronym we use to describe our underlying analytics capabilities, and one way it's useful is that it exposes this global telemetry. CAL's data, alongside your local telemetry, feeds into the score calculation, so you can trust that ThreatConnect is the single source of truth on the fidelity of an IOC. You can also build deprecation logic where it makes sense for your teams. We have many customers who say that for IP addresses and host names they want to aggressively deprecate their confidence in those indicators, because they're very transient in nature, and you can set up that kind of deprecation logic in the platform. So our customers have confidence that ThreatConnect is scoring indicators and will manage that process for them. But part of the deduplication is also presenting contextual information, and you may have noticed on the right-hand side a section called tags and crossovers. This shows you all of the data points we get from the different feeds. In this instance, Recorded Future is providing TTP data linked to information observed when the file was executed, whereas CrowdStrike links it to a specific threat type and malware family for me. Two different feeds providing their own unique context, now centralized and visualized for the analyst. I'd be remiss not to mention that we recognize not every data point or enrichment you require comes from a third-party feed. You might have subscriptions to the likes of VirusTotal or Shodan, and these can easily be plugged into ThreatConnect and queried from inside the platform. So if you wanted to find indicators related to this file hash, you could do a lookup in VirusTotal, import those indicators, and even run a threat hunt around the new IOCs you've discovered. Now let's talk about the operational side: how do we do something useful with indicators? This is predominantly driven through the automation capabilities inside our platform, including playbooks, a fundamental component that differentiates ThreatConnect as a solution provider. Playbooks enable me to simply integrate IOCs down into my controls, and they can be initiated in a few different ways. In this instance, I've just executed a very simple playbook to block this hash inside my Microsoft Defender instance. But most of the time, customers aren't relying on these user-action-triggered playbooks; they have predefined schedules or logic playbooks that run in the background. So you could say: every day, grab IOCs with this type of context and push them downstream into my controls. No human interaction is required in that loop. Now, playbooks themselves are built on very simple app logic. For those of you with more of an automation mindset, or perhaps a security engineering background, there's no need to be a Python expert to build ThreatConnect playbooks. We have a library of integration apps available for different technology types, and you can simply embed and use those for whatever purpose makes sense inside the tool.
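For readers who do want to see the shape of such a scheduled flow in code, here is a hedged sketch of the pattern a dissemination playbook automates: pull high-confidence IOCs matching a filter and push them to a control's blocklist. The endpoints, headers, and field names are hypothetical placeholders; inside ThreatConnect this is assembled from integration apps rather than hand-written.

```python
# Sketch of the pattern a scheduled dissemination playbook automates:
# query high-confidence IOCs, then push them to a downstream control.
# The URLs and payload fields below are hypothetical placeholders.
import requests

TIP_URL = "https://tip.example.internal/api/indicators"           # placeholder
EDR_BLOCKLIST_URL = "https://edr.example.internal/api/blocklist"  # placeholder

def run_daily_dissemination(min_confidence: int = 80) -> None:
    # 1. Pull IOCs that meet the fidelity bar.
    iocs = requests.get(
        TIP_URL,
        params={"type": "File", "min_confidence": min_confidence},
        timeout=30,
    ).json()

    # 2. Push each hash to the control's blocklist.
    for ioc in iocs:
        requests.post(
            EDR_BLOCKLIST_URL,
            json={"sha256": ioc["value"], "reason": "TIP dissemination"},
            timeout=30,
        )

if __name__ == "__main__":
    run_daily_dissemination()   # in production this would run on a schedule
```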
Now, sometimes we actually have native integrations for particular solution providers, meaning integrations that sit within their environment. One of the most popular SIEMs our customers use is Splunk. Within Splunk, you can pull data from ThreatConnect into the local indexes using, again, that TQL language. You can define very granular queries for the types of IOCs you want to bring into your Splunk instance, and those IOCs are then compared against your event data via specific data model searches. So now we're taking all of that highly filtered intelligence, exposing it locally in the SIEM, and driving up the quality of the detections we see in the SIEM platform. And it's very easy for an analyst, through our native app, to interrogate this and look more deeply at the matched information in the tool itself. The final aspect I wanted to cover for the operational use cases for IOCs is the Polarity piece, the federated search module I highlighted earlier. Polarity gives analysts the ability to expose all of the intelligence from the TIP to SOC and IR stakeholders natively at their point of investigation. Whatever pane of glass they're living in, they can consume enrichment data from the TIP in that pane of glass. Polarity itself is delivered through a simple desktop client, so if you were to install Polarity, you would see a desktop client like the one on the screen on your local desktop, and, as I said, it plugs directly into ThreatConnect's TIP. So here's a real-world scenario. Let's imagine the SOC or IR team are in an internal Slack channel, and someone has raised a question about an IOC in that channel. Theoretically, I could copy that IOC, go into my TIP, go to Shodan, go to VirusTotal, and search it individually in each of those platforms. But instead, through a very simple keyboard shortcut, we tell the desktop client that we want more context on that indicator. It will query ThreatConnect's TIP as well as any other integration you want to plug into Polarity, so you can plug things like your enrichment tools into the solution, and it will query all of them simultaneously and present the data back in a single unified view. We're now saving the operational teams a huge amount of time: they can interrogate the results from ThreatConnect's TIP in one place, and that includes the CAL analytics dataset and the feed providers you've ingested into the TIP in the first place. I can see this file hash is known to three different feed providers I'm subscribed to, I can see a summary of the contextual tags and the score, and if I really wanted to, I could interrogate some of the association data inside the TIP as well. Maybe I want to know whether any reports have referenced this indicator, and yes, in this instance we can see the file hash was found in a ransomware report linked to the group Black Basta. As I mentioned, it's very easy to plug other enrichment tools directly into Polarity, so if you wanted to query, say, VirusTotal and look at its behavioral data, that result set is presented in the same view as my ThreatConnect TIP data, all centralized in one place for the analyst.
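The federated-search idea itself is straightforward to picture in code. The sketch below fans a single indicator out to several lookup functions concurrently and merges the answers into one view; it is a conceptual illustration of the pattern, not Polarity's implementation, and the source names and lookup bodies are stand-ins.

```python
# Conceptual sketch of a federated lookup: query several sources for the same
# indicator concurrently and merge the answers. Not Polarity's implementation;
# the source names and lookup bodies below are stand-ins.
from concurrent.futures import ThreadPoolExecutor

def lookup_tip(ioc: str) -> dict:
    return {"source": "TIP", "ioc": ioc, "score": 78, "tags": ["ransomware"]}

def lookup_virustotal(ioc: str) -> dict:
    return {"source": "VirusTotal", "ioc": ioc, "detections": 43}

def lookup_shodan(ioc: str) -> dict:
    return {"source": "Shodan", "ioc": ioc, "open_ports": [443, 8080]}

def federated_search(ioc: str) -> list[dict]:
    sources = (lookup_tip, lookup_virustotal, lookup_shodan)
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        # All lookups run concurrently; results come back as one unified list.
        return list(pool.map(lambda fn: fn(ioc), sources))

for result in federated_search("placeholder-file-hash"):
    print(result)
```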
Ultimately, we've focused intensely on helping people summarize all of this context. Through some of our native AI integrations, you can even generate AI summaries from the data results you found most impactful. So I might say that, in this instance, I really like the data I got back from VirusTotal, and I'm now going to generate a nice AI summary that could be copied into that Slack channel as a response to the question posed by one of my SOC analysts. So we've really thought about this end-to-end flow. Polarity is a tool that was designed to enable the SOC and the IR team, and one of its core use cases is exposing threat intelligence data. Polarity also serves a role in feedback, which is an area a lot of CTI teams struggle with: how do we get feedback from investigations taking place in our own environment? Here's another real-world use case where Polarity can be useful for the overall security function. I'm looking at a phishing email that was reported to my team, and that phishing email includes a QR code. The first thing I need to work out is how to scan that QR code and find out what URLs or domains it points to. Polarity has an embedded OCR module, so you'll notice that focus mode gives me the ability to select the QR code. Polarity recognizes the pixels in that QR code, translates them into searchable elements, and presents those back to the analyst team. Once that conversion is done, we can import those indicators into ThreatConnect's TIP. So it's now a much better and easier process to get feedback to the CTI team from the IR function during their investigation cycle, and being able to do it natively through an import in the client really improves the responsiveness we get from the IR team after the investigation has taken place. Do note that you can run OCR over things like web pages as well, Twitter feeds, Reddit forums; wherever you want to import data from, whatever pane of glass it is, Polarity will serve a use case for it. So we took a slight tangent there into Polarity, and that was because we were thinking about how to operationalize the intelligence we hold inside the TIP, all driven from an initial indicator we were looking at on our dashboard. What about analyzing more strategic, or rather tactical, outputs? If I come back to my central observation dashboard, I can see a range of different reports linked to this particular set of observed indicators, and I can open any of those reports and interrogate that data in a more meaningful way should I wish to. In this instance, I've opened a report from a specific feed provider called the Automated Threat Library. This is a feed we provide our customers out of the box, and it's essentially an aggregation of different RSS feeds. What we do is preprocess that data for the consumer: this includes using LLM models to generate an AI summary, and using NLP, natural language processing, to help understand whether a piece of intel is relevant to certain sectors and whether there are MITRE references in the text from a behavioral perspective. All of the tags you see here, both the general tags and the ATT&CK tags, are driven through a trained NLP model running over the blog content, and I can always go back to the full article if I need to.
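As a deliberately crude stand-in for that trained NLP model, the sketch below tags report text with any explicit MITRE ATT&CK technique IDs it mentions plus a few sector keywords. A real model infers techniques and sectors from context rather than pattern-matching, so this only illustrates the kind of output being generated.

```python
# Deliberately crude stand-in for the trained NLP tagging model: pull explicit
# MITRE ATT&CK technique IDs and a few sector keywords out of report text.
import re

SECTOR_KEYWORDS = {
    "financial": ["bank", "financial services", "payment"],
    "healthcare": ["hospital", "healthcare", "patient"],
    "energy": ["energy", "utility", "power grid"],
}

def tag_report(text: str) -> dict:
    lowered = text.lower()
    techniques = sorted(set(re.findall(r"\bT\d{4}(?:\.\d{3})?\b", text)))
    sectors = [s for s, words in SECTOR_KEYWORDS.items()
               if any(w in lowered for w in words)]
    return {"attack_techniques": techniques, "sectors": sectors}

sample = ("The actor delivered a spearphishing attachment (T1566.001) against "
          "several banks and later used T1059 for execution.")
print(tag_report(sample))
# {'attack_techniques': ['T1059', 'T1566.001'], 'sectors': ['financial']}
```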
Now, there are a few different analysis capabilities available to an analyst. I might want to interrogate the TTP data I'm seeing on this blog, or I might want to know more about these indicators and perhaps execute some type of hunting investigation around them. ThreatConnect, as I mentioned at the beginning of the session, has world-class analysis capabilities for the team. This includes our Threat Graph, a simple data visualization tool that shows the links between all of your known intel, and it does so in a deduplicated way. Why do I say that? Because you will only ever see one node for each indicator or group on the screen, but we can tell you whether those nodes are known to multiple providers. Here, this 185 address is not only known to the Automated Threat Library, it's also seen across the majority of the premium feed providers you're subscribed to. And if you wish, you can go to the details page we looked at earlier directly from inside the graph. Most of our customers will want to interrogate other datasets and pivot from here. Let's imagine I want to do some type of threat hunting activity. Before I do that, I want to enrich my understanding of related indicators. For this 185 address, I might pivot across some of my enrichment capabilities, the first of which is the CAL analytics dataset I mentioned earlier. CAL provides a rich source of resolution data over time that can be queried directly in the graph. Now I have a related domain for this IP, another searchable element I can use. If I wanted to take this a step further, I could also run a VirusTotal or Shodan lookup through the premium enrichment tools you have access to. The graph can be rearranged at any point using the arrangement options at the top left, depending on how you prefer to visualize data. The one aspect of the graph that really stands out to me is its operational ability. If you select a node on the graph, you may have noticed the run playbook section, and that's because we enable customers to run threat hunts directly on IOCs they're viewing inside the graph. Those threat hunts can use native query languages, such as SPL in Splunk or KQL in Sentinel, and render the results back for the team. So now I've searched on that 185 address; if I go to the details page, you'll notice that, based on the hunt I've executed, we may see new attributes populated with specific information from the SIEM. Here we have a number of alerts in Microsoft Sentinel that reference that indicator, but we can see those alerts weren't actually generating incidents historically. So maybe I need to go back and reassess my understanding based on the new hunting context we've gained for that indicator. Graphs can be saved and reused, and, as we'll see later on, you can embed graphs within ThreatConnect's reporting module.
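As a small illustration of what such a graph-driven hunt amounts to under the hood, the sketch below builds a KQL query for Microsoft Sentinel from an indicator. Inside ThreatConnect the playbook integration would construct and submit this for you; the table and column names here (CommonSecurityLog, SourceIP, DestinationIP) are common Sentinel defaults that may differ in your workspace, and the IP is a documentation-range placeholder.

```python
# Sketch: building a retrospective KQL hunt for an IOC selected in the graph.
# A playbook would build and submit this; table and column names are common
# Sentinel defaults and may differ in your workspace.
def build_ip_hunt_kql(ip: str, lookback_days: int = 30) -> str:
    return (
        "CommonSecurityLog\n"
        f"| where TimeGenerated > ago({lookback_days}d)\n"
        f'| where SourceIP == "{ip}" or DestinationIP == "{ip}"\n'
        "| summarize hits = count() by DeviceVendor, bin(TimeGenerated, 1d)"
    )

print(build_ip_hunt_kql("203.0.113.7"))   # placeholder IP, not the demo's
```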
What about the MITRE side? Again, we recognize that big consumers on the CTI side include the incident response team and people like the detection engineering team. So being able to visualize the breakdown of techniques on a relevant piece of intel is important, as is being able to compare that against the security coverage you believe you have today. You can do that type of visualization inside ThreatConnect's ATT&CK Visualizer capability. This tool is also extremely useful for campaign analysis. We have many customers that ingest internal incident data, and once that incident data is in ThreatConnect, you can filter on those incidents and add them into an overlay inside MITRE ATT&CK. So I can select multiple of these incident types and overlay them to determine technique prevalence. This is the type of output that really benefits detection engineering teams: for any particular technique, are we seeing prevalence across our incidents? I can go, for example, to my SIEM content stakeholders and ask why we're seeing so much spearphishing attachment activity and what logic we have in place in the SIEM today to detect it. I can also go to my architecture team and ask whether our inbound gateway is configured appropriately, because, again, it's a technique that seems to persist and be successful every time we see incidents inside our organization. Now, although we have extensive analysis tools, these tools are only really useful if you can tie them back to the needs of the business, and when we talk about the needs of the business, we're really thinking about intelligence requirements. Before you even start a CTI program, you need to interview all of these different groups (vulnerability, IR, risk) and understand what's important to them and what they need from you. ThreatConnect embeds intelligence requirements as a capability in the platform, and you can build very granular dashboards that report data aligned to those intel requirements. This includes being able to answer which of your feed providers deliver the most relevant information for a particular requirement. In order to get to this type of view, you do need to define those requirements in the platform, and we do this through a very simple keyword search capability. Here I have a Russian threat group intel requirement I created. You'll notice on the right-hand side that I can classify it: I can define the type of intel requirement and where it originated from within my organization. The requirements, though, are driven predominantly through a keyword search feature, so when you come into the requirement creation step, you embed keywords into the solution. We'll also give you suggestions here: for any threat actor or malware family, we'll highlight the known aliases used to describe them. This removes the guessing game from your searching and improves the quality of the results that come back for the intel requirement. The data is then presented under this results section. So, again, we're saving CTI teams huge amounts of time and effort here; they can come into the platform every day, look up the intel requirements they care about, and see all of the new data results coming back for those requirements.
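To illustrate the alias idea, here is a conceptual sketch of keyword matching that expands a requirement's terms with known actor aliases before scanning incoming report text; the alias table is a tiny hand-picked example, whereas ThreatConnect suggests these for you in the requirement editor.

```python
# Conceptual sketch: expand an intel requirement's keywords with known aliases
# before matching incoming report text. The alias table is a tiny hand-picked
# example; ThreatConnect suggests aliases for you.
ALIASES = {
    "OilRig": ["APT34", "Helix Kitten"],
    "Black Basta": ["BlackBasta"],
}

def expand_keywords(keywords: list[str]) -> set[str]:
    expanded = set(keywords)
    for kw in keywords:
        expanded.update(ALIASES.get(kw, []))
    return {k.lower() for k in expanded}

def matches_requirement(report_text: str, keywords: list[str]) -> bool:
    text = report_text.lower()
    return any(term in text for term in expand_keywords(keywords))

report = "New APT34 spearphishing wave observed against telecom providers."
print(matches_requirement(report, ["OilRig"]))   # True, via the APT34 alias
```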
Those results can be previewed directly from the results panel, and you also have the ability to manage the review process: you can mark results as archived, mark them as false positives, and, for the results that are really impactful, associate them. Anything we think is really important, we associate, because we know we need to do much more thorough analysis around that intelligence. Now, one of the final features I wanted to talk about today is ThreatConnect's case management module, because now that we have intel requirements and data that's pertinent to them, I might want to take some of these reports and investigate them in a more structured, consistent manner. The workflow case management feature is a perfect aid for that type of use case. Within workflow, you can define granular templates with predefined questions and predefined automations that help you answer simple questions like: what is the impact of this CTI, this new report, on my organization? In this template, you'll notice I have some simple questions to be answered. For the report: does it reference any vulnerabilities, and if so, do they affect our critical assets? Does it reference any TTPs, and what's our security coverage for those? And then, at an IOC level, are there any observations inside our environment? With this template, I'm trying to very quickly help an analyst establish the impact and then appropriately escalate their response or analysis of this piece of intel. You'll notice these templates contain mechanical symbols, which simply represent the fact that you can use automation where it makes sense. The playbook layer embedded into ThreatConnect can also drive automation in your case management module, so for the IOC step, where I want to search those indicators in my environment, I've assigned a simple threat hunting playbook for Microsoft Sentinel. In practice, the case management module plays out into the view you see on the screen here. In this instance, I have a piece of intel, a report, associated to the case. That report was one of the reports linked to my intel requirements, so I know it's important. We extracted all the context, the indicators and the TTPs, and linked them to the case. My automation helped answer some of the impact assessment questions: for the indicators, we did the retrospective search in Sentinel; for the TTPs, we looked at detection content and answered whether we have appropriate coverage today. As an analyst, you can now come in and do your own review based on these results. You can provide an impact score, and you can determine whether this needs to be escalated to a particular consumer audience inside your organization.
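As a rough picture of what such a template captures, here is a conceptual sketch of a case template whose steps are either manual questions or hooks for an automation; the step names and the playbook references are illustrative, not ThreatConnect's workflow format.

```python
# Conceptual sketch of a case template mixing manual questions with automated
# steps. Step names and playbook references are illustrative only, not
# ThreatConnect's actual workflow format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    question: str
    automated: bool = False
    playbook: Optional[str] = None   # automation to run, if any

IMPACT_ASSESSMENT_TEMPLATE = [
    Step("Does the report reference vulnerabilities affecting critical assets?"),
    Step("Do we have detection coverage for the referenced TTPs?",
         automated=True, playbook="check-detection-coverage"),
    Step("Were any of the extracted IOCs observed in our environment?",
         automated=True, playbook="sentinel-retrospective-hunt"),
    Step("Assign an impact score and decide whether to escalate."),
]

for step in IMPACT_ASSESSMENT_TEMPLATE:
    marker = "[auto]" if step.automated else "[manual]"
    print(marker, step.question)
```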
Anything seen within ThreatConnect's case management can also be embedded inside our reporting module, and that's really the final topic for today's conversation. We recognize that the outputs you need from ThreatConnect differ across the stages of CTI, and strategic outputs are one of those that could be important. So I can use ThreatConnect's report templates to generate outputs for the business based on content within my case management module. I'm now actively embedding content from the case, the review I've just done, into a reporting template, which can then be sent out and distributed to different stakeholder groups. Do note that the reporting module can be used standalone as well, so although it fits nicely into the case management story, we can also build reports aligned to specific topics of interest. Here's an example of a finished report about the Cl0p malware family. There is a section on the right-hand side to embed dynamic content into the report, including images and saved investigation graphs, along with the ability to build dashboard charts and tables within the report itself. Here I have a very simple table, built with our query language, identifying the indicators known to be associated with Cl0p. Once you've finished the report and built out the context, you can share it directly from the platform: we can email it, or we can export it as a PDF and perhaps upload it into a Slack or Teams channel. Ultimately, it can be distributed directly to the audience that requires it, in whatever mechanism you require. So that brings me to the end of today's webinar. As I said at the beginning, we wanted to focus on the broader topic of cyber resilience and on how CTI plays an important part in that story: by contextualizing intelligence quickly, tying it back to business needs, and then operationalizing it in whatever mechanism or format the business requires. Thank you very much for listening and attending today's webinar.

Arpine Babloyan:
Thank you so much, Matt. This was amazing. We did have a question in the chat about the recording of the demo so that attendees can share it with their teams. We are going to be sending it out, so definitely stay tuned for that in the next couple of days. If you have any questions, whether you're on right now or watching the recording, you can always reach out to us through our website at threatconnect.com. Thank you for the wonderful feedback, and we look forward to seeing you at the next demo.

Matthew Brash:
Thank you very much.

Arpine Babloyan:
Thank you, Matt.

 
When it comes to a company’s cybersecurity strategy, one cannot overstate the importance of a robust threat intelligence platform. As showcased during a recent demo by ThreatConnect’s sales engineer Matt Brash, the innovative ThreatConnect platform is a game-changer in this space. Rooted in the Cyber Threat Intelligence (CTI) lifecycle, ThreatConnect promises to elevate your organization’s cyber resilience and navigate the complex web of data with unmatched speed and relevance.

ThreatConnect has a unique approach to data collection and contextualization. Instead of merely aggregating, it makes intelligence relevant to business needs. It uses automation to streamline CTI data enrichment and processing, seamlessly integrating advanced analysis capabilities without requiring coding expertise; simple app logic is all you need to execute operational tasks.

Internal intelligence is vital, but input from multiple providers is equally important. ThreatConnect incorporates feed aggregation from various sources, enhancing its depth and breadth of data. Moreover, it utilizes AI-assisted features for efficient data filtering, analysis, and downstream processes that significantly ease the burden on CTI teams.

Taking its uniqueness a step further, ThreatConnect embeds native integrations within your solution providers' environments; for instance, it integrates directly within Splunk, among others. In addition, ThreatConnect houses a workflow case management module to facilitate structured and consistent investigations using automation.

When it comes to delivering insights, ThreatConnect doesn't disappoint. Its result-oriented analysis offers easy-to-understand visualizations that tie directly back to business needs. It provides an array of analytical tools, including the intuitive Threat Graph for pivoting across related intelligence and an ATT&CK visualizer that is invaluable for campaign analysis. Indicators of Compromise (IoCs) can be swiftly scored, enriched, and hunted, thanks to aggregated datasets and query-driven results that expose potential security threats.

Keeping in tune with the user’s needs, ThreatConnect provides a feature to define intelligence requirements directly into the platform. It generates relevant data results, saving invaluable investigation time for CTI teams. Furthermore, these reports can be shared directly from ThreatConnect to various stakeholders, ensuring everyone stays informed.

As the ThreatConnect demo wound to a close, the audience was left with a thorough understanding of the dynamic capabilities and potential of this threat intelligence platform. Attendees were thanked for their participation and can look forward to a recording of the demo in the next few days for further review and sharing with their teams.

In the realm of threat intelligence platforms, ThreatConnect is undeniably a leader. It optimizes the CTI process through automation and puts comprehensive, contextualized intelligence at the fingertips of the users, fulfilling every business’s cyber resilience needs.