AI and Threat Intelligence: Unlocking Potential, Navigating Risks
In this recorded panel, our experts cut through the AI hype and reveal its real impact on cyber threat intelligence. AI is often marketed as a magic bullet, but many solutions fail to deliver meaningful value. Security teams face mounting pressure to process vast amounts of threat data, yet AI’s effectiveness depends on how well it’s applied.
Watch Now:
What You’ll Learn
- AI for Cyber Threat Intel – What’s Hype vs. Reality? – Separating AI marketing myths from real-world cybersecurity applications.
- Speed & Scale – How AI Powers Faster, More Accurate Threat Analysis – Using AI to automate classification, correlation, and enrichment of threat data.
- AI & Human Expertise – Striking the Right Balance – Why AI should enhance, not replace, human decision-making in security operations.
- The Future of AI in Cybersecurity – Preparing for AI-driven cyberattacks and securing AI model supply chains.
- What’s Next for AI & Cybersecurity? – How AI-driven agents and quantum computing will shape the future of threat intelligence.
Why Watch?
Discover how AI is truly impacting threat intelligence. Our experts share practical insights on optimizing decision-making, improving security workflows, and enhancing human expertise in cyber threat operations.
Transcript
Dan:
Alright. Hello, everybody. Good morning, good afternoon, good evening, depending on your time zone. Welcome to this ThreatConnect roundtable called Beyond the Buzzword, focused on how we’re actually leveraging artificial intelligence in threat intelligence. Uh, now, of course, I’m sure you’ve seen a lot of vendors putting AI features out there, which is great. We love to see it. But we really wanna hone in today on some of the actual use cases in terms of how we at ThreatConnect are actually thinking through leveraging AI in terms of, you know, making your lives better. Uh, just a quick bit of housekeeping. Uh, if you do have any questions or want to chime in at any point during this roundtable, happy to make it interactive. Please use the chat feature to the right of your screen. You know, if you want the screen to be bigger, there is a full screen option, uh, but, uh, let’s go ahead and dive in. So by way of introduction, my name is Dan Cole. I’m the vice president of product marketing here at ThreatConnect. Uh, I’ve been with the company almost ten years. And before we introduce our panelists, uh, I wanted to take a brief moment just to talk about why I feel I am qualified to moderate this roundtable. So two of my all-time favorite movies, and I know it’s a cliche in these things to talk about Terminator when you’re talking about AI, but these were genuinely some of my absolute favorite movies. Uh, I saw them when I was a kid, and if you’ve seen them, you know, most people, their takeaway from these films is, you know, it’s a chilling reminder of the dangers of AI and the dangers of technology, and will AI replace us, destroy us? Uh, solid takeaway. My takeaway was that it’d be super awesome to have my own indestructible robot to, you know, do my dishes, be my best friend. I thought that was the absolute coolest thing. And it wasn’t just about, hey, it’d be cool to have a neat robot. It was really about the possibilities of artificial intelligence, and I got super into this. So this is the arm of a Terminator, you know, the killer robot from the Terminator movies. This is my actual eighth grade science project. So I went deep into this AI interest, this AI potential. I also got the same haircut that John Connor had in Terminator 2, which was a disaster. I will not be showing an image of that. But even if we look at the movies, you know, as much as they had this theme of, you know, the dangers of AI run amok, there were lots of instances where characters did see the potential of how AI, if properly applied, could be a net good. So the person who actually invented the artificial intelligence used in Terminators, his vision was to have a pilot that never made a mistake, that never got hungover, that was never tired. Imagine how that would improve air travel safety. Even, you know, Sarah Connor, who was out there to, like, save humanity from AI, saw the value of having an artificially intelligent robot companion that could really help and really protect her son. And I think as we go through this webinar, we’re gonna talk a lot about, yes, some of the pitfalls as well as some of the potential of artificial intelligence. So I did not end up going into AI as a career despite this promising start. But full circle, now I actually get to bring in some actual experts in the field of AI to talk a bit about artificial intelligence. Uh, and so we’re gonna go through three topics once I introduce them.
One is, you know, what is actually hype around AI versus what is reality. Like, there are a lot of big claims out there. What are we actually seeing in the field? Two is really around use cases. So how are we actually seeing analysts leveraging AI, uh, both from vendors and also possibly, you know, independent tools like ChatGPT, to actually produce better threat intelligence? Uh, and then finally, balance. You know, I’m a big believer that, you know, AI works best when it’s paired with a human companion. Uh, and so the question becomes, you know, how do we actually strike that balance in terms of AI and people? So without further ado, I’d like to invite our panelists to come on stage, uh, and introduce themselves. And, uh, we’ll just kinda pass the mic. So Joe Miller, if you’d like to introduce yourself first.
Joe:
Yeah. Thank you. Uh, hey, everybody. My name is Joe Miller. I’m the director of product here for Polarity. Uh, been with Polarity for about nine years. We recently came on board ThreatConnect as part of a merger over the summer, and we’re super excited to be here and see what we can do together with the different products. Uh, part of the reason I’m here is I spent a lot of time as a data scientist in the intelligence community working on different platforms and data to help set up for machine learning and things like that. You know, AI covers a lot of things nowadays, and machine learning is part of that. So excited to be here and excited to talk about everything.
Dan:
Awesome. Excited to have you here. Uh, next, I’ll pass it on to, uh, E Sneed.
E:
Hey, everybody. I’m E Sneed, uh, or Sneedie if you’re dyslexic. Uh, my pronouns are they, them. So, um, I’ve been doing cybersecurity product management for about ten years, and I’ve worked on a lot of scaled systems and AI systems, uh, specifically around phishing detection, both for mobile and email, uh, along with a bunch of other different solutions. And now I focus on data and analytics at ThreatConnect, where I specifically work on enhancing, uh, threat data with AI, machine learning, and other enrichments as part of the Collective Analytics Layer. Um, really excited to be here and kind of talk about the features that we’re developing, how we’re going about it, and, um, get feedback from everybody else on what you think is important, and, um, just have a good conversation. Alright.
Dan:
Excellent. And last but not least, doctor John Snyder.
John:
Thanks, Dan. Uh, yeah. I’m John Snyder. Uh, so, um, originally, I got a doctorate in statistics, uh, in the twenty tens. Um, since then, I’ve worked in various different industries, uh, applying machine learning, AI, whatever folks have been calling it, uh, in any given year. Here at ThreatConnect, I’ve been here for about four years now. Uh, I manage the data science team we have here, primarily serving CAL, which is our big data, uh, player, um, so to speak. But I’ve been engaged with, uh, most areas of the business, um, you know, building models, uh, making the product better through AI.
Dan:
Excellent. So without further ado, let’s actually dive in and start asking some questions. So the first topic is around hype versus reality when it comes to AI in CTI. I’m sure everyone on this call has seen cybersecurity vendors selling AI as a magic bullet, and that’s happened even before the advent of LLMs. So what are some of the common misconceptions we see when it comes to things like AI-driven threat intelligence?
John:
I can take this. Um, yeah. So, I mean, I would say, you know, I’ve been doing this for a little bit now. And, you know, several years ago, uh, the letters weren’t AI; instead of AI, it was ML. Right? Everybody was doing ML. Everybody wanted to do ML. You know? And through the hype that we’ve seen in various different cycles, uh, over the years, you know, ultimately, uh, us folks who are building models and making products better and whatnot, we’re just doing stuff. Right? Um, but I would say, like, generally speaking, to cut through the hype, like, we have to think about what AI means. And to me, uh, and others, if you’d like, can speak for themselves, um, to me, AI is essentially anytime you’re building a system where you’re trying to model something stochastic, where you don’t have a deterministic, like, path from your input to your output. Right? Um, and so, essentially, I would say any problem where the outcome is uncertain, we can call AI. Right? And trying to find what is the best solution in those cases, uh, I would say. Um, as far as misconceptions go, I mean, like I said, uh, you know, the pitfall a lot of times is to call, say, simple if-else statements, uh, some AI magical, like, solution. Uh, and I think, like, when we do that, we run the risk of kind of, you know, cheapening the power of these tools and where they actually can augment, you know, our product experiences.
Dan:
Excellent. And one thing I heard in there was, you know, that notion of uncertain outcomes. And, you know, that can be tough in security, because one of the things that we’ve heard is that a lot of security teams are skeptical of AI because of black box decision making. So we get an outcome; we don’t know how we got there. And that can be hard because CTI is very much, I think, a show-your-work kind of discipline. So who can talk to me a little bit about how, here at ThreatConnect, we try to focus on AI transparency and explainability in our models?
E:
Um, I can take this. And I think there’s a couple of pieces that were kind of interesting in that little lead-up to what we’re doing. And I think one of the important things in this is you said, uh, security teams have skepticism. Right? And I think it’s important to talk about the difference between an individual using any given LLM or AI product, and how they’re gonna make choices of how that’s gonna augment their own skill sets, versus how you would build things for a team and for workflows. Right? And so I think we keep that in mind. So we’re trying to both provide that workspace for the analyst, but also create scalable workflows for folks to use and really scale their operations so they can focus on net new things. And so within that, uh, when we come to transparency, what we’re trying to do is think about, uh, how do you know you can trust this. Right? So an analyst using an LLM, they’re making choices on the fly. Does this look right based on my experience? Oh, this is something I’m not as good at. They can make those choices. Right? But when we talk about scaling things, you really need to be able to show your customers why we make these decisions. And so some of the big things I think are important, and we put this together to provide to our customers: why was the technology selected? Right? And that could be whether it’s a specific type of machine learning or a specific LLM model. What sort of performance does it have? How is it evaluated? Right? And how does it compare to other technologies? Were other things looked at? Um, is it explainable? Is the provenance of the data that you’re using explainable? Do you know where you got it? Uh, was it synthesized? Uh, why do you know you trust that data? Who does it belong to? Um, where is that data processed and stored? Right? And that’s very important, especially if you have, uh, compliance requirements, GDPR, different risk needs. Um, and then also, uh, in what ways is that data outputted and validated? Right? And that goes beyond just, like, hey, we know this is AI in the UI, to, like, hey, how do you really know this is working? How do you make sure this is working over time? And that goes into what is the strategy for maintaining these features. Right? And I think this is kind of going back to what John was saying. You know, uh, the difference we had, uh, pre-LLM boom is, you know, machine learning takes a lot of operational focus to maintain and keep going. So you saw entire products, entire detection products, whether it be phishing detection, whether it be network detection, things like that. That was an entire product. And now with, uh, how LLMs have come onto the scene and how AI has kind of burst, a lot more folks are into it. You have all of these much smaller little features that you need to maintain. Uh, but, also, if you want those to be trustworthy and useful, you have to have a plan on how to make sure that those outputs are going to be useful to the end users, how they understand the risk, and how you make that visible. And so we do a lot of work to, uh, be able to show that transparency and show that explainability and provide that to our customers, both within the context of, like, hey, we need this for compliance, but also as just a, hey, we wanna understand how this works. We wanna know what this is.
And then we take that feedback and we try to put that back into those features so that, uh, it’s clear for everybody. It’s documented, and they know what it’s doing. And really, anybody who’s making these features, working in AI, should be able to answer those questions. Right? Should be able to explain: this is why our feature works, this is how we know it’s trustworthy, and these are the things you need to know when you’re using it.
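To make that checklist concrete, here is a minimal sketch of how those transparency questions might be captured as a structured record per AI feature. The field names and example values are illustrative assumptions, not ThreatConnect’s actual documentation format.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative transparency record for one AI feature.

    A hypothetical structure, not an actual internal format.
    """
    feature_name: str         # which AI feature this documents
    technology: str           # why this model family was selected
    evaluation: str           # how performance was measured vs. alternatives
    data_provenance: str      # where the data came from and who owns it
    processing_location: str  # where data is processed/stored (GDPR, etc.)
    output_validation: str    # how outputs are checked over time
    maintenance_plan: str     # strategy for keeping the feature trustworthy

card = ModelCard(
    feature_name="example-technique-classifier",
    technology="supervised classifier; chosen over keyword rules after comparison",
    evaluation="held-out test set, precision/recall reported against two baselines",
    data_provenance="licensed and first-party labeled reports, no synthetic data",
    processing_location="region-pinned cloud environment",
    output_validation="sampled human review plus automated regression tests",
    maintenance_plan="scheduled retraining with drift monitoring",
)
```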
Joe:
Yeah. And I’ll add on to that, E. Like, one thing to consider when it comes to products and AI in general, especially as AI is moving very, very rapidly in the space, right?
Dan:
Yeah.
Joe:
The transparency around models, what the models are, what they’re doing, and the ability to tune is going to be huge in the future, especially when you have agentic AI starting to come into the space. And with these AI models doing stuff on their own, having the transparency to go in and understand what’s happening, be able to tune it, be able to adjust to threat actors that are trying to attack those models, right, and trying to influence those, having that ability and transparency there is gonna be key as well. It’s something that we strive for here as we start to develop more and more AI.
E:
Yeah. And I think, too, this is also the difference. You see this in a lot of places, and this may get into, like, other pieces that we’re gonna talk about. But, um, you know, going back to what John said, in some cases, folks are presenting AI as sort of a brand. Right? And, you know, I get it if you wanna say, hey, this is a lot of automation, we want people to be excited about it. But, you know, there is a difference between, like, a brand and a whole set of features under it, some of which are AI and some aren’t, and then, like, really being able to explain: this is AI, you need to know it’s AI, here’s how it works, and this is what you need to know about it. So you can really trust those outcomes and build those into your pipelines and workflows, uh, with trust.
Dan:
Awesome. So it sounds like, you know, one of the ways to cut through that hype is with sort of that level of thoughtfulness and that level of care, you know, as we get into how these things actually work. Um, and so that’s kind of us, you know, thinking as sort of a vendor. But, you know, when we think about, like, an actual CTI or security team, what are some of the actual biggest risks that they’re gonna face when they deploy AI? Like, you know, how can organizations avoid falling into that, uh, that overreliance trap when it comes to those AI-generated insights? So, like, you know, how do they avoid some of that internal hype, if you will?
E:
I think, and I’m not sure if anybody else has some other items to add to this, but I do think, um, it’s important to understand that the hype can be good. Right? So I think it’s important to understand that AI, or at least the way I like to think about it, is AI is there to help you automate kind of mundane tasks that are not using the best of your brainpower. Right? So that you don’t have to know everything, and you have the ability to trust that that’s happening. Um, and so it’s supposed to free up the analyst to really leverage their subject matter expertise so that they can identify new and novel things. And that’s important because AI is built upon past data. So it can help you come to conclusions about new, novel threats, risks, things like that, but it’s not gonna automatically find those until it is trained and set up to do that. Right? And so we are always gonna need analysts to be thinking and kind of looking at where that new horizon is and then leveraging those tools. So, um, I think a big risk is assuming that the AI can detect everything, assuming that the analysts, um, aren’t gonna know as much as the AI. Um, and then also, uh, just remembering that those AI models and those AI tools, um, they’re built on data, so they can find things really well that have happened before. But they are also based on biases. Right? And so they aren’t, again, gonna be able to find those novel things. But also, uh, if we’ve made an assumption about how something should work and the space changes, then, since it’s built on those biases, it’s gonna have a hard time finding those things. So you’re still gonna need to understand where your human expertise needs to be versus where AI is augmenting and enhancing those workflows and enhancing those analysts to do even more.
Joe:
Yeah. And I’ll add and kinda build a little bit on the transparency piece from the last question: some of the biggest pitfalls that are gonna happen when it comes to AI are gonna be centered around the lack of transparency. Right? Like, when you don’t know what’s happening with the model, when you don’t know when a threat actor has come in, has influenced the model, and has influenced the data that’s making the wrong decisions for those agents, right, when you don’t have that transparency, that’s gonna be a huge problem. So that, um, also being able to tune and adjust, because threat actors, it’s gonna be real. It’s already started. Right? We’ve already started to see in this space that threat actors are now attacking models, right? That thing happened with DeepSeek not too long ago, and people found ways around it to get to the database. Right? Like, this is gonna be a thing more and more, especially as more and more agentic AI starts to come on board. So making sure, and holding us, as people who develop products, accountable, right, that, again, that transparency is built in. There’s not a black box around it. It’s not just, this solves everything for me, I can turn it on and don’t have to worry about it ever again. That’s not gonna be the case in most realistic aspects. Right?
E:
I mean, and I think that’s a really good point. And I think it’s also important to realize where different, um, teams and companies are coming from. I think, you know, for larger enterprises, kind of more mature security teams, um, the folks on those teams know a lot of this. Right? They know this instinctively. This is not new to them. Um, but I do think that depending on where those companies and those teams are with maturity, with forming those teams, AI can seem like a silver bullet. It’s like, oh, it’ll just solve everything. Right? It’ll just do all this. And so that is hype that needs to be looked at critically when you’re building out your security needs and building out your compliance needs, because it probably can help a lot. But, again, it really just depends on how your organization is positioned, what its risks are, what its threats are, um, and how that works with, uh, your analysts and your security teams and how they are structured within your organization.
Dan:
Excellent. And I really like that notion of, you know, humans not making assumptions about the AI. And I think we’re gonna dive into that a little bit more, um, a bit later on. One last question before we move on to the next topic, which really focuses on kind of, uh, real world applications. And this is kind of like a geek-out kind of question. But earlier on, uh, one of you talked a bit about, you know, machine learning versus LLMs. So, you know, we kinda think about LLMs, you know, including sort of the evolution to agentic AI. We talk about LLMs being able to do everything for a business. So I’d like to kind of back up a little bit. Can someone talk a bit about what is the role of an LLM versus traditional machine learning when it comes to AI, uh, and how do we kind of approach that at ThreatConnect?
John:
Uh, yeah. I would say that, um, essentially, anytime you need to synthesize information, uh, that’s really where LLMs are very powerful. Right? Um, sometimes your problem is that you need to know, and we’ll talk about this a bit later, you have an input, like, a sentence or something, and you wanna know, like, which MITRE ATT&CK technique this sentence may be referring to. You have a fixed set of outputs there. That would be a more traditional, uh, machine learning problem. Versus, like, say you had a mountain of blogs you could potentially read that day, of all the things going on in the space, potentially in your industry, etcetera, and you need to know, individually, are the topics that are being discussed in these articles relevant to me and my organization? Right? That’s not a problem you can answer with a simple yes, no. That’s very hard. Right? And so, you know, uh, to answer that type of question, uh, one would establish some criteria for what matters to me. And then, through that information that we have, you can pull in the blog’s text and then extract some metadata that is relevant to, you know, uh, an organization, uh, that they can use to make that decision very quickly. Right? Uh, and to me, that’s where the power of these, uh, large language models comes in. You know, uh, synthesizing lots of text, summarizing, extracting metadata that would be very hard to extract otherwise, uh, that we could then use to make decisions with.
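A minimal sketch of the distinction John draws, assuming a scikit-learn-style classifier for the fixed-output case. The toy data is invented, and the `call_llm` placeholder stands in for whatever hosted model API you use; it is not a real function.

```python
# Fixed label set -> traditional ML; open-ended synthesis -> LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1) The output space is fixed (ATT&CK technique IDs), so a classic
#    supervised classifier fits the problem.
sentences = [
    "a phishing email delivered the malicious attachment",
    "credentials were dumped from LSASS memory",
]
labels = ["T1566", "T1003"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["the attacker sent a spear phishing message"]))

# 2) The output is open-ended (summarize, extract what matters to *my*
#    organization), so an LLM is the better tool. Sketched as a prompt only.
prompt = (
    "My collection criteria: financial sector, ransomware, my vendors.\n"
    "Summarize this blog post and extract the topics relevant to me:\n"
    "<blog text here>"
)
# summary = call_llm(prompt)  # placeholder for a hosted model API call
```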
Dan:
Excellent. Uh, and, you know, I think you really brought things full circle to that notion of cutting through the hype and the thoughtful approach that ThreatConnect takes. You know, I think a lot of companies have the problem of, you know, they’re gonna start with we want AI, and then they’re gonna look for a problem, they’re gonna look for a use case. Whereas, you know, we take a really hard look at what problems CTI teams face. So, you know, we look at something like intelligence collection. And then we ask, is intelligence collection something that can be made better with AI? If yes, we’re gonna pursue it. If no, we’re gonna move on to the next thing. Uh, and so with that, let’s actually switch to the next topic.
E:
I was gonna
Dan:
Oh, go ahead.
E:
I just wanna double down on that. I think that’s such a good call out, and I think everybody experiences this differently. Right? Folks are getting to a solution because they wanna use a technology before they know what problem they’re solving. Right? And I think that’s such an important call out because AI technology is really exciting, and it has so many options. We could do this. We could do that. And again, I think this applies both within product development, and I think it also applies within security teams that are looking to try to solve tough problems. Right? And so, um, just to double down on that, it is very easy to say, well, I have this, I could use it for these things, versus saying, like, I need to solve this problem. Like, an example of, uh, our beta TQL generator, right? It’s really hard for folks to learn and specialize in TQL for the platform, right? That is an extra skill set. That is great if you have it, but if you don’t, it is a real challenge to get over, to get to the data you need. So really pinpointing, like, what is gonna help analysts get there? What is gonna help them focus on the things that are important for them to specialize in? That is a really good point. And I think it’s so important as security teams think about, what am I trying to accomplish, and not getting lost in the, well, I could use this, or this technology would be cool, or whatnot. It still can be, um, but, again, really think about what you’re trying to solve. Definitely. Mhmm.
Dan:
Awesome. So speaking of what you’re trying to solve, one of the things that, you know, we see a lot of teams struggle with is the balance between efficiency and effectiveness. So, you know, the next question is, you know, how does AI improve the speed of analysis while also ensuring accuracy? So, like, maybe one thing that I think would be good to look at there is, you know, can you think of a real world example where AI helped reduce response time?
Joe:
Yeah. Yeah. I can take this one, folks. Um, definitely a really good question, because that’s what everybody wants. Right? They want AI to just solve all these problems quickly for me, and I don’t have to think about it. Uh, but that’s not really the case. Right? AI is there to help speed up in a lot of ways. And to answer your second question there, Dan, directly, right, like, a real world example would be how in Polarity, right, we have the ability to quickly summarize context and add that to a report, add that to a ticket. Right? It’s within seconds. Right? You can get a quick idea, a quick summary of that information, add it to a report, and bam. Right? You can move on with your day, close that ticket, whatever that piece you need to do is. Um, this isn’t pure, like, LLM AI, and John, don’t murder me. But when you look at the CAL data here, right, like, how CAL collects the data and comes up with the score is another huge, huge thing when it comes to talking about AI and how it’s gonna speed things up, because it gives you an idea of, does this matter? Does this not matter to me? And I can prioritize it, I can not prioritize it, and I can move on quickly. Right? And you’re gonna start to see that with the prioritization piece. And I know one of the things that we’ll hopefully be looking at here with IDEO and stuff in the future is, can the AI actually just prioritize this for me. Right? We know all the scores. We know the data. Right? Have it prioritized for me. Give me that thing. Hey, I need to look at this, I need to look at this, I need to look at this. Right? That tells me immediately, here’s the things that matter to me and how I can go start my day without having to drill through everything as much as possible. So AI is really gonna help there. And to answer, like, kinda the second of the three questions, right, it’s gonna help speed up but not be the solution, because humans still need to be there. Right? Humans still need to validate that prioritization, because we can look at a score from six different systems from threat intelligence, right, and say, hey, yeah, this is malicious, this is really bad, or we think it is. But in your environment, we might not know that you guys have already taken action on this information. Right? You guys have already blocked this, it’s already part of your firewall, whatever that is. Right? So the human still needs to be there to evaluate and understand that information. There’s a lot of other ways AI will help speed up in the future. Right? Like, agents are gonna start coming into play a little bit more, where some of this stuff is gonna be done automagically. I’ll call it automagically because, again, it gets to that, oh, this is a black box thing, which is what we don’t wanna get to. Right? But you’re gonna have, like, vulnerabilities, and being able to understand the vulnerability. Does this affect me? Does this not affect me in my organization? Right? AI is gonna be able to start helping to answer those problems for us to speed up our everyday lives.
Dan:
Awesome. Great answer. You know, I love that notion of the human still needs to be there. Like, a customer said to me a few months ago that having an LLM is like having the world’s fastest intern, which is great. But if it were a human intern, I would still check their work. Uh, and having one that’s super fast, so much the better. You mentioned the notion of prioritizing. Uh, and certainly one of the key ways that we prioritize a lot of things is with MITRE ATT&CK. You know, are these TTPs extra relevant to our assets, our organization? Now, of course, to actually prioritize things by ATT&CK, you need to first classify things by ATT&CK. So does this particular intel report actually relate to spear phishing, for example, which I think is, like, T1192? Uh, someone check that. Uh, so, um, you know, one of the things that we do with ThreatConnect is our ATT&CK classification identifies four times more ATT&CK techniques than traditional methods. So we can take, you know, plain text and pull out, hey, this text is talking about T1192, and then we can actually use that to prioritize. So how does that classification work, and why is that an advantage for us?
John:
So our AI-powered, uh, MITRE ATT&CK labeler, I’ll call it, um, essentially not only tells you which, um, you know, techniques are mentioned in an article, but also specifically where, uh, they are in the article. Essentially, doing sentence-level classification, um, of, you know, the entire document, and of which portions of it are relating to different techniques. Um, it’s more powerful than traditional ways. You know? I mean, traditionally, one may look for a literal technique code being mentioned, which essentially is going to rely on whoever constructed that article to have done that work and say, like, what technique is this. Right? Um, and so, you know, that string, um, that’s, uh, you know, T-whatever, or specifically the name of the technique, may not be present. Right? Someone in an article may be referring to, hey, we observed, uh, somebody, uh, you know, throwing spaghetti sauce at our mainframe. Right? Like, what technique is that? Right? But it doesn’t mention a code. It doesn’t mention a name.
Dan:
Right?
John:
And so we need to take that portion of that text, and we need to figure out what the technique code is. Um, and so that’s why the machine learning approach for this particular solution, uh, is the one that we employ, uh, because there’s a fixed set of outputs, uh, for the MITRE ATT&CK, uh, you know, technique space. Um, and so we use an approach for that. Additionally, um, the part of this that is powerful is that, you know, anyone can throw together a classifier. Right? Like, you can throw one together and say which of the, you know, whatever, 600 or so techniques, uh, the sentence is referring to. Um, but at ThreatConnect, the way our model works is it goes through, uh, without going through too much, uh, a post-processing step that ensures, uh, the classification that a customer ends up seeing, if you see it in an article, has a very high degree of precision, uh, in its accuracy. Um, and so we went to great lengths to make sure that, um, there wasn’t noise in this process, that we weren’t, like, you know, throwing a bunch of stuff in there that, you know, wouldn’t stand the test of time. Right? So much of the, you know, the work that we’ve done on this has been to ensure a high degree of precision in the outputs. So, um, yeah, that’s roughly why I would say this technique is so powerful, this project anyway.
E:
And I wanna call back to what we were talking about with the transparency and understanding how, uh, features work. Like, part of the research we did in developing, uh, the pipeline for, like, training this data and getting it out there and making it so you could classify, was also comparing it to other examples of what other groups had done and looking at what was working and what was not working. And what we saw was that the approach was almost kind of similar to phishing detection: like, we can’t miss anything, so even if it’s of low probability, include it. But that ends up putting a lot of work back on the analysts. Right? And so we were able to look at those, really understand what worked and what could be improved, and really make what I think is just a great feature that tries to be precise and really give you trustworthy results, um, that you would not find unless it was explicitly labeled in a report. Right? So it’ll really help analysts.
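Here is a minimal sketch of the sentence-level labeling John describes, with the precision-oriented post-processing step reduced to a single confidence threshold. The naive sentence splitting, the scikit-learn-style model interface, and the threshold value are simplifying assumptions, not the production pipeline.

```python
# Sentence-level technique labeling with a precision-first filter:
# surface only high-confidence labels so analysts aren't flooded with
# every low-probability guess across ~600 techniques.
def label_sentences(doc_text, model, threshold=0.9):
    """Return (sentence, technique) pairs the model is confident about."""
    labeled = []
    for sentence in doc_text.split(". "):  # naive sentence splitting
        probs = model.predict_proba([sentence])[0]
        best = probs.argmax()
        # Post-processing for precision: drop anything below the cutoff.
        if probs[best] >= threshold:
            labeled.append((sentence, model.classes_[best]))
    return labeled
```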
Dan:
Awesome. Thank you. And, uh, I did check. Uh, T1192 used to be spear phishing, but
John:
I wasn’t as late.
Dan:
In the current version, it is T1566. Yeah. So back in time, I was correct. Uh, anyway, uh, so that sounds like a really powerful way to actually bring the data in and classify it. And I think, like you said, sort of precision is a really important aspect of that. Like, if you’ve got a specific TTP that you’re really keen on, you know, it is a high priority, you wanna make sure that it hasn’t been discarded, like it’s been missed, and you don’t wanna be bringing in irrelevant data that just looks like something you might care about. So once the data is classified, you know, there’s still a lot of threat data out there. So how does the AI actually help with that prioritization, uh, so that, you know, analysts don’t get lost in the noise?
John:
I can speak to some other, uh, you know, elements of our platform, uh, that we have, um, for that. I mean, you know, speaking about, uh, specific, uh, you know, entities that may be extracted, uh, from reports, say, where we’ve mentioned specific techniques, we also will, um, you know, score those. Right? And the scoring of the various different types of indicators that we have is AI-enhanced. We have various different AI models that are evaluating the behaviors, uh, of indicators, and that’s used to drive, uh, an indicator score up or down. Right? And we quantify that, and we actually explain to the user why a score is what it is. Uh, you know, what are the drivers of that. Additionally, uh, I kind of touched on blogs before. Um, you know, rather than sifting through a mountain of articles every day, uh, we provide summaries, uh, and we have on the roadmap a lot of initiatives, uh, to enhance those summaries based on the content of the blog. So, you know, instead of having to read, you know, uh, all of these articles, you’d be able to get the most relevant bits, um, you know, that are relevant to you, uh, and specifically derived from the blog’s content as well. Um, anybody else have anything they wanna add to that?
Joe:
Yeah. Yeah. I’ll talk about it and kinda, like, expand upon what we’re looking at, right, in the future. Um, because when it comes to AI and it comes to a mass amount of data, one of the things that AI is gonna hopefully help with, right, is making sense of all that data and being able to prioritize it for our customers. Right? So one of the things we’re starting to look at, from a roadmap kind of future AI development, is can we understand who you are as a company, what matters to you as a company, so that helps to prioritize. Hey, you’re company A. You are in industry Y. We know that these threat actors attack company A, and these threat actors attack industry Y. And we know these TTPs matter. And we can understand, hey, these actors are associated with these indicators. Right? So can we automatically take that, map that to you as a customer, and then prioritize those indicators and take a look across our platforms. Right? Does this affect me? Does this not affect me? So I can know, this is a priority, this is not a priority. So there’s a lot of really, really awesome things that are gonna come in the future with AI, especially with what we’re gonna start doing.
E:
Yeah. I’ll add one more item to this. So one of the, um, features we have is the automated threat library, ATL, which is a CAL feed that brings in a bunch of, um, security blogs that are out there, and it gets reprocessed. And originally, when this was coming in, the news is so dense, on, uh, breaches and, um, you know, exploits and things coming in every day, that trying to figure out based on a headline whether you should read something or not is really, really tough. So we introduced, just to start, AI summaries. We got a lot of good feedback on how much that’s really helped analysts quickly kind of scan and decide what’s important. We’re looking at taking that to kind of the next level and looking at things like, can we get more information about the topics at hand so that folks can, um, use those kinds of, uh, levers to pivot. So, for example, can we determine if it is about a zero day, so that folks can get clear summaries about that zero day, but also get, um, labels or ways to pivot with that, to say, hey, I need a prioritized list of anything that comes in with this sort of information. And then have those summaries that are very specific to those use cases that help people say, this relates to me, this doesn’t. Or look at those relationships that are built, because when we process this, we do a lot of, um, mapping. We, uh, pull out all of the indicators. We do all of this. So there’s a lot of different pieces you could put together for really, like, customized prioritization, not just at that technical indicator level, but also strategically, what does the organization need to bring in, both at an operational level and to help, um, package for their stakeholders, so they can make decisions and kind of pivot as the world changes at a pretty fast pace every day.
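To ground John’s earlier point about explainable indicator scores, here is a minimal sketch of "score plus drivers," where every adjustment carries a human-readable reason. The driver names, weights, and 0-100 scale are invented for illustration; this is not CAL’s actual scoring model.

```python
# "Score plus explanation": each driver is a (reason, weight) pair, so
# the user can see exactly what pushed a score up or down.
def score_indicator(drivers, base=50):
    """Combine weighted drivers into a 0-100 score with reasons."""
    score = base + sum(weight for _, weight in drivers)
    score = max(0, min(100, score))
    reasons = [f"{reason}: {weight:+d}" for reason, weight in drivers]
    return score, reasons

score, why = score_indicator([
    ("recently observed in phishing infrastructure", +25),
    ("model-detected similarity to known C2 behavior", +15),
    ("no sightings in the last 90 days", -10),
])
print(score)           # 80
print("\n".join(why))  # one line per driver, signed weight included
```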
Dan:
Excellent. So I think that all those comments really just highlight a lot of the power that AI can have. And, you know, I appreciate the call outs, some of the ways that we are actually applying it in the platform. Uh, and, Joe, I appreciate it. It sounds like you were both sort of teasing some upcoming features. So definitely more and more AI is coming, again, aligned to those use cases. So let’s move to our final topic, which is all about balance. And, you know, as you talk about prioritizing things, summarizing things, AI can still make mistakes. Take it from me. Do not use an LLM to help with your taxes. Um, so with that balance of AI and human, you know, how do you actually incorporate that human feedback to refine the AI over time?
E:
I’ll take this. And, um, just to give a little bit of background: prior to my work here, I’d been working on kind of scaled detection systems. Right? Anti-phishing, both for mobile and non-mobile. And so, um, some of those were just incredibly scaled systems that did not really leverage AI, and then others were AI-specific systems. And, um, what I would say is, even in those cases, all the way up until this point, right, your machine learning and your AI systems, entire products, not just sub-features, right, they’re never gonna get it right every time. And so it’s a matter of helping people understand, like, you’re making choices and automations based on these things. Right? But if you do not get that feedback, in all of these cases, you have to design in human feedback for folks to say, this isn’t right, you need an adjustment. That’s the only way that your product can really move forward and pivot quickly enough to stay up with the threats. Right? And so designing those feedback systems into the product from the start, so that it can become easy for the customer to say, this isn’t right, you need to make a change, I don’t see how this works. And we’re doing that. And so, again, going back to humans are important. Uh, analysts are important. It’s important to help them focus on the right problems. Right? So that they are not getting distracted by distracting AI or tools that are only half working or things that degrade over time. Right? So building those features in, and then having those operations where you’re monitoring and you’re looking at those things to make sure that these things are staying high quality, is a really important step in the process. And so, uh, those refinements happen over time. And so as we build out more and more, like, distinct AI features, we’re making that easier and easier, not just to look at the analytics behind it, right, and make decisions, but also to make it very, very easy to call out and prioritize on our side: these need to change, right, this is definitely causing a problem. Or maybe it’s not a problem. Maybe it’s something you never thought about. So, for example, one of the funnier ones, I thought, with the TQL generator, was we started running into a problem where folks were, like, trying to get to, like, a threat rating, but people kept calling it skulls. And the LLM did not know that in the UI it looked like skulls. Right? And that’s a very small thing, but it’s really frustrating if you’re in the middle of trying to get something done and you don’t know the language of whatever the system is. And so, um, again, some of these refinements are really, really fast detection-based refinements. Like, man, how do we get those cycles in so, like, we can make adjustments, like, on the fly? Others are, uh, how do we adjust and improve the UX of these systems? Because maybe they have the capability, but we didn’t think about how someone might reference something. We didn’t think about the context that they’re looking at. And so getting that feedback so that we can make those adjustments is really, really important, um, so that it continues to be valuable.
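A minimal sketch of what "designing feedback in from the start" can look like: every AI output gets a cheap, structured way to say "this isn’t right." The schema, file-based storage, and `record_feedback` helper are hypothetical; the "skulls" note echoes the TQL generator story above.

```python
import datetime
import json

def record_feedback(feature, output_id, verdict, note=""):
    """Append a structured feedback event for the product team to triage."""
    event = {
        "feature": feature,      # e.g. "tql-generator" (illustrative name)
        "output_id": output_id,  # which AI output is being rated
        "verdict": verdict,      # "correct" / "incorrect" / "confusing"
        "note": note,            # free text from the analyst
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open("feedback.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

record_feedback("tql-generator", "out-123", "incorrect",
                "asked for skulls, expected it to map to threat rating")
```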
John:
This is also not a new topic either. I mean, you know, uh, MLOps, AIOps, machine learning operations, AI operations. Um, you know, I mean, it’s kind of a core tenet in there. Right? You have a model life cycle, you monitor performance, and then you make subtle tweaks, and then you see how did engagement change. Right? Um, you know, this is a very important thing, because models drift over time. Right? Whether it’s a prompt, you know, and you gotta tweak the parameters or modify the prompt a little bit, or if it’s a traditional model. Right? Sometimes data will drift over time. You have to monitor that, make adjustments, capture the engagement after you make that change, uh, to make sure that things are moving along seamlessly.
E:
I think that’s a really good point, because, uh, you know, these are very different from, um, you know, a lot of the really good data processing features out there in the platform. AI features are a little bit different in that it’s not, cool, I’ve launched it, it will work forever. The inputs change. Right? Like, if we think of an example that’s outside of threat intelligence, right, um, you know, as Google became more popular, SEO became more popular. And so how websites present data to you changed. Right? And so if you had a feature that was looking at that and trying to process what was important about it, um, prior to that shift, right, the input changes and maybe it’s not as accurate. Maybe you need to make adjustments. But as environments change, as those inputs change, you’ve gotta evolve with it, and you’ve gotta have, um, the sensors out there to see what is happening so you can make those adjustments and keep things going and valuable and working well.
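A minimal sketch of the monitoring both panelists describe: watch an engagement signal against a baseline window and flag the feature for review when it sags. The acceptance-rate metric and the tolerance value are illustrative assumptions, not a specific production check.

```python
# Drift monitoring in miniature: compare recent engagement (say, how often
# analysts accept a model's output) against a baseline window.
def check_drift(baseline_accept_rate, recent_accept_rate, tolerance=0.10):
    """Flag the model for review if acceptance drops beyond tolerance."""
    drop = baseline_accept_rate - recent_accept_rate
    if drop > tolerance:
        return f"drift suspected: acceptance fell {drop:.0%}, review the model/prompt"
    return "within tolerance"

print(check_drift(baseline_accept_rate=0.82, recent_accept_rate=0.64))
# -> drift suspected: acceptance fell 18%, review the model/prompt
```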
Joe:
And to build on that a little bit further, that’s also just where users come into play. Right? Like, direct feedback. Hey, something is funky with this. Something is not funky with this. This works great. Right? Direct feedback is key when it comes to AI and how humans interact with this. That tells us immediately, this is bad, we need to go fix this. Hey, if we had the ability, right, to say, hey, this isn’t matching on skulls, we need to update skulls, and somebody told us that, it would speed that process up that much more for you. Right? So direct human feedback is key.
E:
And I do wanna make a distinction here in case anybody in the audience is getting annoyed. Like, I do not mean, um, that the end user should be making all of the test cases for a product. Right? Like, I think that’s really just a thing to point out, because, as part of your plan, you should have an extensive set of tests that are like, will it go wrong in these cases? Here’s all of the things I can find. Right? But again, data drifts. Right? Your inputs drift. Again, these are non-deterministic types of solutions, so your outputs are gonna change too. And so there’s some things that we can test and have alerts on and identify early within the system. But again, sometimes when it comes to subject matter pieces, oh, well, we delivered 200 words and it’s a summary. It’s like, well, but this other thing, something’s wrong here. And then we can go back and make those changes. Right? And I think that’s an important piece, so that you can respect your customers and respect their time, because that is the point of these AI solutions. It is to help end users focus on the right things. And if you’re not doing that, right, like, you’re wasting your analysts’ time.
Dan:
Excellent. And I just love that notion of feedback. You know, I think that, going back to the magic bullet comment, anyone who considers AI to be that magic bullet, that’s a red flag. But the fact that we include these feedback mechanisms, uh, that to me is a really big green flag, to help evolve these things over time. Uh, and to your point, E, about, you know, we’re not just using our users as test cases. Like, I know personally that when we developed our AI to classify by industry, you manually read through something like 700 different blogs just to help validate that the AI was producing those more precise, more accurate outputs.
E:
Well, yeah. It was not just looking at those outputs, but it was also looking for other test cases that we needed to set up to say, under what circumstances is this gonna, uh, create, like, funky outputs? And we came up with controls to make sure that we were giving folks the best information we can. And as we continue to work through, um, we’re actually scheduling an update of the model, and I am really excited about, uh, the work that’s going into this. I think we’ll have it, um, later next release or so. But, um, you know, we’re gonna have, uh, even better outputs for that. Right? And so, like, having that at the beginning, having those test cases, knowing what that is, and then being able to, when you make those updates, say, well, we know this is better because of this, and you’re getting the feedback because of this. Those all work really, really well.
Dan:
Awesome. Uh, looks like we’ve got time for two more questions. So, you know, with all that in mind, how are we making sure that ThreatConnect’s AI is enhancing human analysts in their intelligence workflows?
Joe:
Yeah. It’s a, uh, it’s a good question. Right? I mean, a lot of it kinda goes back to what we discussed a little bit ago that E went through. Right? It’s ensuring that we’re being transpiled, uh, transparent, as I’m trying to mix words together here, with our models. Um, being transparent, collecting that feedback. Right? And then ensuring that what we’re working on from an AI aspect takes into account our customers. Right? Like, making sure this feature, this update, is something people want, they need, they wanna use. Right? That’s a huge piece to it, because if we just develop some AI that nobody is gonna use, right, we’re just developing AI. Right?
E:
Yeah. Well, and I think we saw that at the beginning, you know, when we saw all of the chatbots that showed up in the world. Right? Like, chatbots everywhere for everybody. And, um, you know, chatbots can be really useful. Right? But, like, you know, you really have to be clear on what you’re trying to solve. Right? That they’re reliable. Right? You need to make sure that they’re providing the right information. You need to do those things. And so I think, again, to your point, going back to some of the earlier pieces we talked about, you know, I really think that it’s about, what is the problem you’re trying to solve? Right? And then, what are the right tools to solve that, and really identifying, like, you know, those pain points to get there. Right? There’s a lot of things that AI can do, and I think keeping it focused matters. Again, there’s, hey, I’m an analyst, I’m just gonna use this tool, uh, kind of free form in a way that I get to choose, and I think those workspace tools are really, really important, versus when we’re designing things for a workflow that needs to be flexible, that needs to be able to hit, uh, the needs of a bunch of different ways that teams work, different organizational needs, things like that, and really making sure that you’re hitting a clear need that will help teams scale. Right? And that is reliable and transparent and that they can trust.
John:
I think, like, also, you know, the idea of AI enhancing the subject matter expertise of our customers is sort of, in my opinion, going to be the natural order of things for a while. Right? I mean, as a personal anecdote, uh, with my own deep subject matter expertise: so I mentioned in the beginning, I have a doctorate in statistics. Right? I’m a statistician by training. Right? Um, just yesterday, I was analyzing some data we got. We were, you know, deciding between a few different model configurations for a new feature that we’re developing. Um, and, uh, I was like, you know, this is a very simple statistical analysis, I don’t wanna do this. I’m like, hey, ChatGPT, models A, B, C, and D, these are the results, do a statistical analysis. And then I continue working on my slides to deliver the update on the product. I look at it a few seconds later, and I was just like, this is a very simple analysis, I just didn’t wanna write the 15 lines of code that it would take to do it. And it was completely incorrect. Like, I mean, like, offensively wrong. Right? I mean, a very simple thing, you know, a junior, sophomore-in-college-level, uh, you know, problem, and it went down a completely wrong path. Right? And so, you know, of course, with prompting, I tell it to correct itself, and it does. Right? Um, but the fact of the matter is, left to its own devices, it’s not gonna be able to answer that question. Right? So we need to make sure that we’re always working to enhance the professional lives of the customers.
E:
Well, and I think, again, that’s a really good example of, uh, you know, using that element within your own personal workflow, where you have the choice of whether to accept those outputs, versus building something that is not defined enough and then trying to put it into workflows that are trying to scale, where folks may or may not have the expertise to say that this is wrong. Right? That is a key piece of when we’re looking at what we’re developing and developing these features, really making sure that we’re keying in on that, so that we’re not sending somebody down, um, down a path where, like, they’re just like, I didn’t know that this bridge stopped, and they went off the end of it. Right? Like, we wanna make sure that those tools are really focused, so that they have the right guardrails, so that people are getting the right thing.
John:
There has to be trust in what’s coming out of this. Right? Like, that’s so important. I mean, yeah, like, everybody knows that these tools will hallucinate. Right? Like, we’ve all experienced it, like, probably at least twice a week. And so, you know, we have to make sure that there is sufficient validation. We’ve done sufficient work to make sure that the data flowing into these things and the questions that we’re asking are on target, um, and just to ensure the highest level of trust that’s possible, uh, with these models.
Dan:
Yeah. Excellent. And then, you know, John, at one point, you talked about sort of staying the course, uh, for the foreseeable future. For a final question, let’s talk about the future, and let’s actually go back to Terminator. So in the movies, uh, the Terminators are sent back in time from the year 2029. So, you know, when they have the flash forwards, that’s, like, the post-nuclear-apocalypse future war, the year 2029. And it’s a little depressing to think, but that’s four years from now. So, uh, how do you see AI evolving between now and the year 2029? You know, what advancements or shifts should security teams prepare for, uh, between now and 2029?
E:
Who’s going first?
Joe:
John, you wanna go first?
John:
Uh, yeah. I mean, uh, as a joke, I can say I always tell ChatGPT thank you, just in case. I mean, from my perspective and the problems that I solve, you know, um, I see the models getting bigger. I mean, more reasoning-type models, uh, that’s becoming the thing. Um, you know, like, when I think about one of the projects I mentioned, um, that we have, uh, summaries for blogs that you may read. When we first launched that feature, the, you know, the, uh, sort of APIs that you can use to call these models were in their infancy. Um, we didn’t have access, uh, to that at the time. Uh, and so we were using, like, a Llama 2 at the time, which we were hosting on a server on AWS. And that was the only way we could really do it, uh, at the time. Um, you know, a year and a half later or whatever, we’re able to use these APIs now and get much better results. I mean, I see this trend continuing. We’re gonna have access to larger models with more capabilities. And I think, like, we all have to be, in my opinion, optimistic, uh, about this, and think about how we will exist alongside these tools to make our productivity better. Realize that there are always gonna be feedback loops to make the things better. Um, you know, and to me, it’s very exciting, generally. I mean, just even when I would see it in the projects that we would develop here at ThreatConnect, um, the problems that we can solve now are ones we wouldn’t have been able to solve five years ago. Um, and I think in the next five years, we’ll be able to solve even more problems for customers. But at every stage of the way, you know, uh, our jobs are going to evolve, and how we interact with the tools is going to evolve. So I think we all have to keep that in mind as well as we go forward.
Joe:
Yeah. You wanna take it, or I can go next? It's up to you.
E:
I I’ve got a I’ve got an answer, but it’s a little bit more boring. I I feel like okay.
Dan:
Go ahead. You’re fine.
E:
Alright. Okay. So I think there are a couple of things. The first one, the more fun of the two, kind of bounces off of what Shamina was saying. I think one of the things that's gonna be interesting, not just for security teams but for everybody, though the amount of data security teams need to go through on a daily basis makes them a unique case compared to a lot of other roles, is that the way we interact with applications is gonna change a lot. We've built a lot of these very, very big apps, and I think the way user interfaces evolve is gonna be very different. Before, everything was very static: we need a button here, a drop-down, a filter, this and that. I think we're gonna see a lot more dynamic interfaces, because it's gonna be about interacting with that data in a totally unique way. That's gonna be really neat, but it's also gonna be confusing, because you won't have the same visual, the same layout, every time. How do you know that another person saw the same thing you did? So I think there's gonna be some really interesting evolution in user experience and user interaction, but it's also gonna have to take into account how you know you're seeing the same thing as everybody else. That's the more exciting of the two. The other thing I think is gonna keep growing, and this comes from my product and security side, is what's being called AI security posture management: tools, practices, and processes for securing AI-related resources, which I'm seeing pop up with a lot more of the leading security vendors. Right now AI is wildly popular, but none of the questions you get for compliance are the same; everybody's got a different way, and nobody quite knows what it is. So I think we'll see a lot more alignment on what you need to do as an organization to make sure the AI you're using is safe for your organization: what risks are you taking into account, and how are you factoring them in? And that's important not just for your security team protecting your organization, but also for making sure your vendors are taking it into account and really understanding how they fit into your security posture.
Joe:
Yeah, totally agree. On my end, in how I look at AI over the next four years or so, I think one of the big things for cybersecurity teams is that you'll see a lot of security practitioners having to understand AI a little more. You have to understand how models work and how they're tuned, because they're gonna be another thing that can be attacked, another thing you'll have to look at, tune, and work with as part of the everyday job. And I don't think this just applies to cyber; honestly, it applies to a lot of spaces in the world, because we have to evolve, and AI is gonna be one of those evolving tools, to kind of echo what John was saying earlier. We need to know how to work with it and what it can do to make our lives better. The other thing I think is gonna happen in the next four years: agentic is a hot topic. Everybody's saying the word agentic; everything's gonna be agentic. I do think you're gonna start seeing agents pop up a lot more. That's gonna be a thing where this platform talks to that platform, takes an action, does this thing, prioritizes that. You'll have an agent running that handles the different mundane tasks other people would do: looking at something and prioritizing it, or summarizing it for me so I don't have to. So you're gonna see agents pop up a lot more.
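As a sketch of the mundane-task agent Joe is describing, assuming a toy alert-triage loop rather than any real product integration (all names below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int   # 1 (low) .. 10 (critical)
    text: str

def triage(alerts: list[Alert], top_n: int = 3) -> list[Alert]:
    """Prioritize alerts so the analyst only reads the highest-severity few."""
    return sorted(alerts, key=lambda a: a.severity, reverse=True)[:top_n]

alerts = [
    Alert("edr", 9, "Credential dumping observed on host FIN-014"),
    Alert("email-gw", 3, "Bulk phishing campaign quarantined"),
    Alert("firewall", 6, "Repeated outbound connections to a flagged ASN"),
]

for alert in triage(alerts):
    # In a fuller agent, each item would be sent to a model for
    # summarization and the result posted to the downstream platform,
    # which is the platform-to-platform behavior Joe mentions.
    print(f"[{alert.severity}] {alert.source}: {alert.text}")
```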
Joe:
Another prediction is that you're gonna see AI cyberattacks on the increase, probably in a few years. It's gonna be a big thing. So you'll probably see people's risk acceptance for AI go up, then come back down a little when these systems start getting hacked more, and then, as the community evolves with it, you'll see those risks accepted more and more. And this is not even to mention quantum. We won't even get into quantum, but Microsoft and Amazon supposedly have quantum chips now. You wanna talk about speed when it comes to LLMs? If that's actually true, what happens in the future will be crazy.
Dan:
I can’t even can’t even go down that path.
John:
Yeah. The model supply chain is gonna be huge. Several years ago I was seeing talks about how you can fine-tune triggers into models and have them yield an expected malicious output for a trigger input. So people are gonna have to be really careful about where they're getting their models from. That aspect of this is gonna be very important. Security for AI, I guess you'd call it.
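One concrete, low-tech control for the supply-chain risk John raises is to pin the digest of model weights you've already vetted and refuse to load anything else. A minimal sketch, with a placeholder path and digest:

```python
import hashlib
from pathlib import Path

# Placeholder digest: replace with the SHA-256 of weights you have vetted.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model(path: Path) -> None:
    """Refuse to load model weights whose digest doesn't match the pin."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    if h.hexdigest() != PINNED_SHA256:
        raise RuntimeError(
            f"{path}: digest {h.hexdigest()} does not match pinned value; "
            "refusing to load possibly tampered weights."
        )

verify_model(Path("models/summarizer.safetensors"))
```

Note the limit: pinning only guarantees you're loading the exact artifact you vetted. It can't detect a trigger that was fine-tuned in before you pinned, so provenance and publisher trust still matter.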
Joe:
Yes. It’s gonna be exciting, though. I mean, the AI world is is is cool. It’s an exciting space that’s evolving a little too fast. But
Dan:
Well, exciting is good. I mean, whereas Terminator is a chilling portent of things to come, it definitely sounds like you all are a lot more optimistic.
Joe:
That’s just why we have to say thank you. And when we tell it in our prompts, we either say, hey. You are amazing. Right? You gotta you gotta butter it up a little bit. You’re awesome. You gotta do that so much. Exactly.
Dan:
Right.
Joe:
can’t just we can’t just treat it badly. You can’t just, like, go do this for me. No. You’re gonna be like, hey, please. Can you do this? Right? You gotta coerce it and guide it.
Dan:
Exactly. A little kindness goes a long way. Alright, we are just about out of time. I wanna thank our panelists very much; this has been super insightful. And I hope our audience, who I also wanna thank, has gotten a lot out of this as well. Just a quick bit of housekeeping: there should be a resources tab here, so if you wanna download a PDF that goes a little more in-depth on how ThreatConnect specifically leverages AI, you can do that. There will also be a little pop-up you can use to dive deeper, learn a little more about ThreatConnect, and request a demo. And we do have more of these events coming up, so definitely head to threatconnect.com and stay tuned. I hope everyone has a wonderful AI-driven rest of your week. Have a good one, everybody. Bye, everyone. Thank you.