The Q&AI: AI's Critical Role in Modern Cybersecurity
In this episode of The Q&AI Podcast, host Bob Friday sits down with Kedar Dhuru, senior director of connected security at Juniper Networks, to discuss AI’s evolving role in cybersecurity and Zero Trust strategies.
Topics covered include the arms race between cyber attackers and defenders, the impact of generative AI on detecting and preventing sophisticated cyber threats like phishing and deepfakes, and the emerging field of quantum cryptography.
Kedar also shares insights from his extensive career in security, the challenges of integrating AI with cybersecurity, and the future of network security technologies.
You’ll learn
How generative AI is helping detect and prevent sophisticated cyber threats
Where the challenges are with integrating robust security and networking
What the emerging field of quantum cryptography is and how it relates to cybercrime
Transcript
Bob: Welcome to an episode of the Q&AI podcast. Today I'm joined by Kedar Dhuru, Senior Director of Connected Security here at Juniper. Today we're talking about how AI can be used to catch cyber threats, the biggest challenges in integrating AI with cybersecurity, and what's on the horizon for security. With that, Kedar, maybe give the audience a little bit about your background. How did you end up in security?
Kedar: Hey, Bob. Thank you for having me here. I actually got into security quite by chance. About 20-plus years ago, I was graduating college and got an opportunity to work at National Semiconductor as a security analyst. That's how I got my first break into security.
And over the years, I felt that security was an evolving, emerging, exciting space in terms of us versus them, in terms of how we can help secure enterprises and help them grow in a secure manner. That's what kept me in security. From there, I've been around the industry in a few different places, most recently at Imperva Systems, which is an application and database security company. And for the last four years, I've been here at Juniper, building our security portfolio and our security expertise across our AI networking customers.
Bob: When we look at what's happening in AI: for myself, 15 years ago, one of my CEO friends had a situation where the bad guys phished him. They convinced him that his CFO was calling from China and wanted 10 million wired over, and they ended up losing 10 million dollars in that phishing attack. With AI and the deepfakes now, what do you see as the impact of AI coming into the security space?
Kedar: I think that's interesting. Phishing attacks have been around for a very long time. They've become more sophisticated, and it's becoming harder to detect them. But we are still able to identify phishing emails; we are still able to tell when something doesn't seem right.
But what AI brings to the table is the ability to craft phishing attacks and phishing emails, or even create convincing deepfakes, where I pick up the phone and I know it's Bob Friday calling me, but it could very well be a voice attack impersonating Bob.
And I think the influence of AI on these kinds of attacks is going to be profound over the next five to ten years. It isn't there yet, but this is an evolving area that is going to be incredibly exciting to watch from a security practitioner's point of view, but also difficult to defend against.
Bob: Yeah, you kind of see this as an arms race, right? Bad guys, good guys. When you look at the bad guys using AI, who do you think has the lead in this race right now? We've got the bad guys using AI to create these fakes; are the good guys leveraging AI in any way to actually detect them?
Kedar: So the good guys, the security companies, the security tool sets are improving through the use of AI. This is what we call AI for security. This is where we make use of AI to enhance our ability to write better security policies, or to do better investigation from a SOC point of view, to go out and say, hey, there's an attack here, now give me more information so that I can respond more quickly.
But I think the bad guys' ability to use AI is vast. It's unconstrained, and the attacks can come from anywhere, so they have a little bit of an edge at the moment as we continue to develop our AI capabilities on the defensive side. But you're right, this is an arms race. They currently are a little bit ahead and the defensive mechanisms are following, but we will continue to improve in that area: the ability to detect, the ability to become more proficient, and the ability to respond more quickly to AI-based attacks.
Bob: When I look at this, a lot of this technology dates back to ChatGPT, these large language models, these transformer models that really enabled all this tech. We see companies embracing Gen AI as a technology. From a security perspective, what are your words of wisdom for companies that are starting to bring Gen AI into their operational organizations or into their products now?
Kedar: As companies bring generative AI into their products, it's definitely going to help them improve the user experience: being able to do better investigations and to link things more easily. It's also going to help them write better security policies.
I think where this becomes interesting is as companies look at using generative AI to become more productive, not just on the security side, but for improving their product set and their business practices. Giving their users the right kind of access, bringing a zero-trust approach, and helping secure these AI models and AI workloads is going to be critically important.
And that is the other area in AI, which is security for AI. How do you secure access to Gen AI models? How do you secure the Gen AI training clusters themselves? How do you secure against poisoning the models? It's going to be an interesting and evolving space.
Bob: I'm not a security expert, and I keep hearing this term zero trust. Maybe give me and the audience a sense: what does zero trust networking really mean?
Kedar: Zero Trust Networking is about understanding who the entity is, what kind of application they are accessing, whether they have the right kind of access and the right kind of privileges, whether that has changed since they last received the access, and then continuing to monitor it.
You want to be able to understand and profile that, to be able to say something bad is happening or something has changed, and so we need to change the level of access we are giving the entity to the application. That's the core element of zero trust networking, or zero trust network access as the industry knows it today.
As we look at Gen AI and the access people have, Zero Trust Network Access becomes very important in terms of who is accessing your different models, when they have access, what they are doing once they have access, how they are training the models, how that is evolving, and what kind of changes they're making.
All of that needs to be monitored, and you need to be able to take preventive action if something unexpected happens.
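Kedar's description of zero trust, checking who the entity is, whether its privileges match policy, whether anything has changed since the grant, and re-evaluating continuously, can be sketched roughly in code. This is a toy illustration only, not Juniper's implementation; the policy table, names, and the one-hour staleness threshold are invented for the example.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AccessContext:
    entity: str
    application: str
    privileges: set = field(default_factory=set)
    granted_at: float = 0.0

# Hypothetical policy table: (entity, application) -> allowed privileges.
POLICY = {("bob", "model-registry"): {"read"}}

def evaluate(ctx: AccessContext) -> str:
    """Re-evaluate on every request, not just at login (the zero-trust idea)."""
    allowed = POLICY.get((ctx.entity, ctx.application), set())
    if not ctx.privileges <= allowed:
        return "revoke"          # privileges drifted beyond what policy allows
    if time.time() - ctx.granted_at > 3600:
        return "reauthenticate"  # stale grant: force a fresh identity check
    return "allow"

ctx = AccessContext("bob", "model-registry", {"read", "write"}, time.time())
print(evaluate(ctx))  # "write" exceeds policy -> prints "revoke"
```

The point of the sketch is that the decision is recomputed per request against current policy and grant age, rather than trusting a one-time login.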
Bob: When did Zero Trust Networking come out? It seems like it's been around for at least a decade or so. It came out long before AI, right?
Kedar: Right. Zero Trust Network Access is a relatively new term from the last couple of years, now popularized around the industry. But if you go back many years, the whole idea of firewalls allowing access from one zone to another, and then being able to inspect the traffic flows, is the basis of the zero trust network access we have now.
As we have moved to application-level access and learned that identity has become more important, Zero Trust Network Access has evolved: it has brought in more identity and application-level awareness, and the ability to understand what people are doing from a data context.
All of those different pieces have now surfaced, but Zero Trust Network Access has been around in the industry for five to seven years now.
Bob: Maybe what I'm asking is: AI is a kind of broad term, and I think AI has been in security for quite a while now. Gen AI and ChatGPT came out in 2022, so you can kind of differentiate between AI before ChatGPT and AI after. Have we had AI in security for quite a while?
Kedar: I think security is one of those industries that has benefited from the precursors of AI. AI as we know it, what we term Gen AI, is relatively new, you're right. But the security industry has been using machine learning models and heuristic models for years: to identify threats flowing on the wire, to identify different and emerging forms of threats, and to write security rules and policies early on.
This has been ongoing in the security space for several years now. AI as we know it has only enhanced that capability. It has actually made it better, because now we can train AI models to detect threats against large volumes of data, large corpuses of data. So in the security space, AI is a welcome addition.
In addition to that, I think the ability to map an indicator of compromise against what it means helps the security administrator find those needles in haystacks, understand their meaning, and take remediation action more quickly and more easily.
And the ability of AI, and Gen AI in particular, to go in there and write rules, or to understand how to write rules more quickly, aids the security administrator in their day-to-day job. I think Gen AI helps with that. So AI is definitely a welcome addition in the security space.
Bob: I usually try to explain to people: machine learning algorithms, as engineers, we've been using those for decades, right? Logistic regression, linear regression. It seems like what's really changed since 2017 is these deep learning models.
Kedar: Yes,
Bob: These big models that we've been training on tons of data. Security has been using machine learning for quite a while, along with other techniques. Do you think security is really going to embrace these big, 100-billion-parameter models trained on tons of data? Is that going to become relevant in the security space?
Kedar: I think it does. And what happens with these large deep learning models is that they allow you to build better security detection capabilities, because you're now able to detect threats that you would otherwise need human intervention to write rules for.
I think it allows us to get into a space of trying to be proactive in our security approaches because by looking at trends, by looking at data more closely associated with the identity that's trying to access it, you may be able to write rules that help you tune your Zero Trust Network access capabilities even better.
So it helps you identify and then reduce your attack surface to levels that allow somebody like Bob to access certain amounts of data without overprivileging him.
Bob: Okay. So is it fair to look at this as a battle of deep learning models? We've got the bad guys coming at us with deepfakes built on these large language models. Are the good guys using the same deep learning models to combat them?
Kedar: I think that's going to be an interesting space. As I was thinking about this: the same models that the bad guys are using to create new and exciting threats, the security industry can use to identify those threats and write preemptive rules. Even if they're not perfect, you can look at heuristic-based rules and improve them to detect some kind of ongoing threat or compromise.
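To give a flavor of what "heuristic-based rules" for phishing might look like in practice, here is a minimal sketch. The patterns and weights are invented for illustration; a real detector would combine far more signals (sender reputation, header analysis, ML classifier scores) rather than a handful of regexes.

```python
import re

# Hypothetical heuristic rules: (pattern, weight). Each rule that matches
# adds its weight to the suspicion score.
RULES = [
    (r"\burgent\b|\bimmediately\b", 2),      # pressure language
    (r"\bwire\b|\btransfer\b|\bpayment\b", 2),
    (r"verify your (account|password)", 3),
    (r"https?://\d+\.\d+\.\d+\.\d+", 3),     # links to raw IP addresses
]

def phishing_score(email_text: str) -> int:
    """Sum the weights of all rules that match the (lowercased) email body."""
    text = email_text.lower()
    return sum(weight for pattern, weight in RULES if re.search(pattern, text))

print(phishing_score("URGENT: please wire the transfer immediately"))  # prints 4
print(phishing_score("lunch at noon?"))                                # prints 0
```

A score above some tuned threshold would flag the message for quarantine or review; the improvement loop Kedar mentions is exactly adjusting these rules and weights as new attack styles appear.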
Bob: Now maybe the other topic I'll touch upon: AI is one of my favorite topics, but I'm an amateur physicist on the side; physics has always been one of my other little hobbies. I hear a lot about this quantum cryptography. Are these quantum computers actually going to be a threat to my cybersecurity? Have you heard this?
Kedar: I have heard this. And this is an area that's not there today; it's an emerging area. The expectation is that when you have quantum computers, you will be able to get past security much more quickly and more efficiently than today's computers can.
For example, breaking passwords or breaking encryption would be much, much easier with a quantum computer. I don't think there is any need for a defensive mechanism yet; the quantum threat is still a few years away. I don't think we've reached the point where quantum computers can break encryption, but there's a lot of thought in the industry around this, and we are starting to see different ways of addressing the quantum threat.
There's an entire area called post-quantum cryptography that deals with new cryptographic algorithms that are quantum-safe. And for encryption on the wire, like IPsec: instead of using today's standard algorithms, which can be broken by quantum computers, is there a way to slow that process down?
And so there are different things available in the industry. I think post-quantum cryptography is getting there. You have some algorithms defined by NIST, but they have limitations, or specific ways in which they have to be implemented, where tools and products aren't available yet.
But what you will also see is an area called post-quantum key distribution. When you have an IPsec tunnel, you have symmetric keys on both sides encrypting the data channel. Is there a way to rotate those on a more regular basis, which would slow down a quantum computer trying to read the data on the wire?
Because you're continuously changing and updating them, the quantum computer would have to continuously work to break that encryption. There are some ways available today, and there are new ways coming in the future.
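The rekeying idea Kedar describes, rotating the symmetric keys on a tunnel frequently so that any one key protects only a small slice of traffic, can be modeled in a toy way. This is a sketch of the concept only: real IPsec rekeying is negotiated via IKE, and the interval and the XOR-stream "encryption" stand-in below are made up for illustration.

```python
import hashlib
import os
import time

REKEY_INTERVAL = 60.0  # seconds; a shorter interval means less traffic per key

class RekeyingChannel:
    """Toy model: rotate the symmetric key often, so breaking any single
    key exposes only the traffic sent during that key's short lifetime."""

    def __init__(self, interval: float = REKEY_INTERVAL):
        self.interval = interval
        self._rekey()

    def _rekey(self):
        self.key = os.urandom(32)          # fresh 256-bit symmetric key
        self.key_born = time.monotonic()

    def send(self, payload: bytes) -> bytes:
        # Rotate the key if it has outlived its interval.
        if time.monotonic() - self.key_born > self.interval:
            self._rekey()
        # Stand-in for real AEAD encryption (e.g. AES-GCM under IPsec):
        # XOR the payload against a key-derived stream, just to show that
        # each slice of traffic depends on the current key.
        stream = hashlib.sha256(self.key).digest() * (len(payload) // 32 + 1)
        return bytes(p ^ s for p, s in zip(payload, stream))
```

An attacker who eventually breaks one key (quantum or otherwise) recovers only what was sent in that key's window, which is the slowdown effect described above.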
Bob: Now, you said maybe a few years, or decades. What I've heard from my friends is that these quantum computers, cryogenically cooled, are maybe a decade away from the point where Shor's algorithm or Grover's algorithm could actually start to break our RSA encryption techniques. Do you think we're two years away? Or ten years away?
Kedar: I don't think it's two years. From everything I see, I think it's somewhere between seven and ten years. Two years is still too soon. But technology is evolving, and it's hard to predict with certainty where this lands. I think it's more than two to five years away.
Bob: So it sounds like it's an exciting time to be in security. We have AI and large deep learning models that security has to deal with, and we have this quantum thing on the horizon, five to ten years out. Any final words for the security audience on where we're headed? How soon will they have to embrace these deep learning models?
Kedar: In terms of where AI can help your security, most of the deep learning models are available today.
Vendors are looking to bring these deep learning models and Gen AI LLMs into security products today. So I think that's exciting. It allows people to interact more easily; you don't have to be an expert, and you can learn things on your own.
So that's what's available today. The other thing I'll say is that Gen AI, and AI in general, is a very exciting space. If you have access to a computer, you should go out there, get yourself something like an OpenAI account, and start playing with a model, just learning.
Have your kids start looking into how to use OpenAI to finish homework. Not that they should rely on it completely, but it allows them to start working in this exciting space more easily and to use it as a tool when needed.
Bob: Okay, Kedar. Do you think I should sleep safe tonight? Nothing to worry about in my email tonight?
Kedar: I think it's safe to say that you can have a restful sleep.
Bob: Okay. I want to thank you for joining us, Kedar. This has been a great discussion around AI and security. And I want to thank everyone for joining us today.