Compliance Unfiltered is TCT’s tell-it-like-it-is podcast, dedicated to making compliance suck less. It’s a fresh, raw, uncut alternative for anyone who needs honest, reliable compliance expertise with a sprinkling of personality.

Show Notes: Artificial Intelligence: What Are the Cybersecurity Implications?


Quick Take

On this episode of Compliance Unfiltered, we turn science fiction into science fact. This week Adam takes an in-depth look at Artificial Intelligence (AI) and the cybersecurity implications of this rapidly growing part of our everyday lives.

  • Curious if AI is coming for your job?
  • Wondering if these robots are acting for good or evil?
  • Want to know how the cybersecurity world is approaching technology like ChatGPT?

Have no fear, the CU guys have you covered there too. All these topics and more on this week’s episode of Compliance Unfiltered.

Remember to follow us on LinkedIn and Twitter!

Read Transcript

So let’s face it, managing compliance sucks. It’s complicated, it’s so hard to keep organized, and it requires a ton of expertise in order to survive the entire process. Welcome to Compliance Unfiltered, a podcast dedicated to making compliance suck less. Now, here’s your host, Todd Coshow, with Adam Goslin.

Well, welcome in to another edition of Compliance Unfiltered. I’m Todd Coshow, alongside a man who will be your John Connor along this compliance journey: Mr. Adam Goslin, how the heck are you? I’m good, as long as I can just avoid Skynet, then we’ll be good. Indeed, indeed. Well, speaking of, the topic today is one that is truly on everyone’s tongue these days, and that is artificial intelligence.

Now, from our perspective, Adam, what are the cybersecurity implications here? Set the scene. So, you know, it was actually funny when we were prepping material for this. I was curious, so I went back and looked for the first film on the subject, because artificial intelligence and robots and automation have long been a vision of a lot of people, and it gets people interested. I did the digging, and the first film that related to some form of artificial intelligence was Metropolis from 1927. So, you know, we’re almost- Science fiction becomes science fact, yeah. Yeah, exactly. We’ve got almost a hundred years under our belt of people dreaming of advances in AI, and whether that vision looks more like the Matrix or more like the Jetsons just depends on whether you’re waiting for Terminators and Skynet to rear their ugly heads. Artificial intelligence is a hot topic right now, and it seems to be getting hotter by the minute. ChatGPT is one of the platforms out there that’s been getting a lot of air time, but that’s just one of the latest AI machines that could be poised to bring AI to the next level. ChatGPT is capable of writing content, generating convincing photographs of people, and even doing some coding of software programs. One company went so far as producing an AI-generated 20-minute interview between the late Apple founder Steve Jobs and Joe Rogan. The realm of possibilities seems limitless, and it leads to the question: what are the implications of AI for cybersecurity?

Sure, so, I mean, the real question, let’s get down to it: is AI ready to take over the world and/or your job? Well, artificial intelligence has come a long way, and honestly, there are some capabilities within ChatGPT which are impressive. But bottom line, AI isn’t anywhere near ready for primetime. I mean, I saw those Boston Dynamics robots, man, they’re doing backflips and all sorts of things. I saw them with my own eyes, you’re sure we’re not stressed about that? There are implications to different forms of use of AI. There’s a ton of fascination and excitement around ChatGPT. But you look at it objectively, and it’s just not doing that much very well on its own yet. These artificial intelligence machines depend on a gigantic pool of data that they can pull from, because they need to get trained properly, and as of today, they still require a good amount of human intervention for training. In terms of standalone true intelligence, the AI capabilities of ChatGPT are great for, I’ll call it, doing some of the legwork, right? You know, go write me a paper on the effects of the drought in the southwestern United States and its impact on water reservoirs. It can go out there, gather up this cacophony of facts, and spit out something the machine thinks makes complete sense. But next thing you know, you’re going through the detail of what it wrote, and while it attempted to pool the data based on the factual information it could find, stringing together any assumptions that could be made from the facts that are there, it leaves a lot to be desired. So, I heard some scuttlebutt that certain news organizations were contemplating leveraging ChatGPT to effectively do the boots-on-the-ground initial collection.
Instead of having people doing that, they would hand it off to something like ChatGPT to go out and do all the legwork, and then have human beings involved in the messaging of what it came up with. But the reality is that things are moving fast, and it’s gonna be sooner rather than later that we have a true AI program up and running on its own, but not just yet.

Okay, now, I guess the best way to ask this question: is AI a solution for good or for evil? Well, despite all the excitement around AI, there’s also a lot of trepidation. I mean, for the first time we’ve got white-collar professionals anxious about being replaced by the robot, and many people are also similarly worried that AI could be used for evil. In my mind’s eye, neither of those concerns is unwarranted. Anytime you have a new technology, it’s going to replace some kind of human labor. Robots replaced humans on a massive scale on automobile assembly lines, as an example. Automation software has reduced the need for human labor in almost every other area of business. At the same time, which of us wants to say, you know what? That was a dumbass idea. Let’s go ahead and undo all the technology advances we’ve had in the last however many decades, right? You sit and look at unemployment rates for the past several decades, and these new technologies coming into play didn’t have some long-term impact on unemployment. For some professionals, AI innovation may require them to pivot, but it probably won’t spell the end of most people’s careers. There are definitely going to be some transitional periods as AI enters into the security industry. But if we do it well, then we can use AI to enhance human labor, not replace it. In the end, leveraging artificial intelligence gives us the ability to more productively protect more people and more organizations, and at the end of the day, that’s a good thing. That’s fair enough. Now, I guess where my head immediately goes as we have this conversation is, what are some of the security risks related to artificial intelligence? Well, you know you’ve got security risks. I mean, I’m going to kind of jump all over the board.
But, you know, you’ve got students that are using ChatGPT to write entire papers for them. We’ve got AI able to create deepfake photos of celebrities and potentially people in government positions. So for all of the promise that AI brings to the table, it also creates new threats. As much as we intend to use AI for good in the security arena, there are going to be those that are determined to use the same technology with bad intent. Certainly AI will introduce more security threats. It will increase the rate at which the bad guys can identify security vulnerabilities, as they take advantage of AI capabilities to broaden the scope of their attacks, and the depth of their attacks, quite frankly. You can bet your bottom dollar that the bad guys are already using AI to invent new ways to do evil. Right now there’s an entire industry where we’ve got full floors of telephone operators targeting senior citizens as phishing victims. Whatever, claim that the target’s granddaughter Emma’s in jail and the only way to get her out is to transfer money to this account, blah, blah, blah. Now imagine that. Had that happen to a friend last week, Adam. Geez, no shit. That’s scary as hell, because you figure now you fold in AI, right? They’ve got some recording of a female screaming and being dragged away from the phone, and the person instantly assumes that’s their grandchild. Now I could load up 15, 20 fake but convincing pictures of Emma and send them to the grandparents. I could mimic her voice. I could mimic a video of her. There’s all sorts of really crazy, creepy stuff that can happen. So yeah, AI does pose new security risks, including ones that we haven’t even tripped across at this point in the game. It will certainly allow security vulnerabilities to be exposed at a rate that we haven’t quite been seeing today.

One thing I wanted to throw in here, and this was, gosh, I wanna say about 15 years ago, maybe 16, 17 years ago: I was literally standing in front of a server console, screens busily blinking away, at a place that I was at. They ended up finding out that somebody had actually breached that machine and was actively trying to extract data from it as we were standing there. That was an enlightening experience, because as we went back and looked at the logs to try to see what the hell happened, it was interesting. The organization had one IP address come in from, let’s say, France, and that IP effectively determined, yes, I’ve got a live machine here. Then it went away for about 30 seconds. Next thing you know, there are four new IP addresses coming at the box, and each of those four were doing different tests: one was seeing if it was a web server, one was seeing if it was a file server, etc. Once those ones ran through and did their testing, they all went dark for about 30 seconds. Next thing you know, 12 new IP addresses came in from all over the globe. So as long ago as 16, 17 years ago, we had a fairly decent level of automation built into the approach that the bad guys were taking. You take that level of capability from a decade and a half ago and now layer on the ability to get the bad guys’ code lined up for AI, capable of making extended leaps and making decisions on the fly. Yeah, that’s going to mean the time of discovery for the bad guys is going to get astronomically shorter than it is right now. They’re gonna start finding people, finding issues, etc., far faster than they’ve been able to previously.
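That wave pattern — one scout IP, a quiet gap, then successively larger waves of new sources hitting the same host — is exactly the kind of thing a defender can hunt for in connection logs. Here is a minimal sketch of that idea; the log format, thresholds, and gap timing are illustrative assumptions, not taken from any real tool:

```python
# Hedged sketch: flag the staged, multi-IP probing pattern described above —
# a scout IP, a quiet gap of ~30 seconds, then growing waves of new sources
# aimed at the same target. Event format and thresholds are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

def detect_staged_probing(events, gap=timedelta(seconds=30), growth=2):
    """events: list of (timestamp, src_ip, dst_ip) tuples, sorted by time.
    Returns destination IPs whose distinct-source count grows wave over wave."""
    waves = defaultdict(list)   # dst_ip -> list of sets of src_ips, one per wave
    last_seen = {}              # dst_ip -> timestamp of its previous event
    for ts, src, dst in events:
        # A silence longer than `gap` starts a new wave against this target.
        if dst not in last_seen or ts - last_seen[dst] > gap:
            waves[dst].append(set())
        waves[dst][-1].add(src)
        last_seen[dst] = ts
    suspicious = []
    for dst, w in waves.items():
        sizes = [len(s) for s in w]
        # e.g. 1 scout, then 4 probes, then 12: each wave >= growth x previous.
        if len(sizes) >= 3 and all(b >= a * growth for a, b in zip(sizes, sizes[1:])):
            suspicious.append(dst)
    return suspicious
```

With events mimicking the story (1 IP, a 40-second pause, 4 IPs, another pause, 12 IPs), the target host gets flagged. A production system would fold in port/service metadata rather than bare counts.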

Most definitely. Now, you know, for the folks out there looking at this situation going, gosh, there’s a lot of doom and gloom around this right now: how can AI be leveraged productively in cybersecurity? Well, there is reason for concern, certainly, but there’s also reason for optimism. The good guys have the capability to use AI just as effectively as the bad guys. But it’s funny, oftentimes it’s what the bad guys are doing with it that drives the fastest improvements in response capability. It’s rare that those improvements happen all on their own; they’re usually kicked off by some inventive bad actor kind of having their way with things, if you will. But the reality is that AI could be an incredible tool for white-hat hacking and protection. Using AI could allow security professionals to improve protections at astonishing rates. Some of the examples get into the realms of pattern recognition and evaluation. Machine learning has already been used for some time in the cybersecurity arena as it relates to things like pattern detection in web application firewalls and in central logging. Right now, the machine goes out and collects up patterns, and then humans teach it to appropriately allocate the findings: hey, this is alert-worthy; hey, this is not an issue; this is something we need to do a little more digging into. It can categorize those, but it’s typically done through that human teaching. If we integrated true AI into that arena, where we could effectively train the AI on what things are benign and what things are good, it could make far more intelligent assumptions, if you will, on how to categorize those, and maybe even, once it gets to a certain point, be able to take the first crack at categorizing those things.
I’d still want a human to be going in, reviewing it, and making sure that dumb decisions weren’t being made for some reason. But with the appropriate mix of oversight, we could very quickly make a paradigm shift in the pattern recognition arena.
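That human-teaching loop — label some findings, let the machine take the first crack, and punt anything it is unsure about back to a person — can be sketched very simply. The labels, sample findings, and token-counting model below are all illustrative assumptions, not any vendor’s actual classifier:

```python
# Hedged sketch of human-in-the-loop alert triage: humans label example
# findings, a naive token-frequency model scores new ones, and anything
# low-confidence is routed to "investigate" for human review.
from collections import Counter
import math

class TriageModel:
    LABELS = ("alert", "benign", "investigate")

    def __init__(self):
        self.token_counts = {l: Counter() for l in self.LABELS}
        self.label_counts = Counter()

    def train(self, finding: str, label: str):
        """The human teaching step: record which tokens appear under which label."""
        self.label_counts[label] += 1
        self.token_counts[label].update(finding.lower().split())

    def classify(self, finding: str, min_margin=1.0):
        """Naive-Bayes-style scoring with add-one smoothing; abstain when unsure."""
        scores = {}
        total = sum(self.label_counts.values())
        for label in self.LABELS:
            if self.label_counts[label] == 0:
                continue  # never trained on this label
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.token_counts[label].values()) + 1
            for tok in finding.lower().split():
                score += math.log((self.token_counts[label][tok] + 1) / denom)
            scores[label] = score
        ranked = sorted(scores, key=scores.get, reverse=True)
        if len(ranked) < 2 or scores[ranked[0]] - scores[ranked[1]] < min_margin:
            return "investigate"  # low confidence: send it to a human
        return ranked[0]
```

The key design choice is the abstain path: rather than forcing every finding into alert-worthy or benign, a close call defaults to human review, which is the oversight mix described above.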

You look at things like handling incident requests. Right now, typically, what happens is, hey, we have to declare an incident. Then, depending on the organization, I send an email here, I call this hotline, whatever; bottom line, there’s a hey-I’ve-got-an-incident notification, and typically it’s human beings that go in and start to handle those incidents. Well, if we integrated smart AI-based chat capabilities, it may be that we can leverage the AI to respond to the initial incident requests: determining what type of incident it is, apportioning immediate direction and guidance to the reporter, and knowing when and how we need to escalate based on what’s found. So that could be an area where AI could in many ways help with the automation and handling of that initial incident alert. Then there’s monitoring security videos. More and more people are moving into the cloud, but there’s some contingent of us that have headquarters offices where we still have cameras; certainly the folks running data centers are going to have those, and when it comes to things like schools, hospitals, etc., there are always gonna be security cameras around. But imagine that now I’ve got AI trained to the point where it can take the visuals coming through that security camera feed and tell the difference between a mouse running down the hallway and a brick thrown through a window, or between someone walking down the hallway versus carrying an AR-15 down the hallway. So you’ve got a lot of interesting uses of integrated capability for alerting off of video monitoring. Which actually would be fantastic, because I can’t wait until we can train AI how to effectively kill all email spam. Another one I love is using AI chatbots with fake voices.
I just wanna set them up as honeypots for those damn robocall morons so that we can effectively just absolutely waste their time, continuously passing them around, transferring them to other people with different voices. Oh my God, that would be amazing.
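The incident-intake flow described above — classify the report, hand the reporter immediate guidance, decide whether to escalate to a human — can be outlined in a few lines. The incident types, trigger keywords, and guidance strings below are hypothetical placeholders for illustration only:

```python
# Hedged sketch of AI-assisted incident intake: classify the report,
# return first-response guidance, and flag whether to escalate to a human.
# Rules, keywords, and guidance text are illustrative assumptions.
INCIDENT_RULES = [
    # (incident_type, trigger keywords, initial guidance, escalate?)
    ("phishing", {"phish", "suspicious email", "link"},
     "Do not click anything further; forward the email to the security team.", False),
    ("malware", {"ransomware", "virus", "encrypted"},
     "Disconnect the machine from the network immediately.", True),
    ("data_breach", {"leak", "exposed", "breach"},
     "Preserve logs and do not modify the affected system.", True),
]

def triage_incident(report: str) -> dict:
    text = report.lower()
    for incident_type, keywords, guidance, escalate in INCIDENT_RULES:
        if any(k in text for k in keywords):
            return {"type": incident_type, "guidance": guidance, "escalate": escalate}
    # Anything unrecognized goes to a human rather than being auto-closed.
    return {"type": "unknown",
            "guidance": "A responder will follow up shortly.",
            "escalate": True}
```

A real deployment would swap the keyword rules for a trained language model, but the shape stays the same: immediate guidance for the reporter, and a conservative default of escalating whatever the system can’t confidently categorize.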

I would love that. Hit the folks with some parting thoughts and shots this week, Adam. Sure thing. Cybersecurity is certainly a ripe candidate for inclusion of AI, but only when it makes sense. We talked about some of the things they could do with it, and probably relatively soon, that list of stuff we were just talking about is where AI could start to come into play. There are a lot of functions within the security space, and countless rules governing oversight, that need reliable protection. But you definitely don’t want to be rolling the dice with your security and compliance, especially in the early phases of AI. I think there are going to be a ton of people saying, hey, just come over here and spend your money and use our AI. AI is going to turn into this buzzword, AKA bullshit, that people are throwing around, and I think we’re going to see that BS happening with much greater frequency as we go through this trip.

But really, it’s on us as security and compliance professionals to see through the smoke-and-mirrors horse poop that’s being slung, and make intelligent decisions around where the right opportunity is to integrate AI into what we’re doing. As AI comes into its own, the cybersecurity world is definitely going to get darker, and at the same time it’s going to get a lot brighter. So in many ways, the arms race has already begun.

And that right there, that’s the good stuff. Well, that’s all the time we have for this episode of Compliance Unfiltered. I’m Todd Coshow. And I’m Adam Goslin. Hope we helped to get you fired up to make your compliance suck less.
