This webinar explores the security risks that come with AI adoption in businesses and how to protect sensitive data. The discussion covers the benefits of AI across different departments like customer support, sales, and IT, while highlighting five major AI security risks: prompt injection, model theft, data poisoning, internal threats, and data exposure. The speakers emphasize that once AI has access to data, it becomes difficult to control what happens to that information. They present real-world examples of AI-related data breaches and discuss how companies are struggling to balance AI adoption with data security. The session introduces Secuvy’s platform as a solution for discovering, classifying, and protecting data across enterprise environments, providing visibility and governance controls to prevent data leakage while enabling safe AI deployment. The webinar concludes with specific buyer triggers that indicate when organizations need better data protection solutions.
Video Transcript
Transcript is auto-generated.
We are focusing on AI. AI is everywhere, and so is the risk that comes with it. Your customers are adopting AI tools quickly, but most don’t see what’s happening behind the scenes with their data. So to help us break this down, I’m excited to welcome Sumera Riaz, our VP of cybersecurity, and the security team, Sirena Ong, VP of sales and alliances, and Alex Au, our senior product manager. So welcome, everyone. Thank you so much for being here.
I know that a lot of folks are interested in this. We talked about it a little bit last week and about AI, the risks that are involved. How about we just kind of start from the beginning and just start from the very beginning? Explain to us what is AI.
Awesome. Thanks, Cass. We’ll just dive right in. Can we get the agenda going?
Awesome. So this, this segment today is called who let the bots out. So as Cass explained, it’s just a quick look at AI meeting security. It’s AI is a great tool.
Right? Adopting being adopted companies all around the world. It scales a company as far as revenue goes. It can process data at light year speed.
And along with the good, it does bring a bit of a risk with it. So today with Sirena and Alex, we’re gonna dive into the risk of that AI brings to a company, how do we resolve it, and the great solutions Secuvy have that can help your clients today. So with that, Sirena, let’s get started. You know, we’re just kinda take a step back, zoom out for a little bit. Tell us what is AI and just break it down in simple terms for us. When we when people say AI, what does it mean exactly?
Essentially, AI is, is is basically technology that helps computers or machines learn and think like people do.
Can you hear me okay? Yeah. So that they they can understand things, they can make decisions, and they can they get better as they learn more based on data. And we’re seeing examples of AI everywhere, and it I mean, there’s no shortage of headlines around AI. But, you know, that’s essentially what AI is doing. It’s widely adopted whether come we as, you know, we as consumers are using AI through applications and services and through our vendors that we work with or as companies adopting AI to drive efficiency and productivity, it’s all about, you know, making these machines learn just the way we would.
Exactly. Yep. Okay. So if we look at AI and can you tell us, like, some use cases that AI brings to us?
I think for that, we can go to slide number two.
Yeah.
Yep. How are we with AI today?
Yeah. So, I mean, all of you have customers out there across different verticals and industries.
And I think for the most part, what we hear most is around what is the efficiency gain? What are the cost benefits? What are the ways in which a business can adopt AI to really drive value for their organizations, but also for their customers more importantly. So it’s really redefined in many ways how different departments, or groups within an organization operate and the things that they can get done more quickly to ultimately reduce cost, drive better customer experiences, and whatnot.
So what you see on the slide here are just some different examples around how customer support, IT, business users, finance sales, these are all ways in which, AI is actually benefiting those organizations, like customer support, for example, being able to accelerate, ticket triage and, service rep you know, issue resolutions, through the chatbots and for handling some of those easy to solve tier one tickets that before they get escalated. Sales, dear near and dear to all of our hearts, I think. Being able to help us understand from a lead scoring or prioritization perspective, like, which are the opportunities that are most likely to close?
Which ones have the prior highest alignment to what the customer needs are to what we provide as service providers? And, you know, I don’t know about everyone else, but I’m not a huge fan of doing CRM updates. So as much as AI can help me automate and fill in the the gaps around notes and, you know, fill in the details around every opportunity, I love it. You know?
Yep. So lots of opportunities.
Yeah. Right. So I’ll throw a question out there for everybody who’s listening in today. If you guys saw, any of you guys caught the Super Bowl a couple of weekends ago, and, you know, I know you’re muted, but I can hear the ohs and the ahs, and the painful noises coming from your coming from your minds and hearts.
So as you were watching the Super Bowl, if you noticed, there was a commercial there for Claude. If anybody caught that, they they showed a coach and all these, you know, football players that were trying out. And the coach used Claw dot ai to basically determine, like, which which player they were gonna they’re gonna recruit. And I thought that’s so interesting.
I want I want you I wanna hear from you guys what you think about that basing recruiting a football player to a team based on the predict the predictions from an AI. Like, looking at the football player stats and predicting the future how they’re gonna perform, Do you do you think about that? Do you think that’s a good way to, like I guess it’s not really good or bad. It’s it’s genius in a way, but it also reminds me of that old movie minority report in a way. So wanna hear your wanna hear everybody’s thoughts on that.
And if you didn’t catch it, you might wanna Google that commercial. It was so interesting.
So hopping back onto the how your customers are using AI. It that’s so fascinating. And, you know, in our in our world today, we have clients already that are that are accessing customer support, IT ops. We’ve got great solutions in our ecosystem that our advisers are pulling in for their clients already.
And I love seeing this. This is amazing. And the seventy seven percent stat of reducing AI risk is huge. Right?
I mean, that’s that’s just that’s just gold. We get time back. So it’s amazing. Now this is beautiful.
Now what risks does it bring in when we do deploy AI?
Yeah. I mean, there are a lot of and if we can move to the next slide to talk about risks. And I think I’m looking at the chat in parallel, and I think everyone’s right. I mean, there’s a lot of the reality is there’s a lot of benefits to AI, but it doesn’t replace people. And it doesn’t replace those final decisions that humans need to make. I think the goal is really to, you know, try to reduce the minutiae and the manual things that no one really loves to do, but really be able to focus on the high value work. So kinda like what you’re talking about, Sumera, in terms of recruiting and coaching, I think Billy Bean years ago with the Oakland A’s kinda proved, you know, that it’s not just the stats, but it is the stats do help, you know, when you’re looking at a different way.
And and AI is just kind of accelerating all of that analysis in some ways, but at the same to drive the efficiency and and the visibility that we need
To make decisions. But at the same time, there’s a lot of unknown risks that people don’t realize is actually happening because once AI has access to data so when your customers are deploying ChatGPT or enterprise Gena applications or building their own private LLMs, you know, the AI needs data.
It needs to be fed data to learn and to improve and to to drive better outcomes. But when we’re talking about risks, there’s a lot of risks that maybe aren’t as obvious, unless you really get into it or until actually something happens. So these five are sort of the five different categorical areas of risk that, most organizations and CSOs are thinking about today when they’re thinking about deploying AI in any shape or form.
Sorry. Oh, my screen changed. Starting with prompt injection. I tried to put we we tried to put something together that hopefully makes it a little bit easier moving beyond all the technical jargon. But prompt injection is when someone an attacker actually goes in and tries to give it malicious instructions. They’re trying to drive a a different outcome, that is actually bad for the company. So, like, us being able to go and tell a cashier that your manager said it was okay to give me this product for free.
Probably not true. Yeah. That’s not the case. But think about that at scale when we’re thinking about AI and how it’s being deployed.
Model theft I’m gonna get back to data exposure. But model theft, they’re not looking at stealing the code. It’s kinda like they’re playing with your model and replaying your model and re re you know, repeatedly, repeatedly prompting the model to try to figure out what how it actually works. So they could reverse engineer, or they can go and find those points of weakness.
Exactly.
Poisoning is let me feed it some you know, it’s kinda like getting an an an injection, you know, of bad things into your body that suddenly wreak havoc. You’re poisoning they’re poisoning the data because the again, the AI is learning off the data that it sees and that it’s connected to to help improve and and find predictable outcomes, etcetera, and kinda learn from context, etcetera. So if you give it bad data, it’s like garbage in, garbage out. Yeah. Yeah.
I’m sure you it.
It’s kind of what Sam said in our last week’s Tuesday call. If you didn’t catch that, I’d encourage you to go listen to that one. It’s Sam, Jason Kaufman, Dan, you know, just a great conversation around around all of this. So and she said it in last week’s call too.
It’s junk in, junk out is basically how that works. But on purpose, internal threat actors can go in. Employees who actually work for the company, they can actually go in. You know, the ones that are maybe disgruntled employees, I’ve seen that happen many times, when they can go in and create create risk that that sometimes we’re invisible to when we’re within a company.
So very interesting.
Yep. And it’s not just data, though. Right? Risk, Sirena? So data is one aspect of the environment, but then we have an entire telemetry that’s built off of orchestration, visibility, identity, and that whole layer when that introduces an an additional risk that AI brings.
Because now when we deploy agentic AI across our whole environment or even bots to do certain repetitive work, that those can be taken over. Right? Those can be less likely as human identities. But if those if those bots fall into the wrong hands or is are hacked into, it’ll be very hard to tell within a company where they’re moving because they move differently than than humans do.
Right? Yep. Yep. It’s what I find interesting is that our whole security solutions, any solutions you can think of that exist in security today, they were built based off of learning human behavior.
Right? So that’s how security solutions work. They pick up on human behavior that’s an anomaly to flag it as an alert or flag it as some, hey. This is not how this person usually works. They’re going outside outside the realm of what they usually do, and that creates an anomaly right away. But with AI bots, it’s a different it’s a machine versus a human. So I think our security solutions are catching up to the reality we’re in today.
Yeah. Absolutely.
So if we go to the next slide, please.
Oh, well, before we go there, can we go back one more point?
The model integrity, but data exposure is, like, the biggest I I mean, ultimately, what everybody’s after when they’re doing these running these different, attempts to compromise your AI or your, AI environment or whatnot is they’re trying to get to the data. And the risk with AI also is, like I said before, once your GenAI application like, once ChatGPT, your private LLMs are trained on data or they have access to see specific data. It’s kinda like once it’s there, it’s there. You know what I mean? And it’s almost like the example here, customer service actually reading out all of your information publicly, where you can’t pull back that information. Once it’s out, it’s out.
And that exposure for most companies when we’re talking about it from a compliance perspective, regulatory, brand, and customer trust perspective is huge.
And, you know, that’s one of the big ones I wanna highlight because it’s gonna tee up where Secuvy can actually help as well. But next slide.
And these are some examples. I mean, I think we read about data breaches. We receive letters at home where our comp our information has been compromised in different ways. But now we’re moving into an age where AI is sort of scaling and accelerating the the the impact, I guess, of all these data breaches. Because a lot of things are running on AI or we’re interfacing with AI, and we’re finding incidents where now the bad actors are using AI to get through the AI or through your environment to get to your data.
So, like, in the Middle East I actually was just in Dubai, a few weeks ago, and your Emirates ID as an Emirati or even as a visitor, your passport information, You know, every time you walk into a building, you’re showing your information. You have to check-in and so it’s really critical. It’s part of everyone’s identity to buy a house, to buy to rent a, you know, to buy property, to rent a home.
Anything that you wanna do out in the UAE, for example, it depends on having that national ID. So imagine having your information compromised through an AI powered cell, call center where ten million conversations, not even just your identity information, but your identity information is attached to the conversations you’re actually having with that call center.
So, like, when we’re talking about data exposure and compromise, it’s not just you know, for us here in the United States, our first name, last name, phone number, Social Security number, but now it’s also the context of who we’re engaging with, what we’re engaging about. You know, when I’m talking to a call center, they now know what devices I’m using, what systems or applications I’m accessing, etcetera. I mean, the the scale at which, you know, our information is at risk gets bigger and bigger when you start thinking about it from an AI perspective.
So true. Yeah. It’s it’s interesting. Right? It’s like the best of times and the worst of times because just this sheer exposure that’s creating these these breaches, it’s it’s exponential. I’m curious to see by the end of this year how the the the amount of breaches that were just caused by AI from last year was at forty six percent. I’m very curious to see if that number is gonna be double or triple this year because of the rapid use of AI and attacks.
So this is great. Yeah. For the sake of time, should we move to the next slide?
Yeah. Let’s do it. Yeah.
So few examples yeah.
Go ahead. No. Go right ahead.
No. I mean, these are just some of the common examples that you that, you know, are are happening today.
Data leakage is a huge thing. Maybe with your customers, you hear things you hear a phrase called, DLP or data loss protection.
Or when your customers they might not be calling out data security, but they’re thinking about you know, I’ve run into a lot of CSOs that have tried to, hold back their organizations from adopting AI primarily because they’re worried about what what about their data. Like They’re worried about what type of exposure and leakage they might experience once they actually start deploying AI, and it’s a very real concern when we’re talking about loss of intellectual property. If you’re dealing with customers that are in manufacturing or they’re that have a lot of proprietary information, that their brand depends on or that they provide as a service for their customers.
Any loss or compromise of intellectual property could be Loss of revenue.
Right? Loss of revenue. Loss of competitive advantage, confidentiality, contractual legal violations.
AI tools are retaining and sharing enterprise conversations like that last example we just talked about in the call center. Yeah.
Third party AI vendor breaches. So a lot of these and and and a lot of it is due to the lack of guardrails, number four. Yeah. The reality is, like, AI has so many benefits and so many efficiency advantages that it brings, but it does depend on learning, and it needs to learn off of data.
And it’s kinda like one of those things, like, once it knows about the data, it’s always gonna be there. So then what you need to do is, okay. We’re we’re going to acknowledge and accept the fact that the business finds a ton tons of value. So now from a security perspective or a compliance perspective, how do we make sure that we can control the risk?
Yeah.
Right? And, you know, any how do we control the risk in such a way that we can prevent that data leakage that we can ensure that we have the right that’s only accessible Yeah. By the right people.
Exactly. Yeah. Putting the right controls over it. That’s I think that’s kinda where it all starts, and somebody mentioned in the chat beautifully right now too. It’s basically, it’s the it’s the permissions that you allow AI to have within a company, and that’s kind of you know, it’s it’s that’s the crux.
Yep. Yeah. Like, most of the CCaaS I’ve talked to, they were like when they were trying to deny access to, like, ChatGPT or Gemini or whatnot, they found a lot of their users were using their own personal accounts And work around it or Yeah. You know, and still sharing. So then the concern was like, well, if I don’t give them a safe environment to utilize it, then I’m still at risk because my data is still getting out there.
Yes. Exactly. So it’s kind of a twofold approach. It’s one, deploying an a private LLM within your company for employees to to utilize.
And then on the second on the second fold is classifying and protecting your data so that it’s there’s the chance of leakage is minimum.
Right. Yeah.
And I know can help us with both. So if I don’t wanna I don’t wanna steal your thunder.
So It’s all good.
Let’s go to the next slide. I forget where we are. Okay. Yeah. This is teeing up Secuvy now.
Look at that. So I I saw a question that just came up around, you know, should a company have to create silos? Oh, I think of Yvette. You just asked a question about should all companies create silos before adopting AI?
What we what we’re trying to do here is not force companies to have to reengineer, rebuild their environment to safely adopt AI. You know? What we’re really excited here about Secuvy is, you know, what we’re able to do is actually provide companies with the visibility. We kinda walk them through this life cycle.
Right? It’s about visibility, governance, and enablement, essentially. And from a visibility perspective, all the things that we talked about where it came to intellectual property, design information, proprietary information, you know, PII data, PHI data, it it’s all out there. Right?
Whether it’s in their file shares, whether it’s in databases, CRMs, applications, snowflakes, Databricks, etcetera.
The data’s out there, so we’re not going to force we don’t need to force customers to have to build these silos, but rather enable them to get the visibility they actually need to understand where does the data live across their environment. And that’s sort of a core component of what Secuvy helps organizations do is really be able to discover and classify based on context where is where are my where is my IP? How is it being moved, copied, and shared? Whether it be through email, whether it be you know, like, everyone has a copy of, you know, whatever it is, but they might have it in their personal drive. They might have it in the shared drives. They might be sharing it in the CRM. There may be notes and context that exist in different places, in Slack, or commune you know, different communicating applications.
But all of this data is all out there, so let’s go and find it. Let’s understand and inventory it. Let’s tag it appropriately, but let’s also build the data associations around it so we understand the relationship between the data across the environment so that now we can move into stage two, which is governance. Being able to get the right like, based on what I know now, I now understand what are the right policies.
How do I wanna govern and control access to the data?
How do I wanna govern where this data can move and how it can potentially or not potentially be shared with third parties? And then enable the organization to operationalize the actual protection of that information. Yeah. And so what you’re seeing on the left side and the bottom column on the side is really some of that core capability that Secuvi’s platform can offer to help ensure that customers can reduce the manual burden and effort around protecting their information.
But then as they adopt AI, whether it is chat GPT or their private LLMs, they start rolling out their own models that they do have the right guardrails in place to prevent the data leakage that we talked about earlier.
We can also you know, if we’re talking about, like, someone building their own models and wanting to train their models or train their app AI applications. You know, there are things that we can do in the preparation stage to scrub the data, do link any, identity information from the from the data. We can generate synthetic data.
We can mask and hash the information and anonymize the information so that the data is still usable, but it doesn’t violate any legal obligations.
But it also doesn’t put the organization at risk even at that stage of the, you know, adoption of AI. Then when we actually move into the deployment, we can actually put the guardrails in place and ensure that, you know, we minimize any risks around data leakage, mishandling of data, people who aren’t supposed to see the data that they don’t see the data.
Like, the common example that everyone uses is, you know, who wants to know how much Sumera makes? Anything. Can I go to my chat GBT and actually ask for Sumera Riaz’s, salary information and private information? Technically, none of us should have access to that. Maybe HR or management at best. But if I were to go and ask for it, technically, the prompt should come back and say, you’re not authorized to see that, or it should not answer the question and explain why.
That was that’s actually happened to me, miss Sirena. So a couple years ago when I was a CISO, copilot, and it had just come out, and our CEO just wanted to deploy it without really understanding the risk of it. And we did deploy it without my approval, obviously. And we had this became a legal conversation because one of our CSMs went into Copilot and said, how much does the CEO make for the year? What are the open legal cases we have against this company?
And they were able to tap into legal data, HR data, and so it was you know, it became It’s one of those things that you can’t pull back. It’s, like, kinda like once it’s out there
Then it’s out there. And now it’s just like, how do we prevent that from happening going forward? And let’s learn that lesson that others are experiencing to ensure that that doesn’t happen to us. But, yeah, I can imagine, Sumera, like, how stressful that must have been. Yeah.
And somebody actually went and looked me up. Thanks a lot, Zachary, for throwing me out there, buddy.
And for those of you who know me on the call, you know I like to fly under the radar.
Like, I my personal IPs are always masked and, you know, I just have a whole lot of social media presence.
Yeah. No. I just meant, like, each of us at our own companies may wanna know our CEO’s information, or we may wanna know proprietary information about our customers that maybe we’re not entitled to see. Right?
Like, maybe I’m not supposed to see all the credit card numbers of my customers. Maybe I’m not supposed to see, you know, contract details, the, the actual contract value amounts and things like that because I’m I’m on customer service. So there’s I have no reason to see some of that information. As a company, you can make some of those decisions to decide who can actually see what and when.
And, you know, rather than just making the data available, it’s all about putting those guardrails in place to prevent, you know, the data ending up in the wrong hands.
And not to toot your horn, but I’m gonna Tukui is one of those solutions, guys. Like, what I I go in to talk to CSOs all the time. I’m on calls with you guys, with your clients almost every day. And even though they say, hey.
We’re we’re good for security. We don’t need anything for security right now. But I bet you anything. In all those calls, data is something that they don’t really have any control over.
They want to. They know it’s a risk, but there’s just not enough tools out there to help them with it until now. So Sucubi, if I were to go back and be a CISO again, that Sucubi is something I would personally deploy in my environment because it sits right on top of your SOC. It’s gonna classify the it’s gonna organize your data, classify it, it’s gonna tag it, and then I can go on top of it and deploy my identity access solution.
I can deploy my privilege access solution on top of it, and it’s seamless.
And it it not only, you know, not only tags my data, it also prevents ShadowAI from happening with their new solution that’s come out in January that that protects all their data even if there’s an employee using ShadowAI, ChatGPT, Claude, what have you.
They there’s a firewall that’s gonna block them from putting in from getting the sensitive PII data from egressing out of the environment. So it’s I just personally, I love this solution. It’s beautiful.
So go ahead, Sirena.
I’ll let you take it from here.
Thank you for that. Sure.
Yeah. So, I mean, you know, we’re just trying to ease I mean, the the thing is there’s a lot of discussion around AI governance out there. There are different tools and platforms that are offering a methodology around securing AI and the use of AI. So I you know, we’re very we take a an approach, a more data centric approach in in terms of you need to kind of know what you need to protect.
If you don’t have the visibility around what you need to protect Then it’s really hard to govern and apply policies around it and to apply those guardrails. And so we’re kinda taking that bottoms up view. Because then everything else like, if you’re you’re protecting the data, then, you know, then we’re minimizing the risk through some of these other methods as well.
Precisely. Alright. We’ve got a five minute or a three minute warning.
So We can print out two slides maybe to the things or what to listen for.
Yep. Let’s go to slide fifteen.
Yeah.
So these are triggers, guys. If you’ve gone through my security content at all in any of the webinars I’ve done, these what we call buyer triggers. If you hear one of these triggers, you’ve got a deal. So, Sumera, walk us through these.
Yeah. I mean, this is very focused on, so the first one is really I don’t know where our sensitive data is, and you need to better enforce our policies to restrict access and protect it. They may not say it in those words, but I think it starts with not really knowing where their data is.
And if they have any type of reg if they’re in a regulated industry, health care, biotech, if they’re a manufacturing company that or, you know, if they’re a prime or subprime contractor to the the, the Pentagon, then Yeah. You know, they’re and then or if they have consumer information, they have to adhere to privacy regulations. Everyone’s regulated in some way, potentially some more severe than others. But it really if they’re they’re struggling with understanding where that data is, then there’s an opportunity here with Secuvy.
Exactly. Yeah. I know that. We’re rolling out ChatGPT, but I’m concerned about what data it’s accessing.
So it kinda goes back to what we were talking about up until now in terms of once the once AI has access to the data, you know, if someone’s trying to upload sensitive data they’re not supposed to or Chattypedia has has access to the Google Drives and SharePoints and Salesforce environments within your environment. Once it’s connected, it has sees the data, it’s really hard to claw it all back.
And so, you know, that’s something that CSOs are worried about.
So if they’re rolling out Gen AI, if they haven’t put any guardrails or security controls in place, that’s something that we can definitely have a conversation around. And then same it kinda goes down the way. I know we’re running out of time because I see Cassandra appearing on the video. But, like, you you know, using customer data to train the models, again, we can help with the data preparation and whatnot.
The business wants to use AI, but I’m worried about the exposure of our data. I need better controls and policies to ensure our company is safe. Those are all themes, I guess, you can listen for with your customers. This is very specific to AI, but even without the use of AI, it’s a similar concern around data security in general with or without AI.
A lot of companies are struggling. Their data breaches are on the rise. Regulations are on the rise, especially if you’re operating globally for customers, but also even here state to state, especially in California, for example. Yeah.
So, you know, it all comes down to if they need to protect their data, if they’re wondering or even if they’re not wondering, the question maybe to ask them, what have you what are you doing around protecting the data?
Because there are a lot of assumptions that they’ve made investments already, in DLP or in the firewall or other solutions that actually aren’t enough to protect their information.
Sweet. Thanks thanks, Sirena. Thank you, Tsukuba team, joining us today.
Thank you.
Have a great discussion.
Just, you know, it’s just short and sweet and just, really speaks to the heart of where the market is going. So thank you for both your time today, you and Alex.
And if anybody has any questions, or needs follow-up or would love a demo of our platform, please reach out to Sumera, Trevor, or the Telarus team, and they know where to find us, and and happy to set up time.
Thank you. Thank you, guys.
Appreciate it. And yeah.
No. Thank you guys so much for joining us. There were a lot of questions I saw in chat, so, hopefully, we’ll be able to get some info to you from those folks who who reached out because I know there are a lot of questions. I’m sure that, you know, for customers who are overwhelmed by the data sprawl, like, what is the first or the simplest first step to take with them is probably a big question a lot of our advisers have.
So, yeah, let’s let’s definitely make sure that we get you guys in touch with them, and thank you all again for being on the call. It was a very informative call and just fantastic.