In this HITT training, the focus is on enhancing cybersecurity awareness as part of Cybersecurity Awareness Month. Stefanie Cortez, VP of IT at Telarus, emphasizes the importance of foundational controls and the human element in security breaches, which still accounts for nearly seventy percent of incidents. The discussion highlights the dual nature of AI in security, urging organizations to use it responsibly while maintaining robust policies and training. A culture of security awareness is essential, with constant reminders, monitoring, and incident response plans crucial for effective defense against modern threats. The session concludes with a call to validate AI tools and ensure data governance, reinforcing that users must remain vigilant and educated.
Transcript is auto-generated.
Today’s high intensity tech training begins now. We are continuing cybersecurity awareness month as we take a look at securing the future, the strategies your clients must use to protect their organizations in the age of AI and these modern threats. AI, of course, is everywhere, but its usage must be secured. Human error must be reduced.
Defenses must increase against phishing and deepfakes. Foundational controls such as encryption, patching, and backups must be in place, and zero trust principles have to be adopted. Today, we'll learn actionable strategies that you can provide your clients, whether in IT, security, or enablement, to protect their organizations from today's sophisticated attacks. Of course, your comments and questions are welcome in the chat window, to which our presenters today will respond both during and after today's event.
Today, we welcome to the Tuesday call one of our favorites, Telarus VP of information technology, Stefanie Cortez. Great to see you, Stef. Thank you so much for being here today. How are you?
Good. Thanks for having me, Doug.
Oh, you bet. Such an exciting topic today. We appreciate you taking this on, and, let us know, how we can help along the way, but I’ve got my Crayolas and cardboard here at the ready. Go right ahead.
Sounds great.
I think, Chandler, you have the slides there? Perfect. Alright. So like Doug said, I'm Stefanie Cortez, VP of IT at Telarus. And I always joke that the security and compliance team doesn't necessarily light up a room when we walk in.
So I appreciate you being here.
Not true.
Thanks for sticking with me today. Compliance walks in and people are like, oh, no. So as you can probably tell from the wordy title, technologists, like, we're not always great marketers. So the bottom line of all these words is I'm here to talk about five things to focus on this month for cybersecurity. And I'll just say, nothing I'm gonna say is probably brand new to all of you. Use it more like a gut check for yourself, for your organization, and for your customers.
So next slide.
So just, you know, before we get into it, note from our legal team. The presentation is informational and educational only. It provides general guidance, not legal advice.
You see these a lot in, like, Salesforce slides, so just same same kind of disclaimer here.
Alright.
So the new reality of the security landscape.
The modern security landscape is changing rapidly, which again is something we all know.
But AI has made it easier than ever to gather data about a person or a company for targeted attacks. What used to take hours of someone trying to, like, comb through sites or writing bots can now be done with a deep search and automation in AI. So think about capturing things from, like, LinkedIn, Facebook. All this personal information about people can be gathered with just one quick search.
And even with all these AI tools, humans still remain the weakest link within the security organization. The Verizon Data Breach Investigations Report shows that nearly seventy percent of breaches still involve a human element. And often, that still happens from, like, a single click or a misconfiguration on a server, something human caused. And in our industry, we're in a really unique position: we handle so many different people's personal data, and we have to protect that data.
We have to show good stewardship of that data, be good stewards of it. And protecting that data builds trust, and trust builds reputation. So let's dive into these five areas that we're just gonna do that gut check on this month. So AI is really that double edged sword, and there's no bigger buzzword.
Right? Everybody is saying AI. Every CEO wants you to figure out how to use AI.
But, like, what does it mean, and how do you use it responsibly?
So when you use AI irresponsibly, it helps bad actors move faster. And when used with governance, it can be a really powerful productivity tool. So a couple of key reminders: anything that you put in public AI is now public. That's how I want everybody to consider any public AI tool.
Don’t share anything in there that you wouldn’t want posted online. So that means that we all need to set and train on our AI policies. So the average user, I really just think, wants to work smarter. Right? They don’t want the risk.
They just wanna be told, like, here’s my sandbox. Like, here’s what I can play in, how can I use it to just, you know, automate my tasks or make things easier? And then you’re gonna have those troublesome power users that just wanna try everything.
For those, you really need to, like, lean on your policies, lean on your training, and give them a compliant sandbox to explore in.
It’s really easy to set up private enterprise AI models, and really, I would just recommend that people do that before putting any corporate data into any tools.
At Telarus, we're taking the enable and educate approach, and that's kind of what I've called the leverage of our policy. And we're building all these different modules to define who our AI coworker is. So, like, what attributes matter to us? What are the morals of AI? How can we use it to challenge our thinking? Where can it push us further?
I don’t want AI to make decisions for me. I want it to help educate my decisions. So, like, I personally use a digital twin, and it’s loaded with a bunch of context about my role. It challenges me.
It asks me all these clarifying questions. And you know what? That really frustrates the heck out of me sometimes. Because I’m like, just give me the answer.
But the fact that it frustrates me, it it challenges my thinking. It means it’s doing what I need it to do. It’s like me fully rested, fully caffeinated, and it’s on twenty four seven. But it is compliant, and it is smart, and it is helping me do better.
It’s not replacing who I am. So just as a reminder, policies, training, and enablement will only work if you validate and monitor them.
Like, anything AI right now is not a set-it-and-forget-it approach. Things are moving so fast. We have to build a regular cadence to review and adjust as we move forward.
And, of course, number two, so users will continue to be our biggest risk.
We can train all day long, and we can send out all the phishing tests. But at the end of the day, users will sometimes still click.
So at Telarus, we block thousands of emails a day. But if a known good sender gets compromised, those messages can sometimes slip through. Users have to slow down. They have to question things that feel off. They have to be able to report things immediately and know who to report them to.
I personally love that we're creating a culture of security awareness at Telarus. So, like, one night, an employee was out walking her dog, and she walked by an office building, not a Telarus office, and she saw someone in the office had left their computer unlocked. She didn't ignore it. She actually, like, snapped a picture.
She put it in the company chat. You know, we're all kinda joking about it. But, like, that's the awareness I love. Right?
She saw that. She's like, that's wrong. Like, Stefanie would be so disappointed, and we can all, you know, make a joke about it. But, like, that's gonna help keep us strong, and it really does take constant reminders, constant training for the end users to keep that culture of security awareness top of mind.
And, of course, at the end of the day, like, things still will happen.
Monitoring and automation are so important. I would also say leverage that automatic remediation wherever humanly possible, because I don't want that ten PM, like, Friday night alert where I'm, like, panicking trying to get to a computer. I don't wanna be connected to my phone all the time.
Like, as an example, if somebody at Telarus has impossible travel detected, or we detect some kind of leaked credentials, we have automated remediation: the next time they try to sign in, they're gonna be prompted to change their password, and they're gonna be prompted with actually two levels of two-factor authentication. It's just gonna reduce that panic alert, and it all comes down to monitoring and automation. And then the last thing I wanna put here is have an incident response plan.
Your organization, your customer's organization, everyone should have an incident response plan, and most importantly, test it. So I do mock exercises annually with my incident response plan. I don't wanna ever train somebody during a real incident. Like, should I get an incident, I don't wanna be walking them through what the process is. So train people so everybody knows what their roles are, how they can jump in, and how they can, you know, move that thing forward.
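The automated remediation described a moment ago, mapping a detected risk signal to predefined actions at the user's next sign-in, can be sketched roughly as follows. This is a toy illustration only; the signal names, action names, and alert shape are assumptions for the example, not any vendor's real API.

```python
# Sketch of rule-based automated remediation: each risk signal maps to
# actions queued for the affected user's next sign-in attempt, so nobody
# has to scramble for a laptop at ten PM on a Friday.
# All signal/action names below are hypothetical.

REMEDIATION_RULES = {
    "impossible_travel": ["force_password_change", "require_two_mfa_factors"],
    "leaked_credentials": ["force_password_change", "require_two_mfa_factors"],
}

def remediate(alert: dict) -> list[str]:
    """Return the remediation actions to queue for the user in this alert."""
    actions = REMEDIATION_RULES.get(alert["type"])
    if actions is None:
        # Unknown signal types still need a human in the loop.
        return ["notify_security_team"]
    return actions
```

The point of the sketch is the design choice: known signals get an automatic, pre-agreed response, and anything unrecognized escalates to a person rather than being silently ignored.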
Okay. So, new phishing tactics. Right? Again, you all know about these, but let’s just talk kind of, like, top of awareness.
So, it’s not just phishing anymore. Right? We have the QR code phishing. We have video and voice scams with AI that are just so impressive, and then we have the good old SMS phishing.
So with QR phishing, the biggest red flag is when somebody gets an email to scan a QR code to set up MFA. I see those come through for other organizations quite a bit.
That should immediately trigger a pause. Think. Ask somebody. Right? Then there's the AI generation of video and voice messages to make it appear like it's coming from a CEO or someone in the C-suite, calling you, sending you a video, or texting asking for gift cards. Right?
The big thing to educate people on is, like, does this context make sense? Like, my CEO is never gonna text me and ask me to buy gift cards. Right? Like, that just doesn’t make any sense for, like, what I do in the organization.
So I always train people. Like, it's so hard because we all wanna do more with less. We're all moving so fast. But we really have to do that pause, think, and ask.
And then at the end of the day, the best way to validate if something is, you know, phishing is to actually pick up the phone and call somebody. So, you know, I don't like it when people are like, oh, I replied to the email and asked if it was them. That's probably not the best way to do it.
So call somebody on a good known number, right, and confirm, like, hey. I got this weird thing from you. Like, did this come from you? Ninety nine percent of the time, it’s going to be no.
So, of course, like, these scams rely on urgency. It's always that they want you to move fast. They don't want you to have that second to pause. And so slowing down is really your best defense, and training people is what this all comes down to.
And it really does have to be top-of-mind, frequent training for the end users so that they just have this awareness. Then the last thing that I see is that we have to watch out for MFA fatigue. We have MFA on everything. So, like, I don't think my coffee rewards app needs a second factor, but that's where we're at in the world.
So we have to stop and not just hit approve.
Really think, like, hey, did I prompt for this MFA? If not, deny it. So try to consolidate into single MFA tools or SSO where possible, and that's really just gonna help reduce that fatigue.
And then, of course, the fundamentals. They're not flashy. They're just essential. Keep up with patching, updates, and backups, and then most importantly, test the backups. A backup that you can't restore to the original source is just really useless. And I think that the testing of these things is where a lot of people fall down.
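One concrete way to act on "test the backups" is a scripted restore check: restore the backup into a scratch location, then compare file hashes against the source. A minimal sketch, assuming plain directories (a real backup tool would restore from its own archive format first):

```python
# Verify a restore by hashing every file in the source and the restored
# copy, then reporting anything missing or altered.
import hashlib
from pathlib import Path

def file_hashes(root: Path) -> dict[str, str]:
    """Map each relative file path under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify_restore(source: Path, restored: Path) -> list[str]:
    """Return relative paths that are missing or differ after the restore."""
    src, dst = file_hashes(source), file_hashes(restored)
    return sorted(k for k in src if dst.get(k) != src[k])
```

An empty list from `verify_restore` means every source file came back byte-for-byte; anything else is exactly the failure you want to discover in a drill, not during a real incident.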
Patch your routers and your firmware. Like, check your home router. See how many firmware updates you're behind on. And then I also say check how many connected devices you have on all the routers.
Enable MFA wherever you can.
Reduce access in your cloud apps. So, like, even quarterly, I recommend that everyone go into all of the various cloud applications that you use to run your business and check in on your access levels. Do the right people have the right roles? Did anybody leave the organization?
Or more importantly, did somebody change roles in the organization? Now maybe they have too much access. And then, of course, map your data.
Know where your data lives, how it’s connected, where it’s sent, how it’s sent. And then anytime somebody wants to use a new tool, review that new tool against your data map. It’s gonna make it really easy to see, like, hey. Where does this fit in?
Where is there any risk? And just remember, the basics can be boring sometimes. Right? But they are your best defense. And making sure we do all of these things allows you to do business faster on the newer tools in a compliant, secure way.
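The data-map review described above can be mechanized in a small way: keep a record of which systems already hold each data category, then check any proposed tool against it before data goes in. A toy sketch, where the category names, tool names, and sensitivity list are made-up examples:

```python
# A tiny data map: which systems already hold each category of data.
# Reviewing a new tool against it shows where exposure would grow.
# All names here are illustrative, not a real inventory.

DATA_MAP = {
    "customer_email": {"crm", "billing"},
    "payment_info": {"billing"},
    "employee_ssn": {"hr_system"},
}

SENSITIVE = {"payment_info", "employee_ssn"}

def review_new_tool(tool: str, categories: set[str]) -> dict:
    """Report which categories the proposed tool would newly expose."""
    return {
        "tool": tool,
        "new_exposure": sorted(
            c for c in categories if tool not in DATA_MAP.get(c, set())
        ),
        "sensitive": sorted(categories & SENSITIVE),
    }
```

Even a spreadsheet version of this map answers the two questions from the talk: where does the new tool fit in, and where is there any risk?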
And then on to the last step here, adopting a zero trust mindset. So, you know, even five years ago, we were talking about the network perimeter a lot. That's really gone. We work everywhere now. So identity has really become the new security boundary, in my opinion.
We always need to make sure we're applying that principle of least privilege; don't overextend access. And I look at that as protecting the organization and the end users. Like, my operations users don't need access to financial data. That's just not a risk that they need. So I'm gonna make sure I'm protecting the end users and the organization with that principle of least privilege.
And then this is probably the fifth time I've said it so far, but alert and monitor. So use real-time alerts and automated remediation wherever possible.
I always like to pull logs into a central source and flag for anomalies. You'd be amazed at what a lot of the suppliers can do nowadays.
So being able to pull in all my logging and just use AI to say, like, hey, this is unusual for this user type. That's gonna help you find things a lot quicker.
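As a toy illustration of flagging anomalies from centralized logs, here is a sketch that baselines the countries each user has signed in from and flags any first-time country. Real SIEM and UEBA products do far more than this, and the event field names are assumptions for the example:

```python
# Flag sign-in events from a country the user has never used before.
# Events are assumed ordered by time; field names are illustrative.
from collections import defaultdict

def flag_anomalies(events: list[dict]) -> list[dict]:
    """Return events where a user signs in from a new country."""
    seen: defaultdict[str, set] = defaultdict(set)
    flagged = []
    for e in events:
        # A user's very first event just seeds the baseline, no flag.
        if seen[e["user"]] and e["country"] not in seen[e["user"]]:
            flagged.append(e)
        seen[e["user"]].add(e["country"])
    return flagged
```

The same pattern extends to sign-in hours, device types, or application roles: build a per-user baseline, then surface departures from it for a human (or automated remediation) to act on.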
And then always just assume something will get compromised. I always say, like, trust nothing, and then, of course, slow down and reduce that risk.
So let's bring it all together. Obviously, the modern threats involve AI, people, and process. That really is, like, what I consider the security triangle. And when governed well, all three of these things can be combined to drive modern success. So leverage new tools wisely, validate and, of course, test them, like I said a few times, and then train and empower your end users. That's gonna keep your business and your customer's business running smoothly and allow you to respond to risk quickly, while also using these new AI tools and additional tools to run your business and get productivity out of them.
And then just to wrap it up, because it's Cybersecurity Awareness Month, here are your calls to action. Audit your AI tools.
I'm sure that you all use a lot of tooling. Customers use a lot of tooling. It's a really good time to audit them because they're changing so frequently. And what people are using today may not be what they're using even thirty days from now. So identify where sensitive information might be exposed, document what tools are being used, and then even more important, if a tool is abandoned, make sure you delete the data from that tool. People set up trials all the time and try a bunch of things, which is fine, but we wanna make sure that we're actually shutting down those accounts when not in use.
Of course, refresh security training, security awareness, top of mind, very important.
And make sure that it's reflective of modern tactics, not just stuff that was important a year ago.
Review the fundamentals, patching. Like I said, check the router in your home office even for firmware updates. Check that MFA is set up on all the tools that have sensitive data, and then just do that gut check of the zero trust mindset.
So like I said, it’s all foundational.
And, you know, hopefully, somebody found something valuable or just top of mind reminders for this month.
That’s it for me. Thank you.
Great presentation, Stef. We’re talking with Stef Cortez. She’s the VP of IT at Telarus and very heavily involved, of course, with our own use of AI and so many other tools that we’re using internally.
Stef, just a couple of questions, if you don’t mind, that our partners have brought up and that, I’ve made a few notes about here as well. I I know there are so many people that have questions about the difference between publicly available AI tools and resources versus those that are set up privately in an organization and the security concerns involved with each. We talked a little bit about private AI, models and tools at the start of this call. Wanna expand on that just a little bit as to what organizations should be looking for in terms of how they set up AI for their employees to use.
So my kind of new path going forward with AI tools is that to make them private, you have to pay for them. Right? So if an organization wants to lean into AI and try out these new tools, making them private is gonna have some kind of seat cost associated with it. At that point, then you'll get the security and compliance settings. So I would say start with one, set it up, provision it correctly, and then use that AI tool to actually help you research any additional ones that you're gonna set up. So that's really what I've done. But, yeah, private traditionally means paid.
I think a lot of people tend to forget that, the more we use, especially publicly available AI tools, you’re essentially contributing to the training of that tool and putting additional information in there.
Is there an issue that we all need to be concerned about in terms of the things that we’re asking AI to do that then further trains those models to either make the information better or make the information worse that AI is generating for us? What can we do as users to help ensure the integrity of AI tools that, we’re involved with?
Yeah. So that one's tough. Right? So I would just say, first, once you set up a private, traditionally paid-for account, and it's not all AI tools, right? But with some of the tools, you can say, don't use my data to train the model. So that's what I would recommend for most people, just because there's a ton of other people putting personal stuff into public AI models that gets used to train them. When it comes down to good or bad data, that's really where I just think, look at the results you're given and do some validation on them.
Like, even putting together this presentation, I put notes together and then said, hey. Like, make some suggestions. And it gave me some, like, wildly inaccurate, like, numbers. Right?
Ninety percent of this, sixty. And so I was like, where did you get this from? It’s like, I don’t know.
So I would just say, then tell it it’s wrong. Like, hey. Like, that is not good information for me. You can help to retrain it back like that if we’re using the public models.
And I would just say, like, take it all with a grain of salt. Each person I talk to uses AI models differently and gets a different tone or response back from the model. So it's not really, like, a replacement for something. Right?
Just you you gotta kinda keep that in mind. It’s gonna learn from you and what you like and kinda answer in the way that you want it to in some cases.
You made such a great point at the beginning about AI and similar tools. These are just tools. They're there for us to use. They're not necessarily designed to be a replacement for us, and so we have to be very careful about how we use them and make sure that we're not putting undue trust in them.
You mentioned at the start that as with everything that we’ve seen over the last few years, users can, of course, be the biggest risk. We’ve emphasized security awareness training for some time. Forrest made a great comment here in the chat. Are there preferred vendors or good places that, advisors and their clients can go to help design security awareness training for their organizations?
Yeah. So there's a lot of tooling out there, and it depends on your needs. I would actually say first look at what is in your existing tool set today.
So, like, I'm not saying Microsoft has the best. Right? But, like, if you have a customer on Microsoft, there's actually prebuilt phishing campaigns and training should somebody fail.
So if somebody’s looking at how to get started and doesn’t really want to invest in that spend or invest in a lot of time, a lot of the tools people have already for, like, their email will have, like, something to leverage. And then, of course, there’s, there’s a lot of availability out there from different suppliers even, you know, within the Telarus portfolio that can help with that.
One of the, greatest comments I’ve heard recently was down at the bottom of one of your slides there where it says, if it feels urgent, it’s probably fake. Yeah.
That is such a good rule of thumb for so many of the things that we're doing in this business, but it's such a paradox in a way because AI and similar tools are designed to make us faster and more efficient, and yet we've gotta remember to put the brakes on it occasionally and say, look. Is this asking for immediate action? Is it just sounding too urgent, too good to be true, those sorts of things, and slow things down a little bit? The information is great, but it's still really on us to determine whether or not this is something we should pursue. Correct?
Correct. Yeah. So I always say, like, just because we can doesn’t mean we should.
There you go.
Right? That's kind of how I try to approach a lot of things. Like, hey, this is great, but does this make sense for scalability or long term? Right? So really challenge yourself that way.
And then, so, you know, things coming at you with that urgency, unfortunately, that does come down to that user and how they're feeling that day. Right? Are they already overwhelmed? Are they gonna rush through things?
Is that gonna add that little level of stress to get them to respond quickly? So that's, like, where the constant annoying reminders come in. And then when you get things from some kind of AI tooling, if you're on a private model, you can give it those morals. Right?
So I would just highly recommend, like, you feed your AI model with the knowledge and context about you, about your organization, and you’re telling it, like, hey. As a Telarus employee, as an example, these things are important to us. Right? Like, these are how we treat each other.
This is how we talk to each other. Give it that context so it has, like, a level playing field of even where to start from.
Great response. Frank asked a good question here in the chat. Who are some of the providers that help set up a private LLM? As our advisers go out and talk to their clients, we've heard so many state, and we've said it here on this call, AI is not necessarily a product. No one says, I'll take three AIs with a side of fries kind of thing. But in terms of being able to set up, for example, a private LLM, or other tools that can then access and use AI in an organization, who are some of the providers that you're aware of that we work with or that you'd recommend for additional information on those?
So I would say that is a really good question for the sales engineers. You know, I don't know.
That’s the best answer right there.
Yeah. But they will know, and they'll know who's best of breed and how people are combining the various AI tools. Because I really do think where we're coming to is actually not just leveraging one model, but multiple models within one single UI.
I know I was even talking to Josh LaPresto. He did a demo with one of them this week.
So, in just general terms then, for Security Awareness Month, you had a great slide up there.
And, Chandler, if you don’t mind putting that back up about, don’t skip the basics, we can throw that back up there. Talk just a little bit more about this, if you would, Stef, while we’ve got a second. What are the things, again, that users should be doing now and that our advisers should be doing now to help encourage them to make sure they don’t skip over the basics when looking to augment their existing AI or their new AI services?
Yeah. So I think one of the big things for me is always the data governance. Like I said, have the data map, know where your data lives. The second that we set up any new cloud processing system, meaning it doesn't have to be software, right, just something you're logging into online and putting data into, your data is now exposed there to a potential breach.
So just challenge yourself before you start up anything new: do you understand where your data is today before you add yet another new place? So depending on the size of the company, if you get a subject access request from a customer that says, hey, I'm a California resident, I wanna know where you have my email and personal information.
You know, do you have five places to look, or do you have twenty-seven places to look? You're gonna want five in that case. Otherwise, you're gonna spend the week looking for all the data. So I think that one is really important, especially when end users are able to go set up their own accounts.
Unless you're really strict on your computer policies and your browsing policies, users can go sign up for trials, right, and set up whatever they want to and start putting data in there. So doing an audit of where things are at and how the data is flowing, to me, is just really important, especially as we start looking at this new tooling. And then do another quick check on the MFA that you're using and where you're using it. Every organization has things that are used across the corporation and some that are used just department specific.
Really go to those department leaders and ask, like, hey, do you have MFA set up on here? If you don't, you know, let's get it set up. And if you can't have it, why are we using this tool?
You'd be surprised when you do an audit; those types of things will create entry points into your data and cause you, you know, potential risk.
Terrific advice. And you reminded me that I need to check this router over here. I haven’t looked at this for a while either. So I’ve got my own project set up for security awareness month this month.
Just a great presentation, great reminders all the way around. As Stef mentioned, the Telarus engineering team and our, sales engineers are always available for you with your questions and concerns. If you’ve got an opportunity, sit down with one of our engineers and let them tell you about, what we offer through Telarus and through our associated suppliers that can help your clients. And take advantage of this month when everybody is thinking about security awareness to, make certain that you’re talking about these issues with your clients as well.
If you need some additional resources, look at Telarus University dot com. We have some great courses back there on AI and some of the, vendors that we use for that, And, of course, talk to us anytime. Stef, great presentation today. Thanks for indulging us on some questions as well.
Anything any last words you wanna throw in before we move on?
No. I guess the last word would be: everything you do, make sure you test it. So that would be my closing words. Thanks, everybody.