Subscribe to the Next Level BizTech podcast, so you don’t miss an episode!
Amazon Music | Apple Podcasts | Listen on Spotify | Watch on YouTube
Security architect and “BS sniffer” David Girvin joins Josh to cut through the noise in AI security. From ship captain to penetration tester to AI governance pioneer, David brings a unique perspective on what’s actually broken in cybersecurity today.
In this episode, we dive into:
- Why most AI governance is just policy theater (and what real controls look like)
- The deadly gap between agent identity and execution layer security
- How autonomous AI agents are already destroying companies—with real examples
- Why traditional AppSec approaches fail when dealing with AI tools
- The questions TAs should ask to separate signal from noise in security conversations
David doesn’t pull punches when calling out vendor marketing versus technical reality. Whether you’re selling into security teams or trying to understand the real risks of AI implementation, this conversation delivers practical insights from someone who’s been breaking and building security systems for years.
Perfect for technology advisors, security professionals, and anyone trying to navigate the rapidly evolving landscape where AI meets cybersecurity.
Guest: David Girvin – Security Architect, Evangelist, and AI Governance Expert
Host: Josh Lupresto – SVP Sales Engineering, Telarus
New episodes drop every Wednesday. Subscribe for more deep dives into enterprise technology trends.
Video Transcript
Transcript is auto-generated.
Josh Lupresto (00:05.58)
Welcome to the podcast designed to fuel your success in selling technology solutions. I’m your host, Josh Lupresto, SVP of Sales Engineering at Telarus, and this is Next Level BizTech.
Josh Lupresto (00:20.17)
Everybody, welcome back. Today’s episode, it’s a hot take versus reality: exposing bad security advice in the AI era. We’re going to dive deep. On with us today, we have got security architect and evangelist and what I’m calling the BS sniffer guy, David Girvin. Welcome on, Dave.
David Girvin (00:41.385)
Thanks for having me. I’m excited to be here.
Josh Lupresto (00:44.536)
Awesome. Okay, so you got a unique background here. We know where you’re at now. We’re going to lead up to that. But I want to talk about how you got to where you’re at. So you’ve been a pen tester. You’ve been an application security architect. Now you’re deep in this kind of AI governance space. And so you’re also, we’re going to get to this a little bit here in a minute, but you’re one of these more vocal people in the space online. So back us up, give us your journey, and then kind of where you’re focused today.
David Girvin (01:14.574)
Yeah, I think my early career really kind of shows my personality and insecurity nowadays. So I started off as a ship captain, running big boats, and into yachts and even dinner cruises. I lived all over the world doing that, and then settled down in Hawaii and owned a welding and fabrication company. And I was lucky enough to do jujitsu with one of the best hackers in the world, and
he was a buddy, and I broke my back, and he’s like, you should get into security. And yeah, so, long story short, I went to my first conference after learning how to code and found out that I wasn’t that good at building code, but I was really good at breaking code. So I won a CTF, and from there it was kind of history. I built offensive security teams at 1Password and Red Canary, worked with Bit Discovery and We Hack Purple and
all these other security companies, and then moved into the sales side. And so I was kind of like a field CISO SE at Sumo Logic, then moved into marketing as kind of the technical advocate, technical expert, and then became a founder CEO.
Josh Lupresto (02:29.25)
Love it. First of all, first ship captain we’ve ever had on the show. I love hearing everybody’s backgrounds and how technology just seems to suck you in. Like you probably didn’t think, hey, I’m going to do security when you were a ship captain. But kudos to you for just figuring it out. Yeah.
David Girvin (02:45.006)
Never. Last thing on my mind, ever. I was definitely a blue collar guy, so I didn’t think security was gonna be my world.
Josh Lupresto (02:55.0)
So to some of this earlier point here, calling out the noise. So you’ve built this reputation. If anybody sees Dave online, you tend to call out the BS in some of these evangelical posts, these let’s-find-the-truth posts, right? And not everybody does that. It’s kind of this like, oh, let’s all get along, super nice, and not make anybody sad. So why did you decide to kind of be this voice against misinformation? Put it out in the open.
David Girvin (03:23.616)
Yeah, I mean, I think I got, you know, nicked by the tism a little bit. I don’t always have that filter I should, as an adult dad with too many kids. Yeah, it’s, I don’t know, man. Just really early on, I think because Jeremiah Grossman, Robert Hansen, some of these early big security guys were kind of my good friends. They always showed me how the industry
is BS in a lot of ways. We’re building up these tools for these big companies that are built by product guys who don’t know anything about security, but they’re pushed by big money and they move forward. And when I was on the practitioner side, buying those tools hurt, you know, dealing with those pains where you’re fighting to get budget as an AppSec architect, and then you get this tool implemented and everything they promised you was a lie. And those wounds kind of stick with you. So now that I’m kind of
past that part of my career, it’s almost kind of sticking up for the practitioners. Like, I’m a technical advocate. I care about the actual builders and users of all these kinds of things, because their jobs kind of suck. It’s hard work. And I’m just honestly sick of marketing. So I just call it out. When I see, you know, big CTOs at these big companies say, we solved this problem, and architecturally, it’s wrong,
I try to be nice about it, but I’m the first person to call them out on it, because it ruins the industry. It makes us unsafe as an industry, as a country, as a world, or whatever. So I think that’s where it came from.
Josh Lupresto (05:01.112)
Fair enough, I think everybody can appreciate that. I mean, there’s just this sometimes rosy picture that’s painted, and we all know when we get into this, every day there’s chaos, right? We all go through that, and the end customers aren’t any different. So you’re kind of, if I’m framing this up for the TA audience here, you’re that kind of person, you’re that architect that they ultimately want to get to, whether they’re
talking about MDR, they’re talking about SOC, they’re talking about risk, they’re talking about whatever, resiliency. I think it’s good to hear this perspective, because we talk about this a lot, and what we see in a lot of our opportunities is everybody’s got tool sprawl, and there’s a person or two left that’s managing it, and that job is really hard.
David Girvin (05:30.36)
Mm-hmm.
Josh Lupresto (05:56.136)
And so how can our TAs make it easy? I would assume that approach would resonate, and not being, you know, full of BS. Tools that don’t do what they say they’re going to do are probably a great segue into that conversation.
David Girvin (06:09.965)
Yeah, I mean, exactly, just having integrity. You know, like I know your TAs are probably based around relationships, and they should be. That’s where we are, you know. You burn enough people, they stop believing in you. And then you lose your inroads into those companies. You lose your inroads into the people that are hard to talk to, like I was. I did not take calls from most people because, you know, you get burned enough times, you’re like, I don’t wanna deal with this. It’s also on the practitioner’s side, though. It’s their fault as well in this space, and I’ll call them out:
siloing, you know, career goals and not sharing information, not sharing budget across teams, and all those kinds of things. That hurts the TAs as much as it hurts the company. I’ve been in those deals, you know, multi-million dollar deals where you’re trying to sell and somebody from another team refuses to talk to you even though they share the budget, because they’re trying to make their career by building this section or buying that tool. So it’s kind of a mess. Yeah.
Josh Lupresto (07:03.16)
things to watch out for.
David Girvin (07:06.537)
That’s a hard trap. I’ve hit that in procurement before, which is never fun.
Josh Lupresto (07:11.266)
So let’s talk about AI governance. This is coming up a lot. Governance generally is kind of, you know, governance and risk have always been some good hot topics that always have more to be explored. So you were kind of doing AI governance before that was really a thing. So what were you seeing early that made you kind of lean in? And then what did you learn as you were building, before there were really any standards?
David Girvin (07:38.028)
Yeah, so I think one of the reasons we didn’t really catch on as quickly from the sales or industry side is that a lot of the people who were in charge of AI governance in the companies, which wasn’t even a thing yet, was the GRC team, right? There are amazing GRC engineers out there, but for the most part, a lot of them are not technical. So the difference was I came from the offensive side.
So I knew what goes wrong with autonomous things. I was one of the first people to hack ChatGPT, 3 and 4. And when agents came around, I saw the problem right off the bat. And so I started writing, even though I was like a technical advocate in the company, I started writing AI governance policy just for the security team, because it wasn’t their space, right? And I started putting in developer policies and controls because I saw the problem, and the problem is
not what everybody says. Yeah, there’s prompt injection, there’s all this stuff that everybody kind of started with. But the problem is that you have an autonomous thing that makes mistakes and hallucinates at the execution layer of all of your tools. And so I started building. At that point, I knew the biggest problem was going to be what we’re seeing now. Actually, we saw it yesterday: a startup that had a ton of customers had an agent delete their live database,
and they lost everything. I mean everything. They cannot recover. They’re trying to use their Stripe to recover their customer orders. So it’s a real problem. I was just maybe eight months ahead of everybody, and I built basically the control layer in there.
Josh Lupresto (09:23.48)
So I’m going to come back to, I’m going to do a little bit of a level set on kind of what AI governance is in your words. But before we do that, I want to pause for a second and say, the advisors love, and I love, when we can give out great questions, like great conversation starters that turn these kind of spicy takes into good conversation. So
I think there’s a lot of loud and kind of confident advice out there right now around AI and security. Everybody’s touting something. So if you’re a TA, how should they separate that signal from the noise and guide it, even when maybe the experts don’t agree? Like, what’s that conversation track for them?
David Girvin (10:04.269)
Yeah. So the first thing is like, what are you talking about with governance, right? So there are people like me who think governance is an actual action. It’s a control you have to put in place. It’s not a policy. So you’re going to get people who say, we have AI governance, I wrote a policy about it, but there’s no way to enforce any of that. You know, that’s kind of, for you as a TA, if you’re trying to get into those conversations, you want to ask real questions that are going to make people go, oh crap, like, you know, that’s something we haven’t thought of yet.
You want to start understanding how they are controlling things. Because one of the big myths in the industry, you’ve heard most of the vendors in the kind of IAM space talk about agent identity and that kind of stuff. But the bad stuff happens after that. Those things are important. I’m never going to say IAM or any of those companies are bad. They’re great. But what happens is bad things happen after the policy checks, after the identity has been set, all that kind of stuff, because again, they’re at the execution layer.
So if you think of it like this: you tell an agent to go look at a database, it says okay, and you say, hey, you’re only allowed to read that database. Okay, I’m only allowed to read that database, and here’s your ID, okay. And then it gets there and hallucinates and goes, I’m gonna delete this database. How do you prevent that? Now, writing a policy doesn’t prevent that, because you have to deal with session risk, you have to deal with all these other things. Having an identity doesn’t deal with that.
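The read-only database scenario described here can be sketched in code. The following is a minimal, hypothetical illustration (all names are invented, not from any vendor's product) of an execution-layer control: a guard that sits between the agent and the database and checks each statement at run time, so a hallucinated DELETE is blocked even after identity and policy checks have already passed.

```python
# Minimal sketch of an execution-layer guard for agent tool calls.
# All names here are illustrative, not from any specific product.

READ_ONLY_VERBS = {"SELECT", "SHOW", "EXPLAIN", "DESCRIBE"}

class ScopeViolation(Exception):
    """Raised when an agent's action falls outside its granted scope."""

def execute_scoped(sql: str, scope: str = "read") -> str:
    """Run a statement only if its verb matches the granted scope.

    The check happens at execution time, after identity and policy
    checks have already passed -- so a hallucinated DELETE is blocked
    even though the agent was authenticated and "knew" it was read-only.
    """
    verb = sql.strip().split(None, 1)[0].upper()
    if scope == "read" and verb not in READ_ONLY_VERBS:
        raise ScopeViolation(f"{verb} blocked: agent scope is read-only")
    return f"executed: {verb}"  # stand-in for the real database call

# The agent was told it may only read, but it hallucinates a DELETE:
print(execute_scoped("SELECT * FROM orders"))  # allowed through
try:
    execute_scoped("DELETE FROM orders")       # blocked at runtime
except ScopeViolation as e:
    print(e)
```

The design point is that the check runs on every action at execution time, not once at session setup, which is where identity-layer controls stop.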
So that’s one of the problems obviously I’ve been working on and solving, but that’s a real problem for every single person. So you can ask that: how do you actually prevent an agent from hallucinating and destroying things? The other one is, how are you handling agent credentials? If they say, well, we have just-in-time tokens: just-in-time tokens do not handle credentials. And I’m gonna give you kind of a technical thing, but this is a good conversation to bring out, which is:
a just-in-time token has to be scoped. Agents have dynamic sessions, the time that they’re allowed to work, that you can’t predict, so the just-in-time token always has to be scoped bigger than the session. So if a hallucination happens, it still doesn’t catch it, right? So they have to have full credential starvation. There are all of these kinds of technical things, but the question you have to get to is: how are you controlling the agent and their tools to make sure things don’t go bad? Governance in general
David Girvin (12:29.423)
is not just deciding a policy and/or authority check at the GRC level. You have to ask, how are you implementing those controls around your entire company? And it’s not just gonna be what I build in my space, it’s gonna be all kinds of things. So I think if you start asking those questions about, well, what kind of controls are you seeing here, and what kind of controls are you seeing there? Use those terms.
They’re gonna start opening up things, and you can kind of see, like, they haven’t thought about this, or there’s a gap there, or maybe I can sell into this space, or I have a solution that can take care of a large blast radius. So those are kind of the questions I’d start to ask.
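The just-in-time-token point above can also be sketched. This is a hypothetical illustration of "credential starvation" (the broker class and its method names are invented for this example): instead of a token scoped to an unpredictable session length, the agent receives a one-shot credential minted for a single named action and revoked on use, so a hallucinated follow-up call has nothing left to spend.

```python
import secrets

# Hypothetical credential broker illustrating "credential starvation":
# the agent never holds a standing token, only a one-shot credential
# minted per action and burned on use.

class CredentialBroker:
    def __init__(self) -> None:
        self._live = {}  # one-shot credential -> the single allowed action

    def mint(self, action: str) -> str:
        """Issue a credential valid for exactly one named action."""
        cred = secrets.token_hex(8)
        self._live[cred] = action
        return cred

    def use(self, cred: str, action: str) -> bool:
        """Consume the credential; it is revoked whether or not it matched."""
        allowed = self._live.pop(cred, None)
        return allowed == action

broker = CredentialBroker()

cred = broker.mint("read:orders")
print(broker.use(cred, "read:orders"))    # True: the intended action
print(broker.use(cred, "delete:orders"))  # False: credential already burned
```

Contrast with a session-long just-in-time token: that token necessarily outlives any single action, so a hallucinated extra call can still spend it; a per-action credential cannot be replayed.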
Josh Lupresto (13:06.808)
I love that. Just to challenge it a little bit. I think initially when we started the security practice years ago, security was very, and it still is, right? It’s very intimidating. It’s, what’s that thing that I’m gonna ask, and then what might that response be? It could go so many different ways, and how do we prepare people to answer what those ways might be, right? I think you’re…
You bring it back to a good point, and this takes me back to the early days of data backup, backup DR. I came from a backup company, and we were doing what maybe felt like selling insurance for voice services. And so it was predicated on, you are in an industry where you cannot really afford any downtime. Maybe you’re in critical care, maybe you’re in financial, or where there’s expensive transactions happening. So you can kind of quantify downtime, right? And by the way, that was my favorite part of the CISSP of…
quantifiable risk, how to quantify risk and not just make it qualitative. I think the same theme there was, when you would explain to somebody that, hey, the world is scary, you’re going to have an outage, do you have a backup plan in place? Yeah, yeah, totally do. Yeah, we bought Veeam, we bought Zerto, we got this backup, whatever. We did backup voice. And so my second question would be, okay, have you…
And I’ll play dumb. Have you tested it? Well, you see, man, like, we can’t take the network down and stuff. That’s bad. How do you know that it is gonna work the way that you would want? And so one of the things that we would always do from a voice perspective was we would pick the lowest time, when the hospitals were slow or the finance markets were down, and we would enact that. And I promise you,
over 50% of the time, it didn’t work right. Somebody misconfigured a backup, or the carrier didn’t build failover policies right. And we learned a lot. And then we all got really, really confident in our backup plans. And the customers got real conviction in their backup plans. I think that’s kind of what we’re saying here, taking that framing of, you’re not calling their baby ugly, you’re not
Josh Lupresto (15:29.336)
telling them they’re dumb. It’s just a matter of, the policy is important, but in all practicality, how is it handling that? What’s going to happen when this thing inevitably does the thing that you and I and others are seeing that it does? I think you bring up a really great point to call out.
David Girvin (15:50.325)
And I think you don’t have to be super technical about it either, or call the baby ugly. Like you said, just bring some of the case examples that are out there. Anthropic’s agent erased 2.5 million lines of code. Kiro shut down AWS China for 18 hours, which is their agent. And then yesterday, Pocket destroyed an entire company, like, their agent destroyed their company. So I think it’s almost that, like, hey, what are you guys doing for that?
Like, do you guys have stuff in place? And the other side is, if they’re regulated, if you’re selling into someone who is trying to hit ISO 42001, the NIST AI RMF, or the EU AI Act, and they have agents, so they actually have real AI, they will not hit that unless they have some very specific things. I could probably send all your TAs a control map or something like that to kind of help them with that. But they have to have an
audit log that’s hash-chained, that’s tamper-proof, of every single action their agents are doing, so they can prove they know what it’s doing. All the logs right now, unless they’re doing something like what I built, are going to be through an API, which is just in and out, there’s nothing in the middle, it’s a black hole, or they’re trying to do it at the identity layer, which is just the LLM requests. None of those will meet those. We’re going to hit a really rough time in the next six months to a year, especially when the EU AI
Act fines start coming out, because people haven’t really read the policies. I’m a nerd and I did it, and it’s horrible.
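The hash-chained, tamper-proof audit log David calls for is a standard tamper-evidence technique. Here is a minimal sketch, for illustration only (a compliance-grade system would also sign entries and anchor the chain externally): each log entry carries the hash of the previous entry, so editing any past record invalidates the chain.

```python
import hashlib
import json

# Minimal sketch of a hash-chained (tamper-evident) audit log for
# agent actions. Illustrative only -- not a compliance implementation.

GENESIS = "0" * 64  # stand-in hash for "no previous entry"

def append(log: list, action: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(action, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "action": action, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["action"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"agent": "deploy-bot", "tool": "db", "verb": "SELECT"})
append(log, {"agent": "deploy-bot", "tool": "db", "verb": "UPDATE"})
print(verify(log))                   # True: chain intact

log[0]["action"]["verb"] = "DELETE"  # tamper with history
print(verify(log))                   # False: the chain no longer verifies
```

This is why an in-and-out API log or an LLM-request log alone cannot prove what an agent did: neither binds each action to the one before it.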
Josh Lupresto (17:26.712)
That is a great point. You’re right. You know, nobody’s got this locked in. Like, 0.1% of 0.1% are going, yeah, yeah, I built it. It’s great. It’s awesome. It works. We have it.
David Girvin (17:36.596)
Yeah. And they’re confusing policies. I actually had to run my agents to break down the policies so I understood them better when I was trying to understand them, because they’re written by people that exist in their own world.
Josh Lupresto (17:47.64)
Before we kind of go into some of these, I want to talk about some of the things that you saw in the environments. Give us just a quick, we talked about this a good amount, but I just want to establish a baseline. Let’s simplify what AI governance means. It sounds big, sounds abstract if you haven’t been in this space. So if you had to break it down for a business leader, for a TA,
what would you define it as, just kind of the starting points?
David Girvin (18:19.239)
AI governance is the control of AI inside of your organization. So that’s tricky, because it bleeds a little into security, because you have PII, you have DLP, you have those kinds of things. But it’s pretty basic if they just use LLMs. So if they just have, you know, OpenAI or Claude, they’re not really doing any agentic stuff, they’re not coding with it.
It gets a lot deeper when you move into the agents world. So if they have agents for marketing teams and they have agents on their developers and they have agents everywhere, which is where everything’s moving to, now you are controlling AI in the company, but that AI has execution-level permissions. So it gets a lot deeper, a lot more technical at that point.
Josh Lupresto (19:11.874)
Fair, I like that. It’s good, high level enough and broad. Okay, so you’ve had a couple different roles. They might look similar, but I think they’re slightly different. Yeah, and I’m not gonna go into the, we’re not gonna talk about ship captain, but I’m gonna talk specifically in security. Okay, so let’s talk about Dave as a pen tester. So you’ve been on this kind of offensive side, lots of great tools out there, lots of great techniques, but
From that side, what are some of the most common gaps that you exploited and are they still problems and kind of these same fundamentals today?
David Girvin (19:50.326)
You know, I’m not getting into all the different vulnerabilities. Like, when I was still doing that, HTTP request smuggling came out and I was hitting that everywhere, and it was just kind of a thing. But it’s almost still the same problem that security across the board and the AppSec world has, which is you’re going to find gaps. Developers build to build, right? They have deadlines, they have to hit their KPIs, and they have engineering leaders and all that kind of stuff. So
the biggest gap still, I think, today, and it’s gonna get a lot worse with AI, is understanding what to patch after these things and how to prioritize that. So there’s a big argument in the industry right now on, should we patch everything or not? And there’s a lot of evidence that we shouldn’t, because only like 8% or 6% of all CVEs have ever been exploited in the wild.
So you need to put money and time into patching your pen test findings if they’re actually critical. Like, not just rated critical, but if they’ve actually been exploited and you know this is gonna chain. The other thing is definitely supply chain. I found terrifying things in the supply chain. And if you’re new to the AppSec world, supply chain is,
everybody uses everybody else’s code to build. And we saw that with LightLM recently, where 90 million people got infected with a virus that someone slid into the supply chain with open source. My hot take is AI is the death of open source software, because it’s too easy to get stuff into open source now. But those two things, it’s interesting, because you would think it would be, I find this exploit and this exploit, but really it’s still a process problem.
It’s understanding how to run engineering teams and AppSec teams correctly, and where to put resources. That’s still the biggest problem.
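The prioritization argument here can be sketched simply. Assuming a vulnerability feed annotated with known-exploitation data (the field names below are invented for illustration, in the spirit of feeds like CISA's known-exploited list), the patch queue ranks findings with real-world exploitation evidence ahead of raw severity score:

```python
# Hypothetical sketch of exploit-aware patch prioritization: rank pen
# test findings by evidence of real-world exploitation before raw
# severity. Field names and data are illustrative, not real CVE data.

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "known_exploited": False},
    {"cve": "CVE-B", "cvss": 7.5, "known_exploited": True},
    {"cve": "CVE-C", "cvss": 6.1, "known_exploited": True},
]

# Known-exploited findings first, then by severity within each group.
queue = sorted(findings, key=lambda f: (not f["known_exploited"], -f["cvss"]))
print([f["cve"] for f in queue])  # ['CVE-B', 'CVE-C', 'CVE-A']
```

Note that the 9.8-rated finding drops to the bottom: severity alone is not the signal, exploitation in the wild is.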
Josh Lupresto (21:57.241)
So then let’s shift here as your roles shifted into, okay, you’re not a pen tester anymore, but you’re probably taking a lot of that knowledge, and now you’re in AppSec, you’re building applications, you’re managing application security. So let’s do some hot takes, the theme of this is hot takes. So let’s do some hot takes against reality here, but ground it in your experience. So as you’ve been in this AppSec role, you’ve seen how it plays out,
not just kind of what some of the stories say. So you look at companies, what are they really struggling with versus what are people debating online? Like, where’s that disconnect in app security?
David Girvin (22:37.567)
Yeah, so one, the biggest thing about app security and being an AppSec guy is they all hire wrong. Unfortunately, to be an AppSec guy, you need to be an inside sales leader. You are selling a problem that you have to win, and build champions on the engineering team to fix. So it’s a very outgoing position. So from the start, people usually hire wrong. They find some guy who’s good at coding. I just don’t agree with that.
But the biggest problems are still, you have all these tools that have tons of false positives. They also find things that don’t actually matter. Now you have this giant vuln catalog of stuff you need to fix. You found all this noise. Now you have to go bug all the engineers to ask them to fix it. And it’s just this cycle of, hey, I know you’re doing something important, but you need to do this instead.
And it’s actually, like I said, it’s a game of building champions. Real AppSec leaders are good at building champions in engineering and on the C-suite to get buy-in, to understand the risk, and they can speak to everybody in the company. What you’re going to find, though, for pain points, if you’re trying to speak and sell into those teams, is noise. The tooling is horrible to deal with. It’s getting better. A lot of these new companies coming out are selling some really interesting stuff.
They can find the exploit path, which was huge. When I was there, that was most of my job. We’d find a vuln, but can we actually exploit it? Some of these new tools do that, or they write the patch for you, or they do all these kinds of things. So if you’re trying to talk into that space and you’re trying to sell software to anybody that’s AppSec or engineering, anything that’s automated well, can design a patch, can tell you the exploit path, can remediate in any way, those tools are going to win every single time.
The other side is gonna be visibility. There’s a bunch of vuln management companies now, right? And they basically are trying to get all of these giant alerts prioritized and figured out, and who do we get to fix what, and where does it go? That’s a huge pain point. So try asking them, like, hey, how are you guys dealing with that? Like, what are your gaps there, and that kind of stuff.
David Girvin (25:01.15)
The other side of the house is pretty easy, but it’s just always a mess, because it’s the same thing I talked about earlier. Every engineering team has got 15 silos, and they’ve siloed the security team. So when you’re building a secure SDLC, or the way we build software, everybody’s fighting you for visibility and where to put security checks in and all that kind of stuff. I would say there’s always a good shoulder to cry on, if you want to build a deep relationship with an AppSec guy, by just
kind of talking to those guys about this space, because it’s a really, really hard job. So I think if you’re just trying to build maybe deeper relationships to find something to sell into, speak to kind of those pain points.
Josh Lupresto (25:43.757)
I guess it’s interesting, yeah, I mean, we’ve got a plethora of hundreds of vendors with so many respective OEMs, one to probably 200 with some of our biggest vendors. And so that’s instantly this thing that fills up your suitcase full of options to sell. And maybe the basics of this, to your point, come down to, the AppSec guys just want to be heard. They want to articulate some of their problems. They don’t know,
maybe they don’t know you, maybe they haven’t met you yet, maybe you’re working with somebody else in the org, or you’re working on a different technology stack. You’re totally new to them. First of all, security people, we’re all paranoid anyway. We trust no one, so you’ve got that starting out. So if you can kind of recognize and call that out as a fact, I suppose you bring up a great point of just, are they struggling with tool sprawl? Ask them, find out. Are they struggling with figuring out how to govern? Have they tried anything on these agents? Have they,
do they know how these agents are being governed besides just identity? I think you bring up great points. It just comes back to the basics, I guess, right?
David Girvin (26:46.026)
Yeah, and that’s a gap that all of them are hurting with. All the guys that I’ve worked with over the years have reached out to me about what I built, because now those guys are all architects. So the game is different now, because not only are you trying to build the normal AppSec stuff, which the tooling was always really kind of bad, it’s getting better, there are some startups that are doing some cool stuff, but now you’re governing every single
engineer that has an agent inside of their IDE. And if you’re not familiar, the IDE is where they write code. It’s their little program. So now we have an agent living in there, touching code and databases and GitHub and everything else. And that’s a gap every single AppSec guy is worried about right now. I mean, every single one. So that’s a good conversation to have.
Josh Lupresto (27:33.241)
All right, so final couple thoughts here. Let’s think about the future. So, I mean, we’ve never seen anything move this fast. AI is changing, it’s changing the data, it’s changing the apps, it’s changing the decision making, it’s changing the risk profile. What, and maybe your answer is going to be, I’m just going to guess, it’s going to be something agent related. But what are these companies underestimating about risk?
and how that’s changing over the next 24 months.
David Girvin (28:05.247)
Yeah, I mean, agents are going to be the best thing and the worst thing to happen. The reality is, when people were just learning how to build agents, the security companies trying to be in this space were building prompt filters. They weren’t solving the problem. Well, I mean, I was, sorry, but you know what I mean? So you’re going to see that huge, because no one was thinking about, these things are going to touch everything, and they’re going to make mistakes.
Josh Lupresto (28:28.407)
or not.
David Girvin (28:35.446)
I think in the next 12 to 24 months, we’re going to see way more breaches than we’ve seen. I mean, the last two months, no, three months, we’ve seen four giant breaches. We don’t know how many have been covered up, right? Because no one wants that press. And it’s going to get worse. But there’s, you know, me, and there are a few, there are some people who are doing it very well. I mean, there are a couple that are good. You know, I’m not going to say my competitors are all wrong, but there are a few out there that are trying to get things in place
to make it so you can enable agents in your company safely. And I think that’s gonna be the name of the game, because the big companies don’t care. They want you to burn tokens. They don’t wanna gate things. They want the field labs. So it’s kind of up to the startup space, which is probably the fastest growing, crazy-money-getting-thrown-at-it space I’ve seen in security since runtime, which is
obviously frustrating when you were the first person. But it’s growing really fast, because I think people aren’t just thinking of security or governance as a thing anymore. Now we have CEOs and boards telling us we have to implement, you know, force multipliers, whether that’s AI stuff like bots or full agent workflows. And it’s not a security thing or a governance thing anymore, in my opinion. It’s an enablement thing.
How do we get to where my marketing person is doing 2.5x more work a day, safely? Because the company wants more output, they wanna make more money, but the repercussions of it going wrong are terrifying.
Josh Lupresto (30:19.704)
Yeah, yeah. You know, it seems like ever since, you know, we started focusing on this a while back, just security, obviously, in general, as more products and things became available for us to build a practice around. And it seems like there’s just never any shortage of news. In other areas, we had to maybe help people understand why…
shifting from traditional network switching to SD-WAN mattered, or why shifting off of Prem onto a cloud CX environment, why that was important and what the values were, right? And we had to educate a lot. And there wasn’t great compelling articles out there about like, why this is awesome and you should do it. I mean, there were, but in the same way for security, it’s…
No, this bad thing just happened and it shut down this business. This bad thing just happened and it shut down this business. And that message goes over and over again. So I suppose what I’m getting at here is, okay, when Mythos maybe comes out, maybe it’s too powerful to be released and it’s only good in the right hands, and who’s judging what the right hands are is a different story. But I suppose the message here is, we’re gonna hear a lot of new powerful things. You’ve got,
you know, NVIDIA and everybody phasing out the old set of GPUs for Blackwells, and then the next things coming online. So we haven’t even really seen big-scale production foundational models, and then app wrappers around them, powered by this next gen of NVIDIA chips that are exponentially, maybe not an order of magnitude, but exponentially better and faster. So I just, I have to imagine it’s going to be really hard to stay in front of that. And so…
It’s just like doing security 20 years ago. So many vulnerabilities. You put a scanner out there and, boop, found a spot, right? Because people are going to be using the Blackwells to find vulnerabilities and all of these things. So I just don’t see it stopping. It’s just a train pushing another train down the track. Like, I don’t know when it stops. And I think that’s good if you’re opportunistic. But I suppose what I’m getting at also, then, is that the same things we’ve talked about apply
David Girvin (32:29.886)
Yeah.
Josh Lupresto (32:37.27)
no matter what. It’s always the basics, isn’t it?
David Girvin (32:40.297)
It still is, and it always will be. You know, everybody’s worried about people losing their jobs in security. I don’t think so. I think there’s gonna be that much more work. And with a lot of this stuff, here’s a hot take, and I could be completely wrong: I don’t think Mythos is that powerful. I saw the 12% benchmark over Opus 4.7, and I don’t know about 5.5, but it beat 4.7, and, like, it sounds awesome, honestly. But I don’t think it’s as new.
My guess is that it’s super infrastructure-heavy. And I don’t think it was released because everybody’s having a hard time keeping up with the infrastructure right now. I’ve noticed another very big name stopped allowing you to upgrade licenses this last week, because I think they’re running out of database space to run all this stuff. So I think the NVIDIA chips and everything will help with that side of the house. But it’s so hard to tell what’s marketing and what’s not.
Is Mythos really this nuclear weapon, or did they realize they were constrained on the infrastructure side, so they came up with a really good marketing campaign to make it look like a nuclear weapon?
Josh Lupresto (33:49.347)
Yeah. How would we know? Everybody’s got a bias, right? It’s hard to tell. Okay, final thought here. Leave us with this for the advisors, wrap it up. Are there any other mindsets or shifts or questions that they should bring into every conversation to, again, cut through that noise and add the value that the end customer is looking for?
David Girvin (33:50.461)
I don’t know.
Yeah.
David Girvin (34:17.447)
Yeah, so I think if you’re gonna start talking in the agent governance space to your customers, you just need to ask some simple questions, write down and figure out what their plans are, take a step back, and try to get ahead of it. And this is just from my selling brain, from being on both sides. The way I’d be looking at it is: how are you guys actually trying to implement it? Ask good questions. You don’t have to be the most technical person in the world, but you need to think a little bit.
Okay, you guys are implementing AI. Well, are you just going to use, like, a Gemini license or something? Are you building agents? If you’re building agents, how are you keeping them safe, and keeping your stuff safe from the agents? If you’re not building agents and you’re just going down the LLM route, how are you protecting your data? How are you making sure people don’t do silly stuff with just LLMs? No matter which route you go down there, you’re going to see a path to sell into and to expand
your relationship with that company, but you need to get ahead of it. And all of them right now, from my experience, because I keep getting put off while they build stuff before they can use us, are very early-stage, which I think is the best part for all of you: get in there. Tell them that you’re interested in this space, because I promise you this space is going to be one of the most expensive spaces for a company in the next two years.
They’re not even thinking about governance yet, and it’s going to take over their budget because it’s the only way to enable all the AI work they want. So that’s where I’d say: get in early, try to understand the problem, and try to go find a solution for it.
Josh Lupresto (35:57.593)
I love it. What a great place to wrap. Dave, you’re a wealth of information, and I know we just kind of scratched the surface. There’s a lot there. We could probably go a lot longer, but, dude, I appreciate you coming on and sharing everything with us.
David Girvin (36:09.863)
Yeah, I love it. You know, we’ll have to do it again sometime.
Josh Lupresto (36:13.672)
Awesome. All right. As always, everybody, don’t forget these episodes drop every Wednesday. Wherever you’re coming to us from, Apple Music, Spotify, go subscribe so you get those, and you can take tips like this to your customers as soon as they come out. So that’s it for today. I’m your host, Josh Lupresto, SVP of Sales Engineering at Telarus, with David Girvin, Security Architect and Evangelist. This has been Hot Takes versus Reality. Until next time.