Securing AI at Enterprise Scale: Lessons from Walmart's Transformation with Tobias Yergin

July 1, 2025

When one of the world's largest enterprises deploys AI across 10,000+ developers, the security challenges are unlike anything most organizations have faced. In this episode of SecureTalk, we explore the critical security and strategic considerations for deploying AI tools at enterprise scale with Tobias Yergin, who led AI transformation initiatives at Walmart.

Key Topics Covered:

  • Why traditional security rules fail with probabilistic AI systems
  • The exponential risk of scaling AI agents from dozens to thousands
  • Building secure data foundations for enterprise AI deployment
  • Protecting AI agents that operate beyond your firewall
  • Strategic approaches to AI implementation that balance innovation with risk
  • The ontological framework for mapping AI capabilities to business tasks
  • First principles thinking for AI security architecture

Tobias brings over two decades of experience in digital transformation, having held senior leadership roles at Intel, VMware, Panasonic, and Citrix Systems. His practical insights from implementing AI at Walmart's massive scale offer invaluable guidance for CISOs and security professionals navigating the complexities of enterprise AI adoption.

Perfect for: CISOs, Security Architects, IT Leaders, Enterprise Risk Managers, and anyone responsible for securing AI implementations in large organizations.


 

View full transcript

Justin Beals: 

Hello everyone and welcome to SecureTalk. I'm your host, Justin Beals. 


When Walmart, one of the world's largest enterprises with over 10,000 developers, decided to deploy AI at scale, it faced a challenge that most organizations are only beginning to contemplate:  How do you securely implement artificial intelligence across an enterprise that serves millions of customers while protecting against risks that the security industry is still learning to understand?


As our guest today, who led that AI transformation, warns: be careful what you wish for.


This, of course, isn't theoretical speculation. This is hard-earned wisdom from someone who actually navigated the complexities of enterprise AI deployment at an unprecedented scale.


And of course, we're living through what may be the most significant technological transformation since the internet, and security professionals are finding themselves caught between two imperatives: enabling AI innovation while protecting against risks that we're just beginning to understand. 


So today's conversation tackles the hard questions that every CISO and security leader is grappling with. How do you secure AI systems that are fundamentally probabilistic rather than deterministic? What happens when your AI agents venture beyond your firewall into data that is dangerous?


And perhaps most critically, how do you build the data foundations and security frameworks necessary to extract enterprise value from AI without exposing your organization to catastrophic risk? 


Our guest today brings a unique perspective to these challenges, having spent over two decades driving digital transformation at companies like Intel, VMware, and most recently leading the AI initiatives at Walmart.


He's seen firsthand how to navigate the gap between AI's transformative potential and the messy reality of an enterprise implementation. His insights on breaking down work into atomic tasks, building trust in AI systems, and approaching AI security from first principles offer a practical roadmap for security professionals who need to move beyond experimentation to measurable, secure AI deployment.


We'll explore why traditional security rules won't work in an AI-driven world, and how to think about AI agents as critical infrastructure, requiring the same rigor as your file systems and databases.


And finally, what it really takes to prepare your organization for a future where AI isn't just a tool, it's a fundamental part of how work gets done. 


Tobias Yergin is a seasoned product manager and innovation executive with over two decades of experience driving digital transformation and creating breakthrough products. 


He's held senior leadership roles at Intel, VMware, Panasonic, and Citrix Systems, where he consistently delivered innovative solutions that generated hundreds of millions in revenue. His expertise spans product management and go-to-market strategy, mergers and acquisitions, and the application of design thinking and agile methodologies to build high-performing teams.


Thanks, everyone, for joining me and welcoming Tobias to SecureTalk.


Justin Beals: 

Tobias, thanks for joining us on SecureTalk today.


Tobias Yergin:

Thanks for having me. Appreciate it, Justin.


Justin Beals:

We're really pleased to have you join us today. Like many of our guests, you're an esteemed expert in your work. Maybe you'll tell us a little bit about what you're working on presently and where you're engaged.


Tobias Yergin:

Yeah, happy to do so. Thanks for the opportunity again. I think we've got a lot of really fun and cool stuff to talk about today. 

I'm almost 30 years in tech now, so this is entering my 30th year actually, not quite there, but getting close. And very lucky to have pretty much caught all of the different technology waves from networking to security to mobile to virtualization and now AI.


And, you know, people tease me like, “oh, well, hey, how did that happen?” And it's not anything having to do with foresight or planning or being prescient. It was really just luck. My spirit animal is kind of like a crow, I guess, maybe. And I'm attracted to shiny things. And so when I see a new shiny object, I kind of fly over there and check it out. And that was the case here about two and a half years ago. 


And so, I was afforded the ability by my previous employer to really go investigate what this was going to mean to modernizing our business model, our operational model, and improving the lives of all of the associates that work there and providing some great services for our customers. And so they gave me a very large remit and just said, hey, “go figure this thing out”.


And so I had a couple of teams that worked with me, and about two, two and a half years later, here we are. I recently left, and I'm doing a lot of advisory jobs now where I basically help venture capital, some private equity, and large corporations figure out how to actually work with the technology, implement it, deploy it, and some of the considerations that they might actually take into account when they're making these big decisions.


Justin Beals:

Yeah. So an organization like Walmart, how do you think they consider themselves on the continuum between traditional retail business and technology business? And I'm sure that's a shifting thing, but I'm curious how they enter into this conversation of AI culturally.


Tobias Yergin:

Yeah, a lot more technical than you would think. You know, we're a human-driven and technology-enabled company, which I think is one of the slogans, the sayings, that we have internally. And, you know, it shows up in the day to day. It shows up in the work that's done and the kind of the remit, as I mentioned previously, to go figure this stuff out and see how we can actually use it to help others in their life.


And I've got some really cool stories if we want to go down that path, because one of the things that we did is, you know, we've got some ethnographers on the team and we work with those folks and we go out actually into the field. We go into stores and we look at what's happening, both from the associate perspective as well as from the customer perspective.  That was some of the seminal work that led to a lot of the things that we did around trust and agency and looking at decentralized systems and making sure that the systems that we looked at very early on in this process weren't going to be used in a way that could manipulate others as an example. So there's kind of this whole responsible aspect to the technology. Is it explainable?


Is it transparent? Does it know when it's starting to hallucinate? Can it give you a confidence score? There's all of these kind of little nuances which we think make the technology more applicable to solving problems but also a little bit more human in terms of augmenting the human condition while honoring the human spirit.


Justin Beals: 

I don't think I've run across a lot of folks that have had the opportunity to participate in a technology project that included social sciences as a quality assurance measure. I did it a little bit in the education industry. It's a very cross-cultural bit of work about these technologies and working with ethnographers, at least having a population size big enough to make use of them.


Tobias Yergin: 

Yeah, it was kind of a rare treat for us to be able to do that. We've got design strategists as well as ethnographers, and we really took all of those things into consideration. We've got design principles. We've got project principles. There are things that we do when we build projects. If we're building a new feature, a new capability as an example, we run it through all of the things that we learned while being out in the field and doing those kinds of observations so that when we actually deploy these products, they actually honor all of the research that was done prior to and informing the development of these solutions.


Justin Beals:

Before getting into some of the stories, I'm excited to hear about some of the changes. As you were entering this landscape, you'd worked in technology for a long time. You and I have both seen some of the mistakes and the opportunities. What were some of your fears and maybe some guiding principles that you were trying to steer the outcome away from?


Tobias Yergin: 

Yeah, we wanted to make sure that it was built on trust. So a lot of the stuff that we did was really based on trust. Does the human trust the technology? Does the human trust the company producing the technology? Does the human trust the output of the technology once they are actually interacting with it? And there's some other things we can talk about, but those were some of the primary ones that we wanted to consider, because if you do not have a high degree of trust, then engagement typically does not follow, right? And we wanted to broker that trust. We wanted to earn that trust, and we wanted to go out and find a way that we could better human lives, right?


There's a couple of stories that we will get into, but I think that at its core was kind of the principle. And how do we potentially even provide, a little bit later down the road, but how do we provide people with additional economic agency, right? A lot of the folks that shop at some of those stores, you know, they don't have a lot of money. These aren't people making a million, two million dollars a year; they would wish they could make that in a lifetime, in many cases. So, you know, how do we actually help them live a little bit better life by having a huge amount of assortment, which is one of the big things that, you know, my previous employer talked about on a regular basis.


But how do we actually help give them a little bit more economic agency so that if they needed $20 for a bag of, you know, Hunt's tomato sauce or some diapers for their kids, they actually could go do that. And so we explored some of those really cool things to help people live better.



Justin Beals:

Yeah, that's phenomenal that you get a chance, almost like in an academic setting, an internal review board to set up the study or the project with kind of some guardrails. And so another thing that really resonates with me in your discussion of your concerns around trust is AI as an interface. I've often, in my work in computer science, we've done a lot of information science work over the years.


And sometimes I look at some of these AI tools as being a new interface for the computing platform. And just like the early web, we wondered if we could trust it. It seems like we're at that in the ebb and flow of adoption in a way.






Tobias Yergin:

And just now, I don't know if you saw, but just yesterday, OpenAI released a new product called the Orb. And basically, it scans your iris, right? And then it gives you a hash off of that, I think it's 12,800 binary bits or characters. And at that point, you have, in effect, your digital passport, which is irrevocable, right? And so now you are directly linked to that specific hash for time immemorial. And those are the kinds of things that I think make a lot of people who work in this industry a little bit nervous, because it's tied to one single manufacturer of one piece of hardware that has a nonrevocable, irrevocable kind of hash that is tied to you forever. You cannot ever get it back.


And if that thing goes sideways, or potentially if you are misbehaving like they have in China where they give you a social score, well, maybe in the future you don't get your UBI check this month. And so those are the things that I think we're going to see more of on a day-to-day basis as this thing continues to accelerate. There's an acceleration of the acceleration. It's called jerk. It's the derivative of acceleration.


We're going to see more of this kind of stuff sooner than most people can even wrap their heads around. It's coming. And so you have to just decide what matters to you. If I can talk one more little bit about it, I wrote something about this. It was almost two years ago now, which is crazy. But there was a, I'm a big science nerd, and I love science fiction. So, in Blade Runner, there's the Voight-Kampff test. Right? 


And that is basically to find out if you're a replicant or not, an automaton, a robot. And there's, like, these little emotional cues that the robot slips up on, and they can catch it. And I thought, you know, this seems to me to be the opposite. Why aren't we taking that approach instead of actually trying to give basically all of humanity a single passport that they can be traced and tracked on? It seems like we're imposing a kind of control system over humanity rather than actually over the digital components. And to me, that just seems kind of backwards.


Justin Beals

But it's a better perspective, I think, for the security industry as a whole. And we've talked to a lot of leaders that are like, look, we need to take responsibility for putting security practices in the hands of human beings where we might have been able to write better code for it.



Tobias Yergin: 

100 % agree. And the way that we've thought about it on some of my teams in the past is you've got human-to-human interaction, you've got human-to-machine interaction, you've got machine-to-machine interaction. And then you've got all of those interactions over different mediums. So over a digital medium, what does human-to-human interaction look like? What does human-to-machine interaction look like in a digital medium? And then you've got the physical world. And some of those things, especially as robots take on, you know, more and more capabilities and it's happening super, super fast there as well, you're going to have those same kind of interactions. And then you have to have an umbrella of trust over all of that. And so this approach for me only handles kind of human to multiple different nodes, I guess, could be human to machine or human to human interaction within a digital medium. And so for me, the architecture just seemed a little bit limited. I don't know if it's extensible enough for it to go into those other, you know, kind of ways of looking, ways of working or ways of interacting.


Justin Beals:

Yeah. Well, let's talk on the positive side. You know, as you were driving this project forward and as you're thinking about adoption of these technologies for a benefit for society, instead of some of the dangers and the privacy concerns that we all have, tell us about some of the benefits that you were seeing in the marketplace.


Tobias Yergin: 

Yeah, so there's this whole thing around agency, there's augmentation, there's automation. In some cases, there's even some elimination of tasks if you could refactor a business process. And so one of the ways to think about this is how to ontologically build up a model from the bottom up so that you can start mapping the capabilities of a particular model into the tasks, which is what we call the atomic unit of work. 


You can get into subtasks, but let's just for the purposes of this discussion, keep it at the task level. All of the tasks that we ran across, and I'm sure that there are others that we didn't, but they're linear, they are sequential, and they're dependent. So, almost all of the workflow that you see in today's businesses has those three attributes.


And as a result of that, you can instantiate workflow from the assembly of all of those tasks. So you can't get to task D without doing task A, B, and C first. And then that workflow can be instantiated into a job to be done. And it turns out that everybody usually has about six to eight jobs to be done. And they usually have about 10 tasks per job to be done. So on average, humans working in today's world have about 70 different tasks per day.


And now you've broken down almost everybody. And by the way, it does vary a little bit. If you're a developer versus if you're picking up garbage or something like that, it does vary. So it's not all the same across all different job functions. But generally speaking, it's about 70 tasks per day. And then we went over to the model. So we said, hey, what can this stuff do?


And it turns out it can reason, it can synthesize, it can summarize, it can, to a certain degree, plan, but that's still something that's out into the future. And it's got all this really cool stuff: it can write and can do all kinds of cool stuff. And everyone said, hey, well, what's happening here is really cool, but we're still not getting any enterprise value out of it. Enter 2025. Altman's talked about it a lot. It's the year of the agent. And now you, in effect, have a digital worker that can bridge all of the atomic capabilities in these models to the atomic unit of work, which is a task.


And then you can go through the entire process to say, which tasks can we eliminate? Which tasks can we automate? Which tasks can we augment? And which tasks, in some cases, as a corporate enterprise, could we outsource? We looked at potentially outsourcing some of those other tasks that didn't fit into the other three buckets.
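
To make that ontology concrete, here is a minimal sketch in Python of how tasks, workflows, and jobs to be done might be modeled and sorted into those four buckets. Every name in it (Task, Job, the capability set, the example tasks) is an illustrative assumption, not the actual model used at Walmart:

```python
from dataclasses import dataclass, field
from enum import Enum

class Disposition(Enum):
    ELIMINATE = "eliminate"   # the task disappears when the process is refactored
    AUTOMATE = "automate"     # an agent can do the whole task
    AUGMENT = "augment"       # an agent assists, but a human stays in the loop
    OUTSOURCE = "outsource"   # doesn't fit the other three buckets

@dataclass
class Task:
    """The atomic unit of work: linear, sequential, dependent."""
    name: str
    required_capabilities: set
    depends_on: list = field(default_factory=list)   # can't do D without A, B, C

@dataclass
class Job:
    """A job to be done: a workflow assembled from ordered tasks.
    Roughly 6-8 jobs per person, ~10 tasks per job, ~70 tasks per day."""
    name: str
    workflow: list

# what today's models demonstrably do; planning is still maturing
MODEL_CAPABILITIES = {"reason", "synthesize", "summarize", "write"}

def classify(task: Task, needs_human_judgment: bool) -> Disposition:
    if not task.required_capabilities:
        return Disposition.ELIMINATE
    if task.required_capabilities <= MODEL_CAPABILITIES:
        return Disposition.AUGMENT if needs_human_judgment else Disposition.AUTOMATE
    return Disposition.OUTSOURCE

# invented example: a weekly-report job made of three dependent tasks
gather = Task("gather metrics", {"summarize"})
draft = Task("draft report", {"write", "synthesize"}, depends_on=[gather])
signoff = Task("executive sign-off", {"negotiate"}, depends_on=[draft])

for t in Job("weekly report", [gather, draft, signoff]).workflow:
    print(t.name, "->", classify(t, needs_human_judgment=(t is draft)).value)
```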


Justin Beals:

Yeah.


Tobias Yergin:

And that's really kind of the way of looking at it in terms of building these models up to say, where can we apply the technology? In the case of a developer, as an example, Justin, and this will speak to some of the background that you've got and some of the stuff you've done in the past, more and more of these agents are capable of doing more and more of the individual tasks, the discrete tasks that developers have done over the course of the last 40 years.


Justin Beals 

Yes.


Tobias Yergin: 

And so increasingly, more tasks can be automated. And the ones that remain, without refactoring business processes, what are the chances to make one of our engineers a 10x engineer? Because that's the metric in the valley. Everybody wants a 10x engineer because that's the magic behind a lot of these smaller companies that disrupt everything. What if you could have an army of 10x people? Right?


What would that do to productivity? What would that do to the capability? And by the way, the code's going to be a lot better too. So you're going to have a much lower cost of carrying that code forward. You're not going to have as much testing that you have to do on it, et cetera, et cetera, et cetera. It'll probably be a lot more secure as well, by the way. And so those are the kinds of things that we looked at.


Because what we weren't about, and what I think a lot of people are already falling into as a trap, is that we want to automate everything and lay everybody off. That is, I think, a very dangerous recipe. And it is something that we tried really not to do. We said, hey, why can't we make everybody a 10x and just do more?



Justin Beals: There's more value in it, right? Like, I mean, if you can increase productivity by 10x and grow revenues, you know, through that productivity, it's worth more than, you know, simple cost cutting.


I mean, I guess it depends on whether you value your fluctuations in stock price, you know, the quarter to quarter, over the long term. But there are a couple of things that super stand out to me.


I am a huge fan in the product space of starting with a data model that we all agree on. So many technology projects in general don't consider that fundamental data model, what the relationships are. We call it an ontology, too, between certain aspects, so that you can kind of map a structure. And I think, to your point, have an analysis that's comparative. So we can say what this analysis looked at and what that analysis looked at, yes.


Tobias Yergin: 

Yes, yes, yes, yes, Justin, that's so good. In some of my advisory work that I do now, one of the things that we look at is where do you start, right? Because I just talked about kind of a general approach in terms of what these models can do and where you have the capability to, you know, automate certain tasks or augment a human condition, but like, okay, that's great. That's just a general frame. It's a general reference. It's a general model.


When you actually get into some of these other opportunities where you say, OK, we're going to go do this, then the next question is, well, how do you start? Where do you start? What do we do? Like, what is page one?


And a lot of it is, to your point, you need existing systems with very good and very clean data that are resilient, that are robust. Where do those systems live? And brother, you and I have been around a while, so we know where they are. They're in systems of record. They're in systems of engagement. They're in supply chain management. There are reams and decades of information there in a lot of these large enterprises.


But what that gives us is a foundation for all of the data that we can then do differentials off of to say, if we deploy these three agents doing these 15 tasks within this specific workflow.


Now we can actually say, hey, what is the differential off of the existing baseline, so that we can have the measures that we're looking for, a 15% increase in productivity or efficiency, or maybe there's just a result that's a little bit more efficacious at the end of the day that can result in improvements to the top or bottom line. That's a great point, and it all has to do with the data.
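
As a toy illustration of that differential-off-a-baseline idea, assuming you already have a clean per-day metric pulled from a system of record (all numbers below are invented):

```python
def pct_differential(baseline, observed):
    """Percentage change of an observed metric against its pre-deployment baseline."""
    base = sum(baseline) / len(baseline)
    obs = sum(observed) / len(observed)
    return (obs - base) / base * 100.0

# hypothetical: tasks completed per day in one workflow, before and after
# deploying three agents against fifteen of its tasks
before = [68, 71, 70, 69, 72]   # baseline pulled from the system of record
after = [80, 82, 79, 81, 83]    # same metric post-deployment

print(f"differential vs. baseline: {pct_differential(before, after):+.1f}%")
# roughly +15.7%, the kind of productivity target mentioned above
```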


Justin Beals:

I feel like we're kind of babes with toys sometimes with these AI models, because to your point, we'll adopt them without any measurement of quality, accuracy, effectiveness, and the trust thing becomes pretty amorphous, doesn't it?



Tobias Yergin:

It does. And so how do you measure the trust, and who is measuring the trust? And when you don't have the trust, what remediation or mitigation measures will you take? And then after that, do you go back around and kind of have a closed-loop process to say, now that we've taken these actions, what has actually happened? Because


Again, like, you know it, you see it, it's just moving so fast. Those kinds of closed-loop systems, those very thorough processes: everyone's just like, more, sooner. And for us, you know, for almost everybody that I talked to, that seems to be the calling card for what's happening in the world right now. And so, you know, to your point, I think that we at some point need to pump the brakes a little bit and just make sure that what we're doing is actually building trust and also, for sure, solving these big problems that companies are going to pay a lot of money for.


Justin Beals: 

Let's talk at the human level a little bit, because I think the 10x developer point you make is really interesting to me, and I'll use another thread, or metaphor, here as an example. There is a lot happening in music production over the past, like, 30 years, where you used to have to have a record label. They needed a pressing plant. They needed warehouses for distribution, and today, you can produce something and distribute it all on your own.


Let alone, like, the gear, right? It's gone from being very expensive instruments down to software synthesizers that you can plug and play. Someone who had a passion wouldn't have had the ability, but now we've maybe connected the passion to the ability. And at the same time, we lost a lot of jobs. But I mean, this is transformation in process. I think it's hard to harness the tiger.


Tobias Yergin:

I don't know of anyone who has and can. I think the mantra for now is, at least when I go out and talk to other folks who are doing the same kind of work that we're doing here, is better, faster, cheaper, safer. 

And if you hit those four metrics, then the task that you're doing, and maybe even the specific job to be done, not the functional job, not like, I'm in marketing, but maybe one aspect of marketing, let's say SEO as a perfect example, actually, you know, maybe that does get, to a certain degree from a human's perspective, eliminated, because in that instance, the human is the bottleneck and the machines can do it better, faster, cheaper, safer.


And I think it's... I don't know, because the rate of change is so fast. I think anyone giving predictions is probably going to be wrong, but I'd say within three years, we will see a fundamental rewrite of work as we know it today and what the future of work will look like.


There will be incredible opportunity. And I think at the end of the day, maybe seven, eight, ten years from now, I think humanity will be in an entirely different evolutionary arc. And I refer to that time as the age of abundance. The time between now and then, however, maybe it's three years, maybe it's five, maybe it's eight, but it's less than ten, I think, with a very high probability. I refer to that time as the crossing.


And these transitions historically, every single time they've happened, have been messy. People get left behind. And especially because this is happening so much faster than all of these other transitions, I think we really run the risk of having some issues that we'll have to look at from a society perspective, from a social perspective, from a social contract perspective.


Because my worry is that just so much change happens so quickly that others are left behind and that's never good for anyone.


Justin Beals:

Do you feel like it's strained, or at least maybe I feel like in some ways it's strained, our cohesiveness as a society? I mean, we've seen it used for that, of course, yeah.


Tobias Yergin : 

Yeah. And I think, again, we've talked a little bit about this in the past in some of our previous calls, but I think one of the things that I'm seeing on a regular basis, I go talk to universities here in the local Bay Area quite a bit, and the recent college graduates, the people who don't have the experience, the people who might not have all of the contacts and all of the life lessons along the way, they are struggling to get an entry-level job because it turns out that these systems of intelligence, which is really what we're developing, they are capable of doing an entry-level job now.


So how does a young person get the necessary experience that maybe you and I have to interface with these machines and these systems of intelligence and get incredible outcomes out of them?


What happens if you are a young person and you don't get that experience? Because, as my Danish grandfather would tell me all the time: Tobias, never mistake intelligence for experience. And these kids are really smart, but they don't have the life lessons and the experience to draw upon. They don't have the wisdom. And that's another key thing. Intelligence without wisdom is, I think, a very scary proposition, and we're headed down that path.


But these kids don't have all of the things that they need in order to extract the value that these tools can provide. They'll get on, they'll log on, and they don't understand prompt engineering, they don't understand how to interface with it, they don't have an objective, they don't understand markdown, if you really wanted to get kind of crazy into the way that you would format your prompt. And then they say, this thing sucks, this thing can't do anything, this thing is dumb. And then they leave it alone, and they're off doing whatever they're doing.


Whereas someone like you, Justin, you can interface with it and you can make it do magic. And I think that's just going to potentially, and actually in all likelihood, increase the chasm between those who have and those who haven't. Because those who have will know how to extract the value, and those that don't have the experience, I think, are going to stand at risk of getting left behind. So that's something we need to figure out as a society.


Justin Beals: 

Yeah, I think to your point, one of the systems I was playing with recently was Replit. And I was trying out a blockchain technology, just conceptualizing it a little bit. But I remember getting the system to a point where I realized that it had not architected the database table properly. And so I needed to separate a single table into two and build a relational table between them. And I had to step that system through it. And it didn't understand in the moment the vision that I had for the software or the other future connections. Yeah. But if I hadn't been an engineer before, I wouldn't know where to look. Yeah.
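
For readers unfamiliar with the fix Justin describes, here is a hypothetical sketch of what separating one overloaded table into two entities joined by a relational (junction) table can look like, using SQLite from Python; the schema is invented purely for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# before: one table mixing two entities (wallets and tokens) in the same rows;
# after: two entity tables plus a relational table expressing the many-to-many link
con.executescript("""
CREATE TABLE wallets (id INTEGER PRIMARY KEY, owner TEXT);
CREATE TABLE tokens  (id INTEGER PRIMARY KEY, symbol TEXT);
CREATE TABLE wallet_tokens (              -- the relational (junction) table
    wallet_id INTEGER REFERENCES wallets(id),
    token_id  INTEGER REFERENCES tokens(id),
    balance   REAL,
    PRIMARY KEY (wallet_id, token_id)     -- one row per wallet/token pair
);
""")
con.close()
```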



Tobias Yergin: 

And to that point, very similar stuff when I'm doing AI research on next generation capabilities around things like meta orchestrators and the orchestration layer, it'll come up with something. And I was like, no, you forgot this, you forgot this, you forgot this. That's a security hole. We can't do that. But it's all about interrogation and interaction and teasing things out and being a critical thinker and having that 30 years of experience of building product in the Bay Area.


If you're a young person, you just can't get that overnight. They can't inject all of what you and I've done for 25, 30 years in order to work with these systems to get these incredible outcomes. And I think to me that that's one of the most dangerous things. That's one of the things that concerns me. It's like, how do we fix that? Because what do you do? Speed up the clock and give them the experience? I don't know.


Justin Beals:

I don't know. Of course, I'm known to be positing potential solutions to things like this. And I wonder some days if these tools aren't also, you know, we've seen this before, while they are a danger, they're also an answer. You know, could they accelerate learning? Do they have the ability to teach at scale in a personalized manner like we always dreamed of, you know, in the online learning space? And so, you know, I also think about people who use models for a living and have for a long time, and where their work is at. And I'll tell you, the group that always comes top of mind is people that deal with weather. Because there are multiple models, right, that they're interpreting together to synthesize an outcome in a way. And especially in the most extreme situations, like marine navigation, it can be, you know, dangerous to let the computer run the system. Yeah.


Tobias Yergin:

I 100 % agree and one of the clients that I work with, we're talking about looking at an enterprise risk management software solution that is really driven by a large language model and a whole bunch of different agents.


And in that system of intelligence which is happening, we've got baseline scenarios that map out into possible futures. And then there are triggers, there are events, there are thresholds, there are risk ranges. And you can start to model in all of these kinds of approaches to say, what are the critical uncertainties? Because again, I think we've talked a little bit about this, but the world is a complex adaptive system. It is always changing. The variables are changing. The interaction of those variables is changing. 


The weight of those variables is changing over time. The inputs into the system are changing. There's so much at play here. To think that even these systems today could figure all of that out, without a human or with a human, I think smacks of hubris. And so what we've got to do is basically just chart a way forward to say, this is our best guess, and let's track to see what's happening. And if we're onto something, let's follow the good.
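
A minimal sketch of the trigger/threshold pattern he describes, with hypothetical indicator names and values; a real system would feed readings in from monitoring and re-run its scenario models whenever a trigger fires:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RiskIndicator:
    """One tracked uncertainty, with an acceptable risk range and a trigger."""
    name: str
    low: float
    high: float
    on_trigger: Callable[[str, float], None]

    def observe(self, value: float) -> None:
        if not (self.low <= value <= self.high):
            self.on_trigger(self.name, value)   # threshold crossed: fire the event

def escalate(name: str, value: float) -> None:
    print(f"trigger: {name}={value} outside its risk range; re-run scenario models")

# hypothetical indicator: supplier lead time drifting outside its modeled range
lead_time = RiskIndicator("supplier_lead_time_days", low=5.0, high=12.0,
                          on_trigger=escalate)

for reading in (7.1, 9.8, 14.2):   # readings arriving over time; 14.2 fires
    lead_time.observe(reading)
```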


Justin Beals: 

Yeah, I think we've chatted about this, but I certainly see the risk space, the compliance space broadly, as ripe for what these types of models do, because to your point, we've systematized all the work. We did that to make it efficient, or so that organizations like Coalfire can handle thousands of audits in a quarter. But then that made it easy for computing to harness, and when the data in and the decision making is easy enough, we start running it with machines more than human beings, I think.


Tobias Yergin:

Yeah. I know we're getting, you know, towards the end of the time here. I think for me, when I've looked at this and I've tried to just bottom things out, because I like to do root cause analysis amongst all this stuff, I think at the end of the day, 10 years from now, whatever the time is, the only job that is truly safe is that of an entrepreneur. Because of our capacity to think, our capacity to create, our capacity to connect not only disparate data points, but with people, amongst people. That human-to-human interaction, I think, is something that no machine will ever be able to replicate. I have one little story there, if we have time.


Justin Beals: 


Please, yeah, we do. Yeah, yeah.


Tobias Yergin:  

It's a fun one. So our neighbor next door is another ex-Intel executive; we had a long run there together. His son is 12, and we were talking one night over a glass of wine, and he said: “Hey, Tobe, where do you think all this stuff is headed?” And I kind of gave him a little bit of what we were talking about today. And his son, his name is Rory, and he said, “Because Rory, I think, knows what he wants to do already.” So what is that? I didn't even know it was a verb, but butling is a thing, and that is the act of being a butler.


And I thought, okay, well, that's really interesting. How does this fit into the conversation? And we got through kind of what we were talking about here. And I said, you know what, wouldn't it be the most amazing kind of contribution, or maybe it's a flex even, I'm not sure how to really think about it yet, but wouldn't it be amazing to say, yeah, you know what, a robot could do that job, but actually a human's doing it instead. And I think it's because it's a human interacting with another human, or a human interacting with, say, a horse. Like, it turns out horses don't like robots. They can tell the difference between a human and a robot. The smell, the organic components, the nature of who we are as an organic being.


Like there's those kinds of things which are easy to think about in the future, which I think will be safe probably until we're not here anymore and the cockroaches take over. But I think there's still a lot of opportunity out there for humans. And that's what makes me super excited because the augmentation of what we can be as a species, I think, is going to definitely change in the next 10 years. And I think it's a bright future.


Justin Beals: 

What characteristics in that age of abundance, as you describe it, do you think we'll see?


Tobias Yergin: 

I think we're going to see disease go away. I think we're going to have the ability to clean up our planet. I think we're going to have the ability to have personal instruction that is tailored specifically to the way that you learn. I think we're going to be able to potentially go to the stars. I think, to be honest, it's possible. I think that we will reverse aging. I think that we will achieve longevity escape velocity.


And these are just, I think, some of the things that we'll figure out along the way. Actually, I think one of the biggest ones is fusion. It turns out that the biggest input costs to making anything are humans and energy. And when we have the ability to harness the sun and not have to worry about batteries or anything else, renewables or oil or even nuclear at that point, but when we actually get fusion rather than fission, I think the world changes because the input costs asymptotically approach zero and then you don't need a million dollars to live well.

You actually can live well on far less because the cost of making stuff is almost nothing.


Justin Beals: 

It feels like a continued extension of the power of computing, both in, like, a relationship model, where we're the human and that's the computer, but also in computing as an extension of humanity. I kind of perceive it as the latter more than the former, except in my favorite sci-fi films. But it does feel like we're extending ourselves in a lot of ways in this work.


Tobias Yergin:

Yeah, and that's a great, great point. You know, things like Neuralink or, you know, some other thing. I think that we still only use a very small percentage of our brain. Is it possible that we figure out how to use another 30% of it, which, as far as we know today, is lying dormant? It might have other reasons. It might have other things that we don't yet understand, in terms of we think it's this latent capacity, but it might actually be being used for something else. But let's just say we could use another 30% of our brain.


Justin Beals: 

Yeah.


Tobias Yergin: 

Let's say that we could have a 200 IQ. What would that look like? If everyone just walking around had a 200 IQ, imagine kind of the opportunity space that would evolve as we got creative and started thinking about really hard problems. I think physics and math are another thing that these models do really, really, really well, it turns out.


And I've got a saying that math is the only truly universal language. And when I say universal, I'm not just talking about universal amongst humans. I'm talking the entire universe. It is the single language that unifies everything. And I think we will solve very difficult, very hard, complex math problems that unlock new physics for us.


Justin Beals: 

I think that it's interesting; so much press got loaded onto the LLMs themselves, but I think it's the orchestration side that's really driving the engine here. We've seen it too on our side. It was like, yeah, we could build a chatbot that would answer your questions pretty well, or we could build a model that would generate a certain text with a perspective. But once we started piecing it together with new data, it started doing things that humans did.


Tobias Yergin: 

That's right. And I do have an upcoming paper on this. I call it the MetaOrchestrator. And I think if we can find a way to decentralize the orchestration layer so that it's not tied specifically to a single model, then we actually can have, I think, a much more democratized approach to intelligence, which benefits all of humanity, because, I see you nodding. The last thing that we want is six companies basically deciding the fate of humanity moving forward. And so for me, that's a passion project. That's something that I give a significant amount of my time to, to making sure that we're moving in the right direction there, and that all of us get to benefit and participate as well.


Justin Beals: 

I applaud the open source work in this space, just like you and I probably have since the BSD subsystem was released a long time ago. You know, one final thought that I just had, maybe we can wrap up with it a little bit. CISOs, security leaders, privacy leaders at enterprise organizations, you know, at the scale of Walmart or in that band, what should they be thinking of as this transformation happens? What guidance would you provide them in considering a generative-style initiative?



Tobias Yergin:

I think that there is so much that is open to increased security risk. These are not deterministic models, as you know. They are probabilistic models. They are very ripe for things like prompt injection. You have to approach things, I think, and at least from my perspective, from a systems perspective. You have to approach things, in all likelihood, from an automated perspective, because you're going to be attacked by automated AI systems, and humans probably are not going to be able to get to that. You have to think about your heuristics, and you have to think about your first principles, because if you have a static set of rules that you think are going to solve all of your security problems, you're already dead in the water.


So I think you have to operate from those kinds of perspectives. And then, lastly, I think you have to think about your agents, right? If these agents are behind your firewall and they are in a closed environment, that's one thing. But if you actually are using these agents to go out and fetch information or to do tasks in the digital environment that we know as the internet, that's a messy place. It's a dangerous place. There are a lot of very bad agents out there already. And if your agents don't have the proper security measures and the proper security policies, it's very easy for them to get infected, and to take that information or those procedures back into your organization and do very bad things very quickly. Because you have to remember, at some point soon, we're not going to be talking about, oh, I've got five bots, I've got six agents. No, it's going to be tens of thousands, if not millions.


And once we get to that scale, I think you're going to have to look at some kind of a self-healing kind of approach because you're not going to be able to know every permutation and every set of risk issues for every single agent and every single interaction that it's got. That's just not going to happen for a human. So it's probably going to be machine-based.
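
As one deliberately simplified illustration of what "proper security policies" for outbound agents might include, here is a sketch of a deny-by-default egress check plus a crude screen on fetched content. The domains and injection markers are placeholders, and real defenses would be layered and, as he argues, largely automated:

```python
from urllib.parse import urlparse

# placeholder policy: agents may only fetch from vetted domains (deny by default)
ALLOWED_DOMAINS = {"api.partner.example", "data.supplier.example"}

# crude markers of prompt injection in fetched text; real screens go far beyond this
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def egress_allowed(url: str) -> bool:
    """Gate every outbound fetch an agent attempts."""
    return urlparse(url).hostname in ALLOWED_DOMAINS

def screen_fetched(text: str) -> str:
    """Screen fetched text before it re-enters internal workflows."""
    lowered = text.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection; quarantining for review")
    return text

assert egress_allowed("https://api.partner.example/v1/prices")
assert not egress_allowed("https://evil.example/payload")
print(screen_fetched("Q3 price list: ..."))
```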


Justin Beals:

Yeah, I'd see this as a similar security risk to, almost, adopting a shared file system. It carries all the same weight, right? Like, is it ACID compliant? Is it, you know, resilient? Do you have failover? Do you have permissions, right? You know, is it exposed to the network? Did you want it to be? Like, are external people allowed to get to it? Yeah.


It's that kind of fundamental infrastructure.


Tobias Yergin: 

Yeah, I think a lot of, and actually I'm going to go talk to a partner at a VC firm later today who's doing security in the space, but I think one of the things that you have to consider is that there are also liability issues. There are fiduciary responsibility issues. There are board-level issues. There's brand impairment and implication issues. There's potentially PII involved if you have got customer information. There are all of these second, third, fourth order consequences you have to be aware of when making these kinds of decisions. Because if you decide to open up the tap and basically say, we're going to go launch 50,000 agents this next year.


Be very careful what you wish for, because, as a guy who's done security and mobile for almost 10 years, I would be very careful with that. I don't think we're ready for that kind of scale yet.


Justin Beals: 

Yeah. And I think you've got to be conversant in this stuff as a CISO, because your executive team is getting a lot of feedback from their banker or their board about what's going to change and how you are using AI, and you're going to need to compete too, just as a business. It's something you've got to tackle.


Tobias Yergin:

I'm glad I don't have that job.



Justin Beals:

Well, Tobias, we really appreciate you sharing your expertise and what's happening at the scale of Walmart, and we certainly will include information about how our listeners can contact you in the show notes today. Thanks for joining us on SecureTalk.


Tobias Yergin: 

Thank you so much for having me. I really enjoyed speaking with you.

About our guest

Tobias Yergin

Tobias Yergin is a seasoned product management and innovation executive who has spent over two decades driving digital transformation and creating breakthrough products at industry giants like Walmart, Intel, VMware, and Panasonic. 

He specializes in identifying and scaling AI and GenAI use cases, building on his proven track record of launching everything from mobile platforms to IoT solutions that have generated hundreds of millions in revenue.

Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.

Keep up to date with Strike Graph.

The security landscape is ever changing. Sign up for our newsletter to make sure you stay abreast of the latest regulations and requirements.