Shared Wisdom: Why AI Should Enhance Human Judgment, Not Replace It | Secure Talk with Alex Pentland

January 27, 2026

Most AI discourse swings between paradise and doom—but the real question is how we architect these systems to enhance human understanding rather than replace decision-making. MIT Professor Alex "Sandy" Pentland reveals why treating AI as an information tool instead of an authority is critical for cybersecurity teams, business leaders, and anyone navigating the intersection of technology and culture.

The math is stark: 90% of social media users are represented by only 3% of tweets. We're making decisions based on algorithmic extremes, not community wisdom. Pentland shows how Taiwan used the Polis platform to restore government trust from 7% to 70% by eliminating follower counts and visualizing the full spectrum of opinion—proving most people agree more than they think.

For security professionals, the implications are profound: culture drives security outcomes more than controls. The stories your team shares about breaches, vulnerabilities, and response protocols create the shared wisdom that determines whether you're actually secure. AI can help synthesize context and surface patterns across distributed organizations, but cannot replace the human judgment needed when edge cases and outliers occur.

Drawing parallels to the Enlightenment—when letter-writing networks sparked unprecedented collaboration among scholars—Pentland argues we stand at a similar inflection point. We have tools that let us share information at unprecedented scale, yet our digital systems amplify loud voices and create echo chambers instead of fostering collective wisdom. His book "Shared Wisdom" offers a pragmatic framework for cultural evolution in the age of AI, recognizing we'll take steps forward, make mistakes, and need to choose our direction deliberately.

Key insights include understanding AI as a statistical repackaging of human stories, recognizing how four waves of AI development have each failed in predictable ways, and learning why loyal agents—systems legally bound to serve your interests like doctors and lawyers—represent the future of trustworthy AI. Pentland also explains why audit trails and liability matter more than premature regulation, and how communities need local governance that's interoperable but not uniform.

Alex "Sandy" Pentland is a Stanford HAI Fellow, MIT Toshiba Professor, and member of the US National Academy of Engineering. Named one of the "100 People to Watch This Century" by Newsweek and one of the "seven most powerful data scientists in the world" by Forbes, he helped establish authentication standards for digital networks and contributed to pioneering EU privacy law.

 

Episode resources:

Pentland, Alex. (2025). Shared Wisdom: Cultural Evolution in the Age of AI. The MIT Press.  https://mitpress.mit.edu/9780262050999/shared-wisdom/

 

View full transcript

 

Justin Beals:  Hello everyone, and welcome to SecureTalk. I'm your host, Justin Beals. In the 1700s, something remarkable happened that changed human civilization forever. The Postal Service opened up and suddenly scholars across Europe could exchange letters describing their theories about how the world worked. What we remember as the Enlightenment, those isolated geniuses like Newton and Voltaire were actually part of a massive letter writing network. 

 

They were sharing stories, testing ideas, and building on each other's observations. That exchange of narratives fundamentally reshaped science, politics, and society. Today, we stand at a similar inflection point. We have tools that let us share information at unprecedented scale. AI, social media platforms that reach billions. Yet something has gone wrong with how we're using these technologies. Instead of fostering collective wisdom, our digital systems amplify the loudest voices, create echo chambers and replace nuanced discussion with performative outrage. 

 

The mathematics are straightforward. On Twitter, 90% of users are represented by only 3% of tweets. When you scroll through your feed and form opinions about what people believe, you're not seeing the voices of nine out of 10 users; you're seeing the most extreme 10% who create 97% of all political content. The moderate majority has been algorithmically erased from public discourse. Now this creates what psychologists call pluralistic ignorance. We systematically misperceive what our neighbors actually think because we can't see them. Research shows that most Americans hold remarkably similar views on contentious issues like gun control and abortion. Yet our digital platforms make us believe we're hopelessly divided. The problem compounds when we add AI into this broken communication infrastructure.
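To make that arithmetic concrete, here is a minimal sketch in Python using the illustrative 10%/97% split cited above (toy numbers, not real platform data): a feed sampled by post volume is dominated by the vocal minority even though they are only one person in ten.

```python
import random

random.seed(0)

# 10% of users are "vocal", 90% are "moderate"; post counts are chosen so the
# vocal group produces ~97% of all posts, matching the figures cited above.
users = ["vocal"] * 10 + ["moderate"] * 90
posts_per_user = {"vocal": 291, "moderate": 1}    # 2910 posts vs 90 posts

feed = []
for group in users:
    feed.extend([group] * posts_per_user[group])  # the raw post stream

sample = random.sample(feed, 100)                 # what a casual scroll "sees"
print("vocal share of people:         ", 10 / len(users))                           # 0.10
print("vocal share of all posts:      ", round(feed.count("vocal") / len(feed), 2))  # ~0.97
print("vocal share of a random scroll:", round(sample.count("vocal") / len(sample), 2))
```

Per person, the vocal group is one in ten; per post, it is nearly everything a scroll shows.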

 

Large language models are trained by harvesting all the stories on the internet and mashing them together into one giant network. The models don't distinguish between communities or cultures or contexts. They can sound like someone from Iowa, then New York, then India, all in the same conversation. Now this lack of coherence is where you get weird hallucinations and unreliable outputs. But there's a more fundamental issue with how we're thinking about AI.

 

We keep asking whether these systems will lead to paradise or doom, whether they'll make us so productive we never work again or replace humans entirely. Those aren't the right questions. The real question is how we guide these technologies so they help us make better decisions rather than replacing human judgment. 

 

Consider the pattern across previous waves of automation. In the 1950s, we built logic engines that could prove mathematical theorems and optimization systems for economics. The Soviet Union based its entire economy on those models. They won a Nobel Prize for the work. Those systems were static. They assumed the world worked the same way forever. When unusual events happened, like the 2008 financial crisis, the models failed catastrophically. Then came expert systems in the 1980s, where we captured rules from smart people and applied them uniformly across organizations.

 

That approach broke up community coherence. It didn't matter if you were in Iowa or New York, the system applied the same criteria everywhere. We saw this with education policies like No Child Left Behind, which stripped away the local knowledge and community context that makes knowledge meaningful. Now we're facing similar challenges with large language models. These systems are probabilistic, which means they occasionally produce real outliers.

 

If you're using them for business purchases or medical prescriptions, that variability matters. And when you chain multiple AI agents together, letting them talk to each other without human oversight, those small unpredictabilities compound into serious failures. The solution isn't to ban AI or slow down development. The solution is to change how we architect these systems and how we use them.
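A back-of-the-envelope calculation illustrates the compounding. Assuming, as a simplification, that each agent in a chain is independently correct with probability p, the chance that the whole chain behaves correctly is p to the power n:

```python
# Hypothetical reliabilities; real agent failures can correlate and cascade,
# so this simple independence model is only an approximation.
for p in (0.99, 0.95):                    # per-step reliability
    for n in (1, 5, 10, 20):              # number of agents chained together
        print(f"per-step reliability {p:.0%}, chain of {n:2d} -> "
              f"end-to-end success {p ** n:.1%}")
# With 95% per-step reliability, a 20-step chain succeeds only about 36% of the time.
```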

 

Instead of treating AI as a source of truth or a replacement for human decision-making, we should use it as an information tool that helps us understand context, synthesize different perspectives, and make more informed choices. There are already successful examples of this approach. In Taiwan, a platform called Polis helped restore public trust in government. 10 years ago, only 7 % of Taiwanese citizens trusted their government. Today, that number is around 70%. The transformation happened by using technology that eliminates follower counts, visualizes the full spectrum of public opinion, and helps people discover that they agree more than they thought. The key design choices were simple but powerful. No amplification of loud voices, no echo chambers, just a clear view of what everyone is actually saying, which lets people realize they're not alone in having reasonable opinions. 
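As a rough sketch of the mechanism (not the actual Polis implementation), you can think of it as a participant-by-statement vote matrix that gets projected into a two-dimensional map, with statements ranked by how broadly everyone agrees with them; the data below is randomly generated toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 participants vote agree (+1), disagree (-1), or pass (0) on 12 statements.
votes = rng.choice([-1, 0, 1], size=(200, 12))

# Project participants into 2D so people who vote alike land near each other
# (the "face of the crowd" idea described above).
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = U[:, :2] * S[:2]
print("first three participants on the map:\n", np.round(coords[:3], 2))

# Consensus statements: highest average agreement across *everyone*,
# not just the loudest or most-followed voices.
agreement = votes.mean(axis=0)
for idx in np.argsort(agreement)[::-1][:3]:
    print(f"statement {idx}: mean vote {agreement[idx]:+.2f}")
```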

 

When you can see the face of the crowd, you come to understand that most people aren't the extremists dominating your social media feed. This same principle applies in business and security. If you have meetings where one loud person disrupts the group, you lose the collective wisdom. If you don't know what's happening in the rest of your organization or your operating environment, you make poor choices.

 

AI can help with both those problems, not by making decisions for you, but by giving you better awareness of the full context. The future of AI should be about building loyal agents, systems that represent your interests rather than advertising revenue or corporate profits. These agents would act like fiduciaries, legally bound to serve you the way a doctor or a lawyer does. They would help you remember things, understand what's happening in your organization, and learn from what others have tried when facing similar challenges. This requires a different regulatory approach than what most people advocate. Rather than writing detailed rules about AI before we fully understand it, we should focus on audit trails and liability. We know what good and bad outcomes look like. Killing people is bad, reliable trade is good.

 

We just need to make sure we can connect outcomes to whoever caused them, and then hold them accountable. That's how we regulate electricity. We don't have complex rules about every possible electrical device. We have liability if you electrocute someone, and we have standards bodies like Underwriters Laboratories that certify safe designs. The system works because companies have an incentive not to harm people. And when they do, there are consequences. The same approach can work for AI.
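One hedged sketch of what such an audit trail could look like for AI-driven actions is an append-only, hash-chained log, so that an outcome can later be traced back to the system and operator that caused it; the record fields below are hypothetical, not any existing standard:

```python
import hashlib
import json
import time

log = []

def record_action(actor: str, action: str, details: dict) -> dict:
    """Append a tamper-evident entry linking an action to whoever took it."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "actor": actor,            # which agent or operator acted
        "action": action,
        "details": details,
        "prev_hash": prev_hash,    # chaining makes silent edits detectable
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

record_action("purchasing-agent-v2", "place_order",
              {"item": "leaf blower", "vendor": "example.com", "amount_usd": 129.0})
print(log[-1]["actor"], log[-1]["hash"][:16])
```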

 

Let communities develop compatible systems based on their own interests and needs. When something goes wrong, use the audit trails to determine what happened and who's responsible. Learn from failures and encode that knowledge into better practices. This lets innovation happen while still protecting against serious harms. What fascinates me about this moment is how it mirrors challenges throughout my career building technology products.

 

You can't start with compliance requirements when designing secure systems. You start by understanding your actual risk posture, your operational realities, your resource constraints. Then you build security that works for your specific context. The compliance frameworks are measurement tools, not design specifications. The same logic applies to AI governance. We need to understand what problems we're actually trying to solve, what outcomes we want, and how to measure whether we're achieving them. Then we can build systems that serve those goals rather than chasing abstract notions of artificial general intelligence or trying to prevent every possible misuse before it happens. 

 

Our guest today has spent decades working on exactly this intersection of technology, policy, and human behavior. He helped establish authentication standards for digital networks, contributed to pioneering EU privacy law, and worked with the UN Secretary General's office on global technology challenges. His research has influenced how hundreds of millions of people interact with technology worldwide. His new book, Shared Wisdom, offers a pragmatic framework for thinking about cultural evolution in the age of AI. It's neither utopian nor apocalyptic. It simply recognizes that we're going to take steps forward, make some mistakes, and do some good things too.

 

The question is whether we can figure out what direction we want to go before we end up somewhere by accident. Alex "Sandy" Pentland is a Stanford HAI fellow and MIT Toshiba professor, named one of the 100 people to watch this century by Newsweek and one of the seven most powerful data scientists in the world by Forbes. He is a member of the US National Academy of Engineering, and an advisor to Abu Dhabi Investment Authority Lab. He is also an advisor to the UN Secretary General's Office. His work has helped manage privacy and security for the world's digital networks by establishing authentication standards, protect personal privacy by contributing to the pioneering EU privacy law, and provide healthcare support for hundreds of millions of people worldwide through both for-profit and not-for-profit companies.

 

Join me today as we explore how to build AI systems that enhance human wisdom rather than replace it, and why the most important innovations may not be technical at all.









---

Justin Beals: Alex "Sandy" Pentland, thanks for joining us on SecureTalk today.

 

Alex Pentland: Glad to be here.

 

Justin Beals: I had a really wonderful time reading your book, Shared Wisdom. It certainly is applicable in a challenging moment in the tech industry and the internet, where we store a lot of our data today. And I'll quote from the title of the book. It's about cultural evolution in the age of AI. 

 

Alex Pentland: That's right.

 

Justin Beals: And so...certainly there's a lot of inspiration to go around for a book like this, I think, lately. Is there anything? Maybe you can just give us a little background about what got you kicked off in this space or thinking about this book?

 

Alex Pentland: Well, everybody talks about AI, and almost all of it is either probability of doom or we're all going to be so productive that we won't ever have to work again. And neither of those are very desirable, I think. But also, it's completely unrealistic. There'll be some sort of changes of the way we do business or the way we live and so forth. And the interesting thing today is to ask how can we guide this stuff and our choices in it so that we end up in a place that we want? And the core thing with the book is, look, we don't want to give up control of things. We don't want to just retire from life. We want this stuff to help us be smarter. And there's examples in the past, like the Enlightenment, where we invented a technology. 

 

People don't know this, actually. Letter writing started, people started sharing theories about how the world worked, because the royalty opened up a postal service. You could send letters to people. And all these people we think of as these isolated geniuses actually were writing letters to each other like mad. And that's sharing of ideas, sharing of stories. That's what led to the Enlightenment and just changed the whole world. And so now we have all these new tools. We have AI, blockchain, we have the internet, obviously, whole bunches of other things. And we ought to be able to do the same thing, but maybe even better.

 

Justin Beals: Yeah, it is very intriguing to me in some of my recent readings about the difference between something like the animal kingdom and us and really how big culture is a part of that delta. You center the book heavily on the concept of stories and cultural sharing. You know, started with letter writing, but that's been around for a long time.

 

Alex Pentland: Yeah, well, actually, you know, people used to say the difference between people and animals is that we have words. Well, it turns out that's not true. Whales have words, apes have words. There are a lot of animals that have words and even very simple sentences, but none of them tell stories. None of them. And what is a story? A story is, oh, this thing happened, and then, as a consequence, that happened. It's telling you about how the world works.

 

And if you go to Aboriginal societies in Australia, you hear a lot of that, and people have studied it. And, you know, it's the sharing of stories that determines what people do the next day. But also, some of these stories are really old. They can date them because, you know, there was a comet strike or a volcano. And you see stories that are seven to 11,000 years old still being told because they're about, you know, who we are, how we live, what you gotta watch out for, stuff like that. And it's really the way we think. We go to school, we get degrees, we make a big deal about how logical we are, but actually, we're pretty bad. We really are. What we're really good at is remembering stories, learning from them, and sharing stories about, if you do this, then that happens, right? If you do this, then you're gonna get flattened, don't do it.

 

And the combination of stories like that is what we call culture. And most of our choices come from these sort of shared stories, which I call shared wisdom. Wisdom, incidentally, doesn't mean it's true. It just means what people think is true, okay? And so, realizing that, for instance, in security, since this is Secure Talk, the stories about how people break in, how bad things happen, how people forget things, those are the things that build a culture about what to do in different situations. If you have a good culture, you're pretty secure. If you have a bad culture, good luck. So it's trying to sort of wind back this brainwashing, frankly, that we've had, that we're logical and we think things through and so forth to realize that no, actually what we have is this sort of shared set of beliefs about how to behave and what works. And each individual tries some new things, tries to learn from other people, but it's mostly not, you know, logic-ing it out. It's mostly experimenting with things. And that's the way science works too. I mean, yeah, sure, you say, well, maybe it's this. That could be logic, but then you gotta go test it, right?

That's human life.

 

Justin Beals: I certainly think about this pressure testing of ideas, too. One of the things you point out, and I loved your description of generative AI, is that it's essentially like a statistical repackaging of human stories. That's probably my primary use case for it. I tend to use it for researching topics or trying to synthesize a large amount of data. It seems to have done that very well. It's done that pretty well for a little while now, though. That's not a brand new innovation, let's say.

 

Alex Pentland: No, I mean, if you think about how it's made, it's by harvesting all the stories that are on the internet. I mean, the internet is not a matter of, like, lists of facts. It's people talking to each other mostly, or explaining things. And probably the biggest error that people make in constructing these big models is that they take all the different cultures, all the different stories, and they just mash them into one big network. 

 

Okay. So it doesn't have a sense of coherence about it. It can sound like somebody from Iowa, it can sound like somebody from New York, it can sound like somebody from India. And so all those stories in there are mashed together, and that's where you get like weird stuff coming out, right? So the models that, for instance, are a little more predictable are models that are trained much more narrowly on, you know, for instance, what do we do in our corporation?

 

What is the conversation? And then you get things that sound like corporate memos, which might not be so good, but they don't hallucinate weird stuff from other planets, right?

 

Justin Beals: Right. Yeah, a corporate memo is a form of storytelling; you and I have been on the receiving end of a couple of them, I bet. I think for me that was very clarifying in the book, when you talked about it in that way. I hadn't considered the way these things work as repackaging stories. I'd had a hard time describing it to people. I would say, well, the machine knows that probabilistically it probably needs to write this phrase next or bring up this concept next, because those things have been in sequence or conjoined or had some type of relationship in the data set. And you could peel back the mathematics and see it, right? But yeah.

 

Alex Pentland: Absolutely. Yeah. And that's how you program the thing. Yeah. But then you look at what they're being trained on, and it's all this "if this happens, then you do this." They're all stories, right?

 

Justin Beals: Yeah, absolutely. Now, one thing I like that you pointed out in the book as well, I'm hitting all my highlights here, Sandy, is that this isn't the first AI revolution. And I say this as somebody that's kind of struggled. I like to tell folks that I lived through the data science era to the machine learning era, and now we're talking about AI. But this has been a continuity.

 

Now each of these waves has brought really unintended consequences to us. I know in my work in education, there were things that we did that I look back on, and I wonder if that was really the right decision to make or the way to utilize it. If we think about this as a fourth wave, and maybe you can describe waves one, two, and three a little bit, what pattern are we seeing emerge again that we need to be nervous about?

 

Alex Pentland: Well, you know, all this AI stuff started back in the middle 50s, right? And actually, the most interesting parts of it were that they had logic engines that could prove mathematical theorems, and they had economic engines that could solve the constrained linear systems to be able to understand what you had to make, what you had to order, and what was optimal. And famously, the Soviet Union based its entire economy on that. And it's not like completely stupid, because that work on economics won a Nobel Prize. It's obviously pretty good. But when you build models like that, they're static. It's like this thing works this way forever. And if you look a little more closely, you notice that they have a little bit of tolerance for noise, but they don't have a tolerance for black swans or gray swans or any of that sort of stuff. These are really unusual things.

 

And that matters. Like 2008, you might remember we had a big crash. That was basically things that were really unusual happened and the whole system blew up. So that's one type of thing: they don't take into account the fact that this is a changing world and there's new things. And all these models are trained on old stuff, right? Because that's what's out there. And so they're not up to date.

 

And then the next sort of wave that came along was expert systems, right? Expert systems where they figured, well, we can't make equations about it, but we can ask some smart people. And the thing that was wrong with that was a little more subtle, a bit like social media in a certain sense, because what corporations would do is they'd make rules that were uniform for everybody. And that meant that the difference of this community or that town or this state versus that, all that got lost, right? It was like, here's the criteria, that's it, right? It doesn't matter if you're in Iowa, doesn't matter anything, right? It's just, this is it. And it went a long way towards sort of breaking up the community spirit and the ability of communities to do things in the US, which is pretty bad, right? 

 

Justin Beals: I think we complained about that in education, to your point, a bunch. I talked to a lot of educators. We did things like No Child Left Behind. They were like, hey, these assessments are taking away the strengths of our community, in a way. Yeah.

 

Alex Pentland: That's right, yeah. You can't be your community. And actually, in all the data about how kids grow up, what's absolutely critical is that people have this sense of community and that they're contributing and a part of it. It can't be something that's like, from on high, this is everything, right? And yeah, so, and now we're here with the LLMs and stuff like that. And, you know, yeah.

 

Are these LLMs the single source of truth for the entire world? Well, they better not be, right? And they're all trained on, you know, what was said on Reddit and 4chan five years ago, and this is not so good either, right? And the other thing about them, right, is that the methodology is a probabilistic methodology. And what that means is occasionally you get things that are real outliers. And so, for instance, if you're beginning to use them for business, like to do purchases or things like that, you know, medicine or something like that, you have to be really aware that occasionally it's gonna prescribe arsenic or something. I mean, it's in the vocabulary, it's in that network someplace, and it won't usually come up, but it might. And even worse, and this is really relevant for the security stuff, is now people are talking about AI agents. So your agent talks to my agent, who talks to the McDonald's agent, who talks to the what, you know. 

 

When you get that sort of chain of things, all these little sort of unpredictabilities add up. And what the evidence so far is, is that they add up often in fairly bad ways. So it's not something that's very reliable. And that's something that everybody is wrestling with. It's like, wait a second. You put a whole bunch of them together to make something complicated, like in a business, right? All the processes. And suddenly it does wacky things. Not because any one AI was wrong, really. It's more like somebody did something a little weird, and the next one did something weird in response to that. And then it's like a traffic jam on the road, right? Things just sort of get out of control.

 

Justin Beals: Yeah, when I've been trying to describe to people as we build tools that are AI driven or probabilistic in nature, especially on the internet, when I try to describe to people how to interpret what's going on, I'm like, well, how good is your weatherman? And they're a deep expert. They're using multiple predictive models. They're trying to pull together. And are they right all the time? Like, how often are they right? Do you need to be more right than your weatherman?

 

Alex Pentland: And if you're spending somebody's money, yes, you do. Right? Or prescribing drugs to a person. Yeah, absolutely. Absolutely. No, and that's a great analogy. Absolutely. On the other hand, everybody listens to the weatherman because it's useful. Right? So yeah, they're wrong 10, 20% of the time. So usually not super wrong. But the human judgment takes in that information and says, well, maybe I'll take the umbrella today, right? And that sort of, the person's in control and using it as an information source, that's where you get the robustness of the system, because you've got the human judgment in there, the context sensitivity, stuff like that.

 

Justin Beals: One of the things you introduced in the book that I enjoyed your discussion about, and I think someone else penned the phrase, but you use it quite a bit: the concept of dragons. You know, disproportionately our reality is in digital media. We receive our news through it. It's algorithmic in nature. I watch a lot of YouTube videos, just like my niece and nephew. It's a big part of how I'm getting information.

 

You point out that disproportionately dominant voices, they kind of emerge with this preferential attachment in digital media. And I think one of the things that kind of shocked me in reading is that simply removing them will only create space for new ones to emerge. Like, you're not going to solve the problem by being, like, "OK, nobody can have this many followers" or something like that. 

 

Alex Pentland: You can think of it, one analogy is, like, boiling water, right? You can have something that's a little hot and some steam comes off, and that's pretty good, right? Okay, but if you turn it up just a little bit more, you get big bubbles and the thing foams over onto the stove and you have a real mess. Well, we've taken our social media and we've turned it up that extra little bit by having all these followers and the advertising things that promote excitement and fear, right? And as a consequence, what we get is we get this mess, just like the pot that's turned up too high. To come up with good decisions, I mean, this is actually a core of the book, it turns out that you need to listen to all the different ideas and try and figure out what people are agreeing about and what they aren't. And if you do that, you actually can make very good decisions, probably better than a lot of the equations and other sorts of things. But if you let that discussion be dominated by a few loud voices, then you're just hearing them. And particularly if they're incentivized to be loud and shocking, then you're just getting junk into your head, right? Don't do it.

 

Justin Beals: I think, Sandy, the way you describe this, now I'm thinking of the internet as a screaming toddler, just trying to get it into some form of logical mode of communication. Because I do agree with the shared wisdom of the group, right? That was the promise a little bit, I felt like, in some of these communication tools, is that we'd learn new perspectives or different ideas, and we could rally behind them. 

 

Alex Pentland: That was the hope. Yep, yep. But then we built the advertising in there and the influencer mechanisms. And so we turned it up just that little much that it now is like boiling over. the problem is, is once the temperature is up like that, yeah, you can prick the bubbles, but there's always new bubbles coming.

Justin Beals: Gotcha. Yeah.

 

Alex Pentland:  And that happens, you see that also in finance. So I mean, there's a little different distance, but you see, people getting too crazy about securitization and these other fancy things. And that causes bubbles, right? Same type of mania.

 

Justin Beals: You introduced to me the platform Polis in your book. And to this point that maybe it is in how we architect our communication pattern or the tools we use to do it. Polis, I believe, is a consensus platform, and it's been used successfully in Taiwan. And so I'm curious a little bit about how Polis finds success where other collective decision-making tools, like if we try to think about something like Twitter that way, or another platform, really fall flat, yeah.

 

Alex Pentland: Yeah, yeah. Well, you know, Polis is a system that was built by a very dedicated, very smart guy up here in San Francisco, Colin Megill. And it was adopted by somebody who was equally smart in Taiwan. And they used it together with community engineering to really restore trust of the citizens in the government. I mean, just to give you an example, a decade ago, 7% of the people in Taiwan would say that they trusted the government. And today the number is something like 70%. I mean, wow, that's a real change. And what happened? Well, what we did is we built a test bed, it's called deliberation.io. People can look at it, it's full of code there, the experiments, everything. And we tested what about the Polis thing was actually good. And one of the key things is there's no followers.

 

You can contribute comments, but you contribute comments to a common pool. So you can see all the comments. There are no extra-large voices. The other thing that it does is it has a way of visualizing the space of comments. So you can say, well, my comments are over here on the left, what are they saying over there on the other side? And it turns out that in most situations, people are a lot more reasonable and agreeing than they think. I mean, just to give two things that people really go ballistic about in this country. One is gun control. Turns out most people in the US have pretty much the same attitude, except for the people at the ends, like the extremes. Same thing with abortion. People in the US have pretty much the same attitude. It's a little bit more liberal than Europe, which is interesting, except for the people on the ends. And so, you ought to know that, and you would never know it from either the press or the social media. And when you know what the majority of people think, you begin to say, well, maybe there's something we can do, and, I'm sorry about the guys at the ends, but, you know, it seems like we all pretty much agree on something here. And that's the sort of notion of shared wisdom. When you see what other people are doing, they call it the face of the crowd, then you come to a realization that you're not alone in this.

 

There's a lot of people that have reasonable opinions. They're not crazy folks. So those are a couple of the design choices that are really different than social media. Like if you think of social media, you can't get a sense of what everybody is saying, just the people that you follow or happen to be in your little narrow thing. And that leads, of course, to echo chambers plus to loud voice effects.

 

So it's not how you want to do things. And the same is true in business. So like, if you have a meeting in your group and there's some guy who's a loud guy, he's going to disrupt the wisdom of that group. Similarly, if you don't know what's happening in the rest of the corporation or the rest of the environment that you operate in, you're likely to make really poor choices. And so in the book, I sort of point out some of the things you can do with AI to avoid those situations.

 

And that may be some of the best strength for AI, that it lets you get a good sense of what everybody else is doing. You said you use it for search. It also can let you get a good sense of what people are saying to each other, sort of summarizing what's happening, but without the loud voices.

 

Justin Beals: You know, this is something that I've struggled with for a while, and you really turned on the light bulb in my head, Sandy, when I read it. I thought it was brilliant. So, you know, I think I've struggled with where AI fits, even though I use it day in and day out; like, we build features all the time around what data can drive for a better feature in a piece of software. 

 

And I think for a while I was very nervous about AI being a moderator, doing moderation work. But in reading your book, you seem to be a little bit of a fan. And the more I read your book, the more I was like, you know what? This technology is actually primed for this kind of work. And we just haven't tuned it. It isn't perfect, of course. Yeah, but to your point, being able to synthesize a whole swath of different ideas from a system like Polis into something a lot of people can agree with, that seems to be something it's particularly adept at.

 

Alex Pentland: Yeah, yeah, it's actually pretty good at that. And similarly, I was on a panel with a bunch of CTOs of major corporations. Being an academic, I was the entertainment, right? And they were talking about, it's so hard to bring AI into the corporation because you have to deal with all these data silos and things. But one of the things they mentioned, and everybody said, yeah, sure, was they had all created AI buddies for all their employees. So what's an AI buddy? Well, it's a little local AI. It's not somebody with remote data. And it's read the manual. And it's read the newsletter. And it knows what Joe down the hall is doing. And it can remind you when you do something, oh, that's like what Mary is doing. Or the manual is very skeptical about this, whatever it is. So, for instance, in security, you know that the vast majority of problems are caused because someone sort of didn't think about it right or they forgot something. It can be that sort of, you know, little helpful reminder. And in other sorts of business, like business development, it can give you a better sense of the context in which you're operating and what the other people in your organization are doing. And that's all good. And it's particularly good in an era of like work from home or distributed organizations, because, you know, if you're like in different states and you're working from home and so forth, how are you going to know what's happening? How are you going to be in the loop? Well, these things are actually pretty good at that. But notice that you're not asking them for the truth. You're not asking them to make a decision. You're asking them to be librarians or gossips or something like that. And that they're pretty good at.
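As a very rough sketch of the "AI buddy" pattern described here, the defining property is that the assistant only draws on the organization's own local material rather than acting as a global source of truth. The documents and scoring below are hypothetical stand-ins; a real system would layer a language model on top to summarize whatever it retrieves:

```python
from collections import Counter
import math

# Hypothetical local sources the buddy is allowed to read: the manual,
# the newsletter, and notes about what colleagues are working on.
docs = {
    "security manual": "rotate credentials quarterly and report lost devices immediately",
    "newsletter":      "Mary's team is piloting passkeys for the customer portal",
    "project notes":   "Joe is migrating the audit logs to the new retention policy",
}

def score(query: str, text: str) -> float:
    """Crude word-overlap relevance score; stands in for real retrieval."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values()) / math.sqrt(len(text.split()) + 1)

query = "who is working on passkeys"
best = max(docs, key=lambda name: score(query, docs[name]))
print("most relevant local source:", best)   # -> newsletter
```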

 

Justin Beals: Yeah, absolutely. So I wanted to talk a little bit more about the decision-making part of your book. Of course, I think there's some psychology to it that you help lay out so that we can understand the problem space. And I just want to identify, especially for security leaders that listen to SecureTalk, we deal with cultural stories all the time. They probably don't think about it, right? But why you want a password of a certain length or why you want to implement a certain technology has a lot to do with the stories about what could go wrong if you don't. 

 

Alex Pentland: Yes, exactly. Yeah, yeah, those are good things to learn from. Yeah, exactly.

 

Justin Beals: You distinguish between stories for us, stories for me, and stories for change as three types of support stories that people need, where we could also use AI. Can you help me kind of understand how these three types of stories map onto real problems a little bit? Yeah.

 

Alex Pentland: Sure. So stories for me is like this AI buddies thing, right? It's helping me remember and know what's going on. And we have a project, it's open source and all that, it's called loyalagents.org. So it's how to build AI agents that are loyal to you and not to somebody else. Okay, that's pretty good. And then stories for us, right? That's this thing like Polis. And we have this thing again, an open source thing with all the technical papers, called deliberation.io. That helps actually act like a mediator or a sort of a mirror. It helps people sort of reflect on what they're saying in a group. And the great thing is, just without adding any more information, just reflecting back to people, you get results from groups that people like twice as much as if they don't have that reflection. That's pretty amazing. And we're just sort of beginning. 

 

And then the other thing, the final one, is sort of stories for change. So what have people done to move from where you are now to someplace better? And you'd like to know all the different things that people have done and whether it worked out, whether it didn't work out. And people who are concerned with, like, growing a business or surviving a change in the market, this is what they need more than anything else. What do other people do? What works? What doesn't work?

 

And it's a little bit like market research or something, but that usually is done with these very old fashioned tools or just people gossiping with each other. And the AI can give you a really good sense of what all other people in the world are doing to move from A to B. 

 

Just like it can say, yeah, I've read the manual, and here it is. Or if it's a mediator, I hear people saying this and this. What do you want to do? Those are the sorts of things where it's not telling you what to do. If you let an AI tell you what to do, you are going to run into a wall sometime and hurt your head. Don't do it. They can advise. That's fine. But the main thing they're good at is collecting this sort of information and presenting it to you so that you can make a better decision.

 

Justin Beals: Yeah, Sandy, I want to throw a little bit of a curveball at you. You know, as we talk about AI here, right, there are, especially in this particular time and space, some fairly massive organizations, or at least they've raised a lot of capital and they have big plans on what they're planning to do. I'm sure there's good and bad in everyone from ChatGPT to Anthropic to Mistral in France. You know, what rises in your mind as either a stressor or something that's very exciting as you think about these different organizations, and what should we be doing better, or what's gone right, even?

 

Alex Pentland: Well, there's a couple of things in what you said there. So one is I had a dinner with the head of Singtel and the head of Singapore's sovereign wealth fund. And we were listening to the VCs in San Francisco talking about frontier models and things. And they were all smiling, because that's not the game they're playing. Who wants a model that speaks Romanian and knows how to cook? For what? Right? I mean, come on. Get down to business, get healthy, raise our kids. Those are actually fairly specialized things. And so what they're all doing is they're trying to take the needs of daily life, the processes, the problems, and build things that are really expert and reliable at that. And it's like, okay, yeah, wow. So if they do that, they're gonna be sort of like what Microsoft plus Google have been in the last, say, 20 years, right?

 

They're going to be owning all the processes in the whole world. The other thing that was interesting about that is that, you know, we think about there's the US and there's Europe and then there's all these poor countries, right? That's sort of our American mental model. It's deadly wrong.

 

Justin Beals: Yes, we have that one. I hate to admit it, but it's true.

 

Alex Pentland: Yeah, yeah. I mean, here an example is, did you know that there are more middle-class people in India than there are in Europe or the United States?

 

Justin Beals: I'm so glad to hear it, actually. But I didn't know that, Sandy. Yeah. Yeah.

 

Alex Pentland: Yeah, so these countries are no longer poor. They're not exactly rich, but they have a thriving middle class, they have technical skills, many of the people are educated. It's perhaps a little like, you know, the United States in 1950 or something like that. It's really different than our mental models. And of course, they're creating their own ways of living and using the open source models and things like that. So it's a lot more complicated situation than you think, and I sort of worry that, you know, OpenAI and people like that are not being very efficient with their money, let's just say. On the other hand, you know, one of the things that they did is a bit like WeChat in China. So WeChat in China is what everybody uses almost as the operating system, and they buy and they sell and they pay for things and they do stuff, you know, seminars and all, off this one sort of social media platform.

 

And the pennies they get from each purchase, right? Like, say, MasterCard or something, go into an organization called Ant Financial, which is one of the most profitable places in the entire world. So now, you can go on ChatGPT and say, find me a leaf blower, and it will go to Walmart's web pages, find you a blower, pay for it with your money, and send it to you.

And if they take a little slice off of that, yeah, they're gonna be just fine. Thank you.

 

Justin Beals: Yeah. Man, you point out some great changes that are going on. One is this, like, I had this moment where I was like, would I rather deal with all the advertising in my current search engine, or would I rather pay a subscription to something without it? And actually, maybe this is privilege, but I'd pay to not have the ads in there, to get straight back to the search or the research that I wanted out of the internet to find the answer. Yeah.

 

Alex Pentland: Well, I'm on your side. This thing that we've built, loyalagents.org, is a research organization. But the partner in there is Consumer Reports. And what we're doing is building agents to help people buy stuff, right? Because you don't want to use Amazon's agent, because they're going to give you Amazon's, like, most profitable thing. You don't want to use Walmart's. You want to use something that represents you and your best interests in doing this. And so, you know, what we're trying to do is set a standard that people have to, you know, rise to, to represent you and be fair when it's doing things like buying stuff, right? So it's not an advertising model. It's an advising model, a little like, you know, a doctor or an accountant or something like that, helping you actually figure it out and buy it. And that strikes me as exactly the right thing to do.

 

You know, and the other thing to realize about doing it that way, that means you don't give up your data. The only thing that the company knows is that you asked to buy a blower, right? Leaf blower. They don't know if you've got kids or where you live or any of that. They might know some of those things if they have to ship it. You know, but there's many things they don't know that currently they collect all this data on. And so that data asymmetry is a really key thing. And it's also, of course, a security thing.



You just don't want lots of copies of your data around everywhere. That's like 101. Don't do it.

 

Justin Beals: Yeah, you know, it comes back to, I think, the Mona Lisa Overdrive William Gibson book that I read a long time ago, where I think the young woman, the protagonist, had an AI device with her, and they were buddies. It was a gift, and they shared data very trustingly because the AI bot was loyal to the protagonist. Yeah.

 

Alex Pentland: That's right. And the legal term is fiduciary, that it legally represents you, right? It's loyal. And the key test is, is it loyal? That's actually exactly the way the lawyers talk about it. If this thing represents you, it's loyal to your interests, then you shouldn't mind sharing with it because just like with your lawyer or your doctor, right? It's trying to help and it can be trusted.

 

Justin Beals: So I want to crack into one more area that you go into in the book and will kind of wind up as how it's impacting some legislation. But I've oftentimes struggled sometimes in the United States. I have the privilege of doing a road trip once a year across the country. And I see lots of different places and lots of different kinds of people. And I think about our national discourse. And each state and community can be very different.

 

And I felt like in your book, you talk a lot about there are decisions that get made at a nation-state level or perhaps someday at a global level, but there are also decisions that need to happen at a smaller community level. Yeah. Maybe you can talk to us a little bit about how breaking that apart in a way, putting more structure or some might think about it like too much bureaucracy, but actually how it really can help.

 

Alex Pentland: Well, you know, there's a Nobel Prize economist called Elinor Ostrom that studied how people self-manage themselves around the commons. And this is a word that harks back to, you know, the English commons where the sheep would graze on the grass. That was called the commons. How do you make sure nobody, you know, takes an unfair share of it and that it's done correctly? And, you know, there's basically a couple of things.

 

People with skin in the game are the only ones that should actually have a say about it. Somebody who lives in the next valley shouldn't be telling you what to do. You need to have data about who's doing what and how well it's working out, so you can detect when you're overgrazing or doing something that's bad. And then there needs to be some sort of governance where if you're hurting other people, you face a penalty, okay? And it's graded, so if you're really bad, you get a real penalty; if you're not, not so much. And that's not what we do today. What we do is we try to promote everything to the national level or the international level. And what that means is that any community that isn't absolutely average loses. So that, you know, if you have a policy that works for the average community, it won't work well for the poor communities or the rich communities. It won't work well for, you know, any other sort of axis. Communities really need to manage their own business.

 

And what's interesting is that we think of that as being inefficient. You just said that, actually, there's too much bureaucracy. But there are organizations that do that. For instance, all the commerce in the United States, all the buying, the selling, getting married, adopting kids, whatever, all that sort of stuff is not done at the federal level. It's done at the state level. And what they do is they have a volunteer organization that gets together and talks about how they can make them interoperable. And this has been going on since 1870. And it accounts for about 10% of all the law in the United States. And it's why we can buy things from other states. It's why we can move to another state without having to re-get married or re-adopt a kid or whatever. It's because they've been harmonized, because it's in their interest.

 

Two states harmonize with each other so that they can trade with each other, so that they can have things go back and forth, right? Everybody profits, that's great, right? And that sort of driver allows you to say, well, we're gonna have local governance, but it has to be something that's interoperable so that people come and visit and they buy things and stuff like that. And I think the evidence is that it works not just at the state level, but internationally. 

 

If you say international law... international law, I mean, I've done some of this stuff, like with the UN, with the EU, all those sorts of leadership organizations. People don't really believe in international law. What they believe is it's in their interest to get along with each other in some sort of common deal. So if you buy something from another country, you know, well, there's deals about how to convert the money. If it screws up, there are deals about how to get your money back. Sometimes it doesn't work so well, but everybody has this interoperability stuff because it's in their interest. They maybe cheat a little at the edges, but the fact that it works, the fact that there's so much trade in the world, just shows you that it's really quite interoperable and a trustworthy system, despite looking a little bit like a pile of Band-Aids.

 

Justin Beals:  Yeah. So this is something in the compliance space that is a big topic in governance. It's a big topic of conversation, right? A lot of experts I talk to are like, why can't they just harmonize all these things? I'm like, yeah, because they all want different stuff. And besides, I think they're a little unique. You worked on GDPR. You've worked on UN sustainable goals, both concepts of governance and how we work.

 

You know, I loved in your book, and you changed my mind on this in reading your book, that ex ante regulation of AI is kind of misguided. Like, you shouldn't lock it down before it's had time to breathe a little bit. And I think most books I read are like, no, we should do all these things right now to lock it down. Maybe talk to us a little bit about why you think that's a really valid approach as the EU rolls out their AI directives here.

 

Alex Pentland: So we worked, my group, myself a little bit, worked with the EU to be able to develop that. And the tale I tell in the book is we had this discussion group that was all the prime ministers and presidents of the countries together with the chief regulators. And I was advocating this ex post thing, which is basically you have audit trails, you have liability, if you screw up, you get slammed, right? And you learn from that.

 

And this is the way we regulate electricity. I mean, you know, yeah, if you electrocute somebody, you get slammed, right? And we have things like Underwriters Laboratories, those UL things on every lamp cord in the world, practically, that say, this is a design that won't electrocute you, okay? Because we learned about that, okay? Took us some time, but we did. And what I found in talking to these people is all the presidents and prime ministers thought that was a great thing to do because you can do it pretty directly. All you need to do is take the existing law, the existing harms, and say, yeah, this applies to those AI things too. All right, that's the basic thing you need to do. But all the regulators felt like that would put them out of business, right? They wouldn't have anything to regulate. So they wanted to have all these rules, and they did. And what's interesting is they're now walking them back, because when they did it, you know, many of the things that exist today, even like five years ago, didn't exist. So they wrote something for the time without anticipating a lot of the things that are happening that maybe are more important than the things that they were all worried about. And I think basing the law on the existing harms is right: we know what's good, we know what's bad. Killing people? Bad. Having good trade that's reliable? Good. Okay, so we know that, we just have to make sure that we connect good and bad to whoever did it. 

 

So that's a liability regime. We know about that. You just have to make sure it applies to these guys. And then you can nail them if they screw up. And it's in everybody's interest not to screw up because if you screw up, then people aren't gonna trade with you and you'll go broke. That's bad.

 

So that's sort of the logic of it. It isn't maybe everything, right? It isn't like the best, but what happens is now you have things where people are developing compatible systems for their own interests, right? And you will see things that are bad, and you can actually legislate against particular bad things. Sure, okay. Like, for instance, I'm not a fan of things like character AI, which is sort of like chatbots that are supposed to be your friend because people are really susceptible to that, and particularly kids. There's like these terrible stories. So I'm not a fan of that. It's treating the AI like a human. And just in general, that's the wrong thing to do. The AI can be an information source, can be a context engine, can be a search engine. All of those are tools, but they shouldn't be people. And whenever you sort of say, this thing is going to be a person, you've probably made a pretty bad mistake.

 

Justin Beals: Yeah, I loved your idea of, like, the logging, like you've got to keep a trail; you could legislate that. And then if something goes wrong, you have a trail to decide where, you know, the problem was, who's at fault, what type of recompense is deserved, right?

 

Alex Pentland: Yeah, I mean, in the security business, your business, you keep logs of everything for just this sort of reason. And if you're in finance, right, you keep logs of where the money came in, went out, who did it, why, what it was for, so that you can fix things if they break. Right. It's not hard. It's not terribly expensive, but it's really necessary if you're going to find out what's going wrong and hold it accountable.

 

Justin Beals: Yeah, absolutely. You know, one of the things that I thought was intriguing, too, is, like, we've been watching the NIS 2 rollout in Europe as well. And that's a little more community driven, right? Like different countries are kind of interpreting the directive and what their laws in the community will be and crafting it that way. I thought that might be successful, although all my compliance friends don't love it. So, yeah.

 

Alex Pentland: Well, it won't be completely interoperable. It'll be sort of interoperable. It's the way trade is also, right? Everything's just a little bit different, but it works. And if it doesn't work, people will fix it, because that means that they can't sell stuff to somebody else, or they can't buy stuff, or some bad thing like that. Yeah.

 

Justin Beals:  Well, Sandy, I really appreciate you talking to us today. And I really loved reading Shared Wisdom. I thought it was a really pragmatic approach. It didn't lean too far either way. And I learned some things from it. And that's not unusual. There's always a lot for me to learn. But I thought it was an incredible book and highly recommend it.

 

Alex Pentland: No, thank you very much. You know, the view I had about it is that everybody's, like, coming up with these fantastical stories about, you know, it's all going to be paradise or it's all going to be doom. And no, no, come on. You know, we're going to take our steps, and we'll make some mistakes, but we'll do some good things, too. Let's figure out what direction we want to go. And until you pick that, you don't know, you just end up wherever you're going to end up.

 

Justin Beals: Well, and thank you so much for joining us on Secure Talk today. It's been my treat, really. Yeah, really fun. Good. Thank you.

 

Alex Pentland: Pleasure. Pleasure. Good. My treat too. No, this is good. Thank you. Okay.




About our guest

Alex Pentland is a Stanford HAI Fellow and MIT Toshiba Professor. Named one of the “100 People to Watch This Century” by Newsweek and one of the “seven most powerful data scientists in the world” by Forbes, he is a member of the US National Academy of Engineering, an advisor to Abu Dhabi Investment Authority Lab, and an advisor to the UN Secretary General’s office. His work has helped manage privacy and security for the world’s digital networks by establishing authentication standards, protect personal privacy by contributing to the pioneering EU privacy law, and provide health care support for hundreds of millions of people worldwide through both for-profit and not-for-profit companies.

Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.
