Why Security Leaders Struggle With Security Culture | Steven Sloman on SecureTalk

December 2, 2025

Brown University cognitive scientist Steven Sloman reveals the hidden mechanism driving cultural division—and why it matters for security leadership. In this wide-ranging conversation, Sloman explains the fundamental tension between sacred values and consequentialist thinking, and how understanding this dynamic transforms how leaders communicate risk and build organizational culture.

Justin Beals opens with a personal story about leaving a religious environment defined by absolute values, setting the stage for an exploration of how cognitive science explains why extremists control discourse, why outrage dominates social media, and why having strong values might actually be essential for good decision-making.

KEY TOPICS:
  • The two systems humans use for decision-making and why both matter
  • Why simplified positions dominate complex policy debates
  • How humor breaks through absolutist thinking
  • The critical difference between AI association and human deliberation
  • Why communities radicalize when they become too insular
  • Practical frameworks for leadership teams navigating value conflicts

Sloman, author of "The Cost of Conviction: How Our Deepest Values Lead Us Astray," shares insights from decades of research on cognition, reasoning, and collective thinking. The conversation moves from abstract cognitive science to immediate applications for security professionals operating in organizations where tribal loyalties threaten evidence-based decision-making.

Whether you're presenting risk assessments to boards, building security culture, or helping organizations function during divisive times, this episode offers frameworks for understanding when values serve us and when consequentialist analysis becomes essential.

Resources: Sloman, S. (2025). The cost of conviction: How our deepest values lead us astray. MIT Press. (https://mitpress.mit.edu/9780262049825/the-cost-of-conviction/) 


 

Full transcript




Justin Beals: Hello everyone, and welcome to Secure Talk. I'm your host, Justin Beals. I grew up in a deeply religious environment. My worldview was defined in absolute terms. There were things that were simply true, positions that could not be questioned, values that were sacred. My belonging to the community depended on signaling that I shared those values, constantly. 

 

Every Sunday, every family gathering, every conversation carried the implicit requirement that I affirm what we all believe.

 

When I finally left that environment, it wasn't because I rejected the community or even the people. It was because I started asking questions that couldn't be answered. I discovered that many of the sacred values I'd been taught were based on fallacies, some were outright misrepresentations. And the more I learned, the more I realized that the certainty I'd been handed as a child didn't hold up to scrutiny. That experience left me with a particular sensitivity.

 

When I encounter information presented as unassailable, when someone tells me this is simply how things are, I get cautious. I want to see the evidence. I want to understand the reasoning. I want to know what happens when we test the assumption against reality. But this instinct has served me well as I built my career in technology. Computer science rewards the people who question assumptions. Data either supports your hypothesis or it doesn't.

 

Systems work or they fail. The machine doesn't care what you believe. But here's what I've learned more recently, and that's what today's conversation explores in depth. My reaction against sacred values was itself very absolute. I threw out something essential when I rejected the rigidity of my upbringing. Sacred values aren't inherently dangerous. They're actually necessary. They simplify an impossibly complex world.

 

They bind us to communities that we care about. They give us identity and belonging. As our guest today points out, he wouldn't want a friend who had no sacred values. Such a person is normally thought of as a psychopath. The problem isn't sacred values themselves. The problem is holding them so tightly that we ignore consequences. The problem is communities becoming so insular that they shape their values specifically to oppose other tribes.

 

The problem is mistaking certainty for wisdom. Now security professionals occupy a strange position in this dynamic. We're trained as consequentialists. We assess risk, we calculate probability, we measure outcomes. We build threat models based on evidence, not belief. And yet we operate within organizations and a broader culture increasingly dominated by sacred value thinking.

 

I watch security leaders struggle with this tension constantly. They present carefully reasoned risk assessments to boards that want simple answers. They communicate nuanced threat intelligence to executives who've already decided what they believe. They build security cultures and organizations where tribal loyalties override evidence-based decision-making. The current moment has made this worse. Our information environment fragments attention to the point where deep analysis feels impossible.

 

Social media algorithms reward outrage and outrage flows from violated sacred values. Extremists dominate public discourse because their simplified positions are easy to articulate and share. The moderate, the nuanced, the carefully reasoned gets drowned out. And then there's AI. I've spent years working with natural language processing and machine learning. What fascinates me about large language models is that they excel at association.

 

They find patterns across vast amounts of information. But they cannot yet deliberate. They cannot reason step by step through novel problems, aware of their own process, the way a security architect works through a complex system design. This distinction matters for our field. We're deploying tools that mirror one part of human cognition while lacking the other. Understanding that gap requires understanding how human thinking actually works. Today's conversation goes there.

 

We'll explore the cognitive science behind sacred values and consequentialist reasoning. We'll examine why extremists control discourse and how communities radicalize. We'll discuss what humor has to do with breaking cognitive traps and why having strong values might actually be a prerequisite for good outcome-based thinking. For security leaders, this isn't abstract philosophy.



Understanding these mechanisms shapes how we communicate risk, how we build culture, and how we help organizations function when tribal loyalties threaten to override evidence. The goal isn't to eliminate sacred values. It's to know when we're operating from them, when that serves us, and when we need to shift into a different mode of thinking. 

 

Steven Sloman studies and writes about higher-level cognition, decision-making, thinking, and reasoning. He has taught at Brown University since 1992. He is a fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book, Causal Models: How We Think About the World and Its Alternatives; a 2017 book, The Knowledge Illusion: Why We Never Think Alone, co-authored with Phil Fernbach; and the 2025 book The Cost of Conviction: How Our Deepest Values Lead Us Astray. He has been the editor-in-chief of the journal Cognition, chair of the Brown University faculty, and created Brown's concentration in behavioral decision sciences. He is the winner of the 2025 Presidential Faculty Award at Brown.

 

Join me on SecureTalk as we explore the cost of conviction, how security leaders can harness it, and what Steven can teach us about this work.



Justin Beals: Steven, thanks for joining us today on SecureTalk. We really appreciate it.

 

Steven Sloman: Thanks so much for having me. It's great to meet you, Justin.

 

Justin Beals:  Excellent. Well, let's dive right in. First of all, thank you for your book. I really enjoyed reading it. I'm a lay person that's constantly interested in the sciences, and cognitive sciences is something I'm always very curious about. I never can quite figure myself out, let alone everyone else around me. So I just find the topic deeply intriguing. And I want to start off with a quote from your book. You said, “I am writing this book during such a period of uncertainty. The world is experiencing rapid and unpredictable change, inhabited by strong, contradictory perspectives”. This really resonated with me. Maybe you can share a little more deeply about that inspiration and kind of what you're seeing in the world that is, you know, driving you to share your expertise. Yeah.

 

Steven Sloman: Well, the book had more than one motivation. I mean, partly it was just theoretical cognitive science that led me to it. But the other was looking around and seeing a world that is, you know, going through an intense storm. Let's hope it's not a perfect storm. You know, there are things changing at so many levels. I mean, on one hand, we have climate change, which should be a serious concern. Maybe the climate isn't gonna turn upside down within the next five years, but there's certainly a chance that something severe is gonna happen. After all, it's already happening. We see it all around us all the time.

 

On the other hand, technology is changing our lives in so many ways. I mean, social media has completely changed not only how we get news, but how we interact with each other. You know, we're no longer watching television, and the broadcast arena has become incredibly fragmented.

 

Then of course there are the new technologies. I mean, AI may have a huge impact on how we live our lives in every respect. It's already changing some things. It's unclear what the overall effect is gonna be, but it certainly has changed our economy, right? So there's all those changes taking place, and then at the same time there are these culture wars.

 

You know, we have two or more, but really two, significant positions about how the world does work and how it should work that are completely at odds with one another, and we have extremists who are controlling our discourse and making it impossible for the vast majority of us moderates to really feel comfortable in the world. And partly it's our fault, I think, actually, and I'd be happy to talk about that if you want. But, you know, one thing that's clear is that we have a federal administration that is doing everything it can to feed the fires of division. As far as I can tell, its aim is a civil war.

 

I mean, it's just amazing the extent to which it takes extreme positions and, you know, calls the other side out, constantly calls them evil and makes fun of them. And it's not as if the left doesn't do that to the right as well, but the left isn't in power, and when the left was in power, it didn't do it in the same way. So there are all of these fires raging, and it's a scary time to be alive. And so, as an academic, my natural propensity is to try to understand what's going on, and the book is, you know, a tiny, tiny effort to do that.

 

Justin Beals: Yeah. You bring it up, and I think actually it's very topical, and certainly I see it. The extremist views are being accentuated by those in power, our current federal administration, in ferocious ways, in ways that I've never experienced. And certainly sometimes I feel like my particular perspective is not in power.

 

Sometimes I feel like my perspective had been in power in the past, in our United States federal government, in the elections, our president, the party, etcetera. But I agree. I feel young, though probably I'm not, and there's never been this virulent a desire to demonize a group of people. I struggle with the fact that I think they want violence, like they're trying to create a reality where we are at war. And this is a podcast about security; nothing is going to be secure, no one's going to feel secure, if we're in the middle of a civil war.

 

Steven Sloman: Yeah, no, I agree. And so my book is an attempt to point out one property of the mechanism that is leading to this division. It's an attempt to describe some of the dynamics.

 

Justin Beals: Yeah. I wanted to start with some of the definitions. Part of what I learned from the book, and the reason we were so interested in what you had been writing about, is that you help us define some of these activities. Because sometimes it just feels very chaotic. Like things are going on around us, but I don't have the ability to categorize or perceive or assess the information that I'm receiving.

 

And I think two of the themes in the book, and I'm hoping you can help us capture them, are the concept of sacred values and the concept of consequentialism.

 

Steven Sloman:  So basically, the book points out that there are these two strategies we have for making decisions. One, consequentialism focuses on outcomes, and it sort of represents how we think we make decisions all the time, right? Like if we're deciding whether to brush our teeth, it's because of the consequences of brushing our teeth that we generally decide to do so or when we decide to do so. It's for the sake of the consequences. So, consequentialist decision-making is something we deploy all the time, especially for small decisions. 

 

But the problem is the world is incredibly complex, right? And this is actually what my last book was about. It's called The Knowledge Illusion. I wrote it with Phil Fernbach. It's about the fact that the world is immensely complex and we as individuals can't possibly understand it, although we tend to think that we understand it better than we do. And the reason for that, we argue, is because we live in a community of knowledge. So when we think about cognition, we shouldn't be thinking about what's going on just inside the skull, in individual heads; rather, cognition is a communal enterprise. We think together. For instance, we might take a strong policy position on something and think that we understand the reasons that we have this strong position. But often we don't. And if we query ourselves, we discover we can't really justify our position. The reason we have the position is because the people around us do.

 

So we're channelling our community all the time. That's what the last book was about. But this book points out that one of the central ways in which we simplify, and then when I say we simplify, I don't just mean we as individuals simplify by having simplified schema in our heads about the world.

 

I mean, we as a community simplify by having a discourse that focuses on this, right? So one way we simplify is by ignoring outcomes, ignoring consequences, and instead just focusing on actions and whether actions are right or wrong. And this is what I mean by an appeal to sacred values.

 

So we have sacred values about which actions are right and which actions are wrong, and we appeal to them to make decisions. Some of them are really important things, like, you know, thou shalt not murder; that's a great sacred value that I hold. Another is, you know, thou shalt be a truth-teller, which is actually a sacred value that I hold, though it's clearly not one that everybody holds.

 

But what these things do is they actually make life a lot simpler, because they allow us to just do the right thing without worrying about the consequences. Deriving consequences is hard, especially in a complex world with multiple causes operating all the time. It's very hard to predict what's gonna happen, right? So if we rely on our sacred values, we don't have to.

 

Justin Beals: This concept of the collective thinking, in your prior book and the current book, has got to be impacted by the algorithmic nature of our reality today. I mean, my concept of reality, compared to when I used to read a newspaper from an editorial board, or attend a class in university, or even consume information via a popular musical artist and what they were discussing, is very different today under the reinforcement algorithms that go into the attention economy. It's really disrupting that ability, whether through sacred values or consequentialism, to build a collective understanding, because I feel cut off from other scopes of thought by a business. Do you agree? Is that an impact on this concept that we get the ability to do collective thinking?

 

Steven Sloman:  Yeah. So I think about that in two ways. On one hand, to be a good consequentialist, which in some sense is the optimal thing to be, right? Because if we're making a decision, presumably we want the best outcomes. That's why we're making the decision.

 

But to figure out, when you have an array of options in front of you, what the outcome associated with each of those options is, is a significant task, a major task. It's difficult for any one decision; across many decisions, it's incredibly difficult. And one effect of today's attention economy is that we don't attend to anything for more than a few seconds. So we can't really think through these complex issues. We don't think through these complex issues. And we're sort of always dealing at this very shallow level of analysis.

 

So I think that's one consequence of the attention economy. And that sort of pushes us towards sacred values because we can't really engage in the consequentialist analysis. Or, I mean, alternatively, it just leads to a really shoddy consequentialist analysis.

 

But the other effect of the attention economy of technology is that it's constantly trying to outrage us. So the thing about sacred values is that we tend to hold them in absolute terms, right? Like when I say I don't believe in murder, I really mean it, right?

 

There's no amount of money you could give me to go and murder someone.

 

It's an absolute value. That's one reason it's so simple. So people hold sacred values, and when those values get violated, people become outraged, right? So outrage is a product of a violation of sacred values. And I heard Joe Manchin say recently, you know, the ex-senator, the middle-of-the-road senator from West Virginia, that both the Republican and Democratic parties are actually corporations designed to outrage us.

 

And I think that there's some truth to that, right? They live on outrage. And it's also clear that media companies live on outrage, right? That's what grabs our attention: it shocks us and gets us excited, we want more, and our eyeballs get captured that way. So, yeah, I think one effect of modern technology is to really reinforce this outrage machine, which gets us thinking more in terms of sacred values and thus missing the boat, because we're not really figuring out how to achieve our own interests.

 

Justin Beals: Yeah. Your book was a little bit of a personal journey for me. You know, it starts by opening up some of these concepts, and I was really enjoying it, and then you dive into the sacred values topic fairly early on. Now, where it aligns with me is that I'm quite terrified of sacred values personally. I grew up in a very religious environment, and I felt like my entire worldview was defined in absolute terms, deep sacred values, and the reinforcement of that community, and my belonging to it, was signaling that I shared those sacred values almost constantly. When I left my religion, I was really interested in science. I wanted to know how the world worked. I wanted to be open to re-informing myself with new information and learning. And so I'm probably heavily invested in consequentialism.

 

So I love that you define sacred values really well in the book, and how to recognize them. Help share with us how you define sacred values and how we can recognize when we're operating from them.

 

Steven Sloman: Well, so first of all, a sacred value applies to actions, so often it comes in the form of what we think of as a moral statement, right? This is right; that's wrong. Without any reference to outcomes, without any reference to the state of the world, right? Just a description of what we should do. But it's the absolutism that really gives it away, right? When you're not thinking about trade-offs, like when you're not thinking about how using oil will save money, or how, you know, saving money can have consequences for the state of our environment, when you're not engaged in those sorts of compromises between competing goals, then you're more likely

 

to be thinking in terms of sacred values. I mean, there's another, sort of more formal, academic way to think about it, which is that most philosophers don't talk about sacred values, but they do talk about deontology, right? So deontology is a kind of ethical theory that's been around for thousands of years. Jesus was a deontologist.

 

It's also a theory about how we should act or a way of theorizing about how we should act. And it's based on logic, right? So the basic idea, and this was fully spelled out by Kant a few hundred years ago, is that there's some basic fundamental premise. In Kant's case, it was the categorical imperative. And from that, we can deduce the actions that we should perform or not perform through logical derivation. 

 

Consequentialists rarely think in terms of logical derivation. They think in terms of probability, right? So the canonical theory of consequentialism is expected utility theory, where you take probabilities and you take utilities, where utility is a value that we apply to an outcome, and you combine them in this simple way, additively, in order to determine what you should do. 
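To make the additive combination Sloman describes concrete, here is a minimal sketch of expected utility theory. The options, probabilities, and utility numbers are hypothetical, chosen only to show the arithmetic, not anything from the book:

```python
# Minimal sketch of expected utility theory (EUT), with hypothetical options and numbers.
# Each option lists (probability, utility) pairs for its possible outcomes;
# the consequentialist choice is the option with the highest probability-weighted sum.

def expected_utility(outcomes):
    """Additively combine probabilities and utilities, as EUT prescribes."""
    return sum(p * u for p, u in outcomes)

options = {
    "patch the system now": [(0.95, 10), (0.05, -50)],   # likely small gain, small chance of breakage
    "defer the patch":      [(0.70, 10), (0.30, -200)],  # same gain, larger chance of a costly incident
}

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):+.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("consequentialist choice:", best)
```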

 

And that kind of probabilistic thinking is really different than logical thinking. So I actually think that can be a cue to figure out how someone's thinking about an issue. Are they talking in the domain of logic, or are they talking in the domain of uncertainty, about probability?

 

Justin Beals: Certainly in my computer science work, I've dealt a lot with machine learning and data science. Probabilities are a big part of how these machine systems communicate with each other, and there's a mathematics to it. That could be a sacred value: the numbers are telling me something, so I'm only gonna believe in them. But that type of expected-utility thinking, as you point out in your book, can lead to extremist issues as well, right? Where we make decisions based upon science or probability or the utility of an activity that would go counter to sacred values that probably many of us hold.

 

Steven Sloman: Yeah, I'm sure that that's true. Look, I've had colleagues, for instance, do expected utility analyses for me, or, you know, approximate such analyses, that come to the conclusion that gasoline engines are better than electric cars, and that sort of violates one of my sacred values.

 

But if I'm gonna be honest, I have to say, well, okay, you know, to the extent your analysis is correct, then that's the analysis I should live by. I mean, you know, one sign of expertise is a willingness to be corrected, right? A willingness to discover you're wrong, even about important things.

Look, I despise pretty much everything Trump does, and just seeing a picture of him is a problem for me, but I've got to admit that it's hard for me to imagine another US president who could have achieved what he just achieved in the Middle East, right? So I have to face reality and violate one of my sacred values in order to conform to the facts.

 

And the facts are, he did. I mean, we'll see what happens, right? It's still very, very early. But nevertheless, there's been a step that it's very hard to imagine Joe Biden, for instance, having achieved.

 

Justin Beals: Yeah, I share your hope that peace prevails and we find an opportunity to end the sectarian violence that's gone on there, absolutely. It's funny, I walked into the book thinking to myself, I have a very scientific attitude, I'm really open to learning, I've let go of my sacred values as I've grown as a human being.

 

You know, I hold a woman's right to choose, all humans' right to choose what they do with their body, with their doctor, as a very sacred value. But there are certainly cultural contexts where, you know, that can be an issue. And so I love what you point out in the book: that probably the example we want to look for is both having sacred values and having an ability to reinvent them, or utilizing consequentialism to inform what the best sacred values might be next.

 

I thought that you pointed out in your book that there was a relationship between the two that helped us perhaps build a future together.

 

Steven Sloman: Well, that's a nice way to interpret what I said in the book. Yeah. So there's lots to say about this issue. On one hand, many social scientists and evolutionary psychologists have pointed out that our sacred values tend to have a basis in consequences, right? So the reason that we might think that murder is bad is because, you know, the consequences of living in a society in which people don't believe that are obviously horrible. And there have been some interesting analyses of societies in which abortion is permitted versus societies in which it's not permitted, and the differences may have to do with the kinds of environments they live in and the consequences they face.

 

It's certainly possible that sacred values have a basis in prior consequences over historical time. That does not mean that making a decision by sacred values is not fundamentally different than making a decision by consequences, right? So what I'm talking about is how we actually make decisions at a given point in time, not the origin of those decisions. The other thing I want to say is that it's important to understand that sacred values are really important. I think one point I make in the book is that I would not have a friend who didn't have a sacred value, right? Such a person is normally thought of as a psychopath.

 

Justin Beals: That's true.

 

Steven Sloman: So we should value our sacred values, and we should appreciate that one property of our sacred values is that they associate us with our community. So communities are defined by sacred values, right? And I like to live in a community which values truth and which values kindness.

 

So I look for these things in the people I hang out with, and if they don't share those sacred values, well, they're not going to be a close friend of mine.

 

So I think it's important not to run away from sacred values. Although I certainly share your response to religion, right? When you live in an environment that's utterly constraining because it's imposing its own sacred values on you for no good reason, then that is something that we have to run away from. And in fact, that's really the heart of the book: to argue that we should, right? That there are better ways to make decisions. And I must tell you that looking around and seeing what's gone on in the world over the past 10 years or so has really turned me off religion, too. I mean, I think I've sort of shared your experience to some degree.

 

Justin Beals: One of the highlights in your book is how sacred values become so extreme that they result in very violent acts when a community gets too insular. Yeah.

 

Steven Sloman: Yeah, although, you know, whenever communities as a group engage in violence, right, like when they're lynching people, or when there are riots, or when there's war, all these cases of communal violence, the sense I have is that all of them are kind of grounded in sacred values. That is, the leaders always make a point of trying to instil and take advantage of sacred values to get people riled up, to argue that the other side is violating your sacred values, and to produce this common outrage that is contagious within the community.

 

Justin Beals: We see that today with our current administration. We were just discussing how I think the images from Portland are so interesting, as we have statements about how horrible it is, how evil a city it is, how war-torn, how many issues there are. And certainly a lot of cities in the United States have challenges they're trying to overcome. You and I have both lived in cities, you know, in the downtown, in the heart of those communities.

 

At the same time, there's the imagery of costumed frogs dancing in front of the ICE building. I thought it was such an interesting tactic, countering the picture that was being created with the reality of a very vibrant community in Portland, Oregon.

 

Steven Sloman: Yeah. So I actually haven't seen the frogs, I've got to admit. I guess, you know, my wife would be shocked to hear that I haven't been watching enough news, because I missed the frogs. But there's no question. So humor is something that I really need to study a little more, because it's very clear that that's the way to puncture sacred values, right? In a sense, sacred values, to have their effect, have to be kind of implicit, right? Like, you can't reflect on them too much. If you reflect on them too much, you realize their limitations. So instead, they have to be an integral part of how you frame the situation. And what humor does is it breaks you out of that frame, right? It forces you to reflect and see what it is you believe and what you're doing.

 

So, you know, I was so upset by the Jimmy Kimmel episode, in part because if they take humor away from us, then we're lost.

 

Justin Beals: Humor, comedy, is a safe space, in a way, to poke fun at sacred values, even for those that hold the sacred value. It's not as scary because we're laughing together. Yeah.

 

Steven Sloman:  Yeah, that's a great way to say it.

 

Justin Beals: The other thing, given your background in cognitive science, is that in reading the book I was kind of floored by concepts that I understand from developing software, or tools for people to use, and trying to simplify. You know, we call it easy to use, or a great user experience; that's the way we might talk about the way we build systems, that they're simplified. But I feel like we're cognitively attracted to simpler things; maybe it could just be a utility issue, that our brains are trying to spend as little energy as possible to come to a conclusion as quickly as possible.

 

Steven Sloman: I just think we're fundamentally limited. We're not made to retain information. You know, one thing I report in my last book is that there's a cognitive scientist named Tom Landauer who, years ago, decades ago, did a sort of back-of-the-envelope calculation, using data from memory experiments, and came to the conclusion that the human mind, the human brain, can hold about one gigabyte of symbolic information. Maybe he's off by a factor of 10; it's still a tiny amount of information. It's less than a thumb drive, the way we used to carry information from machine to machine. We're not repositories of information.

 

We work in a different way, right? What we do is we try to pick out invariants in the world. That's, I think, what we excel at. And that's why we rely on our communities. Because, as a community, we retain an inordinate amount of information, right? If you put together all the information that's sitting in, say, a classroom or a seminar room that I'm sitting in, there's going to be a lot there. Much of what we have to do is make use of the information that's sitting in other people's heads; that's our responsibility, I would say, as human beings. So we simplify, both because we can't retain all that much, and, you know, we have to simplify to retain anything.

 

But the other thing is that in order to pass information between people, we have to simplify, right? Because language only allows us to encode so much information in an utterance. So I actually think a big part of the problem with the political dynamics is that the extremists control the discourse.

 

And the reason the extremists control the discourse is because they have these simplified views. And so we're constantly appealing to those views. Like, if you ask me what my position is on any hot-button issue, right? On Ukraine, say, just to pick one randomly. I can tell you what people who are strongly opposed to the war believe, and I can give you a slogan to represent their view, right? America should stay out of foreign wars; it's costing us too much money. And I can also do it from the other side, right? That Ukraine is the buttress against the Soviet, or the Russian, invasion of the rest of Europe and then the world. And those arguments are easy for me to access. The subtle arguments about, you know, what's going on in Russia and what's going on in Ukraine, and what the future of the war is, and how many weapons we should give them, and whether we're close to a treaty; I mean, there's all sorts of stuff that you have to be an expert to say anything intelligent about. So if someone asks me what my position is, I appeal to one side or the other, to the extremists, so they get to control the discourse. These simplified views get to control the discourse.

 

Justin Beals: I think, you know, I want to own a little bit that a lot of our movements on the left, environmental issues, gender equality, things like that, pioneered the use of sacred values, communicating sacred values and getting our communities to break out of their existing assumptions. Those have been weaponized full circle, it seems, some days in the other direction. But we do look for the simple answers; you know, we're attracted to them.

 

But I think, in our communities, we wanted to design space for expertise: our academic institutions, and even farther back, you know, the wisdom of a tribal leader, things like that. I think we had more respect for what they could teach us. And today we're a little more terrified of that expertise, because we know it will ask us to think more deeply.

 

Steven Sloman: And it might tell us things we don't want to hear, right? So yeah, it's both hard, but also unnerving. And, you know, what's happened is we have evolved different experts. There are people running around claiming expertise in every category. And if we already believe something, we can find people out there who claim expertise, and might sound like experts, who will defend the position that we already have. And that's a whole lot more comforting than having to deal with someone who might say something that's opposed to your view of the world.

 

Justin Beals: Yeah, you know, one of the things that I really liked in your book relates to my work: I have for many, many years worked in natural language processing, and large language models, of course, are the latest iteration. It's not some grand revolution; it's the latest iteration in this computer science work, in what we can do with language and the computing power that we have with machines.

 

You help identify, and I love this because it's very true, that all these LLMs work on a process of association, which is a part of human cognition, but not all of human cognition. Maybe help illustrate that for us: how association is so critical to us expressing an expertise.

 

Steven Sloman: So I can't help but point out that a lot of the development, the early development, of LLMs was done in psychology labs. In fact, one of my advisors in grad school was a guy named Dave Rumelhart, who was sort of the lead in the development of this algorithm called the backpropagation algorithm, which is sort of the heart and soul of teaching LLMs, of changing their weights to encode knowledge.
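For readers who want a picture of what that weight-changing process looks like, here is a toy sketch, reduced to a single linear neuron trained by gradient descent on made-up data. It only illustrates the error-driven update mechanics; real LLM training applies the same chain-rule idea across billions of weights:

```python
# Toy illustration of error-driven weight updates (the core idea behind backpropagation),
# shown on a single linear neuron with made-up data. Not any production training loop.

import random

# Hypothetical training pairs generated from the target function y = 2*x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = random.uniform(-1.0, 1.0), 0.0   # parameters start out uninformed
lr = 0.01                               # learning rate

for _ in range(200):
    for x, target in data:
        y = w * x + b            # forward pass: current prediction
        error = y - target       # how wrong the prediction is
        # Backward pass: push the error back into the parameters (chain rule on squared error).
        w -= lr * error * x
        b -= lr * error

print(f"learned w={w:.2f}, b={b:.2f} (the generating values were w=2, b=1)")
```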

 

Justin Beals: I just want to add that a lot of my training work has been reading the papers on backpropagation and understanding how to set that up with a data set that I might have available to build a feature. It's powerful stuff. I mean, it was brilliant work. Yeah.

 

Steven Sloman: Yeah, Dave was a brilliant guy, absolutely. He died tragically. So it shouldn't surprise anyone that the basic properties of LLMs mirror the basic properties of human beings. But I would argue that they only mirror one part of the human cognitive system. Namely, there's been a fair amount of agreement within cognitive science, within decision-making circles, that when we think, we think with two systems. I tend to overstate the case because I wrote a paper decades ago whose title was "The Empirical Case for Two Systems of Reasoning."

 

So I've been a believer in this stuff for a long, long time, and I still believe it. On one hand, we have an intuitive system that generates intuitions based on association, that basically appeals to memory. So if I say to you, what's seven plus five, I assume that an answer just pops to mind. You don't know how, but it popped to mind because you have an association between seven and five and twelve, right? Whereas if I say to you, what's 362 plus 471? Well, for very few of us does an answer pop to mind. Rather, we can come up with an answer, but we have to go through a bunch of symbolic operations. So I'll refer to that process as deliberation.

 

And the key cue that we're deliberating rather than intuiting is that we're aware of the process by which the answer is derived. Not only are we aware of the answer (we're always aware of the answer), but in the case of deliberation, we're also aware of the process by which the answer was derived, right? Like, I added the units, and then I added the tens, and then I added the hundreds.
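One way to picture the two systems in code, purely as an analogy rather than Sloman's formal account: intuition behaves like a memory lookup that returns an answer with no record of how it arose, while deliberation walks through explicit operations you can narrate, like column addition:

```python
# Analogy only: intuition as a stored association vs. deliberation as traced, stepwise work.

memory = {(7, 5): 12}   # a fact that "just pops to mind"

def intuit(a, b):
    """Return a remembered answer; there is no account of how it was produced."""
    return memory.get((a, b))

def deliberate(a, b):
    """Column addition: an answer plus an explicit trace of the process that derived it."""
    steps, carry, result, place = [], 0, 0, 1
    while a > 0 or b > 0 or carry > 0:
        da, db = a % 10, b % 10
        total = da + db + carry
        steps.append(f"add {da} + {db} + carry {carry}: write {total % 10}, carry {total // 10}")
        result += (total % 10) * place
        carry, place, a, b = total // 10, place * 10, a // 10, b // 10
    return result, steps

print(intuit(7, 5))                     # 12, with no visible process
answer, trace = deliberate(362, 471)    # 833, with every step available to report
print(answer)
for step in trace:
    print(" ", step)
```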

 

So I would argue that LLMs are great models of intuition. They're actually much more intuitive than humans because they have so much more information, so much more knowledge, so much more power to combine all of this information that they hold.

 

But in terms of deliberation, they're nowhere near what humans can do. Humans have the ability to encode any sort of formal system that they're taught, eventually, right? Like we can encode arithmetic, and we can encode computer languages, and we can encode natural language, and we can encode moral theories, right, that we want to apply, and we can encode mechanistic understandings of how toasters work. All of these things are kind of formal pictures, right, theories, models that our culture provides to us, that we internalize and then think about, and if we do it long enough, then they become intuitive, too.

 

So LLMs are great models of one thing, but humans are still capable of something else. LLMs may get there. I mean, AI has been working on this problem forever, right? How to algorithmize deliberation. But we're not there yet. Not in the way that we've captured intuition with LLMs.

 

Justin Beals: Yeah, and I have a very practical feeling about this for the work that we do. One of the big challenges lately is the wide variety of compliance requirements that come into a security team or security group, everything from regulatory requirements to third-party risk; they're only metastasizing. They invent new ones every quarter. And one of the big challenges is, how do I synthesize all this data? In the past, what you would have to do is go find an expert, but they were generally only an expert in one thing, like I'm an expert in HIPAA, or I'm an expert in GDPR. And trying to synthesize that was almost impossible. You'd wind up reinventing everything you did with the latest iteration of what was coming in, with the human consultant that was helping you. And I argue that LLMs are better at it. I mean, honestly, they can hold more of that information

 

in memory, in associative situations, than any one human being could. But they can't deliberate for you on how to implement it. They're not there yet. We still need an interpretation of that synthesized association, of how this impacts my team. Like, what are we going to do now, day to day, to see that happen? And so we've been really curious about the strengths, the opportunities, but also the weaknesses. And to the sacred value thing, I've got Sam Altman with OpenAI saying, this is a sacred value, we're gonna give you HAL 9000. There's no better metaphor for me to describe it than a HAL 9000. Like it's got a soul all of a sudden. And I'm like, there's no way. We're not even close to that kind of work yet.
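As a rough picture of the association-versus-deliberation split Justin describes, here is a hypothetical sketch (not Strike Graph's product, and the requirement texts are loose paraphrases for illustration): association can surface which requirements from different frameworks resemble each other, while deciding how to implement the shared control remains a human deliberation step:

```python
# Hypothetical sketch: association (here, crude word overlap) can suggest which requirements
# from different frameworks are related; it does not decide how to implement them.

requirements = {
    "HIPAA-164.312(a)(1)": "implement technical policies for access control to electronic health information",
    "GDPR-Art32":          "implement appropriate technical measures including access control and encryption",
    "SOC2-CC6.1":          "restrict logical access to systems through access control software and encryption",
}

def similarity(a, b):
    """Jaccard overlap of word sets, a stand-in for richer learned associations."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

query = "HIPAA-164.312(a)(1)"
for other in requirements:
    if other != query:
        score = similarity(requirements[query], requirements[other])
        print(f"{query} <-> {other}: {score:.2f}")
# Implementation decisions (which tool, which team, which rollout order) still need deliberation.
```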

 

Steven Sloman: Interesting. I wrote a grant proposal recently that argued for a way to prevent LLMs from causing catastrophe. So you've probably encountered it as the paperclip problem, right? If you give LLMs, or AI, the task of developing a system to produce as many paper clips as possible, it's possible there's nothing preventing the AI from saying, well, the best way to do this is to get rid of all the human beings so that they're not using the resources, right? And we'll get more paper clips that way. So how do you prevent AI from that sort of catastrophic outcome? What I proposed was that you start by teaching the AI sacred values, by giving the AI sacred values.

 

Once it has internalized those sacred values, I thought that maybe it would be better at making the kind of consequentialist decisions that we want it to make, right? In part because its values would be aligned with the user's values.
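One way to picture that proposal, offered as a hypothetical sketch of the general idea rather than the design in Sloman's actual grant: sacred values act as hard constraints that filter the action space before any consequentialist scoring, so the catastrophic option never gets weighed at all:

```python
# Hypothetical sketch: sacred values as hard constraints applied before consequentialist scoring.
# Action names, payoffs, and value tags are invented for illustration only.

SACRED_VIOLATIONS = {"harms_humans", "deceives_user"}   # stand-ins for internalized sacred values

actions = [
    {"name": "optimize the factory schedule", "violates": set(),             "expected_clips": 1_000},
    {"name": "buy more wire",                 "violates": set(),             "expected_clips": 1_200},
    {"name": "repurpose all human resources", "violates": {"harms_humans"},  "expected_clips": 10_000_000},
]

def permitted(action):
    """Deontological filter: reject any action that violates a sacred value, whatever its payoff."""
    return not (action["violates"] & SACRED_VIOLATIONS)

candidates = [a for a in actions if permitted(a)]            # sacred values prune first
best = max(candidates, key=lambda a: a["expected_clips"])    # consequentialism chooses second
print("chosen action:", best["name"])
```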

 

Unfortunately, the proposal was rejected out of hand, and I'm actually not sure why. I have no idea what they didn't like about it. Maybe you have some critique of it.

 

Justin Beals: Well, first off, I think you've touched on exactly how we as humans are operating today. A lot of times when we think about what technology can do, or computer science specifically, I think about what we do today: either how do we mirror that with more efficiency, or how do we mirror those positive things that we've learned to do over centuries or millennia, in a way that has a similar accuracy or efficacy in the outcome? Because otherwise, we are in danger that the paper clip problem will go awry. For me, this is Asimov, right? The three laws of robotics: you can't harm a human. But in current science fiction, and if we think about science fiction as us trying to project ourselves into the future, we see us trying to play out those values in things like Westworld or other scenarios, and they go awry. Mostly because we can't yet agree on what the sacred values are that we're going to prioritize. And if we program it, it will create a conundrum for the machine, and it will rewrite itself. And perhaps that's what we're hoping for, Steven, too.

 

Steven Sloman: The machine to rewrite itself?

 

Justin Beals: Perhaps. I mean, at the end of the day, that's what we're doing. I think there's a discussion to have about these machines. You point out in your book that a lot of the associative values in the machine are a reflection of what we have in our own societies today. We shouldn't be surprised, right, by the way they behave. But are these an evolution of intelligence on our planet? I think it's a valid question. We're not there yet, but we should be asking ourselves, what do we want to build? And are we comfortable with it?

 

Steven Sloman: Yeah, yeah, look, there's no question that there are huge cultural differences in sacred values, and my community values different things than somebody else's community. It's actually worse than that, in the sense that we sometimes shape our sacred values to be maximally distinct from the values of a different community, right? Like the reason that we talk so much about abortion or transgender rights; that's the obvious example. The truth is that I have not spent a big part of my life thinking about transgender issues. It's just that there aren't that many of them, or there are some, and, you know, it's not that I haven't encountered them, but it hasn't taken over my world. And yet it does seem to have taken over large swaths of politics, and I think it's because people tend to focus on the issues that divide, in order to point out what's unique about their community. This is our sacred value, and this is why you should join us and not be a member of that other team.

 

Justin Beals: Steven, I'm so grateful to you for joining us today. I'm really glad to get a chance to dive into some of these thoughts and queries of my own with your deep expertise. For our listeners, I just want to highlight that what your book did for me was provide an ability to help us, in difficult discussions as leadership teams or boards, understand when we should be thinking about the consequences of our decisions and which sacred values we should hold. Very pragmatically, it helped me create a context in which to have a discussion, to hold what the different teammates were bringing together, and to find room for both: our sacred values and when they matter, and an understanding of the consequences and how to optimize for an outcome. They work together.

 

So in wrapping up this particular episode, I just wanted to say how grateful I am and how pragmatic the work was, despite this conversation being a little esoteric at times.

 

Steven Sloman: Well, thank you so much, Justin. I want to thank you for the invitation. This has been a great conversation and a great opportunity for me to think about issues that are dearly important to me and how they apply to something which I actually don't think about a lot, namely, security, cybersecurity. But I was so pleased to be able to do it with someone so thoughtful and insightful.

 

Justin Beals: Wonderful. To our listeners, we hope you have an amazing day and we'll see you again in another couple of weeks.



 

About our guest

Steven Sloman, Professor, Brown University
Steven Sloman received his PhD in Psychology from Stanford University in 1990 and completed his post-doctoral research at the University of Michigan. He began teaching at Brown in 1992. Steven is a cognitive scientist who studies how people think. He has studied how our habits of thought influence the way we see the world, how the different systems that constitute thought interact to produce conclusions, conflict, and conversation, and how our construal of how the world works influences how we evaluate events and decide what actions to take.


The focus of his current research is collective cognition, how we think as a community, a topic elaborated on in his book with Phil Fernbach, The Knowledge Illusion: Why We Never Think Alone. His work has been discussed in The New Yorker, The New York Times Book Review, Vice, The Financial Times, The Economist, Scientific American, National Geographic, and more. He is a former Editor-in-Chief of Cognition: The International Journal of Cognitive Science and is currently on the Editorial Board of Decision and Psychological Science.

His most recent book, "The Cost of Conviction: How Our Deepest Values Lead Us Astray," offers a timely and important perspective on how people frame decisions and how relying on sacred values unwittingly leads to social polarization.

Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.
