Breaking Cybersecurity's 12 Hidden Paradigms: A Futurist's Guide to Security Evolution with Heather Vescent
Discover how strategic foresight is revolutionizing cybersecurity thinking. In this compelling SecureTalk episode, renowned futurist Heather Vescent reveals the 12 invisible paradigms that have shaped our entire approach to cybersecurity - and why breaking them could transform how we defend digital systems.
Back in 2017, Vescent applied strategic foresight methodology to cybersecurity, uncovering fundamental assumptions like "security always plays catch-up," "the user is always wrong," and "we are completely dependent on passwords." Her research, published in 2018, predicted the passwordless revolution that's now mainstream reality.
This isn't just theoretical - Vescent demonstrates how appreciative inquiry flips traditional problem-solving approaches. Instead of asking "what's broken and how do we fix it," she explores "what's working well and how do we amplify it?" This methodology helped identify paradigm shifts that seemed radical in 2018 but are now industry standard.
Key insights include:
- How to shift from reactive to proactive security postures
- Why attack surface analysis needs systematic approaches
- The role of AI as thought partner rather than replacement
- How transparency reduces insider threat attack surfaces
- Practical applications of decentralized identity technologies
- Why security teams should focus on strengths, not just vulnerabilities
Vescent also addresses the commercialization challenges facing promising technologies like self-sovereign identity, explaining how ethical innovations often get compromised during market adoption. Her work bridges the gap between cybersecurity's technical realities and its broader societal implications.
For CISOs, security leaders, and technologists seeking to influence rather than just react to the future, this conversation provides actionable frameworks for anticipating threats and building more resilient systems. Vescent's strategic foresight methodology offers a roadmap for moving beyond endless problem-solving cycles toward security that creates value rather than just preventing loss.
Resources:
- Shifting Paradigms Paper: https://www.researchgate.net/publication/330542765_Shifting_Paradigms_Using_Strategic_Foresight_to_Plan_for_Security_Evolution
- Threat Positioning Framework GPT: https://chatgpt.com/g/g-68100f6a8c7481919d693ec9d4d9faab-the-threat-positioning-framework-gpt-by-h-vescent
- Self Sovereign Identity Book: https://www.amazon.com/Comprehensive-Guide-Self-Sovereign-Identity-ebook/dp/B07Q3TXLDP?&linkCode=sl1&tag=vescent39-20&linkId=2797fe6ea49dff79952bc866ec8e8baf&language=en_US&ref_=as_li_ss_tl
- Heather's email list: https://research.cybersecurityfuturist.com/
Justin Beals: Hello, everyone, and welcome to SecureTalk. I'm your host, Justin Beals. You're a CISO sitting in another security meeting, going through the same checklist: patch management, user training, incident response. The problems feel endless, the threats keep multiplying, and your team is exhausted from playing defense. The numbers tell the story. 91% of CISOs suffer from moderate or high stress. 65% of SOC professionals say that stress has caused them to think about quitting. Your team is drowning in more than 10,000 alerts per day, most of which turn out to be false positives. Meanwhile, the threats keep winning. 49% of all data breaches involve compromised passwords. 81% of corporate hacking incidents stem from weak or reused passwords.
Last year alone, 24 billion passwords were exposed in data breaches. The frameworks we use to think about cybersecurity might be the problem. The assumptions we've built into our entire security industry could be wrong. In 2017, a futurist named Heather Vescent set out to apply strategic foresight methodology to cybersecurity. She wasn't looking for new paradigms initially. She was just trying to test her research process.
But through interviews with security experts, she began to see what she calls the water we swim in: the invisible assumptions that shape everything we do. She identified 12 paradigms that govern cybersecurity thinking. Things like "security always plays catch-up," "the user is always wrong," and "we are completely dependent on passwords." These weren't just observations; they are the foundational beliefs that determine how we build our security programs, allocate budgets, and measure success. Now, once you identify a paradigm, you can flip it. Security can be made proactive instead of reactive. Users can become allies instead of attack surfaces. And suddenly, new approaches become possible. This isn't just theoretical. Changing the password paradigm seemed radical when Heather published her research in 2018, but that change has since become mainstream reality. Passwordless authentication isn't a futuristic concept anymore; it is available today from major technology providers. But this research goes deeper than just identifying what might change. It provides a methodology for how security leaders can influence the future rather than just react to it.
Through something called appreciative inquiry, instead of asking what's broken and how do we fix it, we ask what's working well and how do we amplify it? Most security teams are trapped in endless cycles of problem-solving, fighting fires and trying to stay ahead of threats. This approach suggests we might be more effective by focusing on our strengths and expanding what's already working.
Our guest today has been thinking about the intersection of technology, identity, and society for over a decade, and her work spans from the early days of digital identity to the current challenges of AI implementation. She's seen how promising technologies get commercialized away from their original ethical intentions, and she's developed frameworks for how technologists can make decisions today that influence the kind of world we build tomorrow.
We'll explore how AI can function as a thought partner rather than a replacement, why attack surface analysis needs to become more systematic, and how security can shift from being seen as a cost center to a value creator. We'll also dive into the practical applications of this paradigm shifting research and what it means for security leaders who want to build more effective programs.
Now, Heather Vescent is a widely recognized industry thought leader and futurist with more than a decade of experience delivering strategic intelligence by consulting with governments, corporations, and entrepreneurs. Heather has been speaking professionally for over a decade and is an influential voice in the fields of digital identity, cybersecurity, and the future of money, payments, and transactions. She's spoken at RSA, TEDx, CBOs, the World Trends Forum, World Future Society, South by Southwest, IEEE, Identiverse, EIC, and many other events since 2011. Join me today as we challenge the fundamental assumptions that govern cybersecurity and explore how strategic foresight can transform how we protect what matters most.
---
Justin Beals: Hi Heather, welcome to SecureTalk.
Heather Vescent: Hey, Justin, thanks. I'm so excited to be here.
Justin Beals: Well, I'm really excited to chat. And I'm going to dive right into the deep end of the pool with our first question. Obviously, you have a lot of really interesting foresight into the future of security and technology, and just broadly the internet and what's changed in our digital society. One of the things you've written about that we were really interested in is the 12 new security paradigms, looking at that from a strategic perspective. I was wondering if you could walk us through the process of identifying paradigms, and now, given where we're at today, which do you think are going to have the most disruptive impact on cybersecurity?
Heather Vescent: Well, okay, so a paradigm, as you know, is often really hard to see when we're in it, because it's like, can a fish identify the water in which it swims? The 12 paradigms came out of this paper called Shifting Paradigms: Using Strategic Foresight to Plan for Security Evolution. And we didn't really start out with the research to identify new paradigms.
I really wanted to apply my strategic foresight process to cybersecurity. So I partnered up with a friend of mine, Bob Blakley, and we really put my process to the test and came up with a bunch of scenarios. And as part of the process, I interview security experts. They're not the kind of interviews that a security person might be used to; I have a specific strategic foresight interview protocol that I use.
And one of the methods I use is called appreciative inquiry. Appreciative inquiry is a little bit interesting and different, because most of the time when we look at technology, we ask: what are the problems, and how can we solve them? So we start with a problem and then figure out how to solve it. That's the paradigm in which we do problem-solving. But appreciative inquiry shifts this. Instead of doing the interviews asking "what's not working, how can we fix this?", it asks: what is working well?
What is working really well with interpersonal stuff, with technology, and then what can we do to amplify it or make it work even better? Or can we learn something about why this is working really well and apply it in another place? It uses this concept of attention: let's focus on what we want to grow. So we're going to focus on the good stuff and set aside all these problems.
Now, I'm not saying let's ignore problems whatsoever, but it was a really radical concept when I learned it while getting my master's in strategic foresight. And so when I was interviewing all of these, you know, grizzled security guys, I asked a few questions that were more in the appreciative inquiry style. And when I was analyzing the answers to those, reviewing after the interviews, I started to see that water that we swim in.
And the first one that I saw was that we were completely dependent on passwords. Now, I want to put the timeframe in context, because I did the interviews and the research in 2017. The paper was written and peer reviewed in 2018, and I think it finally got published at the beginning of 2019. And we did this in the context of a conference called the New Security Paradigms Workshop.
So as we were writing this paper, we had all these scenarios, and I was looking through the data and finding some really interesting things that I didn't really know where they fit. I didn't want to put them in a scenario; where were they going to fit? And I just thought, wait a second, this is for the New Security Paradigms Workshop. I wonder if I can just identify some new paradigms. And so I started with a few things. I'm just looking at my chart here.
One of the other really obvious ones was the human: the user is always wrong, right? The user is the attack surface, and everyone's using passwords, and also, security's always playing catch-up. Those were some of the water in which we swam at that time, and they still apply now. So then I went through all that data and thought, okay, how many of these can I identify? So we identified them. Some are stronger, some are weaker.
And then once you have the paradigm, as a futurist I have a lot of methods to take our existing point in time and alter it. You can just reverse it. That's the simplest: just do the opposite. But that can be a little bit boring and actually not totally true. So reverse it and then tweak it, using a bunch of variables we identified. So if I look at the old security paradigm, "security plays catch-up," let's just reverse it.
Security is proactive. Wow, that was super mind-blowing. And I think the most successful old paradigm that I identified then was the password paradigm. I was also working in, and have been in, the digital identity space for a long time. And so we always talk about access management, and passwords as an attack surface, being attacked, social engineering, and all of that.
So back then, there was this concept of "there are no more passwords." Some folks were talking about maybe using zero-knowledge proofs or biometrics or some other kinds of things. When the paper published in 2018, it was still a pretty radical idea to think about systems that might be accessed securely without a password. And then fast forward to today, 2025, and that is not a radical idea at all.
So, yeah, and some of the other stuff in here, if we want to talk about it, we can talk about it later. But I was just looking at it again, because I hadn't read my scenarios for a while. We've got three different scenarios that include AI, applying AI in really different ways. And AI is so hot now. Sometimes I go back and read my old scenarios and think, wow, how did we figure some of this stuff out? How did we have some of these ideas with the amount of data we had then?
I won't say they've come true, but they resonate with the world in which we live today.
Justin Beals: Yeah. You know, one thing that strikes me as a challenge in that work is that most of the time security folks broadly, but especially in cybersecurity, like a CISO, their focus is on the problem areas, right? It is whack-a-mole. You know, they feel like they can never get on top of it. And I've banged this drum a little bit lately: maybe we should focus on our strengths, leaning into those strengths and expanding their footprint, as opposed to just the weakest link in the chain, right? I realize it's a little counterintuitive, in that I feel I need to fix my weaknesses when I think about defense as an activity. But the number of paradigms, or the number of areas where there are issues, makes it just humanly overwhelming to feel like you're making progress. There is a human component to this work.
I love that you switch that into where is this a strength or how does this become a strength as opposed to fixing the problem?
Heather Vescent: Yeah. And I'll be honest with you, when I learned this methodology, appreciative inquiry, I thought, my God, this is so fluffy and foo-foo. How can it really get anything useful? You know? Because, again, I came from working in Silicon Valley, working in tech, focusing on problems and solutions. And as a challenge to myself, when I was going through my education in this class, I said, you know what?
I'm really skeptical about this technique and another technique, so I'm going to use them. I put my judgment aside on my little judgment shelf and said, let me really see if this works. The exercise I had to do in class was to take one of the world's biggest problems. They're called wicked problems, like climate change, you know, some problem that seems too crazy to solve. Well,
Again, I was really skeptical about this method, and I said, I'm not going to take just one problem. I'm going to take all the world's problems, and I want to answer the question: how do you solve all the world's problems? I combined this method with another method, wrote a paper, and it won an award. It's probably one of the most impactful things I've done. I mean, probably hardly any people know about it, and they don't think about it like this, but it's really mind-changing. Using this methodology helped me answer the question: how do you solve all of the world's problems?
And so, despite me banging on this foo-foo appreciative inquiry methodology, and I was super skeptical, it really gives unique insights you would not get any other way. And again, that's proven by the kind of insights I got from applying it to cybersecurity in this paper, because you'd be like, what, how could you do that? And I think part of it is that it's scary to think about. We always have so many problems, and we feel productive when we solve a problem. But what we don't think about is that the solution to that problem might create another problem. So we're actually not negating a problem.
We might actually be increasing the quantity of problems in the world that we need to solve with that one solution. But we don't think that way. I'm a systems thinker, so I love to get down and think about the relationships between the little things, but then also pull way back and see that sometimes you can solve something in a tiny little system and it might have a negative impact over here.
Justin Beals: Yeah.
Heather Vescent: There might be an easier way to solve it with a leverage point or something somewhere else.
Justin Beals: Yeah. One thing that hasn't helped this work, and I wonder if you agree, is the fantastical claims without a methodology behind them. I struggle, for example, and we should talk about AI, but I chafed against the idea, you know, Ray Kurzweil's, that we're going to reach this singularity, and in this magical moment this computer is going to seem to take on the concept of personhood.
There were just so many issues with it. But he made bajillions hitting the conference circuit. You know, the fantastical claims that come out of Silicon Valley without real traction, whether aspirational or problem-solving, are really frustrating, right? For those of us that want tech to work for the better of all of us,
I'm appalled sometimes by the attitude and the behavior of the people that are put on a pedestal. I'd include Musk in that group. Altman, I'm very frustrated by the things Altman says, yeah.
Heather Vescent: Yeah, well, none of those people have the kind of strategic foresight training that I do. And Justin, this is a really hard conversation to have, because I don't want to be a gatekeeper, and I don't want to say those guys don't know what they're talking about. But what is a futurist? Is a futurist someone who builds something that they think is going to be in the future?
Because someone who understands the medium of the future and how the future works, that's not them. A lot of people think that the future is like the past and the present: there's one past, there's one present, therefore we linearly think there's one future. But the future's not linear. It's possibilities; it hasn't happened yet, and it's constantly changing.
So while in theory, you know, I have this paper with all of these different scenarios with different variables, we could probably put probabilities on the different variables to come up with a bigger probability for each scenario, figure out which scenario has the highest probability, and think that the one with the highest probability is the one that's going to happen.
But then you have someone like Nassim Taleb, who I love, who talks about black swans. I use the term wild cards, and wild cards are low-probability, high-impact events. So if you believe you've come up with a story that's very high probability through your categorizations, there's this mistake of thinking that probability has anything to do with it actually, realistically happening. The moment a possibility of the future collapses into a single moment in the present, that's happening every moment, and so all of those probabilities are constantly changing. And I mean, I can imagine what a piece of software could do to compute all those probabilities; for a while I was trying to figure out how to do it. Lots of people say, let's track trends, and with the trends, how can we change these probabilities? But at the end of the day, I realized that it actually doesn't matter what happens in the future, because a wild card, a low-probability, high-impact event, has just as much chance of happening as a high-probability scenario. But those of us in tech, we want to have something we can stand on that we can believe. And that's why we love probability.
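Heather's point about wild cards can be made concrete with a toy calculation: ranking scenarios purely by probability hides the fact that a low-probability, high-impact event can dominate on expected impact. The scenario names and numbers below are entirely hypothetical illustrations, not data from the Shifting Paradigms paper.

```python
# Toy illustration of why wild cards (low probability, high impact)
# deserve planning attention. All names and numbers are made up.

scenarios = {
    "incremental patching continues": {"probability": 0.60, "impact": 10},
    "passwordless goes mainstream":   {"probability": 0.30, "impact": 40},
    "wild card: systemic breach":     {"probability": 0.05, "impact": 900},
}

# Ranking by probability alone puts the wild card last...
by_probability = sorted(
    scenarios, key=lambda s: scenarios[s]["probability"], reverse=True
)

# ...but expected impact (probability * impact) puts it first.
expected = {s: v["probability"] * v["impact"] for s, v in scenarios.items()}
by_expected_impact = sorted(expected, key=expected.get, reverse=True)

print(by_probability[0])      # -> incremental patching continues
print(by_expected_impact[0])  # -> wild card: systemic breach
```

The takeaway matches Heather's framing: a probability ranking and an impact-weighted ranking can disagree completely, which is why foresight work keeps wild cards on the table instead of discarding them as "unlikely."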
But probably the biggest thing I've learned as a futurist, and probably the most helpful, is that we do need to feel comfortable with uncertainty and embrace uncertainty. And there's this magic and possibility: the future hasn't happened yet. But do you know where the future does happen? The future happens in the present moment, when you take action and make things and solve problems and write code. And if we can think like, okay,
I'm gonna build this tech, or I'm gonna solve this thing this one way. How is that gonna ripple out into the possibilities of the future? I have a brain where I like to think about that. It's very exciting and stimulating, but not everyone's brain is like mine. And it can be overwhelming to say, wait, I might solve something, but I can't check it off my to-do list forever.
And so I think that can be really intimidating. It can be intimidating to want to think about the impact of one's actions, because we want to be done and move on to the next thing, especially if you're in a business, or you're building a product and you need to launch it and have customers that you need to sell it to.
So while it's a wonderful intellectual conversation and thing to think about, I think it can be overwhelming to ask, how can I influence the future? How can I, as a CISO or a tech developer, really influence the future? And I'm coming up with some ways to help people do that. To be honest, I speak in the language of strategic foresight. I mean, I can speak in cybersecurity jargon too, but I haven't figured out a way to communicate what I mean about the future, taking action, and influencing the future in the moment in cybersecurity jargon. That's what I'm hoping to do with this book I'm working on.
Justin Beals: Yeah, it certainly is complex, and I think a little counterintuitive. In truth, probably a lot of cybersecurity professionals feel burned by a vendor that promised something, or by someone who said this would be a huge issue when it never became one, and they may have invested themselves in the outcome. And so that is tough. Although I'm excited to dig into some of those topic areas with you as we see where the bridges are.
One thing though that does resonate with me, Heather, that you mentioned, and I've complained about this over the past couple of years in the industry broadly: whether you're building a cybersecurity product or you need to hire SOC analysts or engineers, like software developers, to build a product, we were in such a rush to hire as many people as we could to fill seats, to grow to, you know, a budgeted future that we'd set down as an achievement evaluation at a business. And we're doing it today with AI; we're making the same exact mistakes. So many of those people really were like, "tell me what feature to build right now," and not impassioned by the idea, which I was early in my career, that this network I'm building, or this code I'm writing, is just a tool that other people will use to build things I can't even imagine. And that was very exciting. I felt, in the present, involved in the future. And I think so much of our staff some days misses what they're doing, because we've been so pedantic about what happens today and right now.
Heather Vescent: You know, Justin, that's so cool. My mind just went back to the nineties, you know, all the exciting things we were building, when you were talking about how excited you were about building systems or processes or products that other people were going to use to do other things. In my first job, I worked at a 3D VRML company South of Market, you know, in San Francisco.
And even later, I did usability testing and was a product manager. You build a product with a specific use case and a user scenario, then you put it out there, and people use it in ways you don't even imagine. And that's always what happens with technology. And I'll tell you why I figured out why: because we as technologists, we on the tech side, are thinking about this cool, exciting new thing. This even applies to AI, right? Let's use an analogy; I'm talking about the past when I'm talking about the present with AI. We've got people building AI now; we had people building whatever the tech platform was in the past. And we have no idea how people are going to use it.
We have some ideas about how we think we would use it, and we build the technology, or our product requirements documents, with the features that we think we would use.
But not everyone in the world is the same. We all have different perspectives; we have different jobs. I think this is one of the things that's really exciting about tech: people use technology, and people are subject matter experts in their jobs, like a lawyer or a teacher. I have a friend who is a teacher, and she started a company where they're using AI in the classroom, not as a learning agent, not as a teacher agent, not to educate the students; they're using AI to augment the teacher. And I remember I went to South by Southwest with IEEE, like, I don't know, eight years ago, and my whole thing at that point was, how do we augment our identities? Because I love digital identity and our personal identity. I've always seen our phone, the internet, all these things as augmenting our human identity online.
AI can be like that too. AI is a tool, and I've used it to augment myself. But the very first thing I did, the very first GPT I wrote with ChatGPT, was a deep fake of one of my friends, Dave Birch. He's a digital identity and money thought leader, pundit, researcher, a British guy, very gentlemanly, and I programmed it.
And I said, hey, Dave, I made a deep fake of you. And then I had it write an article from your perspective about the deep fake. And I sent it to him and he was like, my God, this is great. And he started using it. And I was like, and if you send me PDFs of your books, I can train it better. And I did, you know? And then the second one I made was a deep fake version of myself.
And I think a lot about that, because I think a lot about how people are using AI and ChatGPT to outsource their thinking. And I have such interesting conversations with my own GPT, but I have trained it on, like, 40 years of work. I have a blog that's still live, that's been around since 2004, and my first blog was back in the nineties.
So, I mean, if you have all this body of work, you can program these things. And I've found that it's helped me iterate my thinking process better. It does tell me things I already know, because it's done analysis of my research. But then when I read it, I think, wait a second.
There's that thing; I read between the lines, and then I have a new idea that I would have gotten to eventually. But no, I'm sorry, I have totally meandered down a tangent.
Justin Beals: Oh no, no, I agree. I think what you're finding, what you're expressing, is what I am using these tools for as well: as a thought partner a lot of times, especially for areas where I don't have expertise. This is one of the things that, you know, I think can get us out of this "the future is the singularity" BS, for all intents and purposes, and into a space where we ask:
What's more pragmatic, what's more immediate? And for me, the future is I can be an expert in areas that a Google search would not have done for me very quickly. For example, let's see, I need to focus on an area in our go-to-market. All of a sudden I have someone with a breadth of knowledge. I like Claude. The breadth of knowledge is greater than a human can hold. And I'm mining that breadth of knowledge for synthesis.
And that's very powerful. I would have had to hire a researcher. You talk about identities. You're an expert and have written a lot of work on self-sovereign identity. I think that's one place where I'm kind of interested, Heather: what is a self-sovereign identity? It's a little bit of a new term for me. Maybe you can clarify that for us, yeah.
Heather Vescent: Yeah, okay. So I actually like to use the term decentralized identity. Now, the book I wrote way back, gosh, it's been a long time now, was called The Comprehensive Guide to Self-Sovereign Identity. Self-sovereign identity was the name the community called it, but "self-sovereign" ended up getting some baggage that wasn't helpful from the tech perspective, and so it shifted to decentralized identity. Now, if you really think about it, it's about centralized versus decentralized. And if I can just step back as a futurist for a quick note: we have been trending toward a more decentralized model from a centralized model, just as a long arc of tech, for a while.
The internet itself was one of the points in that trend toward decentralized technology. Let's talk about identity on the internet. Oftentimes we join the internet as an identity within the context of whatever platform or app, Facebook or Google, or, back in the early days, of course, you could join yourself, like owning your own domain and entering that way.
Identity got really siloed with this idea of single sign-on. So basically you had a Google or Twitter or LinkedIn identity, and anyone building an app just wanted to use their identity login system. So then I could use my Google identity, or my Twitter identity, to log into whatever app, right? That's centralized. And then Google owns all the data tracking where that identity goes and what it does. And we all know how that has gone, with all that data being tracked and put into ads, and having our own data weaponized against us. So it comes from that position: the data is collected and owned by someone that's not us, we don't have control over it, and we don't even have a say in how it's used back at us. Imagine if it was used to empower us and make us better, happier people.
But it's not. It's been used to make us feel less-than, to push us to buy products, or to politically motivate us to vote or not vote or whatever, right? It's been used for a bunch of bad things, a bunch of anti-human behaviors, I would say. And so this idea of decentralized identity is about taking ownership of your own data, and not just as a concept, but being able to have that data be secured. Now,
The world and identity is really complex. I have my own identity that I create and I assert out there and I have my website, Vesson and I have a pretty big, a pretty strong personal brand online because I've been online forever. But like FICO has a Heather Vesson, Calpherny DMV has Heather Vesson, IRS has Heather Vesson, know, different employers have Heather Vesson, Facebook has Heather Vesson. like my identity is not just what I assert to the world.
Identity is contextual. I am created both by what I assert out there and by how I express myself in these different silos. Okay, so that's the philosophy behind it. Now let's talk about the tech. For a long time, this concept of ownership of one's own data was really important, but until decentralized identity came along, meaning decentralized identifiers and verifiable credentials, it was really hard to have secure, private data that you could confidently share. That's what this technology did. Now, full disclosure: I was very active with it in the early days, and I've taken a break in the last few years because I've gotten disillusioned with it. Because, as happens with all kinds of brand-new technologies that are going to solve a big problem, for it to get to market there has to be a profit motivation for a company to develop the technology and a product around it.
So on one side, it's wonderful that decentralized identifiers and verifiable credentials have made it to market. But they have changed and evolved away from that ethical dream of ownership and control over your data. That dream we had in the early days, which we do have the technology for, has not made it to market, because this market is so hard to crack.
The identity market is so hard to be successful in because it is so big, interwoven, and interdependent that really the only identity solutions that make it out are very narrow and specific: a company can solve that one pain and someone will pay for it. So I'm grateful for that, but it's also kind of sad. The same thing happened with cryptocurrencies.
Justin Beals: Yeah, it is. We come up with some really great ideas. The internet itself, and again we're reaching back to your and my past here, meant being able to build a website, put it out there, have control over it, and understand even the basics, like how to write HTML and deliver that content. Now all my channels run through economic engines, whether it's YouTube, Reddit, or Facebook, where my attention is economized. It's incredibly frustrating, because the people we wanted to be luminaries and leaders of these companies have eschewed those ethics and lost the original opportunity for humanity. I think what's sad about the name self-sovereign identity is that it got
tarnished by the concept of a sovereign citizen, which I think is a joke. Of course we're not all on islands; we are citizens of a collective, we have to live by shared laws, and community is required. But our identities, I could see that being a sovereignty issue. I want to own my identity within this community. Yeah.
Heather Vescent: Yeah, agreed. Or at least I want to have control over how that data about me is used. The classic example we used way back was my California driver's license. Say I want to go to a club in Hollywood. They don't need to know my age. They don't need to know that I was going to raves in the '90s. All they need to know is that I'm over 21.
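The over-21 scenario is the canonical selective-disclosure example, and the core idea can be sketched in a few lines. This is a toy illustration only: the issuer and verifier names are hypothetical, and an HMAC stands in for the public-key signatures that real verifiable credentials use.

```python
import hmac, hashlib, json

ISSUER_KEY = b"dmv-secret-key"  # toy shared secret; real VCs use public-key signatures

def issue_credential(claims: dict) -> dict:
    """Issuer (e.g. a DMV) signs only the derived predicate, not the raw birthdate."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Verifier (the club's door) checks the signature without seeing a birthdate."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

# The holder presents only the predicate the venue needs:
cred = issue_credential({"over_21": True})
assert verify_credential(cred)            # door accepts the signed predicate
assert "birthdate" not in cred["claims"]  # nothing extra is disclosed
```

The point is that the holder presents a signed predicate (`over_21: true`) rather than the underlying document, so the verifier learns nothing else about them.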
That's selective disclosure. But when I show them my ID, they see my birthday and my year of birth. They know my age, and they know all kinds of other information that's on my driver's license. And of course, a bouncer probably isn't going to do anything bad with that. But online, we have seen that people do not have that self-regulation and restraint. I call myself a recovered libertarian because self-regulation doesn't work: self-regulation in the market, self-regulation among the people who want to push something through. No, they can't self-regulate and do the right thing. I don't know what happened; how did I self-regulate and not become a horrible person? I don't know. One of the other insights I've had, having worked in Silicon Valley, at startups, in my own consulting practice, for big international collaborations, and on research projects for the Army and DHS, is that capitalism is wonderful in one sense. It definitely comes up with products, services, and ideas and productizes them, and it's a great motivating method.
And it really does lift people out of poverty and lift up small businesses. But what is needed for that capitalism to blossom? A commons. It has to have something to exploit; it really does. And you know what? They work together. You can't have a vibrant capitalist economy unless you have a vibrant commons, and there has to be a feedback loop.
And this is, I think, the thing that really frustrates me in tech right now: this idea of extracting and exploiting, pulling all this stuff out of the commons, with a complete lack of awareness that you need that raw material, whether it's a literal raw material or data as a raw material. You have to have this raw material, created by someone other than you, in order to do the thing that you do, and yet there's a complete lack of interest in contributing back and strengthening that pool of raw possibilities you refine something amazing from. That makes me really sad, and that's how I think we get into super-extractive, toxic capitalism. Just imagine what we could build, and what other people could build, if there were a feedback loop that went back into strengthening these commons, so that we could apply our creative brains to them.
Justin Beals: Yeah. Well, this is my criticism too. We even saw it with OpenAI, which was bleeding executives and scientists out of its team because it wasn't adhering to its own nonprofit mission, instead utilizing the commons for pure profit. And they're still trying to get out of the corporate structure they set up themselves.
And the sad part is, I think it has a lot to do with money and power, right, Heather? That's what seeds the anti-growth situation. I think we saw this with Facebook, where it started out as an open place to communicate and turned into an attention-grabbing scenario. It's very disappointing.
Heather Vescent: Also, we have the technology right now. That's what's so exciting. This was the idea I had when I wrote the original paper: with decentralized identity, if I had control over my data, I could say, you know what, I do want to share this sensitive health data securely with this pool of LLMs, or with these scientists doing research on this topic.
And I have control over opting my data in anonymously, and I feel confident about that. Then the LLM takes data from all kinds of people like me who have opted in. The scientists don't need to know who we are; the people building the LLMs don't need to know. The data flows in and the model does all this magic stuff, because what comes out of AI, even the hallucinations, is fascinating, right? They're useful. It does all this work, and then it can be used for really positive benefit: health and medical applications, the thought-partner idea we discussed, that kind of thing. And we have the technology now. Previously these were dreams, but now nobody can say we can't build it. The problem is not tech. The problem is people, and a limitation on what we think is possible. And I don't know the path forward from that.
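The opt-in flow Vescent describes, consent first, then de-identification before data leaves the owner's control, can be sketched as a simple gate. This is an illustrative toy, not a real privacy protocol; the field names are made up, and a real system would need far stronger anonymization than dropping identifiers.

```python
# Toy sketch of consent-gated, de-identified data pooling (illustrative only).
SENSITIVE_FIELDS = {"name", "email", "ssn"}

def contribute(record: dict, pool: list) -> bool:
    """Add a record to a research pool only if its owner opted in,
    stripping direct identifiers before it leaves the owner's control."""
    if not record.get("opt_in", False):
        return False                      # no consent, no contribution
    anonymized = {k: v for k, v in record.items()
                  if k not in SENSITIVE_FIELDS and k != "opt_in"}
    pool.append(anonymized)
    return True

pool = []
contribute({"opt_in": True, "name": "Heather", "blood_pressure": 118}, pool)
contribute({"opt_in": False, "name": "Justin", "blood_pressure": 125}, pool)
assert pool == [{"blood_pressure": 118}]  # only consented, de-identified data
```

The design choice mirrors her point: researchers and model builders see the pooled values, never the identities behind them, and non-consenting records never enter the pool at all.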
Justin Beals: Yeah. We've only scratched the surface of the questions and topics I shared with you, but it's been an absolutely exciting conversation. I did want to bring back one more question as we wrap up, which I think will be really helpful to our listeners. You've been very involved in launching web applications and products on the internet throughout your career, and now you've been doing this work in futurism and security as well.
How do you advise development teams to think about future-proofing and security in their products at the design phase, especially when dealing with emerging tech?
Heather Vescent: Well, it's not something I currently do, because developers don't usually reach out to a futurist. My husband is a developer, though, so we talk all the time. As I said at the start of the conversation, when I talk about this I use the language of strategic foresight, which has its own jargon, distinct from the language of development, tech, IT, or security. And our conversation has gotten very lofty; a lot of this can be heady stuff, and hey, that's really fun, but how can I apply it to what I have to solve today?
So I'm working on this book to make the case for thinking about things in this way, to talk about different trends, how you can think about cybersecurity trends, and how you can influence or anticipate them. Let me take an example: the attack surface. The attack surface is one category of variables that specifically applies to cybersecurity, and there are subcategories. Is the attack surface increasing or decreasing? Is it the attack surface of a specific type of technology, whether human, AI, data, or IoT? Okay, let's take IoT, because that's common. Is the IoT attack surface increasing?
And if you're building technology that is, will become, or will influence an attack surface, say, specifically part of an IoT attack surface, you can think about it from that perspective: is what we're doing reducing the threat at this IoT endpoint? For example, maybe don't ship 12345 or password1 as the default password, or design a user interface, onboarding flow, or activation process where that default gets changed, you know? So that's just one way of thinking about it.
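The activation-time fix she describes, refusing to let a device go online until the factory default is replaced, can be sketched as a simple policy check. The rules below are illustrative assumptions, not a standard.

```python
# Hypothetical activation-time check for an IoT device: the factory default
# password must be replaced with something non-trivial before the endpoint
# goes online. The deny-list and length rule are illustrative, not a standard.
DEFAULT_PASSWORDS = {"12345", "password1", "admin", "default"}

def activate_device(new_password: str) -> bool:
    """Return True only if the user has set a non-default, non-trivial password."""
    if new_password.lower() in DEFAULT_PASSWORDS:
        return False          # still a known default: refuse activation
    if len(new_password) < 12:
        return False          # too short to resist guessing
    return True

assert not activate_device("12345")              # shipping default is rejected
assert activate_device("kettle-4-green-otters")  # a real passphrase is accepted
```

In practice, the point is that the security property is enforced by the onboarding design itself, rather than hoping the user changes the default later.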
Although I will say, oftentimes when developers are building something, they're really focused on their list of features, the things they have to build, so it can be overwhelming to think about it like this. And I admit I have not been successful at making the case for thinking this way. Although I do know from the paper Alyssa and I wrote that we did come up with some new things.
I also just wrote a chapter that specifically applies this to research security, which is basically spies and espionage. The question that chapter answers is: what are the future threats to the research security enterprise? I did the whole process: here are the scenarios, here are the increased threats. Then I analyzed all of the threats to identify the top potential threats in the future.
And from there, anyone working in research security can read that chapter and see what the top threats are. The top one is insider threats; not surprising. So if you want to secure for research security today and in the future, having processes that address or reduce insider threat is useful. Interestingly enough, one of the ways to reduce insider threat is a systemic worldview of free exchange, letting scientists communicate with each other. When you have more transparency, there's less likelihood of an insider threat, because there's already collaboration going on. So sometimes the solutions to these future problems are small and can be implemented inside a company, and sometimes they are broader political things, like that push for more transparency. That's going away in U.S. science right now, which means our academic institutions need to be more aware of the rising attack surface of insider threat.
So when you have more transparency, the insider threat attack surface goes down; when you have less transparency, it goes up. And I'm actually looking for some teams, CISOs, or folks in security who are willing to work with me on this: let's bang on it and see whether we can apply it in a case study, or whether you can help me improve it.
Because my gut tells me there are some really great insights here, but I have not yet been successful at translating them for real use. That's what I'm working on right now with this book and the series of interviews I'm doing. So I hope that long-winded commentary answered your question.
Justin Beals: No, it's really good. I think one thing I'm taking away from that response is that you need to think about security as an improvement, as a value add, to be aware that it does add value, either for your consumer or your own corporate investors. You may come across places where security can be a value add, like more transparency reducing insider threat, where you need to be a leader.
Like, you need to decide that you want to reset the bar for what someone can expect. It could be more transparency in publishing: you're going to use arXiv for your publishing rather than a more locked-down journal that doesn't allow it, or it's baked into the research grant that the results will be published publicly rather than kept locked away.
Yeah, I can see a lot of value in that attitude.
Heather Vescent: Okay, so you have just hit on the leverage point of why this is important. If you're building technology and you want to influence a certain kind of world in the future, you can make really specific decisions about your technology that can potentially have a broader impact, not just on your product, but on the way the world is. That's what I want to empower technologists to do. And I even figured out how to do this for cybersecurity marketers.
Part of me wanted to share this information so that people like you could build their technology better. But then I realized there are a lot of products already out there being marketed in, I don't know. I've done lots of marketing in my career and I hate it, right? I don't want people to sell to me; I don't want to be manipulated; I want to get my information. But I took all my variables, like attack surface and defender effectiveness and all the rest, and I thought: there are products out there that do try to address these very specific threats. So what if we could write value propositions, or marketing messaging, that specifically speaks to the threats a product is trying to influence out into the future? If people can understand that using this product will help them reduce their attack surface in a particular way, that's going to result in something way better for humanity.
So I actually built a custom GPT that I programmed with my value proposition framework and trained on my cybersecurity threat levels. In fact, I ran a test. My friend Bob, who I wrote the paper with, is at a company with hardly any marketing information out there. I grabbed press releases, everything I could find, threw it into my GPT, and had it run some analysis.
You don't want to take anything that GPT or Claude or any of these tools gives you straight out; you need to put your brain on it. But it gave a really interesting perspective and something to start with. As a writer, or when starting anything, sometimes we just need something there to respond to: that part's right, that part's not right, how can I tweak it?
So yeah. What I'm hoping to do with my work at this point is take it to the cybersecurity community in a language that's very practical, so people can understand how to use it in their space to have a positive impact in the world. And I'm looking for some folks who might want to do that before the book is out, to see whether it's just a theory or actually something real.
Justin Beals: Well, if any of our listeners are interested in joining you on this journey, Heather, we'll add a link or contact information to the show notes. I'm very grateful for this time today. I know we wandered, but I think the topics were really healthy. We did sometimes take a 50,000-foot view, but to your point, sometimes we were talking about very specific changes that can happen. So thank you, Heather, for joining us today on SecureTalk.
Heather Vescent: Justin, it's my pleasure. It was such a great conversation. Thank you.
About our guest
Heather Vescent is a futurist, primarily known as an expert in cyber economics and cryptocurrency. Her far-reaching research, covered by The New York Times, CNN, American Banker, CNBC, Fox, and The Atlantic, explores trends in the future of money, economies, identity, wearable tech, relationships, augmented intelligence, IoT, cybersecurity, and humanity.
Her talks have been featured at SxSW, Sibos, TedxZwolle, Cybersecurity Summit, Total Payments, Privacy Identity Innovation, The Future of Money, Tomorrow's Transactions, World Future Society and other global conferences.
She has been profiled and quoted in Wired, TechCrunch, CNBC, The Atlantic, American Banker, The New York Times, LA Times, Elle Decor, Italian Elle, LA Weekly and San Jose Mercury News, and has appeared live on Fox News multiple times.
For more than a decade, her company (The Purple Tornado) has produced media visualizing the future in the present tense, including 10 short films and documentaries about the future and more than 30 podcasts on money, wearable tech, and autonomous driving cars. Her short film “Fly Me to the Moon,” showcasing her vision of future payments, was nominated for a Most Important Futures Award by the Association of Professional Futurists.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.