The Invisible Majority: How Social Media Erases 90% of Voices | Dr. Claire Robertson
90% of Twitter users are represented by only 3% of tweets. When you scroll through your feed and form opinions about what "people are saying" about politics, you're not seeing the voices of nine out of ten users. You're seeing the loudest, most extreme 10% who create 97% of all political content on the platform.
In this episode of SecureTalk, host Justin Beals explores the "invisible majority problem" with Dr. Claire Robertson, Assistant Professor at Colby College. Together they examine how moderate voices have been algorithmically erased from our public discourse, creating pluralistic ignorance that threatens democracy itself.
Dr. Robertson's journey began at Kenyon College during the 2016 election—a blue island in a sea of red where Trump won the county by 40 points but the campus precinct went 90% blue. Surrounded by good people who saw the same election completely differently, she dedicated her career to understanding how we end up living in different realities.
Topics covered:
- The psychology behind false polarization
- How extreme voices get mathematically amplified
- Why conflict drives engagement in the attention economy
- The abandonment of scientific rigor in AI deployment
Resources:
Claire E. Robertson, Kareena S. del Rosario, and Jay J. Van Bavel, "Inside the funhouse mirror factory: How social media distorts perceptions of norms," Current Opinion in Psychology, Volume 60 (2024), 101918, ISSN 2352-250X. https://doi.org/10.1016/j.copsyc.2024.101918
Subscribe for more conversations about technology, psychology, and democracy.
Full transcript
Justin Beals
Hello everyone, welcome to SecureTalk. I'm your host, Justin Beals.
90% of Twitter users are represented by only 3% of tweets. When you scroll through a feed and form opinions about what people are saying about politics, you're not seeing the voices of 9 out of 10 users. You're seeing the loudest, most extreme 10% who create 97% of all political content on the platform.
The invisible majority problem extends across every major social media platform. The moderate voices, the people who haven't made up their minds, the folks who simply don't care enough about every issue to post about it, they've been algorithmically erased from our public discourse. They exist, they vote, they live in our communities, but online, they might as well be ghosts. This creates what psychologists call pluralistic ignorance.
We systematically misperceive what our neighbors actually think because we can't see them. When only the most passionate voices survive the algorithmic filter, we begin to believe that everyone is as extreme as the content we consume. We lose track of the fact that most Americans identify as moderate. A plurality of voters say they either don't know their political ideology or haven't even thought about it.
But conflict drives engagement, engagement drives ad revenue, and every piece of content competes for survival in the attention economy. What used to be one bloody headline to sell the whole newspaper has become every single post, tweet, and video needing to grab attention. We've also accelerated this crisis by abandoning scientific rigor in how we study and build these systems. Too many AI models are deployed without human validation, without testing for accuracy against expert knowledge, without the careful methodology that real social science requires. We've prioritized speed and scale over truth and understanding, creating systems that amplify our worst impulses while calling it innovation. The result: we're all trapped in a funhouse mirror factory where every reflection shows us a distorted version of reality.
We can't agree on basic facts because we're not even seeing the same information. We've lost our shared reality. And without it, democracy itself becomes nearly impossible to sustain.
Today's conversation explores how we ended up here and what we might do about it. We'll examine the psychology behind false polarization, the mathematics of how extreme voices get amplified, and the rigorous research methods needed to understand what's really happening in our digital public square. Our guest brings both the scientific expertise and the personal curiosity needed to tackle these questions. She experienced the 2016 election as a student at Kenyon College.
A blue island in a sea of red where Trump won the county by 40 points, but the campus precinct went 90% blue. Surrounded by good people she respected who saw the same election completely differently, she dedicated her career to understanding how we end up living in different realities.
Claire Robertson is an assistant professor at Colby College. She earned her PhD in social psychology from NYU in 2024. Her research primarily explores intergroup conflict, especially political polarization. She investigates how people's innate negativity bias and moral anxieties lead to excessive exposure to hostility online, fueling pluralistic ignorance and false polarization. Additionally, she studies misinformation and conspiracy theories. Her research utilizes diverse methods such as natural language processing, big data analysis, international mega-study collaborations, and computational cognitive modeling. When not working, she enjoys exploring farmers markets, trying new restaurants with friends, and reading extensively. Please join me in welcoming Claire to SecureTalk.
Justin Beals
Claire, thanks so much for joining us today on SecureTalk. We really appreciate it.
Claire Robertson: Thank you so much for having me.
Justin Beals: Wonderful. So you've completed your PhD in social psychology at NYU, and you did some joint postdoctoral work at NYU as well. And you've actually taken on a new position very recently. Maybe we'll start there with your new job. Let's celebrate it a little bit.
Claire Robertson: Yes, so yesterday I had my first day of school as an assistant professor at Colby College. And I'm really, really stoked and excited to be here. When I started my PhD, I said my dream was to be a professor at a liberal arts college, and I got to do it. So I'm really excited to kind of jump in. And, I don't know if the video is going to go up, but if at any point you see that my office looks empty, it's because I moved in yesterday.
Justin Beals: Well, we understand that you're getting all your tchotchkes out and decorating. I know I have to have a space that I feel creative in myself. I'm a little curious, you know, I think that your research area is absolutely intriguing to me. Social psychology, online communication and political polarization. You know, I'm curious about the influences that drew you to that.
Claire Robertson: Yeah, so I first got really interested in political polarization as a phenomenon in my senior year of college, which happened to be the 2016 election. And I was at Kenyon College, where my love of liberal arts started. And it was a really interesting election. And at Kenyon in particular, it was interesting because it was this blue island in a sea of red. It was so extreme that the New York Times actually wrote about Kenyon, because in our precinct, or sorry, excuse me, our county, that's what I'm looking for, in our county Trump won by 40 points. But Kenyon, the precinct of Gambier, which is where Kenyon is, was 90% blue.
So it was this real kind of rubbing up, in a way that doesn't happen all that often, of these two very different political beliefs and ideologies. And I'm kind of telling on myself here with maybe my political opinions, but I was around a lot of people who I really trusted and liked, and thought were extremely reasonable, good people, people who had families. They just saw the election so differently than I did, in a way where I could not comprehend how they could have seen these two candidates and come to what I consider to be the polar opposite conclusions about them from me. And that just fascinated me. It's still one of the questions that keeps me up at night: how do we end up in these places where we aren't even talking in the same reality as each other? And whatever we're saying, it's not effective, it's not changing people's minds. How does that happen? So that was really the big turning point, I think, for why I wanted to pursue my PhD in this area specifically.
Justin Beals: Yeah, it is disturbing. And I've certainly revealed some of my political leanings on the podcast, so regular listeners will know we're probably very similar, Claire, in kind of how we perceive, you know, our instincts around who to vote for and why we might like that candidate. And I think this experience of an island in a sea of another situation is definitely true.
Growing up in Atlanta, Georgia, you know, Atlanta versus the rest of the state of Georgia, it's very different. And the conundrum of the reality thing is very difficult, because it just feels like we're so disconnected from each other. How can we shop at the same grocery store, in a way?
Claire Robertson: Yeah. How can we be in the same community choir? I was hearing this from my mom, and I hope she doesn't mind me telling this story. She's an administrator at Wellesley College, and obviously a lot of people there, my mom especially, really thought Hillary Clinton was going to win. They were at a watch party at Wellesley College, and she just couldn't understand what had happened. And there was kind of this, how could anybody, who are these monsters who are voting for Trump? And I was like, I mean, they're Sheila, they're Debbie, they're Arthur, they're these people that I actually know and really respect, even though I also find it kind of baffling how they could have seen it this way. So it was a really interesting position for me to be in, being from Massachusetts, my family's from there, but then having this personal connection. And I just, it was so, I have to understand this. I want to understand, and I don't yet. And spoiler alert for the entire interview, I still don't, but I do have some more insights now. I feel like I have better answers for those questions and an idea of where to go next, which is encouraging.
Justin Beals: Yeah, that's wonderful. And of course, these passions, these conundrums, the things that don't feel congruent around us, lead us to want to investigate and understand, and that's not always easy. You know, I think one thing I've been frustrated by, and we've talked about this on SecureTalk a fair bit because I think it's a security issue at a nation-state level for our country, is that if we can't see a reality together, we're going to struggle to operate a democracy.
You know, you use a metaphor in a paper. This is why I got interested in chatting with you. Thank you for your work here. Your recent paper uses a powerful metaphor of the funhouse mirror factory. And along the lines of all these issues, we in the tech industry have been building social media applications that adjust the content we see algorithmically. I want you to walk us through this metaphor of the funhouse mirror and social dynamics a little bit.
Claire Robertson: Yeah, absolutely. So I was really excited when we kind of landed on that metaphor. It was one of the metaphors I had been pushing to use because I think, at its core, the internet is a mirror, right? It is reflecting real things back to us. There's often kind of two camps in people who study social media, especially from a psychological perspective.
There's one that is, the internet makes us all crazy, and it makes us act totally different than we do offline, all of that camp. And then there's the camp that I'm in, which is that it's a communication tool, right? For the most part, and I'm avoiding talking about, for example, bots here, but let's just assume, right? For the most part, it's real people who are communicating. They're kind of doing these human behaviors. There's no social media part of our brain.
Everything we're doing on social media comes from something we did offline before, right? So it is a mirror in that way. The issue arises though in how the mirror is made when we are online. And there it is really, I think, overemphasizing, expanding, distorting these different parts of our reality. And I argue, and this is where the real issue comes in, is that it's doing so in very systematic ways.
So it's not that it's kind of a random distribution of who's getting boosted, who's not. What we're really finding, what we've argued, what we're continuing to find with empirical evidence is that it's really overemphasizing the most extreme voices, right? And I, of course, care about this mostly in the political sphere. But another example of this that kind of marketing professionals, for example, have known for a really long time is if you think about like online reviews, right? They're really polarized. You mostly get ones and fives. You're not getting a lot of threes, even though most people are probably having a completely average experience with your laundry basket. Like, right? That's not the reviews that you're seeing. You only see these absolutely most extreme kind of reviews. That, again, is like consequential for marketing, but for politics, right?
When we make those most extreme voices become so overrepresented, what we're worried about is that that can make those people and those opinions seem like the majority opinion, when in fact they're not. So in the United States, this was definitely true for 2016, and I believe it was still true in 2020: only 3% and 4% of Americans respectively identify as extremely liberal or extremely conservative. A plurality of Americans say that they are moderate, or that they don't know or haven't even thought about their political ideology. That's most people in the US. That is just not the perception that you get when you go online.
Justin Beals: Yeah, I want to agree with you deeply, and I'm frustrated by it, but I also feel like I'm swimming in the pool with everybody at the same time. Just in the last three to four months, I've had two experiences with us doing marketing work. One was on LinkedIn, where kind of an antagonistic comment came into a post that I made, and it boosted the heck out of that thing. And I was like, this is not helpful. This is not the conversation I wanted. I wanted to find a balanced approach to something that perhaps had some controversy to it. But I also needed the attention, and the same for this podcast. We had an episode on YouTube with a four-star general discussing national security, and we got some, you know, extreme right-wing comments, and it boosted that episode into the thousands in just a couple of days. It's frustrating. And it leads me, as either a content creator or someone attempting to advertise our services, to a place where I think I have to play that game. I need to game the mirror to get the attention we deserve. And I despise it, but I'm also swimming in it.
Claire Robertson:
Yeah, it's really hard. I mean, funny enough, that's not even just an online thing. There's the journalistic phrase, if it bleeds, it leads, right? That's always been a thing. We know that the bloody, bad stuff tends to sell papers, right? I think a lot of the issue is in what has changed: think about the unit of the product, right?
They maybe had something on the front page that was negative so that you would buy the paper. But then you owned the paper, and the rest of it didn't have to be as attention-grabbing, right? But now the unit of product has changed, and it's clicks, it's time on page, right? So that kind of forces people, I think, to make everything bleed, to extend the metaphor, right?
Even when it's not something that really needs to be kind of alarmist or extreme or super high conflict, it just like, that's what gets attention. So that's the kind of thing that we have to care about because attention is now the unit by which these products are kind of measured up against each other, and how a lot of companies make money.
Justin Beals: Yeah. You know, you highlight the difference between false polarization and actual ideological differences. And I think, even in cybersecurity, we see this kind of tropism of doubling down on where the polarization exists. How do you differentiate between genuine political disagreement and artificially amplified divisions?
Claire Robertson: This is such a good and important question. In psychology and political science, we have two different words that we use for polarization. We have ideological polarization and then affective polarization. Ideological polarization is, again, I'm oversimplifying a little, but it's when we disagree about a policy issue. I think taxes should be raised to help fund a new gym for the school that's in our town.
You think they should be lowered so people have more disposable income to spend at local businesses, right? We make our cases, and then people vote. And that's democracy. That's good. Ideological polarization is actually a really good thing. It prevents us from sliding into groupthink. It encourages checks and balances, all of those good things, right? The problem is affective polarization, which is the real kind of problem child here.
And it overwhelmingly is the kind of polarization that we are really thinking of when we say polarization. And it refers to people's feelings that the other side is immoral or evil, that we shouldn't trust them, that there's this intense animosity between the two sides. And that is not productive because then you can't trust anything that side is doing. And the problem too is like, that riles people up, that gets more attention.
I don't, maybe I'm not that interested in whether the tax cut is going through or not, whatever, right? But if you tell me that somebody is after my basic human rights or something that's really attached to my morals or that this side hates children, right? Or this side wants communism, right? Then, well now it's an emergency. Now I need to pay attention, right?
Because that's how our brains work. We care most when things are threatening. We evolved that way. It helped keep us alive for a really long time. But now it can be manipulated in a way that I think is really frustrating for the average well-intentioned user, and really effective for, sometimes, more nefarious or self-interested or anti-social actors.
Justin Beals: I think a little bit about recent history in the United States specifically, and it feels like in some ways the left pioneered this idea, you know, like fighting for the planet or civil rights. There were some very effective phrases that were designed to grab emotion over a discussion about policy. And I think we've now seen this weaponized from all ends of the political spectrum in the United States specifically. And in a way I probably would agree with you: I wish my community could have more clarity about the voting decisions they make along the lines of, do we want to do the taxes or not, and less about, I hate this group so much I'm just voting against them, no matter what. Yeah.
Claire Robertson: And you know, politics in a perfect world, right? It should be boring. Like, it should be about the basic functioning of our government, and it really shouldn't be about these huge issues. But politics is a business now. There are a lot of people who make their money on affective polarization. And that's individuals, that's entire conglomerates, right? MSNBC and Fox News don't exist if politics is boring, right? Not in the way that they exist now. So there's like, and then of course there's all of the kind of in the middle people who are making like a little bit of money. And then there's people who just want attention. Like they don't even need the attention economy part. They're fine with just the attention. That's another kind of factor as well. But all of those things, like they really kind of create this challenging vortex to get out of.
Justin Beals: Yeah. You know, I think the age of AI, and I'm going to call AI an affective trope here, has morphed this thing into its own entity. We love talking about the more technical side of data science. And obviously, our friends in academia and researchers have pioneered how probabilistic studies can help us understand, you know, what's going on at scale.
Your research involves natural language processing to analyze social media content. Can you walk us through your NLP pipelines a little bit, and some of the feature extraction methods you might've used, especially in quantifying extremeness or some of these characterizations that we're looking at?
Claire Robertson: Yeah, so this is a great question, and I have a somewhat boring answer for you, and I'll kind of wrap it up by pointing you to two other resources in the same direction. So when it comes to NLP, and I've been doing this now for six years, I take a very conservative approach. I tend to go for what I would consider to be the most simple and interpretable version of NLP. For me, usually what that means is I do a lot with dictionary coding, which is exactly what it sounds like. It's basically having a set of words that we're looking for in datasets, right? We code those, we pull those out, and this is kind of again where it gets very conservative.
We then say, okay, here are a hundred items from my data set that have been coded as extreme because they contained words like awful, immoral, terrorism, you know, stuff like that, right? And here are a hundred that were not coded that way. Now we're going to feed these to real humans, have them code them, and then check what the correlation between the two is. So we keep a lot of the human aspect in my natural language processing. And I do that because, and I say this as somebody who does use GPT, especially for proof of concept, to test something quickly, it's really good for scaling.
But for hypothesis testing, I think it's really good to have something that is always going to be replicable. So the model isn't going to change, right? My dictionary, you can use exactly the same dictionary on exactly the same data set and recreate exactly my results every single time, right? I think there's real value in that. So I tend to opt for a kind of pipeline that looks like potentially kind of brainstorming, a little bit of testing with GPT, kind of see maybe what's in there because it's pretty fast, really fast, which is great. Then dictionary coding, checking those results, human coding, or sorry, I skipped a step there. GPT coding, dictionary coding with a subset of the data, pre-registration, there we go.
Because with natural language processing and these big data sets, you're almost at population-level data, which means that when you think about hypothesis testing, any difference is probably going to look statistically significant, even if it's not actually meaningful. So there is that as well. So: pre-registration, then you do a confirmatory analysis, you test with human coders, and then that's what you go and publish. And again, that is a little bit low-tech, I would say. But it's a pipeline that works really, really well. Recently, for example, there was a paper that we wrote about negativity bias in news headlines. And because we did it that way, and again, this involved pre-registration and all of these things and publicly available data sets and dictionaries and everything.
We have now actually had that data set be replicated by two independent teams, which just gives me so much faith in those results. And they found when they used our scripts and everything and just adjusted a couple little things to make them run on their machines, they found identical results. So that helped me just feel like, OK, this is a good kind of capsule.
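To make the pipeline Claire describes concrete, here is a minimal Python sketch of the dictionary-coding step plus human validation. The word list, example posts, and ratings are hypothetical, invented for illustration; they are not from her studies.

```python
import re

# Hypothetical "extremity" dictionary; a real one would be validated and published.
EXTREMITY_WORDS = {"awful", "immoral", "terrorism", "evil", "disgusting"}

def dictionary_score(text: str) -> int:
    """Count dictionary hits in a post: a simple, fixed, fully replicable rule."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(token in EXTREMITY_WORDS for token in tokens)

def pearson_r(xs, ys):
    """Correlation between automated scores and human ratings on a validation subset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Toy validation subset: automated scores vs. human extremity ratings (1-5).
posts = [
    "This policy is awful and immoral.",
    "Interesting proposal, I need to read more.",
    "Pure evil. Disgusting terrorism apologia.",
    "The town meeting is on Tuesday.",
]
human_ratings = [4, 2, 5, 1]                        # hypothetical human-coder labels
auto_scores = [dictionary_score(p) for p in posts]  # [2, 0, 3, 0]

print("auto scores:", auto_scores)
print("agreement with human coders: r =", round(pearson_r(auto_scores, human_ratings), 2))
```

Because the word list and the scoring rule are fixed, anyone who runs the same script on the same data reproduces the same scores exactly, which is the replicability property she emphasizes.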
So again, I don't want to seem like an AI hater, but I do think when it comes to direct hypothesis testing, the more parsimonious and replicable you can make the analysis, the better. So that's my pipeline. But I said I would point you to two other people: my colleague Steve Rathje, and then Stefan Freigel, who is German, and I apologize if I butchered his last name.
They just wrote papers, I helped with both of them, about the nuances of different NLP methods. Steve focused mostly on GPT; Stefan focused on all sorts of methods and the trade-offs that come with them. I highly recommend you check them out. They're both linked on my website and on my Google Scholar, because I was a co-author on them. So definitely check those out as well.
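Claire's earlier caveat about population-scale data, that almost any difference will look statistically significant, is easy to demonstrate. A minimal sketch with made-up numbers: two groups whose true means differ by a trivial hundredth of a standard deviation still clear the conventional significance cutoff once each group has a million observations.

```python
import math
import random

random.seed(1)
n = 1_000_000

# Two simulated groups whose true means differ by a trivial 0.01 standard deviations.
group_a = [random.gauss(0.00, 1.0) for _ in range(n)]
group_b = [random.gauss(0.01, 1.0) for _ in range(n)]

diff = sum(group_b) / n - sum(group_a) / n
se = math.sqrt(2.0 / n)  # standard error of the difference (unit variance, equal n)
z = diff / se

print(f"observed difference: {diff:.4f} standard deviations")
print(f"z = {z:.1f} (anything past 1.96 is 'significant' at p < .05)")
```

This is why she pairs pre-registration with the question of whether a difference is actually meaningful, not just whether it clears a significance threshold.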
Justin Beals:
We'll include links in the show notes as well for our listeners. And one of the things I want to highlight here that I adore is that, obviously, you're very interested in accuracy. I mean, I think that's foundational to science, and the fact that you use a human coding element in the process of reaching conclusions or predictions means that you tested your system for accuracy in an environment with people.
And I've stressed this so often. I've seen so many models deployed where they're like, well, it has an answer. And I'm like, yeah, but have you taken that answer in front of a human being, perhaps an expert in this field, a writer, someone who is an educator, and asked, is that a right statement or a wrong statement? You can get accuracy out of a non-quantitative model and test it. You're just being lazy if you don't. I don't like that we don't do that. Yeah.
Claire Robertson:
Yeah, I mean, as a statistician, you can get a model to say anything, right? That's the other thing. I'm just being honest. That's why I think pre-registration and keeping it so clear matter, because you can always just add more until it says what you want. You can add more interactions. You can filter your data. So I just think the transparency matters. There's a great quote, and I'm forgetting who it's by now.
But it's: all models are wrong, but some are useful. And I think that's really good. We can't model everything perfectly, right? But some tell us something about the data, and that only works when we know the parameters that we're working with. That's my soapbox.
Justin Beals:
That's good. I love it. And I think for those folks listening today who are interested in AI thematically, this is where it was invented: with good science and people who cared about these kinds of outcomes. I hope that we as an industry in tech continue to draw from that wellspring of best behaviors instead of imagining that we don't need it. And so, from that work, that excellent research work,
You know, you argue that social media increases pluralistic ignorance and false polarization. Can you talk to us a little bit about how that psychological phenomenon manifests, and maybe compare it against other styles of interactions?
Claire Robertson: Yeah, totally. Remember earlier when I kind of mentioned that we worry, we think we know pretty surely now, that the extreme voices are being amplified, right? But we are concerned about the downstream effects of that, because it may be impacting the way that we are perceiving norms. That's really the crux of the issue that we're about to start running studies on. And it's mostly due to how our brains work.
So we take in a lot of information in every single second of our lives, and it's not practical for us to analyze every single thing. So we typically engage in something called ensemble coding, along with a lot of other kind of data-crunching things that we do in our brain. And basically what that means is we take the average representation of a series of items in our environment.
So when I'm teaching, I can look out into my class and see that five of my students are kind of nodding along, three are neutral, and one is maybe starting to fall asleep, right? And I can gather that the class is overall going okay, right? And extreme exemplars in our environments can bias our perception. So if that one student is snoring super loudly or something, then it might make me feel a little bit worse about the class, right? Because I'm kind of over-representing that extreme person in my head. But those neutral faces and the ones who are kind of gently nodding, they're still included, right, in my overall representation of the class. I'm still seeing them. And that, I think, is the critical thing. That's the critical difference between online and offline: we do not see neutral people online.
They are completely invisible to me. Because nobody's commenting saying, huh, I haven't really thought about this. Or quite frankly, unless they mean it in a really kind of petty or snippy way, they're not saying things like, I don't really care about this. And I mean that as a value neutral thing. A lot of people don't care about certain things, but we don't get those people online. We don't see them. They're completely invisible. And that would be like trying to understand how your entire class is performing by only looking at your top or your bottom student, right? You're just gonna be missing so much information, and that means that your perception is probably not going to be accurate, right? So when it comes to pluralistic ignorance, part of that is that we're trying, we're constantly trying to figure out how other people feel, right?
That's a huge part of our social world, and it matters to us. We want to be accurate in how we think about our in-groups and also our out-groups, right? Because that's what kept us alive. Humans are social creatures. We'd die without other people. We are not the strongest. We are not the fastest. We are not the biggest. All of the ability we have to kind of dominate the natural world is because we are able to cooperate and be social, right? So it's really important for us to understand how other people are feeling.
We're doing this all the time, but we're just missing so much information online when it comes to kind of figuring out how people feel about an issue, how they feel about a politician, how they feel about a laundry basket, right? Like we are just constantly missing information.
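Claire's classroom example translates directly into a toy simulation of the feed. If everyone holds an opinion but only the most extreme people post, the average you can ensemble-code from the feed is far more extreme than the group's true average. The opinion distribution and posting threshold below are illustrative assumptions, not estimates from her research.

```python
import random
import statistics

random.seed(42)

# True opinions: most people sit near the middle of a -1..+1 scale.
population = [random.gauss(0.0, 0.35) for _ in range(100_000)]

# Assumption: only people with sufficiently extreme opinions bother to post.
POSTING_THRESHOLD = 0.8
posters = [op for op in population if abs(op) > POSTING_THRESHOLD]

print(f"share of people who post:      {len(posters) / len(population):.1%}")
print(f"true mean extremity:           {statistics.mean(abs(o) for o in population):.2f}")
print(f"mean extremity visible online: {statistics.mean(abs(o) for o in posters):.2f}")
```

In this toy world only a couple percent of people ever post, and what is visible online looks roughly three times as extreme as the population actually is, while the neutral majority contributes nothing to the visible average.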
Justin Beals: Yeah. I worked in the theater in high school and a little bit in college, and we used to have this saying, you probably got this too: like 80% of acting is nonverbal, right? It's the way we present ourselves. It's the way we nod our head. It's whether or not we're looking at the person we're talking to. So we're missing a large part of the communication that has built communities together in the online world. On top of that, you add in an algorithm, and all of a sudden the reality is a funhouse mirror. It's not what you expected to see.
Claire Robertson: And it's also, the algorithm is directed usually by still, again, people's choice to like publicly, for the most part, engage with different types of content, right? And so that's likes, that's comments. They always say, you know, don't forget to like and subscribe, right? That's a really common refrain that we hear. And a lot of people don't do that.
Like it's only the people who feel the strongest, or who have a bone to pick, right? They're the people who are going to go in and comment. So again, even just what we're seeing is being inflated by the most active users. And it's a really small proportion. I think Pew found, excuse me, that 90% of political content is posted by only, no, it's the other way around: 97% of political content is posted by only the top 10% most active users on Twitter. And if you invert that, that means that 90% of Twitter users' opinions are being represented by 3% of tweets.
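The inversion is simple arithmetic: if the top 10% of users post 97% of political content, the remaining 90% of users account for only the leftover 3%. Splits like that fall naturally out of heavy-tailed posting activity. Here is a small illustrative simulation; the lognormal activity distribution and its parameters are arbitrary assumptions, not a fit to the Pew data.

```python
import random

random.seed(0)

# Hypothetical per-user post counts: most users post rarely, a few post constantly.
NUM_USERS = 100_000
activity = sorted(
    (random.lognormvariate(0.0, 3.0) for _ in range(NUM_USERS)), reverse=True
)

total_posts = sum(activity)
top_decile_share = sum(activity[: NUM_USERS // 10]) / total_posts

print(f"content share of the top 10% most active users: {top_decile_share:.1%}")
print(f"content share of the other 90% of users:        {1 - top_decile_share:.1%}")
```

With a sufficiently skewed activity distribution, the top decile ends up producing the overwhelming majority of posts, so a feed sampled from content, rather than from people, mostly shows you that decile.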
Justin Beals: So frustrating, because I think it is the way that we have economized these businesses. They're based on advertising, so they need to grab attention, and they realize they can grab attention by clicking into that affective, you know, ideology.
And then we walk away from what we thought was like a representation of the internet. And it's not at all, is it? This is not the human beings connected to the internet that we're seeing when we go to Twitter, when we go to Facebook, when we go to these social media algorithms.
Claire Robertson: Yeah, it is not. And it was funny when I started thinking about this, because I knew a lot of people didn't post. And I come from a lab where my fellow postdoc at NYU, Steve Rathje, has over a million followers on TikTok. And my advisor, Jay Van Bavel, is very popular on Twitter and Bluesky and LinkedIn and all of this stuff. They were doing a lot of social media research, but it was all about, well, what's driving people to post? Why are they retweeting this? What would make them want to retweet it more? And I was sitting there, and I was like, nothing. I would never retweet any of this, because I don't use these platforms, and I can't be the only one, right? So I kind of had this realization. I remember telling them, I was like, I think you guys are the weirdos.
I think I'm the normal one, and we aren't really looking at people like me. When we scraped Twitter, back when we used to be able to scrape Twitter, you weren't getting anything about me. So we're just missing those people. We're not seeing them at all. And that was a real impetus for the work I did in the latter half of my PhD and my postdoc: who are these invisible people? What are they like? What's going on with them?
Justin Beals: You know, this has been a tough conversation inside my community sometimes, because I'll be having a discussion, I'll hear someone make a statement that seems to capture a whole population group, men, women, whatever the descriptor is, has this characteristic. And I'll say, hey, I don't think that's true at all. And they might have read it somewhere in a well-respected publication like the New York Times or the Atlantic or something along those lines. And I'm like, not all people are like that, and there's nothing real about that style of statement. It is an emotional affectation that you want to place on a population, and that's damaging. It's not fair. Yeah.
Claire Robertson: Yes, and it can also be based on the worst versions of that, because of a lot of what we see. Again, this is totally anecdotal, but I saw it really bad on Reddit. I think it happens in a lot of other places too, right?
I mean, Reddit has a ton of this. Here's a very brief example that is funny to me. The subreddit r/weddings, at least when I last checked, had about 100,000 subscribers, or followers, or whatever it's called, I cannot for the life of me remember. The subreddit r/weddingshaming had 700,000. Right? And like, that's funny and it's entertaining, but on one of them there's all these weddings that went really well and are beautiful, right? And on the other there's all these weddings where people behaved really badly, or they were acting like bridezillas or momzillas or whatever, right? And those are the ones that are getting boosted. So that's also going to influence your perception, even when it's not coming from the other side.
This is how you can get polarized. I think that's one of the worst things that happens in echo chambers: people post these examples. There are definitely ones also that are stuff-liberals-say or stuff-conservatives-say. Those are just meant for dunking on each other. They're not meant for anything else, but they're still going to bias your perception of what that group is, especially if that's the main way you're getting information about that group. So it's such a knotty, K-N-O-T-T-Y, problem. You know, you pull on one thread and it tightens. I don't know, it's very challenging.
Justin Beals: Yeah, and I want to highlight, Claire, that we're kind of into our own personal experiences here, a little outside your research work.
Claire Robertson:
Yes, that is all my own personal experience. Yes, absolutely.
Justin Beals: Yeah. And in my personal experience, it has been a struggle to discuss this even with dear friends, because in a way they feel like I'm attacking their ability to perceive reality, and that's not really what I'm trying to express. Although in a way, I think it's fair. It's just that I think hyperbole is something you've adopted as an emotional trigger, and that's not necessarily fair. We could all step back from that emotional precipice in the way we build community, I think.
Claire Robertson:
There is, speaking of moving back towards the research, a lot of really great research on, for example, selective empathy. I'm not as well versed in it, but I will just highlight that empathy is a lot of work, kind of going out on a limb, feeling what others are feeling. It's effortful. And there's emerging research that it's kind of a finite resource; we only have so much to give. And we selectively empathize with our in-groups over our out-groups. And that's hard. That makes this harder, right? You have to overcome something that's really effortful to engage in these kind of, well, I could see how this person might have thought that, exercises. It's a lot of work, and it's asking people to do a lot of work. And sometimes we don't have the energy for it. And I think, again, it's not people making an active bad decision, it's them more just passively not making a decision that might help, which we do all the time. I should go for a run tonight, and I probably won't. We do that stuff all the time, so I don't want to point the finger. It's hard.
Justin Beals:
Yeah, yeah. Well, then, with deep empathy, Claire, we really appreciate you spending some time with us on SecureTalk. Great research, and we'd love to check back in as you continue to investigate these areas. And of course, we're celebrating as you decorate your new office.
Claire Robertson:
Thank you so much. Yeah, again, I said this to you before we started recording. For academics, it's always so exciting when people want to hear about your research. We're always so excited to talk about it. So thank you so much for having me. It's been great fun.
Justin Beals:
Well, it's our pleasure. And I love having a real conversation about well done research where there's some science behind it, where there's some good data science work, there's some good statistics work. And I think it sets a great example for our industry and tech, and how we need to operate to the betterment of our society.
Claire Robertson:
Yeah, thank you so much. I'm excited to keep the conversation open and let you know when we find new and exciting things.
Justin Beals:
Wonderful. Well, I hope all our listeners have a great day, and we'll talk to you again soon.
About our host
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of "Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets," which was published in the International Journal of Emerging Technologies in Learning (iJET). Justin earned a BA from Fort Lewis College.