
Securing Society Through Data Kindness: How Cooperation Becomes Our Best Defense with Aram Sinnreich and Jesse Gilbert

May 27, 2025


Episode Description:  

Every device around you is collecting data about you and everyone you interact with. Amazon Echo recordings are being subpoenaed in murder trials. Period tracking apps are being used to prosecute women. Ancestry websites are revealing family secrets. We're participants in the largest social experiment in human history—and we never opted in.

In this episode of SecureTalk, host Justin Beals sits down with Dr. Aram Sinnreich and Jesse Gilbert, co-authors of "The Secret Life of Data," to explore a revolutionary approach to digital security: data kindness.

🎯 KEY TOPICS COVERED:

  • Why our biggest security threat is social fragmentation, not just technical vulnerabilities
  • How tech companies profit from division (anger drives 5x more engagement than approval)
  • The food allergy transformation: how society changed practices organically in 20 years
  • Practical data kindness: simple actions that rebuild digital trust
  • Why cooperation is our best defense in an age of surveillance
  • How to reclaim agency over your digital life

🚨 CRITICAL INSIGHTS:

  • We can't rely on tech companies (profit over people) or Congress (no data protection laws passed)
  • Change must happen from the ground up through shared cultural practices
  • Simple acts like unplugging smart devices for guests or consulting family before DNA uploads matter
  • Divided societies are vulnerable societies—cooperation is a security strategy

👥 ABOUT THE GUESTS: 

Dr. Aram Sinnreich serves as a Professor and Graduate Director within the Communication Studies department at American University’s School of Communication. 

His research addresses the convergence of culture, law, and technology, focusing particularly on topics such as surveillance and privacy, intellectual property, digital rights, digital culture, democracy, governance, and music. 

Sinnreich has authored five books: Mashed Up (2010), The Piracy Crusade (2013), The Essential Guide to Intellectual Property (2019), the science fiction novel A Second Chance for Yesterday (2023; coauthored with Rachel Hope Cleves as R.A. Sinn), and The Secret Life of Data (2024; coauthored with Jesse Gilbert). 

Additionally, his writing has appeared in various publications including The New York Times, Billboard, Wired, The Daily Beast, and Rolling Stone. He is a core faculty member of the SOC doctoral program and the MA in Media, Technology & Democracy, regularly collaborating with SOC graduate students on research publications and projects. 

Jesse Gilbert is an interdisciplinary artist focused on the convergence of visual art, sound, and software design through his firm, Dark Matter Media. He previously served as the founding Chair of the Media Technology department at Woodbury University and has taught interactive software design at CalArts and UC San Diego.

Since 2010, Gilbert's work has revolved around his innovative software, SpectralGL, which is an interactive listening instrument that creates real-time visual landscapes in response to sound. Drawing on his background as a composer, sound designer, and lifelong technologist, his creative output investigates the phenomenological aspects of listening through improvisation and collaborative dialogue. His work has been showcased at numerous concert halls, festivals, and projection-mapped installations worldwide.

In 2007, Gilbert co-founded Dark Matter Media LLC to facilitate his independent creative projects and provide consultancy on emerging technologies across various public and private environments. From 2011 to 2017, he held the founding Chair position in the Department of Media Technology at Woodbury University and taught interactive software design at both CalArts and UC San Diego.

After documenting the problems in their first book, Aram and Jesse recently published insights in Time Magazine and are working on a new book about building cooperative societies with kindness embedded in technology design.

🔗 CONNECT WITH SECURETALK:

  • Subscribe for weekly cybersecurity insights
  • Follow Justin Beals on LinkedIn

RESOURCES:

Sinnreich, A., & Gilbert, J. (2025, April 3). How to be kind in a world that's always monitoring you. Time Magazine. https://time.com/7273469/data-monitoring-kindness-essay/

Sinnreich, A., & Gilbert, J. (2024). The secret life of data: Navigating hype and uncertainty in the age of algorithmic surveillance. MIT Press. 

Beals, J. (Host). (2024, May 14). The algorithmic mirror: Reflecting on data's role in modern life (Ep. 173) [Audio podcast episode]. In SecureTalk, with Aram Sinnreich and Jesse Gilbert.

Full transcript

 

Justin Beals: Hello everyone, and welcome to SecureTalk. I'm your host, Justin Beals. 

Our community has been growing quite quickly, and we really appreciate that. Before we get started on our episode today, I'd just like to thank you all and invite you to like the episode, subscribe, or share if you enjoy the content. Now let's get on with the discussion.

I want you to look around you right now. If you're listening to this on a phone, tablet or laptop, there's very likely a camera pointed directly at your face. There are microphones, GPS sensors, and a variety of other tracking technologies keeping tabs on you and your surroundings. 

Maybe you have an Alexa or Siri device nearby, quietly listening. All of these devices are collecting data about you and about everyone you interact with, sending it somewhere over the internet to, well, it's hard to say exactly where or to whom this data winds up.

And of course, this isn't science fiction. It's just Tuesday. And it represents something that we're all living through, but we rarely discuss. We are participants in the largest social experiment in human history. Every click, every conversation, every digital interaction feeds into systems that shape not just what we see and buy, but how we think and relate to one another.

Consider this: Amazon recently announced that Echo users can no longer prevent their voice recordings from being automatically sent to Amazon's cloud service. Period tracking apps are being used to prosecute women under new abortion laws. Alexa recordings are being subpoenaed in murder trials. Ancestry websites are revealing family secrets, with roughly 5% of customers discovering siblings they never knew existed.

And sometimes the data is simply wrong, like the Pennsylvania synagogue member falsely identified as Trump's would-be assassin, or the Yale professor placed on leave after an AI-generated article falsely tied her to a terrorist organization. None of us opted into this situation, no matter how many times we've clicked accept on terms of service agreements. Yet we collectively suffer the consequences. 

And we haven't done a good job of caring for one another. What if we, as a society, decided that the same technologies dividing us could instead foster cooperation? Wasn't this the promise of the internet when I first started programming?

What if the solution isn't abandoning our digital tools, but fundamentally changing how we use them? What if there's a simple principle that could revolutionize how we design, deploy, and interact with technology? 

The answer lies in something our guests call data kindness, regular human kindness, but taking into account the invisible webs of data that now surround every action and interaction. 

And here's what's remarkable. We've done this kind of social transformation before. Think about food allergies. In the 1990s, it was rare to consider allergies when planning meals or inviting someone to your home. Today, asking about dietary restrictions is standard practice. 

This dramatic shift happened because we had reliable evidence of a widespread problem, a public discussion about solutions and enough people willing to change their behavior.  The concept spread organically until it became a social norm. And eventually, businesses and government followed suit. 

Data kindness can follow the same path. We already have reliable evidence that unchecked data surveillance and data harvesting cause widespread problems. My favorite whipping boy these days is the outrage engine on social media.

Discussions like this are part of building public awareness and solutions. Now the question is whether we'll take action. Our conversation today comes at a critical moment. As artificial intelligence and data surveillance become embedded in every aspect of our lives, we're facing fundamental questions about what kind of society we want to build. The decisions we make in the next few years about how we relate to technology, and through technology to each other, will echo for generations.

We can't rely on tech companies, which prioritize profit over people, or on Congress, which has yet to pass a single bill limiting widespread data collection. If we want a safer, more ethically driven world, we're going to have to build it ourselves from the ground up.

We're joined today by Dr. Aram Sinnreich and Jesse Gilbert, co-authors of The Secret Life of Data, who have spent the past year traveling the country exploring these questions with communities from retirement homes to tech conferences. After documenting the problems in their first book, they've taken their research in a new direction, recently publishing their insights in Time Magazine and now working on a follow-up book that focuses on solutions.

Their new project explores how we can intentionally design cooperative societies that embed kindness as a fundamental principle in the programming of our technology.

They've discovered something remarkable: people everywhere are hungry for ways to reclaim agency over their digital lives and build more cooperative relationships through technology. 

Dr. Aram Sinnreich serves as a Professor and Graduate Director within the Communication Studies Department at American University's School of Communication. His research addresses the convergence of culture, law, and technology, focusing particularly on topics such as surveillance, privacy, intellectual property, digital rights, digital culture, democracy, governance, and music.

Aram has authored five books: Mashed Up, The Piracy Crusade, The Essential Guide to Intellectual Property, the science fiction novel A Second Chance for Yesterday, co-authored with Rachel Hope Cleves as R.A. Sinn, and The Secret Life of Data in 2024, co-authored with Jesse Gilbert.

Additionally, his writing has appeared in various publications, including The New York Times, Billboard, Wired, The Daily Beast, and Rolling Stone. He is a core faculty member of the SOC doctoral program and the MA in Media, Technology & Democracy, regularly collaborating with SOC graduate students on research publications and projects.

Jesse Gilbert is an interdisciplinary artist focused on the convergence of visual art, sound, and software design through his firm, Dark Matter Media. 

He previously served as the founding chair of the Media Technology Department at Woodbury University and has taught interactive software design at CalArts and UC San Diego. 

Since 2010, Gilbert's work has revolved around his innovative software, SpectralGL, an interactive listening instrument that creates real-time visual landscapes in response to sound. Drawing on his background as a composer, sound designer, and lifelong technologist, his creative output investigates the phenomenological aspects of listening through improvisation and collaborative dialogue.

His work has been showcased in numerous concert halls, festivals, and projection-mapped installations worldwide. In 2007, Gilbert co-founded Dark Matter Media LLC to facilitate his independent creative projects and provide consultancy on emerging technologies across various public and private environments. From 2011 to 2017, he held the founding chair position in the Department of Media Technology at Woodbury University and taught interactive software design. 

Please join me in welcoming Jesse and Aram back to the SecureTalk Podcast.

 

---

 

Justin Beals: Jesse, Aram, thanks for joining us again on SecureTalk.

Aram Sinnreich: My pleasure.

Jesse Gilbert: Happy to be here.

Justin Beals:  Excellent. We got to catch up about a year ago about your book, The Secret Life of Data, and you were exploring in there about how information takes a life of its own, kind of beyond its original purpose. Maybe you can catch us up. What's been happening for you guys over the last year?

Aram Sinnreich:  It's been a pretty eventful year for a lot of reasons, good and bad. Jesse and I spent most of 2024 on a book tour. We spent time up and down the East Coast. We spent time on the West Coast. We each individually did some book-related events in Europe, and we did a lot of work online. 

And it's been incredible to see how many different communities have responded to the basic message of the book, from expert communities, you know, cybersecurity professionals and biomedical researchers, to lay people. We've given talks for the AAUP, in retirement communities, in public schools and libraries.

And, you know, the work that we do to empower everyday people to make sense of these radical changes that data and technology are wreaking on our social and cultural lives seems to be really resonating with people at all levels of understanding. And that was our goal, and so it's been really gratifying for us. 

On the flip side, watching Elon Musk and Donald Trump essentially rooting the US government, installing data surveillance and AI in all of these agencies while laying off human beings and eliminating the watchdogs, has been like watching our worst nightmares come true.

I mean, this is the stuff that Jesse and I talked about while we were writing the book, you know, these threat vectors and what a bad actor might do with the kinds of tools that we were investigating. And Musk has delivered on all of our worst nightmares.

 

Jesse Gilbert: Yeah, not a whole lot to add to that. Although I will say, our talks have ranged, as Aram said, so widely that we've had really a kaleidoscopic view of these issues. And I think what's been surprising to me has been not only the enthusiasm for some of the framework, but also the real need to discuss this together.

You know, I think some of the tendencies that we've seen all lead to the question: what do we do? What do we do about this? You tell us what to do? And I think the point of the book was really to try to empower people to find solutions and ask questions that lead to conversations in their communities, so these solutions can become a little bit more organic and emergent rather than prescribed from on high, because this is an issue that affects us all.

 

It affects us all even ambiently, even if we are not actively participating in data regimes, because of the nature of the network societies we all live in at this point; our actions and our participation are implicated in all of these different sorts of techno-social formations. And so what has been pleasantly surprising to me is that those conversations have been productive. They've been rooted in practice and lived experience, and people are responding to it. We don't even know what those conversations are all the time, and then we get these little glimpses: “Yeah, I've recommended this book to five of my friends and made sure people read it”. Over time, I think we're seeing the impact that a text can have. Because this is my first book, it's been a pleasant surprise to me to see that.

 

And also, lastly, I would say some of what has happened is that, as we've presented the ideas in the book, our focus has started to narrow to just some key points and key issues that we think need to be discussed. And some of them are emergent. Some of them weren't even in the book.

 

So, for example, the piece that I think we'll be discussing today, which was recently published, is really emergent out of our conversations with readers and with each other. And that's one of the other pleasant surprises for me: the ideas continue, and we continue to grow with them.

 

Justin Beals: That's awesome. I want to say congrats on the book tour. First off, that sounds like one of those things I dreamed about when I was in my liberal arts college days. 

Aram Sinnreich: It's a pretty great feeling. It really is.

Justin Beals: You know? And getting to connect with an audience that cares like y'all do about these topics is invigorating, I'm sure. 

We talked in the last episode, I think, to your point about a network society, Jesse, about how these technologies shift our perception of reality. And that, of course, continues to be reinforced for me in my own consumption of media and my decisions about how I want to engage with the networked world or what I want to consume. You know, I thought that your Time Magazine article, which is why we thought we might check in, suggests that we need to intentionally harness that effect to promote cooperation.

Like we need to use a bias, in a way, for an outcome. Maybe you can talk a little bit about that new thread that you're finding in your discussions around cooperation.

Aram Sinnreich: Sure. Part of the conversations that we've been having around the book, you know, we introduce all these concepts about how data has these unexpected downstream consequences for our cultures and societies and interpersonal relationships and identities and politics. 

But then, after we get done doing that, people say, "Okay, well, great, what do we do about it?" And we can talk about data privacy policies, and we can talk about human-centered design principles, participatory design, and all these kinds of big ideas.

But when it comes down to it, the problem with a lot of the mitigation efforts and the more utopian efforts to really rethink tech is that the language can be so easily appropriated by people who are not interested in the social good. So you think about a term like altruism. Well, it's hard to say anything against altruism. We all want to be altruistic and think about other people.

But then you get this kind of tech bro version of it, which is sometimes referred to as effective altruism, which, in practice, the closer you look at it, the less it has to do with helping people live good lives right now. And the more it has to do with kind of sacrificing people en masse for some technocentric vision of a better future that none of us knows how to get to. The same thing goes for ethics, right? Ethics, on the face of it, sounds really great.

You know, and now every tech company has, like, a specialist in AI ethics. But in practice, those ethics seem to have more to do with boosting shareholder value than they do with centering the needs of individuals and communities whose lives are changed by the technologies. So we wanted to start from a different vantage point and say, you know, let's not try to fix the tech by jamming ethics into it, or by making it align with altruistic principles.

Let's begin with actually what people do best, whether there's technology in their lives or not, which is organizing and developing a community and basing their relationships on the fundamental principle of kindness, which we really just define as acknowledging that different people have different needs, that different people have different vulnerabilities and that in order to live together happily, we need to be sensitive to those needs and vulnerabilities and not assume that we ourselves are the center of the universe or that everybody else out there lives the same life that we live. 

And that basic kindness, that duty of care from one person to another person, that sense of I'm gonna proactively look out for you and not assume that you're being taken care of is, to us,  the most unassailable virtue that you can ask for. And the beautiful part of it is all of us by the age of six understand this principle. 

We're all intrinsically kind people. Now, we're not universally kind; we're not kind to everyone we meet all the time. But each of us understands in our gut, without having to think about it, what it means to be kind. So the sooner we acknowledge that technology is ubiquitous, that surveillance and computing and data are pervasive aspects of every dimension of our lives, the sooner we can come to terms with the fact that we don't yet understand how to be kind in a data-saturated world, or how to be kind through data. And that's what Jesse and I have been talking about. And actually, we have been invited to write a whole book on the subject. So we're just writing the book proposal this week, to do an entire book that would be very focused on a "find the helpers" kind of mentality: going out there and illustrating data kindness in the wild, where we see it, rather than preaching at people from on high, shaking our fingers in their faces and saying "be nice." We're saying, actually, no, this is kindness in action; what can we learn from it?

Jesse Gilbert: I think the other thing that I would add to this is, you know, some of this is really about looking at theories of change and understanding how change effectively happens in a society. And in a way, this is an attempt to answer the question of what's next. What do we do? What do we do about this issue? One of the examples that we used in the piece that was published was the real change that we've seen in society regarding food allergies, as a sort of grounded case study. Twenty years ago, the idea that everywhere you went, whether to someone's home or to a restaurant, you would be asked if you had a food allergy would have been unimaginable.

This wasn't a shared value that we had together. You may have had a food allergy; you may not have even known you had one, right? But you certainly weren't going to be asked, and there wasn't a concept of risk, if you're a business, that you need to ask this question because someone can go into shock and you can be held liable. But then also, how did that trickle into our individual behaviors? How did that become something so widely adopted that this is now a standard question we might ask a dinner guest? And that we might be willing, and certainly able, to modify our menu choices based on that sensitivity.

 

There's a kind of acknowledgement and adoption of a core value: this is a vulnerability that others have. I may have it, I may know someone who has it, some of my loved ones may have it, and I've internalized that not only as a kind of actuarial action that I might take to limit my exposure, but actually as an expression of care.

And I think that what Aram just said is very true, that effectively we don't yet have as a society a way to understand, calculate, quantify, and then internalize that set of values with regard to our data practices.

And so what we're advocating is thinking about how that change can happen and, as we move towards this book proposal, what are some examples of people who are actually doing this and walking the walk rather than just talking the talk.

Justin Beals: Yeah, it feels like we did this with television sometimes, right? There was broadcasting and a lot of material, and then we got worried about the content of that. And then we built things like public broadcasting systems; I mean, I think Sesame Street is all about generating kindness. We do model media to teach the behaviors whose outcomes we want. And we've let businesses run rampant with data for outcomes that are purely financial. I think it's disappointing.

Aram Sinnreich: And that often seems to be how new technologies and industries operate, right? There's a period of lawlessness during which regulators are really unaware of what the potential social harms and risks are. Most of the information they're getting is from lobbyists and industry advocates arguing for a light touch because we need to win the AI arms race, or whatever it is: this notion of market competition and geopolitical competition.

And then after a while, it becomes easier to document potential harms and benefits, and the voices for wiser regulation eventually win out. I teach at American University, and I taught a class this semester on media law and policy. One of the first guests in my class was a guy named Asad Ramzanali, who ran the Office of Science and Technology Policy for Biden and who had lost his job about a week before he came to my class. I asked him when he visited what he thought it would take for AI regulation to happen. And his answer was basically that he thought it would take a generation.

Based on his historical expertise in the regulation of new tech, there's just an integration process that has to play out. As revolutionary and unusual and unique as this particular tech happens to be, he believes it follows a very well-worn pattern of social understanding and integration, and ultimately some kind of consensus over how it should be used. And to your point, public television didn't happen until the Nixon era, when TV had been in American households for 25 years already.

So, Jesse and I talked a lot about potential approaches to regulating data and AI in The Secret Life of Data, but this book is really about the other hand: in the meantime, while the decision makers and the powers that be figure out how not to destroy everything, what can we, individual communities and human beings, do to integrate these technologies more wisely into our lives, assuming that the powers that be don't have their hands on the steering wheel?

Jesse Gilbert: Yeah, and I think also, again, part of the question we're going to be asking, and that we've been asking ourselves, is: what motivates people on the individual scale? That's a huge component of this. Is it fear of regulation and consequences from external sources, or is there some sort of internal adoption and understanding and resonance that these values have with something that feels like common sense? So again, I don't think that people started asking about food allergies only because they were being asked that when they went to a restaurant.

I think that there was some kind of internal process, and this has to do with, again, understanding how change happens and how we come to a consensus in the culture at a time when there's very little consensus about anything. But there are still areas of consensus, like food sensitivities, maybe because there is more and more data showing that there are more and more allergies happening, or more and more sensitivities that we didn't understand and didn't know how to diagnose.

So there are probably three different legs of this table here, right? There's a regulatory, sort of societal, top-down approach; there's the individual level; and then there's the data that supports it.

There's the data that we are gathering, and as Aram has said, we're in the middle of a vast social experiment with regard to the proliferation of data. We're seeing more and more evidence as we go on of the benefits and the harms, and we need to have honest conversations about that. And I think, again, this has come a lot out of taking the book on the road and talking to people: we need to move away from that as a purely intellectual exercise, and we need to think about how we're going to integrate that information into, call it a moral stance, or a kind of ethos that is shared among people: what are we going to do together to resolve some of the more egregious harms? And I think we should look at things that are actually solvable on an individual level, like being considerate of our dinner guests.

 

Aram Sinnreich: And deliver on the promise, right? We don't mean to cast AI, data, or tech in general as a purely negative force that has to be mitigated. There really are tremendous opportunities to improve the quality of life, to empower individuals and communities, and to strengthen democracies in ways that we couldn't before. But to get there, we really need to grapple with, as Jesse said, the moral dimensions of the roles these technologies play in our lives.

 

Justin Beals: Yeah. You know, we talked a little bit about food allergies, and maybe this is a little more contentious, but I was always surprised at the speed at which society broadly in the United States got on board with gay marriage, right? I thought it would take 10 more years and a lot more angst and anger than it did. But it seemed to catch on, and all of a sudden we all agreed, and broadly I felt like we made a change.

 

Aram Sinnreich: Well, I was having this conversation with myself five minutes ago while we were talking about this, while Jesse was talking about food allergies, because it's such a great other example. 

It's the kind of thing that seemed, in 2010 to 2012, like this rapid phase shift in American society. But as Jesse can attest, because he grew up with not only gay parents but a gay activist godmother who laid, over decades, a lot of the groundwork for these social changes, the uphill battle to get there was tremendous and took a lot of sacrifice and a lot of labor from a lot of people whose names are not recognized at this point. Jesse, do you feel comfortable speaking some more to that from a personal standpoint?

Jesse Gilbert: I mean, you know, I don't know that I need to speak to it personally to comment, because I've been very aware of the argument my entire life. I think part of the reason this is an interesting analogy is that the law and public opinion are in a dialogue with one another. And I do think that culture had a huge impact on this debate, and I think it's part of the reason why a lot of the cultural practices and institutions that were part of opening up that dialogue are now under attack.

So I also don't view this debate as settled personally. I think that, you know, what we see is that there is a pendulum that moves through this. I do want to hold open the notion that, you know,

by now there is enough of a consensus in our society and even globally, if you look at the developments with the previous Pope, the discussion of gay Catholics, we couldn't have imagined that a generation ago. But I do think that there's still a long way to go within that and that there's going to be ups and downs in that discussion. 

And what I would say personally about it is that, while I think we were all surprised, the Supreme Court decision was the surprising part,

Justin Beals: I was surprised.

Jesse Gilbert: not the societal support for it. And I think what that speaks to is a kind of cynicism, born of experience, about the lag between societal changes and the official legal recognition of those rights and responsibilities.

But in terms of the fundamental basis of that ruling, I think it was incredibly important because it reaffirmed the notion of the recognition of equal rights and status for all members of the society. And that's actually, frankly, the bigger picture: rather than focusing just on a specific right or recognition, it's the reaffirmation of those values regarding the equality of all members of the society.

And I think that's where we say the work is not finished, right? Just because we've achieved one specific recognition, there's a long way to go still. In a sense, what we're arguing right now is that we all have work to do with regard to spreading this notion and understanding of what it means to be kind in this age, as a shared value, in order to further this larger social project. And I do think that's actually the most succinct answer, although that was not a succinct answer, that I could give to "What do we do now?" When we're asked that in a public setting, often we say, "Well, how long do you have?" because this is actually a larger-picture discussion.

Justin Beals: Yeah. This concept of kindness really matters to me as well in engagement with people. I've often told teams that were struggling, that I might be managing, or colleagues I was working with, that they needed to offer each other a certain amount of grace, bringing that word back into how we engage with each other.

You write in the Time article that cooperation can be contagious. I think we've harnessed technology to fracture and dissect us into minuscule pieces for better targeted advertising, at the end of the day. Maybe talk to us a little bit about the social grounds for cooperation being something that our brains are wired to do as well. Not just outrage; we have other emotions.

Aram Sinnreich:  Sure. I love that you brought it to the subject of targeted advertising because that really is the original sin. You know, we're all old enough to remember the promise of the internet in the 1990s, and this concept that it was going to somehow bring a global society closer together. 

We'd all be able to not only communicate with one another, but understand one another and establish a shared set of facts and develop consensus. It was all wrapped up in this kind of ostensibly post-Cold War geopolitical moment where democracy had won, free speech had won, free markets for better and for worse had won.

And then Google really became the tipping point when they acquired the technology, circa 2000-2001, to start. First it was search engine marketing, buying keywords. And then, when they bought DoubleClick, the whole targeted marketing infrastructure became integrated into the entire search portal infrastructure.

We began to see the development of this kind of technological chimera in which, as you said, the profit motive became based on separating and categorizing people and exploiting the differences between them, to the point where it created a feedback loop such that people's experiences of media and culture became more and more siloed and more and more distinct from one another, in order to maximize the extractability of their individual data and the effectiveness of persuasive messaging to them.

And then, you know, I think the final nail in the coffin was about 10 years later when Facebook finally figured out that its business model was not going to be, you know, serving banner ads against people who are highly engaged in communicating, but rather separating people from each other and selling access back to them.

And then using hyper-targeting based on not just on stated preferences, but on behavioral data in order to create these kinds of shadow silos where people wouldn't even know that they were separated from each other, having separate conversations, being fed separate information. And then, of course, that was notoriously exploited best by the Russians during the 2016 campaign cycle.

You know, Facebook was literally selling a group of look-alike users that it called "Jew haters" to target with anti-Semitic disinformation about Hillary Clinton and her supposed ties to Jewish financier cabals. And we've been living in that world for the last 10 years, this hyper-fractured, very agonistic kind of environment. And I skipped the part where Facebook came up with the brilliant idea to monetize emotional engagement. So when they added the frown button alongside the like button and all that stuff, they figured out, and this has been well reported on, that anger was five times more valuable than approval. And they actually used a whole points-based system to elevate anger-provoking information in the newsfeed.

And then suddenly America seemed to be at war with itself, and not just the US, but the whole world. So I think you're right that there is something fundamentally unkind about the entire infrastructure that uses data to separate people and inflame their most negative emotions and suspicions toward one another. That is a causal agent, but it couldn't have happened without those pre-existing tendencies in our society.

We grew up in the Cold War era, which was a highly paranoid era despite the absence of Google or Facebook. I remember movies like Rocky IV and Red Dawn, where the Russians were gonna invade us any day and it was our patriotic duty to arm ourselves and mistrust everybody around us. Movies like They Live, where a pro wrestler puts on special glasses and sees that everybody's lying to us all the time and evil aliens have taken over, right? And the older I get and the more I read, the more I realize how much of it goes back to The Protocols of the Elders of Zion and The International Jew and all these conspiracy theories from the late 19th and early 20th centuries. You know, Musk's spiritual precursor, Henry Ford, basically printing up hundreds of thousands of copies and requiring that his employees read it and all that stuff. So the sins are very old, but the mechanisms by which they've been exploited are new, and are developing every day at this point.

Jesse Gilbert: Yeah, to your question, though, Justin: that's a very eloquent and specific analysis of some of the reasons why these forces that are driving us apart are so prevalent in the society. But I do think it's important to emphasize that, while that has been a successful commercial strategy for a lot of companies, we could make a pretty convincing argument that it is not an effective approach if we want a healthy society. And we have quite a lot of data now, right? We're not in the early days of social media; we're not in the early days of the internet anymore. We have quite a lot of data that we can pretty convincingly, I think, link to a kind of tension between societal health and the profit mandate. And as Aram has rightly said, there are sort of corporate-capture-type strategies to make gestures towards remediation of that. I don't think we can really say that there are that many large companies at this point that have the autonomy to really make efforts to try to effect positive social change in the ways that we might want them to.

And that's why I think we should also spend some time and energy looking at some of the more successful collaborative efforts and the reinforcement of other value systems, even within a capitalist structure. We're talking here about some of the high-level discussions that we're going to be addressing in the book.

 

I think a very important one is open source, and part of the reason why, if you remember the early 2000s, the open source movement was so squarely in the crosshairs of Microsoft, is that it really has been quite an important, large-scale, global collaborative platform that has achieved quite a lot. Now, it's not a perfect system. But there are interesting ideas there with regard to gaining a better understanding of why so many different talented technologists have contributed to open source movements over the last 20, 30, 50 years, right? The entire Unix operating system is built on the same ethos, as is the basic infrastructure of the internet.

It is fascinating to me, and we are going to really need to think more clearly about what the deeper motivating factors are. I think Eric Singer wrote a brilliant book about this a long time ago, looking at open source movements really as also the creation and maintenance of reputation within a community.

The understanding of status as being expressed not in a monetary way, but in the respect of your peers and in communications: social capital.

But what I find interesting about this analysis, too: I'm part of an online forum right now, which celebrated its 10th anniversary just this past week, started by someone I went to graduate school with who has a small company that makes hardware for the electronic music community. And it has attracted a really interesting, productive, ethical, collaborative community, sharing information not just about hardware but about music itself: talking about production techniques, talking about your obsessions, talking about all of the elements that are endemic to that community, but also really establishing certain types of guidelines with regard to ethical behavior and self-policing.

Now, they have to do some work to moderate that; it's like any other internet site. But what's been interesting to me is just to see that people are hungering for spaces which actually express different value systems, and they're finding them. There are numerous examples of creatives who are finding audiences and developing and nurturing them based off of sharing, off of an open approach to their process. I really do think that we are in a space where a lot of people right now are asking these questions.

How do I take a stand in some way in my life? How do I model and create alternate structures that are going to express other values? And what we're suggesting, and hoping we can contribute to that discussion, is that kindness in and of itself should be one of those core values that gets expressed in whatever structure we are participating in, whatever community we're part of.

Justin Beals: Yeah, I've often read this neuroscience analysis that outrage generates about five times the amount of blood-flow activity as some other emotions. That's an evolutionary trigger point for us, in that we respond with deep emotions when we feel threatened. And these organizations have harnessed this outrage, and I'm very frustrated by it too.

But it's funny because, look, I get that we live in a capitalist society, but I remember the start of the internet especially as being for collaboration, being designed around collaboration, deeply open source. I mean, I was in my Git repo this week, fiddling with something code-wise and looking at my contribution stats, being like, they look so green. I'm so happy with myself.

Jesse Gilbert: Sure. Sure.

Aram Sinnreich: I think it's important to acknowledge that a lot of these utopian technological spaces have been intentionally attacked and dismantled. I actually wrote an article with a couple of colleagues recently about the Fediverse, about Mastodon and the associated ActivityPub-based social media infrastructures, looking at past examples of promising platforms for collaboration and sharing and utopian community building that have been destroyed intentionally.

And the list is really, really long. And even if you look at the success stories, you know, look at how Wikipedia has been attacked from the inside and from without. Look at the Internet Archive, and how various stakeholders have come for them. 

And then you look at technologies like peer-to-peer file sharing or the dark web or, dare I say it, blockchain, all of which were developed with very pro-social, anti-authoritarian, techno-utopian use cases in mind, and all of which on the one hand got completely exploited by bad actors who took advantage of them to evade regulatory oversight, and at the same time got tarnished by the whiff of malfeasance that comes with criminal adoption of the infrastructure. And you see all this squandered promise for exactly what you guys are talking about. Or even social media sites like Tumblr, which 10 years ago was a haven for LGBT kids who had nobody in their local communities to connect with, and which, for a variety of reasons, has become completely dysfunctional towards those ends. So I think it's like Jesse was saying: the impulse has always been there. I mean, Jesse and I grew up together, so when the internet became a thing, the joy of discovering it was something we experienced jointly. And we used to dream about what a radical delivery on longstanding promises of exploration and equality it could amount to. John Perry Barlow was still alive, and there was still a lot of that techno-utopian

rhetoric, even in the heady days of the dot-com boom of the mid-1990s. And I feel like after the crash of 2001, that was it. Even when Web 2.0 arose a few years later and we saw the rise of platforms like YouTube and del.icio.us and Flickr, all the stuff that's now been eaten by social media and then by AI, there was a wariness and a cynicism, like, here we go again, another tech bubble. And I feel like that utopian rhetoric was only really deployed for marketing purposes that time around.

I wonder what we can do, and maybe the data kindness argument can be helpful towards those ends, to recuperate the optimism for technology's capacity to help us reimagine social organization.

Jesse Gilbert: Yeah, it's interesting too, because I think in a way what we're seeing here is, if we were to, for example, talk to those kids in 2000 who were just exploring these technologies, and we were to list, without necessarily talking about some of the more egregious harms, some of the accomplishments that have happened over the last 20, 25 years using those technologies, we would be astounded.

Aram Sinnreich: Absolutely.

Jesse Gilbert: I do think that what it has given us is a harsh and important lesson: that just understanding achievement based on scale or functionality in tech is not enough. We fell into the trap of assuming that by scaling these systems we would achieve certain social goals, when there are larger systemic forces trying to exert pressure on the outcomes, even if some of those outcomes are inevitable, right? So we can make an argument, for example, that at the moment we're facing the inevitable advent of ubiquitous augmented reality. We have numerous fictions that have been sprouted about this. We have some prototype devices that have been deployed.

We have quite a bit of a chicken-or-egg situation with regard to content versus hardware, right? There's a lot of this that we're facing, but for many of us who are still in the industry, there is an active discussion over what this is going to mean.

And we know that if it is ever achieved, that we have a portable, high-resolution, instant-on pair of glasses that one can carry around in a pocket, it's going to have a major impact on our society.

It's exactly the right time to be learning and reflecting on the lessons of new media that were promoted at that time, how they have been channeled and changed even from the original intentions of the inventors, and ways to think about embedding other value systems into them. It doesn't mean that we're going to be able to completely rewrite the media industry or erase the power that people, companies, and governments have in the society.

But I do think, and maybe this is just a way to circle back to the original argument that we're making, that change is often led from within, within ourselves and our communities, and that those values do have to get expressed. And in a way, what I see right now with a lot of the people I'm in contact with in the media industry is that there's a lot of head scratching and wondering what the next model will be.

And some people find that disorienting and depressing or challenging economically, whatever it is, those are all real. I personally find that to be extremely interesting and a moment of real opportunity. And I think that's why we're interested in having these conversations right now.

Aram Sinnreich: Our book cannot come out soon enough. I'm really excited to write it. Jesse and I were so depressed while writing The Secret Life of Data, and I think we're going to be equally inspired by writing Data Kindness.

Justin Beals: We'll wind up on the same path. Yeah, so much of this resonates with me. I tell friends all the time, I'm not a deep capitalist, really, but I do understand the environment in which I live. And in my career of building companies, I have cared for my teammates and their well-being, and our customers and their well-being, and many times, especially when I worked in education, the end consumer and their well-being were the focus.

And there were forces in capital spaces where people perceived everything as competitive and not cooperative, where we did the wrong thing. And what I think we mistake, especially in the United States, is that capitalism is fundamentally a cooperative sport. 

Aram Sinnreich: Yes. Yes.

Justin Beals: Competitiveness is just a weird metric thing off to the side. You've got to get people around the idea and build and tend your garden. If you're not operating in a good garden, you're not going to grow. 

Aram Sinnreich: Yeah, that's a great point, and I'm glad you brought it up, because there is a kind of reflexive anti-capitalism; part of the techlash has been a kind of anti-capitalist movement. And most of the analysis is right on, just like most of Karl Marx's analysis 150 years ago was right on, but it doesn't leave room for a lot of functional solutions.

And it's really important, I think, to differentiate between collaborative and community-oriented capitalism and, you know, vampiric winner-takes-all capitalism, which is what we're seeing in the tech industry and everywhere else right now. 

Justin Beals: Yeah. Well, I am a part of the change. I'm cooperative here. I love it. And the Southerner in me thinks that kindness should go everywhere we go. We really appreciate, Aram and Jesse, you joining us on SecureTalk again. I am personally deeply grateful for the work that you do in helping us build a better society, a safer society, and a more cooperative society.

Jesse Gilbert: Thank you very much.

Aram Sinnreich: Thank you.

 


Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of "Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets," which was published in the International Journal of Emerging Technologies in Learning (iJET). Justin earned a BA from Fort Lewis College.
