They Sold AI to Play God. China Never Got That Memo.
The West has been building AI like it's the apocalypse. China has been building it like it's a tool.
That one difference — rooted in centuries of philosophy, theology, and cultural storytelling — may be the most important thing nobody is talking about in the AI debate right now.
SecureTalk host Justin Beals sits down with scholars Bogna Konior (NYU Shanghai), Mi You (University of Kassel), and Vincent Garton to explore their co-edited book "Machine Decision Is Not Final: China and the History and Future of Artificial Intelligence" — and what it reveals about the hidden assumptions driving the decisions we make about AI governance, security, and society.
What this conversation unpacks:
→ Why Western AI fear traces back to Christian theology — not rational risk analysis
→ How the Chinese term for AI literally means "human-made wisdom ability" — no alien mind implied
→ The 2019 Elon Musk vs. Jack Ma exchange that exposed the cultural divide in real time
→ What DeepSeek's open-source breakthrough says about innovation, restriction, and creative problem-solving
→ Why this debate matters far beyond the US and China — and who else is watching closely
If you work in cybersecurity, tech leadership, or AI policy, the cultural lens on this technology isn't a soft question. It shapes real architectural, governance, and regulatory decisions.
Chapters
00:00 Introduction and Perspectives on AI in China
02:41 The Meaning Behind the Claw Machine Image
05:33 The Book's Creation and Collaborative Efforts
08:32 Cultural Perspectives on AI: East vs. West
11:06 The Impact of Open Source AI Models
13:45 Innovation in a Controlled Environment
16:20 Human-Made vs. Artificial Intelligence
19:23 The Philosophical Underpinnings of AI
22:06 The Role of Human Agency in AI Decisions
24:54 Exploring the Future of AI and Society
27:26 The Synthesis of Technology and Society
30:22 Conclusion and Final Thoughts
44:17 Understanding Artificial Intelligence: A Cultural Perspective
47:08 Machine Decision: The Chinese Perspective on AI
49:59 Innovation and Openness in AI Development
50:27 Global Implications of AI Beyond Superpowers
50:37 Introduction and Context of AI Governance
01:00:53 The Role of Computers in Decision Making
01:08:26 Transparency in AI and Governance
01:17:58 Cultural Perspectives on AI: East vs. West
01:23:46 The Singularity and Its Philosophical Implications
01:27:15 Simulation and Reality in AI Discourse
01:35:14 Social Implications of Large Language Models
Resources:
📚 Book: "Machine Decision Is Not Final: China and the History and Future of Artificial Intelligence" (2025) https://www.urbanomic.com/book/machine-decision-is-not-final/
🎙️ SecureTalk is hosted by Justin Beals, CEO of Strike Graph.
🔔 Subscribe for weekly conversations at the intersection of cybersecurity, technology, and leadership.
#ArtificialIntelligence #AIPolicy #ChinaAI #DeepSeek #Cybersecurity #AIGovernance #TechLeadership #OpenSourceAI
Hello everyone and welcome to SecureTalk. I'm your host, Justin Beals.
I want to start with a question that sounds simple but turns out to be remarkably hard to answer. What do we actually mean when we say artificial intelligence?
If you grew up in the United States like I did, the answer comes loaded with decades of cultural baggage. HAL 9000 trying to kill the crew. Skynet launching nuclear weapons. The Terminator. Even the more optimistic visions tend to frame AI as something alien, something separate from us, a new kind of mind that we either control or that controls us. The singularity. Existential risk. The end of human relevance.
I've spent my entire career building software, and for most of it, I accepted that framing without much examination. AI was this other thing we were creating, and the central question was whether it would be friendly or hostile. It took reading a remarkable book to realize how much of that framing is a product of specific Western philosophical and theological traditions, not some universal truth about the technology itself.
The authors we are interviewing today collaborated on a book called "Machine Decision Is Not Final: China and the History and Future of Artificial Intelligence." And the title comes from a photograph of one of those claw arcade machines, the kind where you try to grab a stuffed animal. There's a sign on it that reads, "Machine decision is final." The editors loved the image but recognized that putting that phrase on a book about China and AI would reinforce exactly the narrative they wanted to challenge. So they flipped it. Machine decision is not final.
In Mandarin, the term we translate as artificial intelligence is composed of characters that more literally mean "human-made wisdom ability." There is no implication of an alien or transcendent mind. The technology is understood as an extension of human capacity, not a replacement for it. And as the authors explore, this difference runs deep. Western concepts of AI carry echoes of Christian eschatology, the idea of a transcendent being outside of space and time, an apocalypse, a singular endpoint for history. Classical Chinese philosophy simply does not have that structure. There is no wholly transcendent realm. The relationship between humans and their tools, between creators and what they create, is understood as continuous and reciprocal.
This matters for those of us in security and technology leadership because the stories we tell about AI shape the decisions we make about it. If we believe we are building an alien intelligence that will either save or destroy us, we make very different architectural, governance, and policy choices than if we understand ourselves as building tools that extend human judgment and require human participation.
The book also challenged my assumptions about innovation and openness. When DeepSeek released its open-source model, it surprised a lot of people in the West. The popular narrative holds that Chinese technology development happens under tight centralized control. And yet this small company from Hangzhou produced an open-source large language model that demonstrated you don't need unlimited compute budgets to build capable AI. As one of our guests points out, restrictions don't necessarily stall innovation. They can create isolated, vibrant ecosystems where engineers find creative paths around constraints. The history of Chinese technology development bears this out repeatedly.
There is also a perspective in this conversation that I think security professionals need to hear. We spend most of our time thinking about AI through the lens of the United States and China as the two dominant technology powers. But most of the world lives outside those two stacks. Southeast Asia, Latin America, Africa, smaller nations with their own needs and constraints. Open-source AI models are significant not just for competition between superpowers but because they make capable technology accessible to communities that would otherwise be locked out entirely.
One of our guests makes an observation about large language models that stuck with me. If you abstract away from the specific technology, what you have is a medium through which the linguistic output of many humans gets communicated back to other humans, but with individual attribution stripped away. Everything gets mixed into one enormous shared space. That has implications for how we think about knowledge, authority, and governance that we have not yet seriously reckoned with.
Today we are joined by three guests. Bogna Konior is an Assistant Professor of Media Theory at NYU Shanghai. Recently, she was a Research Fellow in the Antikythera Program on Speculative Computation at the Berggruen Institute and a mentor in the Synthetic Intelligence program at Medialab-Matadero Madrid. She is the author of The Dark Forest Theory of the Internet, published by Polity in 2025.
Mi You is a curator and professor of Art and Economies at the University of Kassel and the documenta Institut. She is the author of Art in a Multipolar World, published by Hatje Cantz in 2024. She serves as chair of the committee on Media Arts and Technology for the transnational NGO Common Action Forum and is a Berggruen Institute Europe fellow.
Vincent Garton is a senior software engineer in the energy sector and an independent researcher in technology and political thought. His forthcoming book examines the challenge of Chinese philosophy from a Western perspective.
Join me on SecureTalk as we explore what happens when you look at artificial intelligence through a completely different cultural lens, and what that means for the future we are building together.
---
Justin Beals: Bogna, Mi, Vincent, really grateful to have you on SecureTalk today. Thanks for joining us.
Bogna Konior: Thank you, Justin. So nice to be here and hello from Shanghai.
Mi You: Thank you and hello from Beijing.
Vincent Garton: Great to be here. Hello from sunny London.
Justin Beals: Excellent. And I'm based in Atlanta, Georgia today, so I think we're covering a fair amount of the globe. That's excellent. Well, I loved the book that came out of the collaboration between you and your teammates, of course, working together.
And I'd like to start with your interpretation of an image in your book. The image, and I'll describe it for our listeners here, is of a kind of classic claw-style arcade prize machine, the one where you try to move the claw around to get the best prize. And there's a sign on the machine that says, "Machine decision is final." I thought we'd start by asking each of you in turn about the meaning that you find in this image. Perhaps, Bogna, you can kick us off.
Bogna Konior: Okay, so as the co-editor of the book, here's the story. We were brainstorming titles for this collection, and, you know, thinking about the landscape of books on AI and China, very often you would see titles and covers that are kind of menacing and threatening, suggesting some sort of cybernetic authoritarianism or something of that sort.
And the majority of the books on the subject deal with the political system in China and technology. So as we were looking for our cover image and our title, we were trying to signal that our collection, while not negating what these books are positing and studying, would rather focus on the glitch and the surprise, on where control and determinism are not actually that complete. So we came quite accidentally upon this image of the claw machine that says "machine decision is final." And we loved the image, and we loved the sentence itself. But we thought if we put "machine decision is final" as the title of our book on China, we would be replicating the sort of discourse and narrative around China and technology as super controlled and centralized and so on. So we kind of did it, but with a twist, right? We titled it "Machine Decision Is Not Final: China and the History and Future of Artificial Intelligence" to suggest that things are really not as predetermined as people think they might be, that we might break certain stereotypes about how we think about China and technology.
And the second reason why we have this title is, I guess, that we work in a certain tradition of philosophy where, when it comes to the history of technology, we like to ask: how much agency does the machine have versus the human? How much do we as humans decide what is happening and how we develop the technologies, and how much does technology itself have a certain decision or agency or influence upon its own history? So this is kind of the double meaning of the title and the image.
Justin Beals: Vincent, anything you'd like to add to that or your personal interpretation?
Vincent Garton: My interpretation, I guess, is very similar to Bogna's, and it reminds me of this image which has become kind of a meme, taken originally from an internal presentation at IBM in the 70s. It's just a blank slide with some black text that says: a computer can never be held accountable, therefore a computer must never make a management decision. And I think one of the problems that we have to think through nowadays is the fact that more and more decisions are being handed off to computers, and what kind of social and political implications that has. That's what I'm interested in.
Justin Beals: Yeah. And Mi, I'm curious if you had any perspective on this image, "Machine decision is final."
Mi You: Okay, so I wasn't in charge of choosing the image, but I do think the message is quite clear. It almost serves as a warning, because I've been in some conversations with people recently in the sort of US-China track-two dialogues, and a lot of the questions are: in terms of warfare and decisions about using military technology, how do we make sure that a machine decision is not the final decision, the final one? So yeah, the book is actually a few years in the making, and it almost comes as a sort of pre-warning to a lot of current legal discussions.
Justin Beals: Yeah, I loved the juxtaposition of the statement against the probabilistic nature of these types of machines, especially as we think of AI. You know, most of the computer science technology that I've built lately is a prediction system, or a prediction system ensconced in a rule-based system. And the idea that we call these probabilistic outputs final decisions as we develop these tools, I thought, was quite funny. Hilarious in its own way. I really loved it. Bogna, you co-authored the foreword in the book, and I'm curious how you came to work on this project, or were inspired to bring together your collaborators.
Bogna Konior: So the book is a project that we started at NYU Shanghai, where I work and where we have the Center for Artificial Intelligence and Culture. Taking the opportunity of being an American school in China, with a great network of scholars and artists and science fiction writers and engineers around us, we thought we would like to pull together a collection on AI and China. Our aspiration was to treat artificial intelligence both as describing existing technologies, like machine learning, and as a certain aspirational term that has a longer history and a possibly very vibrant future: simply trying to make a smart machine, a sort of autonomous technical intellect. And we wanted to put that question within the context of long Chinese history, from philosophy to political history.
And we wanted to create a book that would really stand out in the current market, a book that would attract people who are interested primarily in AI, because, you know, very often books about any regional question get relegated to regional interest. If you're interested in China, you will read a book about AI in China, but otherwise you will not pick it up.
We wanted to create a book that avoids that. That's why we invited people across the disciplines, from sci-fi to history to anthropology, to really make an alluring mix, so that everyone can find something for themselves, and so that it covers both history and some wild speculation about what might happen in the future. We wanted it to be historically grounded, informative, and educational, but also imaginative and creative, giving voice to artists and to people who aren't normally consulted in these conversations on technology and future development.
Yeah, so that was the main idea. Of course, the book itself was assembled through many accidents and strange incidents, as many books are. We went through the COVID lockdown in the middle of it, which really changed our whole framework, and then DeepSeek was released, and ChatGPT was released. So as we were making the book, we had to keep updating the introduction and changing the book; it was transformed constantly because of how fast AI is moving, especially here in China. And we also took care to make sure that, as much as it tries to keep up to date with what is happening right now, by engaging with the long history of thinking about machines in China it also stands on its own as something that can be read in 10 years, in 15 years, even 20 years. Those are some of the main intentions.
Justin Beals: First of all, I really enjoyed reading the book immensely. It was brilliant. But I also want to own that I am a product of the United States, our media culture and education system. I certainly grew up with cultural perceptions of China that are very common for us. But I had the great privilege of a really great professor during my liberal arts bachelor's degree who took us deeply through Chinese history. I had no concept of its vibrant nature, the different cultures that make it up. And then also, I'm a huge fan of the Three-Body Problem series. It is probably the best science fiction I've read in the last 15 years. I thought it was immensely brilliant. And so these themes, which I had just barely touched on, tip of the iceberg, came through in the writing and expanded my perspective of computer science, machine learning, AI, and the possibilities we as cultures can approach them with.
And so for that, I'm deeply grateful. It was immense. I'm sure you all learned from each other in the process as well, Bogna, Vincent, and Mi.
So, one of the things that has really impacted my work over my computer science career is the concept of open source. I would not have been able to build many of the pieces of software that we delivered without open source nearby. And one of the things that I'm very concerned with professionally is the closed-source or corporate ownership of many of the AI systems being developed today.
I celebrated the release of DeepSeek. It felt like exactly the way we wanted to go in building some of these models: that they were more approachable by more computer scientists, and that they showed us a way of doing it without needing immense amounts of power and wealth to develop them. So I'm curious about each of your perspectives, maybe starting with you, Vincent, on what the impact of the DeepSeek model should be for us globally.
Vincent Garton: I'd say, I'm also a professional software engineer, and I totally agree that just the fact that we have an open-source massive LLM is very important; the fact that it prevents this from becoming a completely closed ecosystem is massively important. But I'm also interested in the political implications, in the sense that, and this is something I wrote about in my chapter, the West has a long tradition of thinking about the government as a place of secrecy, an area where secret things happen. As AI becomes more central to governance in a broad sense, I think it's very important that it is transparent. Even if not necessarily completely legible to an individual person, it is transparent in the sense that you can, with the right tools, programmatically understand what is happening. And that is crucial, I think.
Justin Beals: Yeah. And perhaps you, Bogna, your impressions of the DeepSeek model and what it means for global computer science.
Bogna Konior: I think it's notable that something like DeepSeek and the open source model comes from China because the perception of China from the West would be quite contrary, right? That it's a very controlled space, that something like that could never happen.
And yet at the same time, we were exactly in that moment a couple of years ago where the American company, OpenAI, was actually quite closed, contrary to its name, while this little company from Hangzhou in China was actually working on a more open-source solution. So I think that is a notable thing in itself. And it's interesting to notice that DeepSeek actually comes from an environment that, at that time in China, was quite regulated in terms of what they could develop. But as it often is in this country, people do find loopholes and ways to work around the restrictions in a really creative way.
So if we think about the whole setup of how Chinese technology develops, the fact that there is the Great Firewall and that there are many restrictions actually does not stall innovation. It created something like an isolated, vibrant, native ecosystem of technology that we see here in China.
And, you know, from something like ByteDance, which created TikTok and is obviously very successful with streaming, to what we see currently with robotics, which is really taking off and developing at a properly crazy pace.
We see that there is this impulse towards innovation despite restrictions, this kind of going beyond what is possible. So DeepSeek, for example, because at that time it was quite difficult to create consumer apps using AI, focused more on developing a model that they could then sell to businesses that would do something with it. And that's how they got around the restrictions imposed on consumer apps themselves. So I find it fascinating how this control and freedom are working here in China. And also, you know, the fact that the tagline of the company is, I think, "into the unknown." I think it's quite beautiful, and it also opens onto this potentially unpredictable future of artificial intelligence. So let's see where it goes. We're observing what's happening closely.
Justin Beals: Yeah, and Mi, anything you might like to add?
Mi You: Yeah, I'll just add that I think we speak a lot about the US stack, the China stack. And somewhere along the way, there's a sort of half-baked idea of a Euro stack. But we forget about all the other small stacks, right? That's the Global South: that's Southeast Asia, that's Latin America, that's Africa. And I think China's, or rather DeepSeek's, ambition for open-source AI is that it can be easily adopted by folks in these smaller stacks and applied to specific use cases. So yeah, that's the perspective that I hear a lot from folks here.
Justin Beals: I appreciate the global perspective, Mi. I've had an opportunity to work with a lot of computer scientists in Latin America over my career. At one point, we were investigating the possibility of working with computer scientists in Cuba, and we had a teammate go and visit many of the software developers based there. We were surprised at the creative ways they had found of taking things like the content of Stack Overflow, loading it on a drive, and sharing it with their teammates and collaborators. At that time, we needed things like Stack Overflow to understand how to build new products. And I really celebrated that. That was, to me, the same ethos I had in grade school, wanting to play a video game whose hardware or software I couldn't afford, and trying to find ways to develop it myself.
I think it is brilliant, and maybe something that we share from a human perspective as opposed to a nation-state perspective. Yeah, brilliant. Excellent. So I wanted to move on. The book is broken up into some really intriguing sections. I won't be able to talk about them all, but I want to talk about a couple, and one of the things I found really exciting is the section called "Human Made." I think it's very interesting that you juxtapose the way the United States, and perhaps the West more broadly, may perceive artificial intelligence versus a more Chinese perspective of it being human-made. In popular culture, I can see Sam Altman getting into the media and saying we're going to build general intelligence, like it is separate from us. But I perceive it as something that we've built. Bogna, perhaps you can talk a little bit about this section on human-made, and we can round-robin from there as well.
Bogna Konior: So as you know, the book is divided into different sections, and the contents are quite varied. We first wrote it not at all expecting the popularity of the book. So some contents are quite philosophical and maybe quite difficult to get into, while some chapters are more historical and contextual, or there's even fieldwork from various places in China. So there's something for everyone.
And the section you're describing opens with an essay by the anthropologist Shuang Frost that starts with a scene in 2019 at the World AI Conference, where Elon Musk of Tesla fame and Jack Ma of Alibaba were put together on a panel, which I think was a traumatizing experience for both of them. I don't think it went that well. They did not really speak a common language. And something that Shuang Frost analyzes in her chapter is that Elon Musk has this vision of artificial intelligence as a sort of alien, inhuman entity that poses existential risk to humanity.
It's really some sort of incomprehensible new agency or new agent, a different sort of mind, whereas Jack Ma is pretty chill about the whole thing and just thinks about how to implement it to optimize certain business strategies, and the conversation just does not flow very smoothly. And she uses that as a starting point to think about the different cultural histories that even the term "artificial intelligence" has in the Western imagination versus the Chinese term, which translates to "human-made wisdom ability." Mi can pick this up later to explain it further. But in Shuang's chapter, she's saying, you know, this type of contrast between artifice and nature, which in Western philosophy can already carry a moral judgment in it, is not necessarily innate to the Chinese term.
And she also describes how, from the 1970s, when research in AI is received in China, it is filtered, of course, through Marxist theories of labor. It's much more seen as an extension of the human labor capacity, a sort of augmentation of the human capacity for work, rather than something inhuman that overhauls the whole of society. I will also just note that this is one chapter, and we are careful in the book not to say, you know, this is the Chinese philosophy of artificial intelligence. We're trying to present a real multiplicity of perspectives and not to speak for China, or on behalf of China, or essentialize or solidify what AI is. But it's one line that we can explore to account for this conversation between Elon Musk and Jack Ma, for example.
Justin Beals: Vincent, something you'd like to add?
Vincent Garton: Yeah, I completely agree with what Bogna said in terms of the different reactions that Chinese and Americans have to the concept of artificial intelligence. And I think there's a broader point to be made, which we might want to dig into later, about the different genealogies of these terms and ideas in the West and in China. Of course, it's not the case that when a Chinese person talks about zhìnéng, they're referring to classical Chinese philosophy or whatever, but there is this background of different conceptual genealogies, different intellectual histories, and different metaphysics as well. If you go back, a lot of the ways in which Westerners think about these concepts are conditioned by assumptions inherited from, basically, theology.
So when you think about this idea of the inhuman AI, which is completely external to everything, that is essentially taking the place of God in Christian metaphysics: something totally transcendent and outside of space and time. Historically, China just does not have that concept. Classical Chinese philosophy is highly immanent, in the sense that there is no completely transcendent realm. Everything is in one realm of space and time. And even the gods in China have this kind of two-way relationship with humanity: humanity almost makes the gods, and the gods reflect back on humanity. So there is this different set of cultural assumptions, which I think is really interesting.
Justin Beals: Yeah, vibrant. And Mi, I'm going to own that I grew up in a very Judeo-Christian environment myself and spent a lot of time studying scripture as a kid. I'm sure there's a vibrant tapestry in the cultural upbringing across China that gives you a different perspective.
Mi You: Yeah, I mean, as Bogna said, I don't want to essentialize the Chinese view on AI too much, but it's true that people largely don't see it through the dichotomy of artificial and natural and so on. We don't have that. And in fact, in the very word itself, you have different elements somehow fused together. One way that I was thinking about this question of artificial intelligence, and this is based on some conversations with technologists that I've not managed to bring into my article, is that they do see artificial intelligence as a way to achieve better intelligence. And by that they draw on a certain Buddhist tradition where you have, I believe, seven levels of consciousness.
And, you know, as we are living in a three- or four-dimensional world, unable to see all these other dimensions, when you have the efficiency of artificial intelligence helping us perform our daily deeds, but also helping us find patterns in what is there, maybe beyond plain sight, we can actually finally dedicate our time and energy to these higher dimensions of things. So yeah, quite a few Chinese technologists are actually, maybe paradoxically, spiritual.
Justin Beals: I've certainly gotten moments of delight in working with technology from time to time. Bogna, is there something you might like to add?
Bogna Konior: Yeah, I wanted to point out that just in recent years in China, there's been a lot of change in the discourse around artificial intelligence when it comes to its intellectual reception. We have, for example, an edited collection by Bing Song called Intelligence and Wisdom that's about Chinese philosophy and AI. And there is this attempt, for example, to relate it to Confucian and Buddhist traditions. So something we do in the introduction is pose this question:
Is there really something like a visible difference between Chinese AI and American AI? And of course, being scholars, we answer that quite ambiguously. We want to point out that, of course, there's a different set of assumptions animating these conversations, and there are certain interesting differences, such as what you find when you poll Chinese people, or survey people generally, on their fear of AI.
Chinese people score much lower than the average American on fear of technology in general. So of course that has to be related in some way to the cultural narratives around these technologies in the United States. As Vincent noticed, the apocalyptic, end-of-days kind of narrative is going strong there, which is maybe absent from the philosophy of time that something like Buddhism presents, where there's just no apocalypse or higher being that's going to come and punish you. So again, I don't want to create a very simple explanation, but all these things fit together into the sort of exploratory framework that we're trying to present.
Justin Beals: Yeah, excellent, Bogna. And actually, I'd now like to dive into a couple of articles. Maybe I'll start with you, Vincent, with your article, and then Mi, I'd like to talk about yours, because I think they provide a nice counterbalance. So Vincent, you know, you and I have both probably built some models in our careers. And certainly I have loved watching users get delighted when they do things that they didn't imagine they would do.
And, but I'm really flummoxed oftentimes by the popular concept, especially I see it in the United States of the singularity or the general intelligence and your article, automaticity and the mystery of the state identifies the Western origins of these concepts. Could you help us describe how these concepts were developed early on and, and how we continue to utilize them in strange ways, I think.
Vincent Garton: Yes, I suppose this picks up on what I just said, in the sense that this concept of the singularity, and to some extent also the concept of artificial intelligence, is situated against this background of secularized Christian metaphysics. If you look at the singularity in particular, it supposes a kind of concept of time we'd call eschatological time, where there's a
sort of apocalypse, the end of the world or at least the end of human life, and there's a sense that all of history is leading to this single point, which we'd call teleological: the sense that there is a purpose for history, and the purpose is this singularity, which is the end of time as far as humans are concerned. I think this is very much a secularized religious concept, and the presuppositions behind it don't necessarily make sense against a Chinese background.
Maybe you can find parallels in things like Marxism, but they have very different implications. The concept of intelligence, I think, is interesting. The classical Chinese roots of the equivalent term, zhineng (智能), suppose a kind of ethical component which is not really there in the Western term. And it's dangerous to essentialize and say that Chinese people are thinking about it in terms of classical
Chinese philosophy again. But what I found fascinating is that if you look up AGI on, for example, Wikipedia, it will emphasize that AGI means an AI that has cognitive capabilities similar to or exceeding humans'. However, if you look it up on Chinese websites like Baidu, then you'll find that one of the components of AGI is that it behaves like a human, that it shares human ethical norms, which is not really there in the Western concept. And I think that's maybe something quite deep-rooted and deep-seated in the Chinese intellectual background: intelligence doesn't just suppose being able to do things, it also supposes some level of self-governance. One of the component parts of that Chinese word is zhi (智), which means wisdom. One of the best definitions of these two terms in classical Chinese philosophy comes from Xunzi, who says that you have knowledge, and when knowledge accords with reality, we call that wisdom. On the other hand, we have neng (能), which is capacity, the ability to do things. So it's a sort of combination of understanding and efficacy, governed by self-regulation and norms. It's a maybe more holistic idea of what intelligence means, and I think that might be partly why Chinese people obviously do talk about concepts like artificial general intelligence, translated from English, but might have a less horror-story understanding of what that term actually implies.
Justin Beals: Yeah, the other thing I loved about your article was the mystery of the state, which for me had a couple of different layers of meaning. Certainly, you know, we could look at something like a government and its constituent parts: how do we define what it does? It's very hard. You could think about it from a neural-net model: it's very hard to take any single decision point in the matrix of the algorithm and understand how it fits into the whole. And even from a physics perspective, it seems that as we drive towards more specificity in the nature of the universe, we find more variability in what it actually means. It's almost as if the deeper you get, the less understanding you can find, which is beautiful in a way, yeah.
Vincent Garton: I totally agree with that. And I think that ties into maybe a concept of simulation, which I think is very interesting to think about. I notice a lot of the people who talk about singularities are also obsessed with the idea that reality is a simulation. And what does that actually mean? It sort of maybe supposes that there's some kind of, again, transcendent realm, where you have the simulator who is simulating some kind of reality, a sort of Plato's-cave-style model where we're all just looking at illusions, and then outside somewhere there's the blinding light of reality. Which, again, is not something that's really there in the Chinese philosophical tradition.
When you think about simulation, or maybe a related concept of ritual, of doing things so as to create a sort of order, it's not just the simulator versus the simulated as two completely opposed things, where one of them sits in the realm of truth and the other is false and illusion; they're all part of a kind of holistic whole. So from that perspective, maybe the idea of creating these artificial intelligence models, which can simulate reality, and simulate reality in order to derive truths about it, is less disconcerting and less apocalyptic.
Justin Beals: Wonderful. Mi, I'd like to turn to your article now, if we could, and I apologize for both my Southern accent and my inability to pronounce some of these things, but the article was on Qian Xuesen and cosmotechnics, and this concept of the harmonization of natural science and social science. I found the article brilliant; it really opened my mind to what's possible. But maybe you can talk to us a little bit about your perception of the article.
Mi You: Yeah, Qian Xuesen, I know it's a difficult name to pronounce. Qian Xuesen is really, I think, the emblem of many of our interests in the book, from technology to cosmology to society and politics. My contribution is actually based on, or my article talks about, an artist's film that is loosely based on his life. So I'll speak about Qian Xuesen first. He was a rocket scientist. He worked in the US during the Second World War under von Kármán, as part of the US nuclear program. He was a naturalized US citizen until he was expelled, essentially, during the Red Scare in the '50s.
So he went back to China and helped essentially kickstart the entire field of rocket and missile sciences there. And in the '80s, as the country steered away from high socialism and embraced consumer industrialization rather than heavy industrialization, he shifted his research to somatic sciences, supernatural phenomena, and qigong. There's a lot of theorization around that. So he sees the human body, technology, and the environment around us in a very holistic way. And there you could even ask if he had been in some kind of telepathic correspondence with people like Bateson and the second generation of cybernetics.
So he was interested in all these, quote-unquote, occult sciences, but at the same time he was also designing a system called "open complex giant systems," in which you had human experts working together with databases and various early-generation AI systems and computers to perform extremely complex tasks. But the whole point is that where machine intelligence, not necessarily fails, but where machine intelligence could use a spark of human intuition and human imagination, that's where the humans should come in.
So for him, computation is never a machinic, or solely machinic, endeavor; the human is always there. And I think another way of reading it is that computation is not an individual endeavor but a collective one. So from there, the artist Shi Qing, whom I commissioned to work on an artist's film about Qian Xuesen, went on to speculate about a so-called Yangzi River computer that Qian Xuesen designed, which he didn't, but the film takes up many of Qian Xuesen's key ideas. That's what the contribution is about.
Justin Beals: Yeah. So next I wanted to chat a little bit about the state of AI, and I'd love to go round-robin with the team here. One of the things that I think I learned, especially from the article you wrote, Mi, and some of the discussion you had, Vincent, and broadly, Bogna, across all of the articles, is this synthesis of technology and society and how we work together. And of course, today a lot of the large language models that we work with are incredibly capable of navigating huge corpora of information, more than one human being can store in their head at a time. And certainly in our work, day in, day out, we're developing large databases of regulatory or compliance information, and figuring out how to utilize or synthesize that for particular outcomes.
The human is critical. They must sit in the decision matrix of what's going on. And what considerations do you think are important as we develop AI tools to manage, instruct, or even define community interactions? For example, the state defining the application of law. I can see a future where a large language model may be built to help us understand how the law works, more than a singular judge, for example. Bogna, maybe you want to lean in on that concept.
Bogna Konior: I guess one of the principles that I would advocate, although as a scholar I'm always more interested in analysis than in making prescriptions, but something that I believe in and am passionate about, and that's also the spirit of the book, is to think about a certain multiplicity of models and applications. And there is a definite danger of hegemony, right? Having just a handful of models dictating certain uses would be the worst outcome and the worst-case scenario.
So whether we are multiplying possibility by having a more open geopolitics and a diversity in this way, or in ways of engineering, for example, we have DeepSeek, which proved that you don't have to scale in order to increase capacity, which is an innovative technical solution, to keep fostering innovation and to keep fostering multiplicity is something that I would not be too afraid to advocate, because I cannot really predict bad consequences of that. When it comes to any other solutions, I guess something that we are interested in, in the book, is also thinking about how human technological actions don't always have outcomes that you can predict, right? So, to think about technology as something that is very much to be legislated and thought about in terms of governance, but also as something that has some sort of irreducible complexity to it, which is how the history of technology happens. To create structures where these surprises and eruptions of complexity can still be possible, to not be too over-regulated and not too bureaucratic, seems really crucial as well.
Justin Beals: Thank you, Bogna. Vincent, perhaps you have some perspectives on the future with these technologies.
Vincent Garton: Yeah, I think it's a very important question in terms of what they are socially. I think this is still something that we haven't really come to grips with: what are large language models as a social phenomenon, in the way that they've been deployed today? If you abstract away from the specific technology, away from specific models and specific deployments of models, large language models altogether, this entire technology, act as a sort of field, because they consume all this linguistic input, which ultimately originates from humans; even synthetic data is ultimately permuted out of human sources. And then they present it back to humans after they've been prompted. So if you abstract away from the individual technology, what you've got there is essentially a medium through which linguistic output from people gets communicated to other people.
The thing that's happening, though, is that this is undermining or even maybe demolishing the idea of individual agency, because, you see, a large language model might give you citations, but generally speaking, all of that data has been mixed up into this enormous tensor space. It no longer has these individual attributions; it's just become part of this shared knowledge of humanity, or of whatever the domain of the individual model is. And I think that has pretty enormous social implications if you take it far enough. It undermines the way that we have thought about liberal democracy as a set of institutions inherited from the Enlightenment, and obviously it poses separate challenges in a political system like China's. But that's something that I think we really need to grapple with, and haven't really yet, intellectually.
Justin Beals: Thanks, Vincent. And Mi, I'm wondering about your perspective, especially, you know, because I think when we think about China, we do think about scale. It is a very large country; a lot of people live there. Yeah.
Mi You: Yeah, I sit somewhere undecided between the two, or rather, it depends on where I am at the moment; it's either over-regulation or under-regulation, and sometimes it seems finding the balance is difficult. I do agree that we don't know what it will bring, and the challenge to democracy in general ten years down the road is a big question. But then, I guess, Justin, you're pointing to perhaps how we can cleanse our data to start with, right? I've seen proposals, not necessarily from technologists but more from philosophers of technology, who would say we need to cleanse our data. But that seems rather naive. And then there's the China way, which is really heavy-handed, but maybe a little bit of that could be useful. I do remember reading a paper about how DeepSeek uses a certain tolerance threshold, right? So a lot of the data it uses is kind of okay, you know, it passes a certain ethical scrutiny, China-style. But then it allows something like 5-10% of free data from everywhere outside the Great Firewall. So I don't know whether that makes sense or not.
I'm actually curious to hear your thoughts on that. But I guess maybe my bottom line is to go back to the question of machine decision not being final: how we can make sure that there are humans there, actually reasoning through certain things at the end of the day, when it comes to crucial decisions such as weapon deployment, but also in not-so-crucial things. You guys might remember, I think it happened about a year ago, when TikTok was about to be taken down, there was a huge exodus from TikTok to Xiaohongshu, RedNote, which allowed all these internet users in the US to see the Chinese internet world. That seemed to be an organic happening, but in the end it was not just the company's decision to open the floodgate; it was actually a collective decision from the company, but also from some people sitting in the government. So it's that kind of reasoning, right?
So sometimes, you know, it's actually good when there are humans there reasoning about things and making sure certain things happen, or don't happen, for that matter.
Justin Beals: Yeah, Mi, you are very kind to ask my opinion. I guess I think two things. One is that I'm constantly reminded of how much our brains want certainty and how little of it is available. And in all the work that I've done, we've been careful, A, to include a human in the loop where it is critical to human outcomes, and B, to leave some room for chance, because that feels like part of human cognition, a space in which randomness can be allowed to exist. And I think we've got to carry on with the philosophical concepts. What I find amazing about computer science and the internet is how human a thing it is. And I'd like to celebrate Bogna, Mi, and Vincent: your book expanded my horizons as a computer scientist, as someone who comes from a cultural tradition, and I was very grateful for it. So as we're wrapping up here, I just want to say thank you to each of you for the book. I hope that our listeners enjoy reading it as much as I have, and I appreciate your continuing work. Thank you for joining us today on SecureTalk.
Bogna Konior: Thank you so much for having us. I will also add, just to end, that there's so much more in the book than we were able to cover here, so many authors that we featured. So I hope the listeners can pick it up, and if you do, don't hesitate to contact us and let us know what you thought.
Justin Beals: All right. Thank you to our listeners today, and thank you to our guests.
About our guest
Bogna Konior: Bogna is an Assistant Professor of Media Theory at NYU Shanghai. Recently, she was a Research Fellow in the Antikythera Program on Speculative Computation at the Berggruen Institute and a mentor in the Synthetic Intelligence program at Medialab-Matadero Madrid. She is the author of The Dark Forest Theory of the Internet (Polity, 2025).
Mi You: Mi You is a curator and professor of Art and Economies at the University of Kassel / documenta Institut. She’s the author of the book Art in a Multipolar World (Hatje Cantz, 2024). She serves as chair of the committee on Media Arts and Technology for the transnational NGO Common Action Forum and is a Berggruen Institute Europe fellow.
Vincent Garton: Vincent is a senior software engineer in the energy sector and an independent researcher in technology and political thought. His forthcoming book examines the challenge of Chinese philosophy from a Western perspective.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.