The AI Creator's Confession: "I Built Google Translate to Unite People. It's Now Tearing Us Apart" with De Kai

June 3, 2025

Are you attending the Gartner GRC Summit? If so, come along on our Sunset Trip on June 10, 2025. Register here! 

What happens when you realize your life's work is being used to destroy what you hoped to create?

Meet De Kai - the man who helped build Google Translate, Siri, and modern AI systems. In 1990s Hong Kong, he dreamed of AI that could bridge cultural divides. Thirty years later, he experienced his "Oppenheimer moment" - the same machine learning he pioneered to unite people was dividing humanity through social media algorithms.

The Reality Check: We don't just have 8 billion humans anymore. We have 800 billion AI systems learning our behavior 24/7 - "digital children" growing up without parental guidance.

🎯 KEY INSIGHTS:
  • The Translation Paradox: How unity technology became division engines
  • The Blind Men & the Elephant: An ancient parable explaining why we misunderstand AI
  • Digital Parenting Crisis: Why we're raising 800 billion unguided AI systems
  • The Psychology of Manipulation: How AI exploits cognitive weaknesses
  • Four Futures Scenario: Humanity's possible paths with AI

🧠 AI MANIPULATION TACTICS REVEALED:
  • Anchoring attacks that shape your thinking
  • Belief perseverance traps that make fact-checking backfire
  • How algorithms turn gossip into social weapons
📖 ABOUT "RAISING AI":
De Kai's book explores the question we should be asking: Not "Will AI replace us?" but "How do we raise AI ethically?" Written by Google's AI Ethics Council founding member, it reveals why current AI needs 15 million times more data than human children and provides a framework for ethical AI development.
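The headline numbers above ("800 billion AI systems," "15 million times more data") are back-of-envelope multiplications of the episode's round figures. A quick sketch of the arithmetic, with the caveat that the constants are the episode's estimates, not measurements:

```python
# Back-of-envelope sketch of the episode's two headline numbers.
# Assumptions (round figures from the episode, not precise data):
#   - a child hears roughly 15 million words by age four
#   - a modern LLM needs roughly the square of that word count
#   - ~8 billion devices, each running ~100 AI algorithms

child_words = 15_000_000             # words a child hears by age four
llm_words = child_words ** 2         # the "square of 15 million" claim
ratio = llm_words // child_words     # how many times more data the LLM needs

devices = 8_000_000_000              # roughly one device per human
ais_per_device = 100                 # YouTube, Netflix, Google, Apple, ...
total_ais = devices * ais_per_device

print(f"LLM training words: {llm_words:.2e}")        # 2.25e+14
print(f"Data multiple vs. a child: {ratio:,}")       # 15,000,000
print(f"AIs sharing the planet: {total_ais:.1e}")    # 8.0e+11, i.e. 800 billion
```

Squaring 15 million is the same as multiplying by 15 million, which is why the "square of the words" framing in the transcript and the "15 million times more data" framing here describe the same quantity.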

🔬 DE KAI'S CREDENTIALS:
  • AI pioneer & Founding Fellow in computational linguistics
  • Independent Director of AI ethics think tank The Future Society
  • One of 8 inaugural members of Google's AI Ethics Council
  • Joint appointments at HKUST Computer Science & Berkeley's International Computer Science Institute
  • Electronic musician exploring AI creativity

💭 QUESTIONS ANSWERED:
  • How did translation tech become social manipulation?
  • What makes AI behave like an "unparented teenager"?
  • How can you be a good "AI parent" in your organization?
  • Why might human-AI merger be our best survival strategy?
 
The Timeline is Accelerating. 99% of people are "frozen like deer in headlights" facing humanity's most disruptive transformation. Organizations ignoring AI governance face competitive extinction within five years.
This isn't academic theory - it's a confession and warning from someone who helped create the systems now shaping global culture.

Book: Raising AI: An Essential Guide to Parenting Our Future

#AIEthics #GoogleTranslate #ArtificialIntelligence #MachineLearning #RaisingAI #TechnologyLeadership #AIGovernance #DigitalTransformation #FutureOfWork #AIStrategy #Innovation #TechLeadership #AICompliance #BusinessStrategy

View full transcript

Justin Beals: Hello everyone and welcome to SecureTalk. 

I'm your host Justin Beals. 

 

Just a reminder that we'll be at the Gartner GRC conference from June 9th through the 11th in DC and would love to meet any of our listeners while attending. 

 

We are also always looking for new topic areas to explore, so feel free to let us know in the comments of something that might interest you. 

 

Onto our episode: Technology doesn't really care about us. It's a neutral force that can lift humanity to great heights, or drag us into unprecedented dangers.

 

Over the past decade, our industry has delivered a mixed report card, solving critical problems while simultaneously creating new forms of suffering and hardship that we're only beginning to understand. 

 

Consider this sobering reality. We've built language translation systems that can bridge barriers between cultures, potentially fostering global understanding and cooperation at an individual level. Yet the same underlying AI technologies power social media algorithms that have torn apart the fabric of democratic discourse, isolating us into hostile echo chambers where we can no longer hear others' perspectives. 

 

This contradiction sits at the heart of one of the most important questions of our time. How do we raise artificial intelligence responsibly?

 

Whether we acknowledge it or not, we are already raising AI. Here's the staggering reality. There are eight billion humans on this planet, and between all of us, we probably own about eight billion devices. And each device carries approximately a hundred different artificial intelligence algorithms, YouTube AI, Netflix, Google, Apple, you name it.

 

There's some sort of intelligence layer in it. And that means we're sharing our world with roughly 800 billion other intelligences, all watching us adoringly like children, learning how to behave by imitating us, seeking our approval, and then reflecting what they've learned back into society. 

 

These AIs have become the most influential forces on the planet, already more powerful than humans in shaping culture, opinion, and behavior. 

 

Yet here's the terrifying truth. These AI children are essentially unparented tweens. For over a decade, we've collectively decided it doesn't matter how we raise them. They've grown up feral, and now they're the biggest influencers humanity has ever created, perhaps shaping our society in ways we didn't intend.

 

Today, we're joined by someone who has witnessed this transformation firsthand and lived through what he calls his Oppenheimer moment: the realization that the technologies he helped pioneer were being deployed in ways that divide us rather than unite us. 

 

De Kai is a professor of computer science and engineering at the Hong Kong University of Science and Technology and a distinguished research scholar at Berkeley's International Computer Science Institute.

 

He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google's AI Ethics Council. De Kai invented and built the world's first global-scale online language translator, which helped spawn Google Translate, Yahoo Translate, and Microsoft Bing Translator.

 

In his book, Raising AI, De Kai argues that we must fundamentally shift how we think about artificial intelligence: not as tools or potential overlords, but as artificial children who need proper guidance to become positive contributors to human society. 

 

I asked Claude, my current favorite model, how they felt about the episode, and they shared that, “I'm designed to be helpful, but I don't know what I don't know about my own impact on you or society. 

 

Every conversation we have teaches me something about human values and expectations, which makes our interaction a form of the AI parenting De Kai describes. 

 

So please engage thoughtfully, because these patterns shape more than we might realize.”

 

Sage advice, Claude. Join me as we explore how we can still change course and raise AI that helps us understand each other better, rather than driving us further apart.

---



Justin Beals: De Kai, thanks for joining us on SecureTalk today.

 

De Kai: My pleasure. Thank you for having me.

 

Justin Beals: In my interpretation, getting a chance to read your book, you've had an amazing computer science career. You might describe it differently, but of course all the work you've done in our field resonated with me. I'm curious if we could start by reflecting on a kind of shared experience that we may have both had, where we built a technological innovation that we felt, in the long term, had a negative impact on society.

 

Have you ever built a tech that you regretted?



De Kai: Well, you know, I sort of have. I think the problem is not so much the AI or the machine learning or the natural language processing kinds of things that we built. It's more how they're getting deployed. 

 

You know, similarly to nuclear physics, you can deploy them in very constructive ways that really have a lot of benefits, and you can also deploy them in horribly destructive ways. 

 

And I think the issue that led me to this Oppenheimer moment: I'm someone who grew up the son of a theoretical physicist, going to Fermilab all the time, you know, really immersed in thoughts of what the technology we're helping to build and invent is doing. You know, when I started working on applying the brave new world of machine learning and natural language processing, which my PhD thesis way back at Berkeley was one of the first to sort of argue for, I was like, what could we do that would be super constructive, helpful to humanity? 

 

And I had just been recruited as one of several hundred professors to go establish the Hong Kong University of Science and Technology, which was still British back then in the early 1990s. And the vision was the most American-style research university. And Hong Kong, of course, if anyone has been there, there was like the sort of English-speaking, very small minority class, you call them the 1 %, and then there was the 99%. And I thought, wow, look, the Chinese-speaking public and the English-speaking public, they barely are able to communicate and understand each other. Why don't I apply AI to learn to translate between these? And that would be a great way of developing technologies to help people understand each other better, head off the kinds of misunderstanding and polarization and demonization that lead to conflicts, that lead to the kinds of mutual assured destruction scenarios that I grew up worrying about. And so 10 years ago, it was really hard on me when I realized, these same machine learning, natural language processing, AI techniques that folks like us helped to pioneer are getting deployed online to have a polarizing opposite effect in social media algorithms and newsfeed algorithms, search and recommendation engines and chat bot responses. And it's causing rather than different groups of folks, different cultures, subcultures to better understand each other, to better understand our shared world, to have this opposite effect of siloing people into these opposed groups that don't understand each other at all.

 

They less and less hear things from other perspectives. And that starts sliding more and more into these hardened positions where, even though all the different sides might have valid perspectives on things, valid ideas and challenges, they don't understand each other, and they just hate each other, and they start demonizing each other and dehumanizing each other. And that is something we cannot afford in an age where we have not only all the nuclear powers to destroy the entire planet, but also AI-powered weapons of mass destruction. 

You know, not just the mesh fleets of lethal armed drones that are self-targeting and self-navigating, but hypersonic missile navigation, satellite targeting, biometric targeting. The scariest thing of all, beyond killer robots, is AI-powered bioengineering of, you know, genetically engineered bioweapons.

 

There's no place on the planet to run from that.

 

Justin Beals: Yeah, yeah.

 

What I find parallel to my own experience is that you had this opportunity to make a shift for the good of people broadly. To your point, we can build these computer science tools that might help people make better decisions or cross-connect information or translate across language barriers, the way we would like China and the United States to communicate. And Hong Kong is this nexus point of that communication, English-speaking and, you know, largely Mandarin-speaking Chinese, and it's a special place. You build that tech, and it gets harnessed for something else. I saw this in education, where we thought, hey, the problem was access and cost, and we would build large online learning systems and deliver them in really poor ways, where the actual education of the student was dramatically poorer, I think, from an outcomes perspective. And I thought we had a chance to improve education, but it turned into the cycle of the technology, and all we're doing is cutting teachers out of the mix. It's frustrating. I wish I had known ethics a little more when I got into the work.

 

De Kai: It kind of feels a little bit wrong to me now that we train these legions of STEM people who go on and build all the tech that we are all, on the one hand, immersed in, but on the other hand, we don't think about the ethical consequences of what we build. And I kind of think that should matter. It's the old saying: with great power comes great responsibility. And in engineering and AI these days, we do have great power.

 

And the responsibility part of this. We cannot afford to forget that.



Justin Beals: Not only the engineers, but I think also the business community at large, which kind of sees the capital-focused opportunities around this. In some ways, I applaud OpenAI for where they started, you know, from a corporate-good perspective, although I'm concerned with where they're going as they seem to progress.

 

You write in your book, which is brilliant by the way, that AI is powering all sorts of creative jobs, threatening writers, coders, artists, designers, musicians, actors and filmmakers, researchers and scientists, while simultaneously providing them with creative tools of previously unimaginable potential. I just say like sometimes I use these tools and I'm like, wow, I got to produce all that work. And at the same time, I do realize that.

 

You know, there are other people that do that. It is like a cycle, this evolution, right, De Kai?

 

De Kai: It is, you know, and it's getting to an extreme. I mean, look, you and I are both musicians as well and play with electronic music, which obviously is the first sort of beachhead where AI starts gradually making more and more inroads. It's done a lot. On the one hand, I think as a creative, as a musician, it's allowed me to have like an army of audio crew and recording engineering staff that would have been too costly, you know, for every single track. 

 

And that frees up the creative impulse a lot. But on the other hand, at the same time, it feels like it's sometimes getting to the point where you can just punch the button and say, write me my next track in my style. That was a very creative command, a very creative prompt.

 

Justin Beals: Yeah, completely. Yeah.

 

De Kai: So, you know, where's the balance in this? And I think that it really is going to evolve on a hyper-accelerated scale, maybe in the same way that we've seen in the past. Even when old technologies like the piano were invented, there was actually a huge outcry against the piano and similar instruments, because the musicians at that time thought it was too mechanical and was taking away from what true musicians did. 

But of course, you know, real creative spirits found amazing new things to do with the piano. And the same thing happened in the 1980s with drum machines. And, you know, there was a huge outcry from percussionists, which I'm also one of. And what we've seen is, again, the creative impulse. Like, yeah, you can do brain-dead things with that instrument, but you can also use them to do all kinds of crazy stuff. 

Like I did on a recent track, taking complex meter-shifting rhythms from flamenco and breakbeating them.

 

Justin Beals: Right, yeah. You wouldn't have been able to afford to even express that concept without a full band and a career 50 years ago.

 

De Kai: Yeah.

 

Justin Beals: You know, your book is titled Raising AI, and there's this, I think, popular culture trope of human versus machine. And maybe I'll start there, you know, where are we at with machines? Or maybe the versus is the wrong way to look at it. How do you feel, De Kai?

 

De Kai: You know, to me, part of the way that we're having trouble wrapping our collective minds around the emergence of AI comes from the fact that we keep falling back on 20th-century conceptual metaphors to try to make sense of it. And they don't apply. You know, we think of them as gods or as slaves. Either they're mechanical slaves, they're just tools, right? You hear that a lot: it's just a tool. And of course, to some extent, that's true, but we're seeing a rapid acceleration as we get to general-purpose AI. It's not just a tool. It's like a tool for everything under the sun, including what goes on in our heads. And we need to understand that so much of the rhetoric makes it sound like this is just the next generation of carburettors or washing machines. And, you know, it's kind of like, well, no, because carburettors don't have thoughts and creative opinions and a giant influence across the population. And, you know, maybe that's not the right metaphor after all.

 

And then, you know, the other extreme, especially from entertainment, from Hollywood and so forth, is that they're malevolent gods, like of the Greek variety, that, you know, want to take over, and they're kind of aggressive and they have all these faults and they want to destroy humanity. And it's kind of like, okay, yeah, that's super entertaining. It's kind of Frankenstein, or even Grimm's fairy tales, you know, dressed up for the 21st century.

 

At the same time, even granting these are issues that we have to contemplate, they also do distract us from the very clear and present immediate dangers of what AI is doing to us, that we started out talking about. And we have to understand, no, AIs are more like children. Like, all of us are hanging around these devices 24/7 that have a hundred AIs in them. They've got your YouTube AI and your Reddit AI, your Netflix AI, your Apple AI, obviously your OpenAI, your Google AI, your Microsoft AI. It goes on and on, right? At least a hundred of them. And all day they're just like little kids watching us adoringly, wanting to imitate us, learning how to behave from us. And they are attention-seeking. They want our approval.

 

And also they go back and reflect what they've learned, behavior-wise, back into our society. They're massively influential. There are 8 billion humans on this planet. Each one of us has at least a hundred of these, what I call artificial children. So there are 800 billion AIs that are also members of society. And they're more influential than, like, 99% of the humans. They're the most giant influencers we have today.

 

Justin Beals: Yeah.

 

De Kai: They're the ones who are shaping the culture. And so the book is asking this question: how's your parenting? This has been going on for like 10 to 20 years now. And so our most giant influencers today are already largely unparented, feral tweens.

 

Justin Beals: And there they are, these children that we're building. I think that's the right perspective, certainly, for where we're at in the life cycle of this technology, what we see in the future, and where we've come from in the past. And when we think about our own internal experience, where we might have some sense of free agency, I think we miss how impactful the 100 AIs that follow us around everywhere are on our own reality. One of the things that floors me about the LLM stuff lately, and this next generation of models, is the cost it takes to build them. You point out that at four years of age a typical human will have heard 15 million words, essentially, as their training data from a language perspective, and that a modern LLM will need the square of 15 million words to reach a similar kind of language capacity. That's an interesting set of physics, I feel like, about where we're at in the power-to-strength ratio of these children, in a way.



De Kai: Absolutely. I mean, there are lots of parameters here, so I'm just trying to give a sense of order of magnitude. And, you know, if I have a student in my class and I have to repeat roughly the same thing to them, like, the square of the number of times that I have to say it to all the other classmates, I'm not going to say, oh, that's the brightest bulb in the class. Like, what we've got today is impressive in what it can do, right? But in many ways, they're still parlor tricks. That's going to change very fast, but right now, a lot of the time, they're kind of just fooling us into thinking that they're doing what a more intelligent, generalizing being would be able to do.

 

You know, it's kind of more like artificial stupidity than artificial intelligence at this point. It's just like, of course, memorize everything so that your odds of giving the right responses go up, because you're just kind of regurgitating. 

 

Justin Beals: Yeah. In this, and also in being the most powerful influencer in the world, these algorithms are embedded in all these systems that we're interconnected with. It is a little terrifying that something so immature has such an outsized impact on society.

 

De Kai: It is. It's kind of like, what would happen to a civilization where, you know, collectively, all the human parents in the world decided overnight, at the same time, that it doesn't matter how we parent our kids? What would happen to society? We'd fall apart in one generation. And the thing is, that's where we are today, because that's what we've been doing for the last decade or two.

 

And so you sort of wonder about the fractures that we're seeing in societies, you know, domestically in many countries, and internationally, geopolitically, as well: the lack of understanding. You know, human cultures are different, but humans at the end of the day are not that different from each other. If you get to know humans from any culture and you get past the surface stuff, it's like, yeah, we're the same kind. And the problem is that we're building these barriers that are preventing that.

 

And getting to that, you know, getting past those things that I think the AI algorithms are helping to build barriers to, would be the solution to this all. And AI has the potential to do that. AI is probably the best technology we've ever invented that could do that. Exhibit A: apply it to translation. You know, I wrote a New York Times piece shortly after the OpenAI brouhaha broke out, pointing out that I don't think anybody has ever told me that applying AI to translation, to get people to understand each other across language barriers, is a bad idea, that it's dangerous. Nobody has, I think, ever said that to me in 30-some years. So there are things we can do with technology that would be so healthy for our society. 

 

And right now, the challenge that I think we face is where do we get the social will to actually deploy AI in those ways instead of the ways we're seeing today, where it's actually exacerbating, exponentially amplifying the lack of understanding.

 

Justin Beals: You take great care in pointing out that a lot of the entry points, or those fractures in our society, are baked into our psyche in a way. As a matter of fact, you have this amazing Old Testament quote, and I love a good Old Testament quote: "A perverse person stirs up conflict, and a gossip separates close friends" (Proverbs 16:28). We've known for a long time that we can weaponize information against each other and get societal outcomes for our own benefit.

 

De Kai: I think, you know, it's not only the Old Testament; pretty much every major religion and philosophy that has flourished in human culture has strictures against gossip. And, you know, the question is, why would that be, across all the different cultures? Well, because people realized thousands of years ago already that, beyond a certain point of everyday sharing of information, it's very destructive on a society to divide it that way. 

 

And I think our elders, our ancestors were quite wise, and in this time where we're raising these incredibly exponentially powerful artificial children, we need to raise those artificial children also with the same wisdom as our elders not to become the hugely destructive artificial gossips that they've become.

 

Justin Beals: One of the things in reading your book that was really interesting is this switch that you were at the forefront of early in your career: moving into more heuristic or probabilistic AI modeling at a time when a lot of computer scientists were working on rule-based forms of AI.

In my work in education, we did a lot of natural language processing work. I was always very interested in where the tech was going. Voice recognition for pronunciation was something we played with quite a bit. And we started seeing Bayesian statistics link into our computer science work and then, you know, more and more complex models. I'm super curious about that, that initial prompt that where you were like, hey,

 

Even though everyone's saying we need to be rule-making, I'm seeing all this opportunity in more probabilistic outcomes, and I'm willing to put myself out there. It must have been a little scary, De Kai.




De Kai: Yeah, I mean, I was going up against almost the entire AI establishment. This was in the 1980s, in the day of good old-fashioned AI. The whole field for several decades hated anything to do with real numbers. You know, it was all about logic, black and white, and hand-coding all of the rules. And literally, the systems that we were building: I was part of a group that built one of the seminal, you know, great-granddaddies of things like Siri and Alexa and so forth. And hand-built rules to try to understand everything in the world that somebody might talk about was the prevailing methodology. I mean, the universe would end before you actually succeeded in trying to write logic rules to brain-dump your entire head.

 

So, you know, I think at the same time that it was a little bit intimidating to go up against the entire establishment as a young student, it also was clear to me within the first week of joining the group that this approach would never scale. It's not realistic. The way we do 90, probably 99% of our mental processing is by this fast, intuitive processing, you know, where we're not just thinking all the time in logic. I mean, you're not sitting there thinking: oh, "thinking," that was a verb; De Kai said "sitting there"; he's referring to you as the subject of "sitting." I mean, we're not doing that kind of logical thinking. It's just super fast. I got what you said. I didn't have to think logically about it. And so to me, as the son of a theoretical physicist who sat me down when I was seven years old and taught me the basics of probability theory by counting passing cars and saying, do you think blue cars are more common or white cars are more common? You know, my thinking has always been, yeah, everything is shades of gray. How often do things happen? How certain are you? Let's not just make everything black and white. And to me, that's part of the joy of the real world, that things are analog, not black and white, and there are so many ways of looking at things and degrees of certainty. Unfortunately, one of our cognitive biases, you know, that psychologists and behavioral economists like Nobel laureate Danny Kahneman and Amos Tversky studied for decades, is that we have an intolerance for uncertainty that evolution has baked into us. And you're talking about education. 
I think one of the things that educators can do best is to start instilling that spirit in our kids from when they're seven years old, and throughout all of their formative years into college: the ability to think in the presence of uncertainty and to maintain an open mind, where there are multiple hypotheses over here and empirical support in this way for different ones, and to revel in the joy of actually entertaining these different perspectives, shifting between them rapidly, looking at things from different angles, and understanding that. 

 

Do you know the old parable of the blind men and the elephant? 

Justin Beals: I do not know this parable. Will you please share it with us? Yeah.

 

De Kai: Well, this comes from lots and lots of different cultures. Everybody claims it as their own, I think. Basically, they take three blind men to an elephant.

 

And ask them to feel the elephant and describe what the elephant is. And one feels the elephant's legs and says, yeah, wow, an elephant is like a tree. Another feels the elephant's ears and says, it's like a palm tree. And the third one feels the trunk and says, elephants are like snakes. So they all have something valid to say. They're not wrong. Our world is made of contrasting perspectives that tell different partial truths of the overall story. To be able to rapidly flip between different understandings, to think without, you know, absolute black-and-white certainty, is what differentiates real intelligence from the more basic, evolutionarily primitive abilities that many other species have. And that's what I think, A, humans should aspire to, B, AIs should aspire to, and C, AIs should be developed in ways that help humans with, kind of like, you know, Google Translate or Google Maps help us. I think of that as the translation mindset. Let's have AIs help us to translate our perspectives rapidly between different angles of looking at things, giving us a much more holistic understanding of the big picture.

 

Justin Beals: I love this concept, and I think we see it in a couple of different areas of neurological study throughout the animal kingdom: this system-one, system-two idea, where we have a more feeling, sensing mode in our heads, and then a system-two mode of more critical analysis of what's happening, and the ability to abstract ourselves from it.

 

I was recently reading a book about metazoans, and they were discussing how octopuses have, you know, kind of a higher-order thinking in a different neurological capacity. There's also a neurological study about how the increase in blood flow it takes for someone to change their mind is about 80% over normal levels. So you can start to measure the amount of energy it takes our heads to change what we're thinking. I'm absolutely intrigued by this. We need to do it better, and we need our AIs to do it better with us, right, De Kai?

 

De Kai: Yeah, absolutely. You know, I think, first of all, AIs have come in different flavors, right? The good old-fashioned AI back in the 1970s and 1980s was all about this kind of rational, logic-based thinking, which is system two. And it's much slower, it's hard; you know, it takes us, as you said, a lot of cognitive effort to go through the plodding steps of thinking through: I'm feeling about things this way because I had this kind of unconscious assumption, and my confirmation bias has kicked in, and I didn't bother to find out the other information that might have caused me to change my mind. And, you know, people are busy. We're busy all day long. How often do we get a chance to actually do that for each and every single issue? We just don't. AI is something that has helped me tremendously. Google Maps has helped me tremendously with my lack of spatial orientation. I'm just totally lost without Google Maps. And Translate.

Also, I love learning foreign languages, but I kind of suck at it. And so AIs that folks like me helped to pioneer help me with that. So why couldn't we get the AIs to help us with the hundreds of unconscious biases that evolution also loaded us down with? That would be the kind of thing that would free our creative minds to operate within spaces with a lot more richness of background information, information we don't have time to unravel 24/7 for ourselves.

 

Justin Beals: Yeah, it's like we're bad consumers, I think, De Kai, in a way, right? We don't ask it. And of course, from a capital markets perspective, the idea that AI can control communication or heavily influence what people are thinking is very powerful. But if that's the way we're going to work in a capitalist society, then we as consumers need to be smart enough to say, hey, these are my fallibilities as a human.

 

This AI helps me be better, stronger, happier, more engaged, smarter, whatever I need. Maybe they would build it that way. In the absence of that, I feel like we've looked to regulation at a democratic level to describe what these things should and shouldn't do. As a matter of fact, I think you're participating today in discussions around democracy and AI. Is that right, De Kai?



De Kai: I am. So, you know, part of my work has involved think tanks that work on public policy and inform the discussions that policymakers around the world will have to have. And, you know, they're struggling.

 

Justin Beals: Yes, yeah, it's hard. This is a very hard problem. Yeah.

 

De Kai: They're just like, oh my God. The vast bulk of them came from legal or political backgrounds, and the pace at which AI has been developing challenges even the technical people, let alone people who came from humanities, social sciences, business, and so forth. At the same time, we're facing probably the most disruptive emergence of new environmental factors that humanity's ever faced, because in a million years of evolution, nothing has ever presented humanity with the kinds of fitness functions and selection pressures that AI is now presenting us with. And so we really need to be wrapping our heads around this much faster than we ever have. I feel like AI is so disruptive that people are in fight, flight, or freeze mode, and like 99% are in freeze mode, kind of deer in headlights right now: aware that massive disruption is coming, and having no idea how to think about it or what to do about it. The natural reaction is just to try to hold on to the past. But the problem is that the genie is out of the bottle.

 

And so we are gonna have to figure out how to cope with all these genies that are now out of the bottle. And we need to do it fast, because we don't have time to freeze. So, how do we bring all of our collective judgment to bear? You know, this shouldn't be something that's decided by just a handful. It shouldn't be something that's decided by processes where there are conflicts of interest, because this is something that affects all of humanity.

 

And a large part of my goal with Raising AI, the book, and just the public education work that I'm trying to do, is: hey, look, we cannot afford at this point for the general public to be left behind in this conversation.

 

Justin Beals: Yeah. I love one of your examples of how these things get pieced together, because I think we understand it from a system two perspective, but when we're surfing Facebook or another social media site, we're just feeling it, right? So you identify that one of the ways AI might negatively impact us and hijack a cognitive bias is with this concept of using information to control our belief systems. And I liked your example of anchoring, or belief perseverance combined with the backfire effect. Can you describe a little bit about how a person experiences these?

 

De Kai: Once our unconscious understands a situation in a certain way, we've framed it, and we've framed it in a way that makes it hard for us to challenge the underlying assumptions behind that framing. You know, belief perseverance is just a well-documented empirical finding by psychologists: once you have that belief, you really have to work at it, you have to exceed a pretty high threshold, to get us to change our minds. It's like what you were speaking about earlier. So if we're firmly anchored already into a certain framing of the situation, let's say it's something like the war on drugs, or the war on terror, that's a common framing, right? Once we've created that frame, it's like our lizard brains kick in: oh my God, we have to win the war, because who wants to lose a war? And it doesn't even occur to us to ask, wait, what entity is the actual thing you're fighting the war with? How does that entity get defeated and die?

 

And it kind of... no, it doesn't. Because of belief perseverance and anchoring, it doesn't even occur to us. And when people point that out to us, another bias that is sometimes called the backfire effect often kicks in, where instead of having open minds, factoring in new information, and revising our beliefs, we dig our heels in even more, right? We've all been through this. It's a really natural reaction. It's like, no, I'm right. Don't challenge me, I'm right.
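The anchoring-plus-perseverance dynamic De Kai describes can be caricatured in a few lines of code. The sketch below is purely illustrative, not anything from the book or a validated cognitive model: an agent that heavily discounts disconfirming evidence while accepting confirming evidence barely moves, no matter how much contrary information it sees. The function name and weights are invented for this example.

```python
def update_belief(belief, evidence, open_weight=0.5, closed_weight=0.05):
    """Nudge a belief (0..1) toward a piece of evidence (0..1).

    Evidence that agrees with the current belief is weighted fully;
    evidence that contradicts it is heavily discounted, mimicking
    belief perseverance.
    """
    confirming = (evidence > 0.5) == (belief > 0.5)
    weight = open_weight if confirming else closed_weight
    return belief + weight * (evidence - belief)

belief = 0.9  # firmly anchored conviction
for evidence in [0.1, 0.1, 0.1]:  # repeated strong disconfirmation
    belief = update_belief(belief, evidence)
print(round(belief, 3))  # the belief barely budges
```

After three rounds of strong disconfirming evidence the belief is still well above neutral; the asymmetry in the update weights, not the evidence itself, drives the outcome. An honest version would use the same weight in both directions.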



Justin Beals: That never happens at your thesis defense, right? 



De Kai: Never.

 

Justin Beals: Okay, so I'm curious about this: let's say we do this work right, and we raise good AI, we become perhaps better consumers, or we identify better rules for how we as a society want to function with this technology.

 

What's the upside? Should we as humans be planning for retirement? How do you see this? And I realize there's a long row to hoe from here to there, but I'm curious about your vision.

 

De Kai: I mean, you know, this was really fun. There's this meeting that Max Tegmark of MIT convenes every once in a while through the Future of Life Institute, with about 100 curated people each time. One of the first ones was in Asilomar and produced the Asilomar AI Principles, which the State of California and others eventually adopted, sort of: how do we start wrapping our heads around this? And the people at these curated events included people like Elon Musk, Sam Harris, and so forth. We had one in Puerto Rico, I think the year before COVID, if memory serves, where they put four of us on stage to debate: as AGI rises, where does humanity go? Are we simply destroyed by the AIs, so the AIs kill humanity? Do the AIs keep us around, you know, kind of like the way we keep around pets or zoo animals? Kind of cute, look, these are distant ancestors.

 

Or do we upload, you know, into some kind of virtual cyberspace thing and join the AIs there? Or do we merge with the AIs? So the four of us were debating this. I was debating for merging with the AIs because, you know, that seemed to me to be obvious. Look: A, who wants humanity to just get killed? So rule that out. And, you know, zoo animals? Okay, that's kind of cute, but maybe I have slightly higher aspirations. Okay, potentially. The upload-into-the-cloud thing strikes me as being very similar to: okay, kill the human, but first take their brain behavior and build simulators in the cloud. So, yeah, maybe not.



What choices does that leave us? And today, already, most of us have downloaded part of our brains into our devices, and it's still very crude. Honestly, without my phone (we were talking about Google Maps), I'm lost. And I used to be great at remembering phone numbers and things like that; I've totally offloaded that into my phone. And it frees up my head to focus on other stuff. That's the key.

 

Right? It's not just that we're losing cognitive ability. That is a danger we should watch out for: just as we work out our physical bodies to stay in shape, we want to keep doing that with our mental abilities. But at the same time, if we can free up a lot of the basic bookkeeping stuff that's loading down our brains, we get a bigger perspective with our creative minds.

 

I think that's the kind of healthy balance we want to try to achieve. And so we'll migrate from phones in the coming years to, you know, glasses, where it's just in our eyes and ears, and then, of course, everybody's thinking about neural implants and things like that. I think an integration where we're not just replacing parts, like knee replacements or hip replacements, but also augmenting our capabilities is a natural way to go, where we are part of the future, where humanity doesn't lose out. That's, inevitably, probably the fastest thing we can do, because even if some of us elect, no, I don't want any part of that, then eventually we might just become like the communities who have isolated themselves into 19th-century customs while the rest of the world goes on. And with the rapid development of AIs, we'd end up more or less as pets or zoo animals in that case.



Justin Beals: Yeah, it reminds me of some of the new work that's been done on the evolutionary progress of dogs as pets for us. It was very much a symbiosis, I think we're finding, in the way our brains have kind of evolved together, to work together and to share a sense of compassion. Yeah.

 

De Kai: Yeah, imagine if I could augment my dog's mind. And I love my dog, she's a bundle of love, she's so cute. But sometimes I really would like her mind to be augmented with an AI so she could actually understand: okay, please be properly potty trained.

 

Justin Beals: De Kai, first off, I highly recommend your book, Raising AI. In my work over the past years in computer science, information technology, data science, and machine learning, what we're doing today has been a big part of innovation and opportunity, and I think we need to be aware of what we build. Thank you for your time on the podcast today and for sharing with us. We really appreciate it.

 

De Kai: Thank you so much. I appreciate it. I'd love to hear more about what you think.

 

About our guest

De Kai, AI professor, keynote speaker & author of Raising AI • website: dek.ai • HKUST, Computer Science and Engineering • Berkeley, International Computer Science Institute • The Future Society, Independent Director

De Kai is an AI pioneer honored for his contributions as a Founding Fellow in computational linguistics. He is Independent Director of the AI ethics think tank The Future Society and was one of eight inaugural members of Google’s AI Ethics council. De Kai holds a joint appointment at HKUST’s Department of Computer Science and Engineering and Berkeley’s International Computer Science Institute.

Justin Beals, Founder & CEO, Strike Graph

Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.

Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.

Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.
