Building a Thriving Future: AI Ethics & Security in Virtual Worlds | Dr. Paola Cecchi-Dimeglio
The mistakes we made building the internet don't have to be repeated in the metaverse—if we act now.
Join SecureTalk host Justin Beals for an essential conversation with Dr. Paola Cecchi-Dimeglio about building secure, ethical virtual worlds. Dr. Cecchi-Dimeglio brings 25 years of experience advising governments, Fortune 500 companies, and global institutions on AI ethics and technology governance.
Her new book "Building a Thriving Future: Metaverse and Multiverse" (MIT Press, 2025) provides frameworks for building virtual spaces that serve humanity rather than exploit it.
CORE THEMES:
• Security by design vs. security bolted on after problems emerge
• How biases get encoded into AI systems—and prevention strategies
• The critical role of "human in the loop" for AI oversight
• Why good regulation creates business stability
• Digital identity systems for global inclusion
• Authentication and verification in virtual spaces
• Cross-border legal frameworks for technology governance
REAL-WORLD IMPACT: Over 1 billion people globally lack legal identification—virtual worlds could solve this through blockchain-based digital identity, or create new exclusions if built poorly. The standards we set now for authentication, verification, and identity control will determine whether these spaces become tools for human flourishing or mechanisms for surveillance.
WHY THIS MATTERS NOW:
- Virtual worlds already exist—gaming platforms host billions of users
- AI is accelerating everything, including security vulnerabilities
- Deepfake technology is improving faster than detection methods
- The decisions made today will shape digital society for decades
SURPRISING INSIGHTS:
→ Children currently detect deepfakes better than adults (but not for long)
→ Major consulting firms have sold governments expensive reports full of AI errors
→ Voice recognition systems historically failed on non-Western accents due to training data bias
→ Email autocorrect defaults "Paola" to "Paolo" because datasets contained more men than women
ABOUT THE GUEST: Dr. Paola Cecchi-Dimeglio is a globally recognized expert in AI, big data, and behavioral science. She holds dual appointments at Harvard Law School and Kennedy School of Government, co-chairs the UN ITU Global Initiative on AI and Virtual Worlds, and has authored 70+ peer-reviewed publications. Her work advises the World Bank, European Commission, and Fortune 500 executives on ethical AI implementation.
THE OPTIMISTIC VISION: Virtual worlds can tap talent anywhere, breaking geographic barriers. They can connect separated families, provide legal identity to excluded populations, and create opportunities we can't yet imagine—but only if we build them with security, ethics, and human values as foundational requirements.
ABOUT SECURETALK: SecureTalk ranks in the top 2.5% of podcasts globally, making cybersecurity and compliance topics accessible to business leaders. Hosted by Justin Beals, CEO of Strike Graph and former network security engineer.
Perfect for: Security professionals, technology leaders, business executives, policy makers, anyone concerned about building ethical AI systems and secure virtual worlds.
📚 "Building a Thriving Future: Metaverse and Multiverse" by Dr. Paola Cecchi-Dimeglio (MIT Press, 2025)
#AIEthics #Cybersecurity #VirtualWorlds #TechnologyGovernance #MetaverseSecurity #DigitalEthics #AIRegulation #SecureByDesign
Full transcript
Justin Beals: Hello, everyone, and welcome to SecureTalk. I'm your host, Justin Beals.
In 1998, I was a network security engineer at Concert, helping British Telecom roll out their global data network. We were building the backbone of the internet, connecting network engineers in Tokyo with field engineers in Germany, creating the infrastructure that would carry humanity's digital future. It felt revolutionary, the sense that we were creating technology that would connect the world, break down barriers and democratize access to information.
25 years later, I look at what we built and I feel conflicted. Yes, we connected the world, but we also created surveillance capitalism, algorithmic manipulation and information ecosystems where truth itself has become negotiable. The internet didn't stay democratic. It became corporatized, concentrated and in many ways more controlling than the systems it replaced.
Now we're standing at another threshold, the metaverse, virtual worlds, AI driven environments. They're not science fiction anymore. They're being built right now by the same tech industry that gave us social media's attention economy and search engines optimized for advertising revenue rather than accuracy. The question isn't whether these virtual spaces will exist. They already do. Gaming platforms host billions of users. Virtual offices are standard.
Digital identities are proliferating across systems. The question is whether we'll repeat the mistakes we made with the internet or whether we'll learn from two decades of watching technology concentrate power instead of distributing it. Now here's what's different from 1998. We have the scars to prove what happens when we build without guardrails. We've seen how algorithmic bias gets encoded into systems that affect hiring, lending, and criminal justice.
We've watched misinformation destroy democratic discourse. We've experienced how platforms designed to connect us have isolated us into hostile echo chambers instead. And we know something else now that we didn't fully understand back then.
The people building these systems have enormous power over the reality we experience. When voice recognition doesn't understand your accent, you can't access basic services. When facial recognition is trained primarily on Western faces, entire populations become invisible. When AI models are trained on biased datasets, they replicate and amplify existing inequalities at scale. The person joining us today experienced this personally. She was trying to catch a train in San Francisco around the same time I was building network infrastructure. Her voice recognition system couldn't understand her accent.
And that single frustrating moment sent her on a 25-year journey to understand why our most powerful technologies keep leaving people behind. What she discovered goes far beyond technical limitations. The biases in our AI systems aren't accidents. They're the direct result of choices made about training data, labeling methodologies, and whose perspectives matter during development. When your email automatically corrects "Paola" to "Paolo," that's not random.
It's because the system was trained on datasets with more men than women. Now these aren't just small problems. Over one billion people globally lack legal identification. In many countries, not having an ID means you can't open a bank account, access healthcare, or participate in the formal economy. Virtual worlds could solve this through blockchain-based digital identity.
Or they could create new forms of exclusion if we build them the same way we built the internet. Think about avatars for a moment. In massive multiplayer games like World of Warcraft, we've been experimenting with digital identity for decades.
Players choose how to represent themselves, trying on different aspects of identity in spaces with different consequences than the physical world. That's not trivial. It's a laboratory for understanding how humans relate to digital representation. But avatars in the metaverse will carry different stakes than gaming characters. They'll represent us in job interviews, business meetings, legal proceedings. The standards we set now for authentication, verification, and identity control will determine whether these spaces become tools for human flourishing or new mechanisms for surveillance and manipulation.
Our guest has been thinking deeply about these challenges from multiple perspectives, as a lawyer concerned with justice and fairness, as a researcher at Harvard applying AI and big data to leadership and workforce transformation, and as someone who chairs a UN initiative on AI and virtual worlds.
She's not approaching this as an abstract academic exercise. She's helping governments, Fortune 500 companies, and global institutions design systems that work for everyone, not just the people who look and sound like the developers. One insight she shared with me is that children are better at spotting deepfakes than adults. They can identify unnatural elements in faces and movement that we've been trained to overlook.
But that advantage is temporary. As deepfake technology improves, that gap will close. We need verification systems, provenance tracking, and authentication tools built into virtual spaces from the beginning, not bolted on afterwards when the damage is already done.
The stakes are particularly high right now because AI is accelerating everything. The same large language model that can help democratize access to information also hallucinates with confidence, generating plausible-sounding falsehoods that even experts struggle to detect. We've seen major consulting firms sell governments expensive reports full of AI-generated errors.
We've watched these systems replicate in code exactly the same security vulnerabilities that human developers create, with 45% error rates that haven't improved despite billions in investment. Now this isn't just about better technology. It's about better governance, better ethics, better intention. When my guest says we need cross-border legal frameworks similar to GDPR, she's not asking for bureaucracy.
She's pointing out that stable regulatory environments make business less risky, not more. The worst thing that happens in business is massive instability, risks you can't identify or prepare for. Strong laws create the foundation that lets innovation flourish sustainably. I've made the transition from writing code to reading P&Ls.
And I've learned something my fellow tech leaders don't always want to hear. Regulation is not our enemy. It's the difference between building on sand and building on bedrock. Yes, compliance has costs, but chaos has greater costs to our businesses, our users, and our societies.
The optimistic vision here isn't naive. It's grounded in decades of experience of what works and what fails. Virtual worlds could tap talent anywhere, breaking down geographic and economic barriers. They could connect families separated by distance or circumstances. They could provide legal identity to the billion people currently excluded from formal systems. They could create professional opportunities we can't yet imagine, but only if we build them differently than we built the internet.
Only if we treat verification, privacy, and human control as foundational requirements rather than features to add later. Only if we recognize that these systems don't exist for their own sake. They exist to serve human needs, human values, human flourishing.
The conversation you're about to hear explores these tensions honestly. We discuss the failures we've experienced, the scars we carry from building the internet, and the specific design choices that could help us do better this time. We talk about why the human in the loop isn't optional, why autonomous AI without human oversight is neither possible nor desirable, and we examine what it means to be a thoughtful designer of technology, someone who anticipates consequences and builds safeguards proactively rather than reactively.
This matters for everyone, not just technologists. The virtual worlds being built right now will shape how your children work, how you access healthcare, and how societies function. Understanding what's possible, what's dangerous, and what's negotiable isn't optional anymore. It's civic literacy for the 21st century.
Dr. Paola Cecchi-Dimeglio is a globally recognized expert in AI, big data, behavioral science, and talent analytics. For the past two decades, she has advised Fortune 500 executives, global law firms, and public institutions on using data and behavioral insights to improve decision-making, build high-performing teams, and drive future-ready talent strategies.
At Harvard University, she holds dual appointments at the law school and the Kennedy School of Government, where she chairs the Executive Leadership Research Institute for Women and Minority Attorneys Initiative. Her research applies AI and big data to leadership, decision science, and workforce transformation. As the CEO of People Culture Data Consulting Group, she leads a global behavioral science and AI firm, and invented IDEA, a patented platform that advances fairness and transparency in performance and succession systems.
Dr. Cecchi-Dimeglio co-chairs the UN ITU Global Initiative on AI and Virtual Worlds and advises the World Bank, European Commission, and the US Equal Employment Opportunity Commission. She is the author of Diversity Dividend (MIT Press, 2023) and Building a Thriving Future: Metaverse and Multiverse (MIT Press, 2025). Her work has earned recognition from the US Department of Health and Human Services and a 2017 Azbee Award of Excellence. She has authored over 70 peer-reviewed publications and contributes regularly to Harvard Business Review, MIT Sloan Management Review, Forbes, and other outlets. Please join me today in welcoming Paola to our podcast.
---
Justin Beals: Paola, thank you for joining us today on SecureTalk.
Paola Cecchi-Dimeglio: Thank you for having me. Very happy to be joining you and your audience.
Justin Beals: You know, we had a chance to connect a little bit before we started recording, and you related a story that I think was very impactful about how you got interested in technology to begin with. Could you just tell us a little bit about what led you to want to fix things?
Paola Cecchi-Dimeglio: So, you know, by training, I'm a lawyer, and lawyers have a tendency to like justice and balance. One of the things that drew me to tech 25 years ago: I was living in the Bay Area, more specifically in the Sunset, and I was trying to get to Palo Alto. You have to imagine, in the early 2000s, I was trying to get to my school, trying to use the cell phone I had, getting into the Muni and asking: when is my next train? When can I catch the Baby Bullet? Unfortunately, with my accent, I lost that battle, and I'm still losing it to this day. The voice recognition could not pick up my accent.
At that time, I started realizing that there are powerful models in how we train technology, and gender, ethnicity, accent, whatever is in the background, all come in. And I said to myself, I need to get to the bottom of why my accent is not recognized. That steered me onto the path of really working in AI, big data, and talent, understanding what is behind it, and leadership, because I believe the three of them are connected and are the solution. I don't think, and that's the lawyer side of me, I don't think tech is bad. I think it's leadership failing somewhere along the way if tech is not providing the level of security, the level of integrity, that we are looking to have.
Justin Beals: Yeah, I have worked in software for a long time now. You experienced this kind of issue as a lawyer around 2000, a similar time for me; we were just starting to build websites. And you know, the thing about it is, prior to the internet, there was friction in the system of distribution.
But with the internet, there's almost zero friction, right? We can distribute software tools for a business to interact with their consumers almost immediately, almost globally. And that is a power I don't think we're used to having in some ways.
Paola Cecchi-Dimeglio: And you know, you're talking about the internet, so let me take a side road here, because I don't know if this happened to you, but I remember when I received my email, well, when people asked me to create an email, let's put it that way, and I was like, for what? And I remember my first email: I was trying to put in my whole last name and my first name, and what I got back was: too long, cannot be created.
And when you think about it, you're like, OK, that was an interesting twist in how, at that time, algorithms capped the number of characters allowed in an email address. We have evolved now; we can have very long emails. But I think we understand technology more and more when it comes to us with learning from the past and from experience. And I think, you know,
the book is about virtual worlds, metaverse and multiverse, and I think we can compare that to what the internet was to us in the late 90s and early 2000s.
Justin Beals: Yeah, it was a very exciting time for me personally as the internet was being born, and the idea of the metaverse and multiverse was strong in science fiction; I was certainly reading about it. And I think we envisioned this path from the internet, Web 1.0 as we might talk about it, to this kind of democratized future. I was a young punk rock kid back then, and I was like, oh, the idea that I can just build information, content, stories, whatever I wanted, art, and distribute it without any friction in the system felt to me like I was democratizing access to an audience that used to have to go through a gatekeeper. But the internet has been highly corporatized, and that ability to access it is limited to specific channels, almost through weight and gravity.
So let's talk about this invention of the metaverse. Does it go the same way? Is it something that we as humanity own, or will it just become another big tech product, do you think?
Paola Cecchi-Dimeglio: So, you know, I'm an optimist, and I'm also willing to believe that humans and humankind aim to do good, right? That's the premise I start from in everything that I'm doing. I don't think there are things out there to get us, and I think that's important to hold onto, because I do think that overall there are institutions that have been there for a very long time, and I'm thinking about the ITU, the UN International Telecommunication Union.
I'm also thinking about ISO and IEEE. All those institutions are there to make sure they are working with the private actors as well, the ones building the infrastructure, building the tech as such.
But they also bring in, I would say, scholars and grassroots communities to have input on outcomes, allowing us to have a level of standardization, a level of interoperability, in the systems coming into our lives. And so I think we are benefiting from the past, where we know where we created gaps that could have been avoided. Now we are, to some extent, at the beginning of it, the early years, and all those organizations, all those forces, are coming together to make sure the past gives us a better sense of control.
Justin Beals: Yeah, we do have infrastructure, and you're pointing this out from the perspective of governance: our governments and the laws we want to create as a collective, the ethics we want ourselves to behave with in developing these things. It's counterintuitive to me sometimes how tech and business leaders don't lean in to developing good governance. It bothers me that, I mean, I could name any one of the tech stars these days who seem to chafe against the idea that they should be hemmed in by the laws that we as a community need, instead of trying to help develop those laws. But it doesn't have to be a battle; it can be more of a construction. Do you agree?
Paola Cecchi-Dimeglio: I absolutely agree that it should not be a battle. I think it should be a conversation, because, you know, along the way you need to bring all the actors into a conversation, to a table. Sure, you are not going to see eye to eye on every topic, but as long as you have a conversation about the implications of any decision, and they are thought out at all levels, then you are able to create the safeguards that need to happen. That's what I call being a thoughtful designer of tech: being able to anticipate what may come, running all the scenarios. They may not happen, and that's fine, but at least you have gathered various and diverse points of view. You know what the risks are. You are able to create safeguards, if they need to be created, upfront and not as an afterthought.
And so you are able to guide people in using that technology in a way that empowers them and is not harmful to them. And I think that's key.
Justin Beals: And you have to have intention to do that, right? I mean, we can't just build all of our businesses or technology with agile software development, thinking two weeks ahead about the number of clicks I'm going to get. You have to think about the broader impact on your constituency to be successful at that.
Paola Cecchi-Dimeglio: And I think the broader impact is not just about today. It's not a short-term memory of what happened in a week, or a month, or even 12 months. Because when you look at how technology impacts our lives, we have, I think, to take the long term. What it is in five days, that's one perspective, but more: what is it in 50 years?
And how is that going to come about? Because the timeline of, I will say, technology implementation coming into our day-to-day does not happen overnight. It has been built up over the years, and what you are experiencing today is something that was built 20 years ago by the time it reaches us. But if we think, OK, how will this technology of today interact with the rest of what is being built,
then you are more likely, I will say, to take a long-term perspective on the generations also coming to work. And when I say that, you know, I'm always thinking about the intergenerational gap that we had early on with the phone and with the computer. We left people behind in the workforce,
people not aware of what a computer was, because they didn't grow up with one and because we didn't help educate them on how to use it and how powerful it was. And, you know, on the personal side, I was so grateful to have a grandmother who was tech savvy and always pushed me to understand mathematics and to understand tech. She just passed away, but at 100 years old she was still using a computer. She was still able to connect with me, and I was glad she was able to do that. But I also have other individuals around me whose parents and grandparents are not that well versed in technology. And so you create gaps.
Justin Beals: First of all, I'm sorry for your loss; I just want to hold emotional space for that. But also, one of the things that's become apparent to me lately, and certainly we can think about this in the context of the multiverse: if you don't know how to operate the internet very well, the information you acquire is limited to what is pushed to you, and that fundamentally changes the reality in which you operate. I say that as a fact, but do you agree? A lot of people don't like it when I say that.
Paola Cecchi-Dimeglio: You know, as a choice architect, I always think that what is in front of you is a choice made by somebody else. And so what is important is to understand what your choice is in the decision you will make based on the information you have. And that's where our power as human beings lies: in knowing the information in front of me.
How can I assess its accuracy? How can I make sure the information also presents counterfacts to the beliefs I have? I think all of that is something that needs to be taught and questioned. At school, when we are young kids, that's something we learn: question the world around you. Anyone around children aged five to seven, or even younger, is always surprised to realize that they keep asking questions. Nothing is a given truth; for them, everything is a discovery. And I think we need to keep that state of mind as adults: always question what is presented in front of us, so we can make a better assessment of the world around us, knowing that we are the ones with the power to decide what that information is.
Let me just pause here for a moment, because I think it's a critical point that you are addressing about where we are, and especially where the world is going. You know, we were talking about the early internet: the time when you had to search and look things up on AOL, when you were dialing in, is not something far away in my memory.
And also the number of searches, and how did you tell whether it was a real website or a fake website? It took us a while to be able to get into all of that. And we had at that moment the power of empowering human beings to say: here is a list of choices; you are going to have to decide which one is the most accurate for you and which can be left out. Now, with the use of the new GPTs, there is one piece of information that is pushed, and it's no secret that those GPTs sometimes provide information that is not accurate. That's something we call the ghost in the machine, at least in my world of data science: with the ghost in the machine, you don't know where it comes from.
Justin Beals: Yeah.
Paola Cecchi-Dimeglio: Well, the reality is, now you're going to have to question it, because what is always interesting is that if you compare the various GPTs, you are not going to get the same answer or the same ordering of answers. So it will depend on you as a human being to say: okay, what is the checklist I want to put in my head, so that every piece of information I get from those GPTs passes a ceiling of accuracy, a ceiling of believability? And what are the key elements that I always need to ask? I don't know about you, but I saw something come up recently in the newspaper: one of the largest consulting firms was able to sell a government an almost $500,000 report full of wrong information.
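One way to picture the checklist Paola describes: pose the same question to several models and treat disagreement as a signal to verify before trusting any single answer. A minimal sketch with stubbed responses (no real model APIs are called; all names here are illustrative):

```python
# Minimal sketch: compare answers from multiple assistants and flag
# disagreement as a cue to check a primary source.
from collections import Counter

def consistency_check(answers: list[str]) -> tuple[str, float]:
    """Return the majority answer and the share of sources that agree."""
    counts = Counter(a.strip().lower() for a in answers)
    top, votes = counts.most_common(1)[0]
    return top, votes / len(answers)

# Hypothetical responses from three different assistants to one question.
answers = ["Paris", "Paris", "Lyon"]
answer, agreement = consistency_check(answers)
if agreement < 1.0:
    print(f"Models disagree ({agreement:.0%} for '{answer}'); verify with a primary source.")
```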
Justin Beals: Yeah, I didn't see that news, but I have to question everything any of these large language models give me. You know, we're on the topic now of AI, and that of course would be a big construct in the concept of a multiverse or metaverse. But one of the things that I feel is a bit of a positive and a negative in dealing with the LLMs is that, on some level, they hallucinate, right? I'm unable to trust what's coming in. At the same time, I kind of like the subscription model compared to Google's advertising model, because Google's advertising model led to more corporate interests being ranked higher in the flow of information. And perhaps if the business relationship is more with me as an individual seeking information, that business, Anthropic or OpenAI, will build a better product to gain more subscriptions, because what we're looking for is validity and efficacy in the responses from the model. But the jury's still out, right? Who knows where we'll wind up? Probably they'll get a bigger check than I could ever give them from some big corporate entity, and we'll lose control of these models or what they recommend, yeah.
Paola Cecchi-Dimeglio: But I don't think there is one model that is better than the others, I will say, from the paid model, from where we used to be to where we are currently; and I don't know what the future holds. What I think is important is, whatever the model is, not forgetting how those large models have been constructed. And here I think something is important: from the dataset, where it's coming from, and what the implications are for everyone,
Justin Beals: Yeah.
Paola Cecchi-Dimeglio: ...to the unstructured way of data labeling. And that is also very important, because the perception of being able to give a fact and a counterfact, and to know what answer will come out of that decision tree of, I will say, the model, is important. And I think that's where we need to be aware of whatever answer we have.
From the past, I will say from a Google search, to a new GPT model, it is a choice of somebody else. And so it is your responsibility and your own accountability to question the choice being presented to you. I'm going back to that because we need to be able to feel empowered to do something for ourselves. But there are also good things about it.
Justin Beals: You and I have both built a lot of models in our careers and have, of course, uncomfortably had to look at the bias in those models, some of which we wanted and some of which we didn't. You point out that we need to understand the data that goes into the content. You write in the book some about fairness and understanding bias in a model's data, understanding the data it's trained with. What are other ways you'd like to test for that? Or do you think we should require testing for it?
Paola Cecchi-Dimeglio: So first, this is my next book; I'm giving hints in this book about what is coming next. No, I do think there are many things we can do related to biases in AI. You know, I remember, more than 10 years ago, I was in an elevator pitching
Justin Beals: Okay, wonderful. Yeah. We'll have you come back. I'm excited to read volume two, Paola. Yeah.
Paola Cecchi-Dimeglio: ...to Thomson Reuters a column on AI. And I remember the editor asking me: do you really think this is going to be something? And I'm like, you have to think of it as changing your life. But I also want to be able to address the biases coming into it. And I was very grateful, because Thomson Reuters gave me the leeway to explore, through columns, my opinion on AI and all the biases. And why am I saying that? This is not new. Bias in AI is something we have been well aware of since the beginning. Why? Because of the entire process: if you think about choice and behavioral choice, from the data you're collecting and how it is presented to you, to the way in which it is labeled. And I come back to this: if it's labeled by somebody in Western Europe or in Latin America, the meaning may not be the same culturally or contextually. And there is now more and more research showing that this also affects how we construct, afterward, the decisions and answers that come back to you.
And so it is very important upfront to always talk about the biases and the types of biases that show up. I will give you one very simple example: when you type my name in an email, it will automatically default to Paolo. Why? Because the dataset was trained on more men than women.
And those corrections have not been implemented yet. So I think there should be a constant loop on the biases. And that's a never-ending story. When people tell me, "I built the safeguards right up front and I'm done," I'm worried. I'm truly worried. Because there should be a constant loop running on a daily, even minute-by-minute, basis.
Where are the red flags that we see? How can we correct for it? What can we think about it?
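To make the mechanism behind the Paola-to-Paolo example concrete, here is a minimal sketch with an entirely hypothetical corpus: a naive corrector that simply picks the most frequent known candidate within one edit will "fix" the rarer name into the more common one, which is exactly how a dataset with more men than women produces this behavior.

```python
# Minimal sketch: a frequency-maximizing spell corrector inherits
# the imbalance of its training corpus. All data here is hypothetical.
from collections import Counter

def edits1(word: str) -> set[str]:
    """All strings one edit (delete, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word: str, freq: Counter) -> str:
    """Pick the most frequent known candidate within one edit of the input."""
    candidates = {w for w in edits1(word) | {word} if w in freq}
    return max(candidates, key=freq.__getitem__) if candidates else word

# A corpus containing more "paolo" than "paola": the imbalance Paola describes.
freq = Counter({"paolo": 900, "paola": 100})
print(correct("paola", freq))  # -> "paolo": the rarer (correct) name loses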
Justin Beals: Yeah, I mean, I would make a call to ethics for the data scientists who build these models: you can't build a model that you intend to release to the public without an effective testing plan for human bias and impact, before you ever design the data, before you ever train the model. These models are so powerful that I want us to think about how we're going to test them. In a prior life I built a company that would predict the likelihood that a job applicant would be a high performer. And we were surprised to find that the Equal Employment Opportunity Commission actually published guidelines on algorithmic recommendations, which I was very grateful for as a data scientist, because we designed a testing algorithm around our models before they were released and published a report for every model, so anyone could see whether biases might be present in it. I just felt good about the work, being able to do that.
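Justin's actual testing algorithm isn't spelled out here, but the classic check in this space is the "four-fifths rule" from the EEOC's Uniform Guidelines, which compares selection rates across groups. A minimal sketch with hypothetical numbers, not his system:

```python
# Minimal sketch of a pre-release disparate-impact check:
# flag any group whose selection rate falls below 80% of the
# highest group's rate (the EEOC four-fifths rule).

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return groups whose adverse-impact ratio is below 0.8."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < 0.8}

# Hypothetical model outputs on a holdout set, keyed by demographic group.
outcomes = {"group_a": (50, 100), "group_b": (30, 100), "group_c": (45, 100)}
print(four_fifths_check(outcomes))  # {'group_b': 0.6} -> below 0.8, investigate
```

A failing ratio doesn't prove unlawful bias on its own, but it is exactly the kind of red flag that belongs in a published per-model report.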
Paola Cecchi-Dimeglio: Yeah, I mean, the EEOC and the guidance you are touching upon were under the leadership of Commissioner Burrows. She has been a thoughtful lawyer, you know, making sure to provide guidance to public and private institutions regarding biases that can creep in. And I think that goes back to what we discussed earlier: the ability to have institutions help set up guidelines, so we know where the standards are and we are not creating and putting into the world algorithms that don't meet some level of standards.
Justin Beals: Well, we've talked a lot about the types of technologies that are on the internet today. Your book is in large part about future technologies and spaces that we will build, like the metaverse. One of the areas where we start to break into future technology, although it's happening already, is the idea of avatars. In the past, it's been a username, and there's been a tension between verified users and fairly anonymous, or very anonymous, usage. Tell us a little bit about how you think we should consider the concept of avatars, stereotypes, and misuse in future digital spaces.
Paola Cecchi-Dimeglio: So, two things. I want to jump back to what you said: the virtual worlds are already there. It's not the near future; it's already here. And that is something we may have difficulty understanding, but I promise you, I'm happy to share more examples to show it. The second thing is the avatar. Part of the group that I was...
Justin Beals: Yeah.
Paola Cecchi-Dimeglio: ...working with for the ITU was actually defining "avatar." And I have to say, on the personal side, that was probably the most challenging task I have ever had: putting into a few lines what an avatar is and building all the concepts behind it. I pulled my hair out more than once, going back and forth through many edits and rounds of feedback from various institutions, toward a definition that can evolve over time and be adapted across generations and what it means to them. Because if you ask people of our generation what an avatar is, and then ask a young kid, we don't have at all the same definition. So making sure that the notion of an avatar is inclusive and free of stereotype-based defaults, which is something extremely important to my heart, and making sure we can create various levels of authentication, so the user is the one deciding how they want to play, right? With their own perception of themselves in the virtual world.
I have to say that I think we are creating the right guidelines around it. Are we there? It is a work in progress. Because every time I think we have settled what an avatar is, something else comes up. And despite all the input we had into the definition, I realize we should have caught that one small thing. So again, a continuous loop on the avatar is essential. But most important is control: having the user be the one who decides.
Justin Beals: I you know as you talk about it I'm reminded of things that I just don't consider like my experience playing massive multiplayer games like World of Warcraft very strong avatar concept already very fleshed out in fantastical terms right we of course many of those creatures are not human but we we place our identity in them and it's a form of self-expression and
just as much as the fashion we might choose to wear for the day. Yeah. You know, one thing you mentioned in your book specifically is that over 1 billion people globally lack legal identification. But in talking with you about digital identification, I think you're pointing out a tiered system: how much can you identify of me? At one end, not very much, I'm operating in a very anonymous way; through to, I'm fully identified. That's new to humanity, I believe, in these digital spaces.
Paola Cecchi-Dimeglio: You know, I believe there are several assets in life, and one of them is you as a human. And so you need to know that that asset has a value somewhere along the way. But you should be in control. I'm a true believer in you being in control, being aware of all the data being cross-referenced at every level, from just your presence online, to what your face and your body look like, to how you interact with other people when you are at work and on the move. And that, I think, is extremely important to keep in the back of our minds, because there is a reality where a big chunk of the world does not have an ID. And wherever you go in the world, an ID is something that allows you to open a bank account, to have access to a community. So being aware of how, in the virtual world, indirectly and via the blockchain, we can create a digital ID for everyone, opening new doors for individuals to be registered, knowing also, you know, their history, is something very important. And I like to make an analogy, because we have a tendency as human beings to forget what happened 10 or even 20 years ago.
Think about your medical record: you change state, or you move to another country, and people may not know what is in your folder. Yet that may become extremely important as you age. The ability to trace back what happened to you when you were five years old may give a clue to a disease you have today. And if we had that level of digitization of our health, there is a body of research showing we could prevent at least half of these diseases.
And I'm thinking exactly the same way about ID. If we have a digital ID that we are empowered to use, we will have the capacity to create a wealth of opportunity for ourselves in ways we are not yet anticipating.
And so, you know, the ability to carry with oneself your privacy and that wealth of opportunity, for developing a career, for moving abroad, I think is key.
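One way to read the blockchain digital-ID idea Paola raises: only a hash of the credential is anchored publicly, so the holder keeps and controls the personal data while any verifier can still check integrity. A minimal sketch, with a plain list standing in for the chain and hypothetical records throughout (real systems add issuer signatures, DIDs, and revocation):

```python
# Minimal sketch: anchor a credential's hash, not the credential itself.
import hashlib, json

ledger: list[str] = []  # stand-in for an append-only public chain

def fingerprint(credential: dict) -> str:
    """Canonicalize the credential and hash it."""
    return hashlib.sha256(
        json.dumps(credential, sort_keys=True).encode()
    ).hexdigest()

def issue(credential: dict) -> str:
    """Issuer anchors the credential's digest on the public ledger."""
    digest = fingerprint(credential)
    ledger.append(digest)
    return digest

def verify(credential: dict) -> bool:
    """Holder presents the data; verifier recomputes and checks the anchor."""
    return fingerprint(credential) in ledger

cred = {"name": "Asha", "born": "2001-04-12", "issuer": "civil-registry"}
issue(cred)
print(verify(cred))                            # True: record intact
print(verify({**cred, "born": "1999-01-01"}))  # False: altered record fails
```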
Justin Beals: Yeah, it's intriguing, because this is becoming more and more apparent to me: there are digital spaces in which I like to operate fully recognized, like LinkedIn. And there are other digital spaces where we try on different characterizations and aspects of ourselves, and that's OK, and it's very anonymous, right? When we want to explore how we feel, or explore without quite as many consequences as in a more professional space. I'm comfortable with that.
Paola Cecchi-Dimeglio: You know, one element, and I agree with you: I think we should also have the right to be forgotten for what we have done with our ID online. I think that's important. But in terms of digital ID, I want to point out one state that is doing quite a great job. The Bermuda government was actually one of the first to create a virtual world for its citizens, in a diplomatic way. And I think with those types of things, when people join in, they ask all kinds of questions, but they are always in control. And it sets things in the right direction for where government can become useful in providing digital IDs to all its citizens.
Justin Beals: In your book, you write that governments have a vital role in keeping things orderly. We should see a little more of that, I think, some days. I say that phrase and it makes me giggle in the moment. But to finish my question, I apologize: what are the most urgent rules that you believe should be put in place now to make sure we stop fraud, exploitation, and other online risks from infecting the metaverse as we roll it out?
Paola Cecchi-Dimeglio: Yeah, I think cross-border legal frameworks are quintessential. And I see that in the learning experience of the GDPR, when the European Union went down the track of positioning and creating a framework. There is something people sometimes don't realize,
but as a lawyer, this is something from a 101 class: it's what we call the long-arm statute. Meaning that, with the GDPR, the European Union has been very smart, in that for every US company, or every company around the world, as soon as they interact with a European citizen, that citizen has rights. And so they set a tone in a certain direction. And that's great.
But in the US, we see a multiplicity of legislation at the state and federal levels, and in other countries as well, in Latin America, the Middle East, and Asia. And so I believe in bringing all governments together, and that's what the ITU is doing in a very good way: bringing dialogue across states over the virtual world and over the legal implications that this new technology will have, and already has, in our lives.
Justin Beals: I deeply agree with you, and I'm a fan of GDPR. I wish we had a similar nationwide law in the United States, where I live. And I want to complain a little bit, Paola, to my fellow business leaders. I've made something of a transition from someone who gets to write code to someone who has to look at a P&L on a regular basis. Certainly you can chafe as a business leader at the idea that regulation increases cost; that may be true. At the same time, it creates a more stable environment in which to build a business. The worst thing that has happened to me in business is large instabilities in the marketplace, risks that you can't identify or see coming at you. So I think, from a resilience perspective, we need effective laws from our governments to keep our population safe. Otherwise, business is much more risky, not the other way around.
Paola Cecchi-Dimeglio: And there are many reports, from public institutions as well as private ones, showing that when we have strong law, we are able to boost businesses even more. So I can agree with the necessity of having a legal framework behind it.
Justin Beals: Yeah, I probably run counter to a lot of my colleagues in the United States. I wish we had national health care, because I am perturbed every year when we have to look at health care plans. Why am I, a person running a business, making a decision about this? It is quite simply ridiculous. But of course, I've had the real treat of operating companies internationally, and it was so much easier to be like, hey, these are the taxes we pay to make sure everybody has good health care in the country, or as best as we can afford. Yeah.
Let's talk a little bit about the misinformation issue. I think misinformation has really wrecked large parts of the internet for us. There's a challenge that I have even personally, and I'm a very native digital citizen, in finding information that I can trust and rely on. How do you think we should develop the metaverse, or future digital spaces, to tackle misinformation?
Paola Cecchi-Dimeglio: So first, if I had a magic wand, I would be the first one going all over and waving it. So let me be clear: I don't have one. But what my experience and expertise have brought me is the ability to anticipate solutions that will be the right match, because deepfakes are increasing in our real world, and they will be in the virtual world too. An interesting fact to keep in the back of our minds: children are actually better at spotting deepfakes than we are as adults. They are able to grasp more of what is unnatural in a face than we can as adults. Yet I do think that gap is going to close very shortly, given the pace of technological advances we are seeing. So what can we do? That is a very important question, because I think we need to bring in AI verification tools with human oversight, as well as knowing the provenance of the digital content coming toward us, and be able to label authenticity against certain criteria and rank it. Having the power to turn it on and off based on context: watching a movie, is it fully accurate? Do you really care, as you are trying to entertain yourself?
But if you are in the virtual world, interviewing for a job, you want to know that the potential employer you're meeting in that virtual world is completely legitimate, and that trust can be built. So depending on the context, I think we should have a lever in our power, like a dashboard, where we decide how much veracity checking comes in.
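A minimal sketch of the provenance-labeling idea Paola describes: content ships with a signed manifest, and the viewer verifies it before trusting the source. Real schemes (for example C2PA) use public-key signatures; the HMAC here only keeps the sketch self-contained, and every name is hypothetical:

```python
# Minimal sketch: verify a content manifest before trusting the media.
import hashlib, hmac

ISSUER_KEY = b"demo-issuer-key"  # hypothetical; real issuers sign with a private key

def make_manifest(content: bytes) -> dict:
    """Issuer records the content hash and signs it."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(ISSUER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": tag}

def verify(content: bytes, manifest: dict) -> str:
    """Viewer recomputes the hash and checks the signature."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(ISSUER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    if digest == manifest["sha256"] and hmac.compare_digest(tag, manifest["signature"]):
        return "verified"    # provenance intact: allow in high-stakes contexts
    return "unverified"      # failed check: surface a warning, rank lower

clip = b"...video bytes..."
manifest = make_manifest(clip)
print(verify(clip, manifest))              # verified
print(verify(clip + b"tamper", manifest))  # unverified
```

The context-dependent "dashboard" then becomes a policy question: a job interview in a virtual world might require "verified," while entertainment might not.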
Justin Beals: Yeah, I'm kind of floored by this, really, the distraction from good information that we've had. But I like that we as technologists could build these into our systems: if we're developing a metaverse or virtual space, we can make decisions around where valid information is. The other thing that strikes me about your work is the human in the loop.
None of these models, none of these AIs or even virtual systems, are autonomous. They have no value if humans don't engage with them. But so often we hear people dismiss the human part, you know, leading tech figures today dismissing the human part of their own employees and the impact those employees have, saying, hey, we're just going to use models to do this. And I've built a bunch of these. It drives me crazy, because I think these are tools for people, not for corporate entities. Yeah.
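A minimal sketch of the human-in-the-loop pattern the two keep returning to: the model acts alone only above a confidence threshold, and everything else is routed to a person. The threshold and field names are illustrative, not any particular product's design:

```python
# Minimal sketch: a confidence-gated triage that keeps a human in the loop.
from dataclasses import dataclass

@dataclass
class Decision:
    item: str
    score: float  # model confidence in its recommendation
    actor: str    # "model" or "human"

def triage(item: str, score: float, threshold: float = 0.9) -> Decision:
    """Auto-handle only high-confidence cases; queue the rest for review."""
    if score >= threshold:
        return Decision(item, score, actor="model")
    return Decision(item, score, actor="human")  # human review required

for item, score in [("claim-1", 0.97), ("claim-2", 0.62)]:
    d = triage(item, score)
    print(f"{d.item}: handled by {d.actor} (confidence {d.score})")
```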
Paola Cecchi-Dimeglio: But the models are essential to solve problems, right? That's where technology should come in: solving problems that we have and helping us as human beings. And I compare technology with electricity. God bless her heart, my grandmother always reminded me of when we first got electricity.
Justin Beals: Yeah.
Paola Cecchi-Dimeglio: We were wondering why we had electricity. And nowadays we cannot even imagine life without electricity. And so, you know, having that perspective, I think, is important.
Justin Beals: So, Paola, we've made a lot of mistakes in our tech industry, but we've also had some positive impacts in a lot of ways. I certainly think, if you work in an innovation space, it is easy to have a lot of failure; that's very normal. And I'm very proud of the opportunities I've had, in the software I've built, to make people's lives better. That feels like a good impact on the world.
What impacts for good do you think these future virtual spaces can have for us? Paint us a vision of your most hopeful future with these tech systems.
Paola Cecchi-Dimeglio: First, I like that you say we failed, because that's a very important point for understanding an optimistic future of what the virtual world looks like. Amy Edmondson is the one who coined the term psychological safety, and Amy has been a big part of my personal and professional life. I think failure in innovation is key, because it builds the future. So what does the optimist in me see in the virtual world? The ability to tap talent absolutely everywhere. To know that there are no boundaries or geographies that limit us or limit oneself.
The ability, when I need to be close to my family who are very far away, to be with them via an avatar, experiencing the same situation, because the world decided to shut down and there was no airplane to take me from one place to another. If you had told me 20 years ago that all of that would happen, I would have said never; yet it happened. So I'm optimistic that this will create a better world for all of us, personally and professionally, where we can create connection where sometimes the physical part of us cannot be there.
Professionally, cutting down barriers. Legally speaking, providing the ability for everyone to have an ID. Financially, breaking down barriers that one may not think of, and providing a future where you may not know where the new opportunity will come from.
Justin Beals: I love this vision, some of which we're already attempting to build, you know: the opportunity from an education perspective, the ability to connect with people we might not have been able to reach. I love the job opportunity concept, the professional opportunity. Part of the greatest joy I have ever had in my professional career is getting to work with people in other countries, learning from them, and experiencing their culture.
And it's because we have these digital spaces to share information, the product of our work, that I had the ability to do that. And I would like to continue that. Yeah. Paola, thank you so much for your book, for giving me an opportunity to read it, and for joining us today on SecureTalk. It really was a treat to have you with us.
Paola Cecchi-Dimeglio: Thank you so much, Justin, for having me. I'm very grateful to you and your audience. And if I could, I would now interview you, because you gave me too many cues during our conversation that I wish I could pick up on.
Justin Beals: Please. Yeah. Good. Well, thank you everyone for joining us today. Thanks to Paola, and we'll talk to you again soon.
About our guest
Paola Cecchi-Dimeglio, Ph.D., J.D., LL.M., M.Sc. is a globally recognized expert in AI, Big Data, behavioral science, and talent analytics. For the last two decades, she has advised Fortune 500 executives, global law firms, and public institutions on using data (AI and Big Data) and behavioral insights to improve decision-making, build high-performing teams, and drive future-ready talent strategies.
At Harvard University, she holds dual appointments at the Law School and the Kennedy School of Government, where she chairs the ELRIWMA Initiative. Her research applies AI and Big Data to leadership, decision science, and workforce transformation.
As CEO of People Culture Data Consulting Group, she leads a global behavioral-science and AI firm and invented I.D.E.A., a patented platform that advances fairness and transparency in performance and succession systems.
Dr. Cecchi-Dimeglio co-chairs the UN ITU Global Initiative on AI and Virtual Worlds and advises the World Bank, European Commission, and U.S. EEOC.
She is the author of Diversity Dividend (MIT Press, 2023) and Building a Thriving Future: Metaverse and Multiverse (MIT Press, 2025). Her work has earned recognition from the U.S. Department of Health & Human Services and a 2017 Azbee Award of Excellence. She has authored over 70 peer-reviewed publications and contributes regularly to Harvard Business Review, MIT Sloan Management Review, Forbes, and other outlets.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.