From botnets to AI health: Michael Tiffany’s mission to empower personal data sovereignty
In 2000, the internet was expanding at an astronomical rate. Hundreds of millions of consumers were logging on via dial-up modems, and businesses were racing to maximize their footprint in the digital world. That May, a worm called "ILOVEYOU," written by a student in the Philippines named Onel de Guzman, spread via email attachment to over 50 million computers, one of the largest malware outbreaks the internet had ever seen. It was a preview of the dynamic that would soon power the world's great botnets: a single operator quietly commandeering machines at enormous scale. Michael Tiffany and his co-founders were keenly aware of these kinds of attacks and wanted to ensure the internet worked for businesses trying to connect with consumers. They founded White Ops, later renamed HUMAN Security, one of the first companies to combat botnet activity for major brands and today a powerful cybersecurity company serving major corporations.
In this episode of SecureTalk, host Justin Beals interviews Michael Tiffany, co-founder of HUMAN Security and current CEO of Fulcra Dynamics. Michael shares his early experiences with computers and his journey into cybersecurity, discussing the founding of HUMAN Security as a solution for botnets, ad fraud, and early "Know Your Customer" challenges. He also explains the mission of his current company, Fulcra: to empower individuals by unifying their personal data, promoting privacy and control in the age of AI. Listen as Michael reflects on the ethical responsibilities of technologists and shares his vision for a future where individuals have sovereignty over their data.
00:00 Introduction to SecureTalk
00:32 Host's Journey into Computer Science
01:39 Introducing Michael Tiffany
03:12 Michael Tiffany's Early Experiences
15:26 The Birth of Human Security
20:56 Challenges and Innovations in Cybersecurity
27:11 Fulcra Dynamics: Empowering Personal Data
37:22 Vision for the Future of AI and Data Sovereignty
43:59 Conclusion and Final Thoughts
Secure Talk Ep. 206 - Michael Tiffany
Justin Beals: Hello, everyone, and welcome back to SecureTalk. This is your host, Justin Beals. I think many of us got involved in the computer science industry due to curiosity. I wanted to know how video games worked, and to help me figure that out, obviously, what I needed to learn was how to program.
And so I spent a lot of time when I was a kid tinkering with computers, learning what types of commands did what, and essentially learning how to write code. It was always intriguing to me, especially as computers went from something that I worked on locally to something that was connected to a network, how computer systems were talking to each other and the types of secrets that they might reveal.
And I think that innate curiosity led me to some really interesting media outlets and computer hobbyist clubs, and to an interest in the way computing works that eventually led to a professional career. Like I said, a lot of us have had this experience, but today we're going to talk to somebody who had that experience and continued down the path to become a true visionary in the security space, especially cybersecurity.
I'm really happy to introduce today's guest, Michael Tiffany. Michael has worked in cybersecurity and AI personalization for most of his professional career. He is renowned for his work in botnet defense and in innovative, AI-driven personalization for a more deliberate digital life.
Michael describes himself as a lifelong hacker. He's been a member of the legendary hacker group Ninja Networks. He was also the founding CEO of HUMAN Security, the world's winningest botnet detection and defense company, and was part of the team that architected world-leading botnet mitigation technology. Today, he spearheads Fulcra Dynamics.
Fulcra is working to empower AI to work within a personal context, making it both safer and more intentional. Really driving to have AI empower people to live more deliberate lives. In our conversation today, we're going to dig deeply into how we got into computer science, what botnet detection looked like in the early days of developing human security, and also where we should be driving our expectations of AI systems, our data, and our power over those relationships.
Join me today in welcoming Michael Tiffany to the podcast.
---
Justin Beals: Michael Tiffany, thank you for joining us today on SecureTalk. We're really lucky to have you.
Michael Tiffany: Oh, it's a pleasure to be here.
Justin Beals: Excellent. Well, we always like to get to know our guest's early experiences in computers and computer science. And let's talk a little bit about how you got interested, especially in computers and security.
You know, what were your inspirations before becoming a professional in the field?
Michael Tiffany: Oh, well, I had a personal computer that my dad traded someone for in the very late eighties. It was what at the time was called a PC clone, an IBM PC clone based on, I still remember, a DTK BIOS.
It ran an Intel 8088 processor. I didn't know anybody else who had a computer. I was fascinated with this thing, and I wanted to figure out how it worked. So I got books out of the library. I found that those usually weren't helpful, and I didn't really have a lot of spending money. So I tried to earn money to buy computer books, and sometimes I would just go to the bookstore and read a book while there, you know, try to retain the knowledge and go back home.
And because I didn't know anybody, I didn't have access to any compilers or anything. So, as I got interested in programming, I used the BASIC that came with IBM PC DOS 3.1, which was the operating system on that machine. And then I ran into the limits of BASIC, and I really wanted something that could run from the command line, not just be interpreted.
My only option was running DEBUG.EXE and using the A command, which puts it into assembly mode. And then I learned x86 assembly from books that I managed to get my hands on. In fact, no doubt one of the reasons why I still remember the BIOS on that machine is that one of the very first things I figured out how to do was to hit a single interrupt that boosted the speed of my processor from 4.77 megahertz to a whopping eight, something that BIOS supported.
Justin Beals: I don't have a traditional computer science background, and you learned it early on, but I was always a little jealous of my friends who were like, oh yeah, one of my first projects in my computer science degree was to learn assembly and develop a basic operating system. I was like, oh, I missed out. I jumped in with BASIC and tried to make fireworks and games and stuff.
Michael Tiffany: Right. Right.
Justin Beals: Oh, that's epic. Yeah. So that's quite the early introduction.
Michael Tiffany: Yeah. So I guess, in some ways, I learned the hard way. On the other hand, what I was really doing was tinkering. So I figured out another little hack on that machine where I could ask for keyboard input.
I wrote a very simple program in assembly that asked for keyboard input and then set an error code on exit, which was a kind of environment variable that PC DOS understood and which could then be read from a batch file. So then I'd write these really complicated .BAT programs that were interactive because of my really simple keyboard-input assembly program. And from there, I would assemble these, you know, Frankensteinian little automations.
And then I tried to, you know, modify games. I guess I, like many people, ran into some of the bugs in a shareware program that I had somehow traded for, called castle.exe. And I tried to fix some of those bugs, but you're just working with a hex editor. If you start modifying things with a hex editor, it's really easy to find strings and modify those strings, but not modify their length. Modifying the length? Whoa. That's going to take some recompiling, which was far more than I could figure out on my own as a tinkerer.
Yeah. So I had really spiky knowledge. You know, I went inordinately deep in certain areas while having absolutely no education in theoretical, foundational computer science whatsoever. And that was great when I fell in with the teenage hacker crowd, right?
Just a bunch of other tinkerers.
Justin Beals: I did play with hex editing. I was not very good at it. It was more like, what happens when I change this value? Like, will it run anymore? But that was a little later on. We had to reverse engineer the old Macromedia Flash runtime engine in the browser for a project.
Yeah. And the only way I could figure out to do it was to get in via the hex editor and change all the permissioning on the screen system. But it was a lot of fun. It was very experimental.
Michael Tiffany: Yeah. In the absolute earliest days of White Ops, when we were chasing down some advanced botnets doing ad fraud, we actually made use of some old Flash tinkering knowledge that we had. This was back, you know, now well over a decade ago. Flash was a pretty ubiquitous extension in browsers, so in many ways we could count on it being installed.
And because of our not-exactly-normal education when it came to things like this, but rather our hacker tinkering, we figured out that there were some ways to call the Flash plug-in from within the JavaScript runtime environment of a browser and basically do timing attacks: we would try certain operations, which would take a certain amount of time if a browser was genuinely running in front of a real user, and a different amount of time if it was actually hidden or backgrounded in some way. So that random hacker tinkering ended up being something that we put to practical use to fight cybercrime a decade later.
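The Flash-era calls Michael mentions are long obsolete, and HUMAN's real detection code isn't public, but the general shape of a timing check like this is easy to sketch. Below is a minimal, purely illustrative TypeScript example using standard browser APIs (requestAnimationFrame and performance.now): visible tabs fire animation frames near the display's refresh rate, while hidden or headless pages throttle or suspend them, so abnormal frame cadence is one weak signal that no real user is watching.

```typescript
// Illustrative sketch, not HUMAN's actual technique: measure the cadence
// of requestAnimationFrame callbacks. Visible tabs fire them near the
// display refresh rate (~16.7 ms at 60 Hz); hidden or headless pages
// throttle them or stop firing them entirely.
function measureFrameIntervals(frames: number, timeoutMs: number): Promise<number[]> {
  return new Promise((resolve) => {
    const intervals: number[] = [];
    let last = performance.now();
    // If rAF is suspended (common in hidden tabs), resolve with whatever
    // we collected -- an empty result is itself a strong signal.
    const timer = setTimeout(() => resolve(intervals), timeoutMs);
    const tick = (now: number) => {
      intervals.push(now - last);
      last = now;
      if (intervals.length >= frames) {
        clearTimeout(timer);
        resolve(intervals);
      } else {
        requestAnimationFrame(tick);
      }
    };
    requestAnimationFrame(tick);
  });
}

async function looksBackgrounded(): Promise<boolean> {
  const intervals = await measureFrameIntervals(30, 2000);
  if (intervals.length < 30) return true; // throttled or suspended
  const sorted = [...intervals].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  return median > 100; // far slower than any plausible refresh rate
}
```

Real detection stacks layer dozens of signals like this one, precisely because any single check can be spoofed once an adversary knows to look for it.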
Justin Beals: How did you get interested in security? I mean, your career has been engaged in the security space. Was it a professional choice during your education, or how did you wind up in cybersecurity?
Michael Tiffany: I didn't think I was going to work in cybersecurity. When I was a teenager in Northern California, in the 916 area code, I met other local hackers thanks to 2600 magazine and, you know, its public meetings. Love that magazine.
Justin Beals: Yeah. I still have my old copies. Yeah.
Michael Tiffany: Oh man, my life would be radically different were it not for 2600 and for Byte magazine. Those two made me feel collectively not alone. Byte I discovered first, and this was at a time when sometimes they would publish source code that you would just type in yourself to get the software, which is hilarious.
I learned a lot from source code that was just printed in Byte that I would, you know, type in on my own. Then I remember the anxiety of deciding, I'm just going to go for it. I'm going to show up to one of these 2600 first Fridays and see if there's anyone else there that's like me. And there was.
So that was an absolutely wonderful experience of meeting other weirdos and feeling like I wasn't the only one. Socially, certainly really formative. But I didn't think that I would work in computer security. I was actually really skeptical of much of the computer security industry, because it seemed to me like it was just monetizing fear.
And I was like, I don't want to join that game. That does not seem cool. I really cared about connecting people to the internet. That was an early passion of mine, and another hacker, Matt Harrigan, recruited me to what was America's first, and a very ambitious, fiber-to-the-home company.
We lit up the first residential fiber network, also, interestingly, in Northern California, this time in Sacramento, actually the 916 area code, which is amazing, even though that company was headquartered in Colorado. And I was there working on that when 9/11 happened. And there was something about that as a turning point.
Actually, I would say that I thought it was weird that, as a causal chain, we suffered this devastating terrorist attack and then, as a nation, invaded Iraq. And I just thought, hmm, this is a tenuous causal chain. And if I disagree with this, perhaps because of some of the things that I had learned about practical security, maybe I should actually try to work on that.
In other words, there are a lot of people who seem to care a lot about connecting the world, but there are very few hackers in the world. So maybe this is a rare talent, and I actually needed to put my shoulder to the wheel and try to make people more secure in a way that does not involve invading other countries.
So that led to this sort of slow professionalization dedicated to finding highly leveraged ways of trying to make people more secure, which culminated in the founding of White Ops, where the big idea was: let's make people more secure by taking out the profit centers of cybercrime.
There's basically an infinite number of ways of breaking into a computer, and trying to make computers impossible to break into seems like a failed enterprise after decades of effort.
Justin Beals: Yeah,
Michael Tiffany: But maybe we can do something about the human incentives. And even though there's honestly an infinite variety of cybercrime, there are actually precious few crimes that scale in terms of profits. So the theory was: let's go after the profit centers, and that'll knock out some of the supporting supply chain of cybercrime.
Justin Beals: Certainly, the cultural melange around that time was really intriguing. I think you and I grew up in an environment where the information we received was controlled; there were lots of layers to how we got it. I too saw the internet as an opportunity to just connect with whomever I wanted.
And essentially for people to say whatever they were feeling. And the near-democratization of the distribution of information, I thought, was really exciting. That was very powerful.
Michael Tiffany: Yeah,
Justin Beals: Yeah. And in the event of 9/11, I agree with some of the questions that I had at the time about our reaction as a country. I was kind of like, does all this information actually tie together? Are we doing the right type of thing?
You had a big success in a company called Human Security, is that right? Still doing quite well, I think. Very busy. Could you tell us a little bit about the problem you were trying to solve at Human Security?
Michael Tiffany: Yeah. So it was really a big instantiation of that theoretical grounding I was just talking about: what can we do to take out the profit centers? And one of our foundational insights was that botnets are a force multiplier for a variety of crimes, really a variety of ways of victimizing people.
There are a lot of things that go from being a nuisance to being a major societal problem because of the amplification provided by a botnet. Take some sort of action you can profit from, I don't know, like scalping tickets. If you're doing that manually, it's just a nuisance. If you're doing it with a million computers under your control, then, you know, you've got a million angry Taylor Swift fans. So that force multiplication from controlling a botnet turns out to be a critical asset for our adversaries.
And so my co-founders and I wanted to do something about the profitability of botnets. We thought that this could be an area where a small gang of motivated hackers such as ourselves could have a big positive impact on the world if we could change some of the arms-race dynamics in botnet detection.
And then we had this further insight, which is what Dan and I got our first patents in: whatever the thing is that makes money, however our adversaries actually profit, what they have to do is remote-control the computers that are under their control. The thing that gets them paid is multifarious; who knows what it is.
But if you can't teleport in front of the computer that you've pwned, then you either need to automate it or remote-control it; there's not really another move. So if we did an extraordinary job at detecting automation and remote control, then those techniques, the IP that we developed to identify automation and remote control, would actually get under the problem. It would apply to a wide variety of different cybercrimes.
It would apply to a wide variety of different cybercrimes. So, when we started White Ops in the summer of 2012, it was based on some really cool breakthroughs that we had made in the detection of browser automation and remote control. And we were originally thinking about banking trojans and detecting those, right?It was like obvious problem at that time.
And Dan and I met a man who became our co-founder, Tamer, who was supporting media sites at the time. He had some media sites as customers who were getting weirdly good traffic, like weirdly generous traffic: super interested in ads, clicking on them all the time.
And it just seemed very suspicious. So we tried the prototype software that we had developed to identify Zeus, a banking trojan, and it identified that the visitors to this website who were clicking on all the ads really were automated. It was bots.
And that was fascinating. So we just kept following that. It was like a thread that we pulled on: why are the bots visiting this website? How are they making money just from clicking on ads? We had to unwind all of the ways in which they were actually cashing out. This turned out to be a multi-year investigation.
And that turned out to be a really big problem and a really big source of botnet profits. So White Ops started wiping out ad fraud botnets, our reputation grew, and we started protecting more and more of the internet. We had named it White Ops because our idea was to make the world more secure out in the open.
We wanted to do something the opposite of Black Ops. And then, as our ambitions grew and we even started thinking about making some acquisitions, it felt like the right time to reflect that with a name change. So we changed our name to Human. And now Human is protecting a massive proportion of internet traffic. We're protecting a bunch of advertising.
We're also protecting things like online ticket sales and making sure that only humans get those things. And that company continues to grow, and I think will be a big independent public security company one day.
Justin Beals: I remember there was an era where the bot farm was the nemesis, you know, usually it was because they were overloading servers.
And I think there were some ransom situations: we're going to keep hitting your servers with our bot farm unless you pay us money, and then we'll turn it off. But they've gotten much more sophisticated as they attempt to actually get into accounts and steal data.
That's so intriguing. And of course, at first they were pwned computers, but then we saw them in Internet of Things devices. We have so many chips in so many places that it can sprout up all over.
Michael Tiffany: That's right. Yeah, we shut down some pretty serious bot fraud operations that were based not on compromised computers but on compromised routers.
And then, in a shocking discovery, we found some really large networks of Android devices that came pre-pwned, if you will: people were buying brand-new devices that had the malware pre-installed. It was a scaled supply chain attack.
Justin Beals: That is wild. You know, one of the things that I thought about a little bit, and you can tell me if it's not a good comparison, but a lot of the work at Human Security seemed to be about identifying who was human and who's not, right?
You know, even today in our go-to-market work, we really want to talk to people who are actually human and want to buy what we sell, and we don't want to waste time with someone who's not. I read on the website a little bit about comparisons to CAPTCHA technology.
How do you feel about some of those lower-level identify-a-human tools that we embed in our applications?
Michael Tiffany: Well, I have some, I think, philosophical differences with many of my colleagues in computer security.
Justin Beals: We welcome philosophical differences, by the way, Michael.
Michael Tiffany: I hate CAPTCHAs, because CAPTCHAs bother good people and are often bypassed by our adversaries.
So it's like the worst kind of security technology that presents a barrier to ordinary people who just want to get something done.
Justin Beals: Yeah,
Michael Tiffany: Without presenting a sufficiently difficult barrier to our actual adversaries. And it's not just CAPTCHAs. Like, as an industry, I think that there are a lot of defensive security tools that add a great deal of drag to ordinary people's lives.
And hell, as long as I'm talking about philosophical differences: I sort of hate education being trotted out as a solution to security problems, because it's a way of blaming the victim. I am a happy warrior. I loved building White Ops and then Human as a gang of happy warriors.
We had a happy warrior culture, which is great. But I don't want everyone to have to live in a warrior culture. If we do our jobs well as defenders, then the people we are defending should be blissfully ignorant of all of the ways in which they could be victimized. I do not really want my mom to have a detailed mental model of all of the ways in which she can be victimized.
This is absurd. And I'm especially not going to install that mental model, not going to educate her, as some sort of first-order measure of how she should be defended. This is not right. It's, I think, actually kind of arrogant. Like I said, it's sort of a blaming-the-victim problem.
Like, oh, well, if you were just educated, like us hackers, then you would not have been victimized. Nonsense. We should be holding ourselves to an extraordinarily high standard where we are keeping people protected without them necessarily understanding all of the ways in which they could be victimized.
And the best security barriers, the best defensive measures present very little friction to ordinary users and very, very high friction to our adversaries. And the CAPTCHA is almost the exact opposite of that.
Justin Beals: Yeah, I couldn't agree more. You know, having built a lot of software over the years, having been a coder and racked networking hardware, it was on me.
I felt an ethical responsibility to do my best: certainly to build the best product, but that included, at times, very sensitive data. It would terrify me, it would keep me up at night, that we had this kind of data that we needed for our product. And how were we thinking about security for it?
And I find oftentimes that we're so fast to launch a new product, no matter what the data is or what we utilize it for, that we just don't carry ethics into our computer science work, you know, for what it means.
Michael Tiffany: I don't want to minimize the challenge here. It's very difficult to work out these kinds of, let's call them scalable solutions, not least because we're operating under arms-race conditions where we as defenders innovate and the attackers innovate against us. So the ground is moving beneath our feet, and what we might consider the best and even the most moral action is itself situated in a set of facts; when the facts change, then perhaps our conclusions should change. So I have a great deal of sympathy for practitioners getting in the arena and actually doing it. It's much easier to be a critic, right?
Justin Beals: Yes. I guess I hold everyone else to the standard that I hold our team to. Maybe that's not fair.
Well, speaking of present-day problems, you're currently leading Fulcra Dynamics. Can you describe for us a little bit about Fulcra and the product that you offer?
Michael Tiffany: Yeah, this is a product dream that I'm really excited about, where my colleagues and I are using our security expertise not to build a security product but to build a consumer product: one that helps people gather all of the data that their lives produce so that it can live in one home.
And then from that one home, you can see your data de-siloed, you can do holistic personal data science, and you can connect the next generation of AI tools to what is, in effect, your own personal data lakehouse. The foundational insight there is that we as individuals are living like the enterprises of several decades ago, from an IT perspective, because we are almost all multi-device and multi-vendor.
It's some mix of, to use the terms of art, cloud and on-prem. You know, I have a bunch of computers here at home. I have a bunch of cloud services. This is the ordinary enterprise architecture from decades ago. But in the enterprise, there's always some kind of platform you get to plug everything into, because silos suck.
It could be a CRM; many companies plug a bunch of things into Salesforce. It could be, and old might be a little derogatory here, a mainstream ERP like SAP or something along those lines. Or a system more like MuleSoft or Palantir. There's just a thing you get to plug all your other systems into to de-silo them, right?
People have no such thing. There is no single orchestration platform for all of our devices, so they mostly don't work well together. So we thought we would jump into that breach and build a common data model and an orchestration layer for all of the devices in your life, to make it all work together.
First, to just help you understand your own life better, making use of all these different tools. And then, hopefully, it's a sort of next-generation integration platform for useful AI agents and other really cool forms of personal software that can't exist until you unify all this stuff.
Justin Beals: Yeah. I was a little curious: this feels like a big gear shift, especially in the type of market that you approach. Has it been a big change going from Human Security, which sells to businesses, to a B2C application?
Michael Tiffany: It is simply a big jump. We talked about my attitude shift after 9/11, which is now, you know, what, 23 years ago.
Since then, I have been working on making people safer, and by extension more free, by trying to make them harder to victimize. And I'm very proud of that work. But another thing you can do is make people themselves more powerful. When people have better tools, when they literally have more power, they're harder to victimize.
And so what I'd like is for the back half of my life to be about empowering people, making them more formidable. I've tried taking power away from organized crime and other would-be victimizers. Now I want to empower everyone, which, in some ways, is the same problem; I'm just trying to push the lever in the other direction.
And I think there are some big levers to find here. The technology that surrounds us could be brought to bear to make us all more self-aware, to aid us all in living more consciously and deliberately and making better decisions, with more data turned into useful insight, if it could all just be brought together.
And I guess another connecting theme here is that I think good security is almost invisible, right? That's what I was talking about when I was saying, let's not bother the end user. So here, what we're trying to do is make something that is secure, that is almost a force for personal data sovereignty and privacy, at a level where the security is just a feature, the privacy is just a feature, and what people are really paying for is the life benefits.
So we'll see if we can pull it off. So far, so good. I think we've built something really beautiful that people are responding to. So that feels good.
Justin Beals: Yeah, I did get a chance to take a look at the app, and it is really gorgeous.
And it reminded me of my old William Gibson novels, you know, the local assistant that's kind of chatting with you in your phone. It felt very near-future. I've mostly worked on B2B software, and of course, when we take on security and data management, it's a lot about uptime, too, in the supply chain of what we need to deliver.
But a B2C beast is very different, right? You have privacy concerns; there are legal ramifications. Has that been part of your challenge as CEO at Fulcra?
Michael Tiffany: Oh, yeah. There are, frankly, many new skills to learn in trying to launch a B2C app, and something that really connects emotionally with people.
So that's a really ennobling goal, and when we see it happen, it's extraordinarily satisfying. But this is new for me and for my colleagues. If we weren't driven by this vision so passionately, I think it would have been hard to put ourselves out there. It's also uniquely vulnerable to put out a public product. Anyone can try it; that means anyone can have an opinion about it. Whereas selling an expert, expensive anti-botnet product, that's just more insular, right?
Justin Beals: Yeah. You know your audience.
Michael Tiffany: Exactly. Yes. So here we're putting ourselves out there. And that can feel very nerve-wracking, but commensurately, it feels extremely gratifying when people really connect with it.
So right now, I would say that the people who love Fulcra the most are the ones who, like us, are rocking multiple wearables. You know, I've got my Oura Ring. I've got my Apple Watch. I've got my smart bed. I've got my IoT devices. All of it's brought together in my Fulcra dashboard. If you're like that and you see this product, you say things like, oh my God, this is what I've always wanted.
Yeah, I've heard that feedback. Whereas if you're not, people just blink at me in incomprehension, right? Like, I don't get it. What does it do? So it's a real bimodal distribution of responses: you're either really into it or you're sort of mystified.
There is, interestingly, kind of a B2B aspect of this business, in that those platforms I was talking about before, whether it's Palantir or MuleSoft or SAP or Salesforce, all support an ecosystem of enterprise apps that could not exist if it weren't for the connectivity between subsystems that those platforms provide.
And similarly, I think there's a tremendous opportunity to write software that doesn't run on a single person's device; it runs on their life. That is, it's accessible from multiple devices, it takes input from multiple devices, and it's deeply incorporated into their lives. It's really hard for developers to write apps like that today, because there was no platform for it until Fulcra.
Now, with Fulcra, you can write something that sees me, that knows I'm giving this interview right now, knows what my heart rate is, knows what's up next, and can give me insightful advice. So it's actually quite possible that helping innovators build right on top of Fulcra turns into the number one job. Right now, in the early days, we very much need to find that audience of multiple-wearable users and show them what we've built.
But I wouldn't be surprised if, 10 years from now, empowering developers and creators is actually the bigger part of this business.
Justin Beals: How do you think about the agreement between your consumers, with the data they bring into Fulcra, and the innovators on the other side? Are you starting to design how you want that relationship to work and what promises you want to make?
Michael Tiffany: Yeah, and thank you for asking, because I think that helpful AI agents really will happen. Right now, even the best frontier models, which can be useful in many ways, are not so great that you want to delegate tasks to them, right? But the pace of innovation is great.
So I do think that we're going to get there. And so there's going to be a new form of software, which is your helpful agents. And in order for those agents to function, they're going to need to know things about you. And there, I think, there are two ways this can play out, and I think one of them is right and one of them is wrong.
So I'm really trying to make the right way come about. We can empower these AI agents by shipping all of our data to be co-located at the giant GPU clusters that also run the models. You can absolutely see hypothetical offers coming from companies of the future saying: we already host your email, we already know who your friends are, you should also use our AI because, you know, we've got your data.
And that scares me. The reason it scares me is that it's a one-way door, to use Jeff Bezos's framing. If the model works by having your data, then if you change your mind, you have to politely ask the model provider to please forget you and delete the data.
And if you think of that formally, as a computer security person, that is not a great control surface. What I would like to bring about instead is truly personal data sovereignty. You have the best copy of your data; each provider will still know their bit, but they'll be siloed.
You have the gold, de-siloed version that has your email from one provider, and your location history, and your calendars, and your streaming music preferences, and your purchases. The only one with the most amazing, most de-siloed version is you. And because you have the best picture, the gold copy of all your data, anyone who wants to personalize a service to you can either do some guesswork based on whatever other data they have, or ask you.
And I think that most people want to provide good services. So the simple fact that you will have the best data means many people will actually want to ask you, even pay you, for permission. And that will, in effect, put you in charge. That's what I mean by data sovereignty, right? It's not just about ownership.
It's about who has the power. And very importantly, if you want to give access to your data to a helpful AI agent, you can do that by managing access control. Like: oh, sure, you can talk to my data store; here's some authentication to make that happen, based on what I believe I want to give you access to. And then the agents will ask about what's next on your calendar, or your heart-rate time series, or whatever, in order to do their thing, and they'll do some calculation, and they can probably cache those answers.
But if you change your mind and decide, you know what, I do not trust this AI agent, you can cut off its access, and you are the one who has the data.
So that's a two-way door, and it's very important. My vision of the future is one where we individuals have sovereignty over our data. We can say: yes, AI, you may access my data; I will let you see my heart rate. And then, for any reason or no reason at all, you can turn the knob the other way and say, you know what, never mind. Or: I want to use this other AI agent instead.
That is why we started this company: because that power is hardly thought of today, and I think it's going to be one of the most important powers an individual can have 20 years from now. So I'm trying to make that happen.
Justin Beals: Yeah, you know, I think of this phrase: informed consent.
You know, when we deal with things like that, I need to be informed about what you're going to do with a representation of me, and I need to be able to consent to it. And to your point, that's not a one-time thing; I can turn it on or off. You continue to be informed about how things are operating, and you continue to consent.
Michael Tiffany: And importantly, you have an auditable log. If you control the data, and it's logically and probably physically separated from the AI models that you want to give it access to, then you can write your own log of everything that's ever been requested. And you can go back to it if something funny happened, right? Once your data is co-located, you're also trusting the co-location provider to do the logging, which is crazy.
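Fulcra's actual architecture isn't spelled out here, so purely as an illustration of the two-way door Michael describes, here is a minimal TypeScript sketch of a hypothetical personal data store. All the names are invented for this example: agents receive scoped, revocable grants; every read is checked against the grant and appended to an audit log the owner holds; and revoking a grant cuts off future access without having to ask anyone to delete anything.

```typescript
// Hypothetical sketch of owner-controlled, revocable access with an
// owner-held audit log. Illustrative only; not Fulcra's actual API.
import { randomUUID } from "crypto";

type Scope = "heart_rate" | "calendar" | "location";

interface Grant { agentId: string; scopes: Set<Scope>; revoked: boolean; }
interface AuditEntry { at: Date; agentId: string; scope: Scope; allowed: boolean; }

class PersonalDataStore {
  private grants = new Map<string, Grant>(); // token -> grant
  private audit: AuditEntry[] = [];          // append-only, owner-readable
  private data = new Map<Scope, unknown>();

  write(scope: Scope, value: unknown): void {
    this.data.set(scope, value);
  }

  // The owner issues a token limited to specific scopes.
  grant(agentId: string, scopes: Scope[]): string {
    const token = randomUUID();
    this.grants.set(token, { agentId, scopes: new Set(scopes), revoked: false });
    return token;
  }

  // The two-way door: the owner changes their mind, and no new data
  // ever leaves the store. Nothing needs to be "unlearned" remotely.
  revoke(token: string): void {
    const g = this.grants.get(token);
    if (g) g.revoked = true;
  }

  // Every read is checked and logged, whether or not it is allowed.
  read(token: string, scope: Scope): unknown {
    const g = this.grants.get(token);
    const allowed = g !== undefined && !g.revoked && g.scopes.has(scope);
    this.audit.push({ at: new Date(), agentId: g?.agentId ?? "unknown", scope, allowed });
    if (!allowed) throw new Error("access denied");
    return this.data.get(scope);
  }

  auditLog(): readonly AuditEntry[] {
    return this.audit;
  }
}

// Usage: grant a hypothetical agent heart-rate access, then revoke it.
const store = new PersonalDataStore();
store.write("heart_rate", 62);
const token = store.grant("calendar-coach", ["heart_rate"]);
store.read(token, "heart_rate"); // 62, recorded in the audit log
store.revoke(token);             // further reads throw, and are still logged
```

The point of the sketch is the shape of the control surface: the grant check, the revocation, and the log all live on the owner's side of the boundary, which is exactly what co-locating your data with the model provider gives up.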
Justin Beals: Having built enough models now in my computer science work, and tried to figure out what they're doing on the back side of building one, the idea that you can remove some aspect of the data from a model, it just doesn't work that way. It's so often a one-way street.
Michael Tiffany: That's right. Yes. After years of effort, we have some great ideas about differential privacy, but how do we apply those breakthroughs to things like transformer architectures? We don't know. I'm really optimistic about humanity, and I'm honestly really optimistic about AI, but there's a lot of work to be done to bridge these gaps, because it's just uncharted territory.
Justin Beals: Yeah, absolutely. Well, Michael, it has been an absolute pleasure to chat with you today. I loved sharing the old computer stories and my old hacker magazines; that was a lot of fun. I can hear the Violent Femmes playing in the background. And I love the work you did with Human Security. I certainly was aware of some of the ad-fraud work that was happening in the marketplace then.
And I love this go-forward take that you have with Fulcra, the consent and the sharing. It's very powerful. I know you're very active in the world of security at some of our conferences, so we're grateful that you've joined us today on SecureTalk to share all the hard work and expertise that you have.
Michael Tiffany: Well, this was a dense, meaty conversation. I absolutely loved it. Thanks for having me.
Justin Beals: I try. You know, I'm, I'm curious about these topics. So I feel like in being the host, I get to learn so much. Well, Michael, have a wonderful day. Thanks for joining us.
Michael Tiffany: Thank you.
About our guest
Michael is a visionary in cybersecurity and AI personalization. He is renowned for his pioneering work in botnet defense and innovative AI-driven personalization for a more deliberate digital life.
Michael is a lifelong hacker and a member of the legendary hacker group Ninja Networks. He was the founding CEO of HUMAN Security, the world's winningest botnet detection and defense company, and an architect of world-leading botnet mitigation technologies. He now spearheads Fulcra Dynamics, which empowers AI to work with personal context, making it both safer and more intentional, and empowers people to live more deliberate lives.
Justin Beals is a serial entrepreneur with expertise in AI, cybersecurity, and governance who is passionate about making arcane cybersecurity standards plain and simple to achieve. He founded Strike Graph in 2020 to eliminate confusion surrounding cybersecurity audit and certification processes by offering an innovative, right-sized solution at a fraction of the time and cost of traditional methods.
Now, as Strike Graph CEO, Justin drives strategic innovation within the company. Based in Seattle, he previously served as the CTO of NextStep and Koru, which won the 2018 Most Impactful Startup award from Wharton People Analytics.
Justin is a board member for the Ada Developers Academy, VALID8 Financial, and Edify Software Consulting. He is the creator of the patented Training, Tracking & Placement System and the author of “Aligning curriculum and evidencing learning effectiveness using semantic mapping of learning assets,” which was published in the International Journal of Emerging Technologies in Learning (iJet). Justin earned a BA from Fort Lewis College.