Cracking Down On Cyber Threats and the Internet | Ron Deibert | Wondros Podcast Ep 155
We published a case about a massive global cyber espionage campaign that was targeting all sorts of NGOs in many different sectors. And we couldn't figure it out at first. There were, like, lawyers being targeted, environmental NGOs, even short sellers, so there was a financial angle, and we knew this was all united in some manner. And it wasn't until we cracked the case that we realized the common link was a single firm in Delhi, India, called BellTroX, which is basically a hack-for-hire firm. I think their slogan was "you desire, we do".
And they basically would hack for any client. And that's why we were seeing this odd collection of victims and targets. Some attacks were presumably commissioned by, you know, energy companies, others by companies that were being affected or impacted by short sellers. So it just goes to show there are just so many ways to accomplish the same thing.
Hey Ron, how are you? Thank you for, thank you so much for coming on with us. My pleasure. Thanks for having me. So, you know, we always hear every day about social media this, social media that, but you know, cyber espionage, you know, what does, you know, what does it actually mean? Are we getting right into it? Is this? Yeah, yeah.
For people who are watching, listening, Ron does many things, but he's also director of CitizenLab, which he's gonna tell us all about, and is a professor. And I don't know, anything else? Hockey player. Happy to get right into it though and answer the question about espionage. All right, espionage. Yeah. So the way I would define cyber espionage is by looking at the practice of espionage historically.
It's something that governments and even private companies have done for eons. It's as old as humanity itself. And it's evolved over time in terms of how it's practiced, what type of technologies are involved and so on. And so what we have now is the latest iteration of it, employing digital technologies and taking advantage of the preexisting conditions of the digital ecosystem that surrounds us. It would be highly unlikely that those who undertake espionage wouldn't try to capitalize on the fact that we carry around with us devices that are invasive by design, insecure, poorly regulated, and easily exploited.
And so there's been a huge growth of activity in cyber espionage, as well as extraordinary capabilities, principally provided by commercial firms, although not entirely, that enable government clients to undertake espionage in our digital age. So when you look at something like GhostNet, like, how do you just sit down in a room and say, well, this is what we're gonna look for? Cause, like, I have a phone, but I don't have any concept of how I could look for anything to find somebody snooping about something. You know, how do you look at it? Well, GhostNet is like ancient history now, it feels like, to me. It's so long ago. That was an investigation we started in 2008, so 14 years ago, I guess. Things looked a lot different then. Most of the world was principally using desktop PCs or Macs, and there were not a lot of mobile phones in the world, especially in the developing world.
That's of course completely changed now. And at that time there was also not the threat intelligence industry that we see today. So these enormous companies like Trend Micro, Mandiant, FireEye, they really didn't exist back then. There were no reports of that kind.
That was the first public evidence-based report on a cyber espionage campaign, something that we see almost on a daily basis now. How that all got started was, I would say, not entirely by accident, but it wasn't like we had a plan for it. There was a focus of the CitizenLab around information control, studying how governments are mostly trying to shape what their citizens see online through internet censorship, and in the course of actually doing work on social movements in the former Soviet republics, I came to realize that some countries weren't approaching this problem in a passive manner; they were actually using offensive means to try to accomplish the same thing. So you hack somebody, gather their personal information, and blackmail and intimidate them. We had been kind of exploring the possibility of this and keeping our eyes open for it when we heard about Tibetans in Dharamshala, and in particular people who worked at the Office of His Holiness the Dalai Lama, being concerned that they might be under surveillance and wondering if we could try to investigate. Things were happening that seemed not coincidental.
So somebody would, you know, receive a phone call from a Chinese official saying, you shouldn't go to this meeting. And it's like, well, how did they know about that meeting? And one of the researchers in the CitizenLab, Nart Villeneuve, at the time was himself a self-taught hacker and really understood the kinds of threats and the ways systems can be exploited. And we just went up to Dharamshala and gathered a bunch of network traffic data and looked for suspicious things going on.
That's usually how it happens to this day, although we use a much broader array of tools and techniques than we did back in 2008. But can I ask a question? CitizenLab, I think you celebrated your 20th anniversary last year, correct? That's right. So what was the, you know, impetus, what made you feel like, okay, we better start doing something about this? Was there an inciting incident? You mean the motivation to create the CitizenLab? Yeah. Yeah. So I was interested in this area going back to my graduate school days. I went to graduate school with this topic in mind: information technology and international security. That was my kind of nexus. I briefly did some contract work for the Canadian foreign affairs ministry, looking at satellite reconnaissance for arms control verification, and came to learn through that contracting work about the worlds of signals intelligence and what governments were doing in the information realm that wasn't widely understood or appreciated.
And it dawned on me that there's a potential model to use those same techniques, methods, and tools that governments use to spy on each other, to actually spy on them. "Spy," I put in quotation marks here, scare quotes, cuz we're not really spying, it's more about holding them to account. There was nothing like Bellingcat then; people who were studying the internet weren't really aware that the internet itself, as a network of interconnected computers, speaks a language that can be interrogated in a way that provides firsthand primary evidence of anything really, but especially of maybe some kind of malfeasance that you want to expose and hold accountable.
So I had this vision of creating a laboratory where I'd bring together people with these different skill sets and turn the tables, watch the watchers, and lift the lid on the internet is the tagline. And I just managed to get funding for it, put together a proposal, described it almost exactly the way I'm describing it here in the proposal to the Ford Foundation. And there was a program officer there at the time named Anthony Romero, who is now the president of the ACLU.
Yeah, we know Anthony well. We know Anthony. So Anthony was the program officer and he took a chance on this unconventional project and it just grew from there. Wow. Now, are there other labs that observe the watchers or is this maybe, do you have other people, you know, is there a connection or a bunch of people or is this like, this is one of the only ones of these kinds of places? I would say the field has grown and matured quite considerably.
It depends on if you're speaking strictly about university labs or academic approaches in the way that we are doing what we are doing. Honestly, there really is nothing quite like what we do. There are university-based projects or centers that now come close to what we do or overlap in some ways with what we do.
Like there's an internet observatory at Stanford that looks at disinformation, very similar. People at the Berkman Center or the Oxford Internet Institute have appropriated some of the same methods that we use to do some of the things that we've done. But broadly speaking, outside of academia, this idea of interrogating the internet systematically and looking for open source information that you can use in a way to hold people to account is now, of course, widely practiced. So I see the CitizenLab as part of a growing family that includes investigative journalists like Bellingcat and academic organizations, as well as just, you know, individuals in some cases doing this type of sleuthing.
You know, technology's always changing. And there are a lot of companies, as you alluded to earlier, that are coming up that have new diabolical tools that are being used against dissidents around the world. Can you go to them and say, hey, give us some of this technology because we wanna observe what's going on? You know, because if you ask them, they would say the technology is agnostic. Is it easy to get them to give you some of the tools, or do they keep it a secret? How does it work? I think first, we have to characterize the private sector space around cyber, let's say, to use that language, I know it's a bit awkward.
But the way I think about it is you have a wide spectrum of companies providing different tools to different clients. And a lot of it is dual use. So there are technologies that enable telecommunications companies to filter out bad traffic, but that can also be used for mass surveillance. The most insidious or harmful types of technologies are the ones that we've been tracking around spyware, which is sold to governments to enable them to hack into devices. Ostensibly, there is a dual use there insofar as a legitimate need by law enforcement to undertake criminal investigations in this manner. Some, not everyone agrees with that, but I think there's certainly a justification or rationale for it. But then you have a lot of other companies that do things that bleed into the kind of surveillance capitalism industry. So location tracking,
data analytics, net flow data, there's just a lot of data flowing all over the place. And some of that data is quite useful to us at the CitizenLab. For ethical reasons, we obviously wouldn't go to a company like, say, NSO Group or Hacking Team and ask for something from them, because that would, I think, put us in a conflict of interest. But there are companies that are more neutral, I guess you might use that way of thinking about it, who are truly trying to defend networks from harmful attacks, and they share data with us. Either we purchase it from them,
if we have the funds, or maybe we get a discounted rate because we're a university group, and that's just, you know, part of our toolkit. We have a variety of data sources now that we didn't have 12 years ago. So in a lab, you know, of this kind, we're trying to defend open society values, right? Or democratic values. So what are the kinds of people, or I should say, what are the kinds of jobs that you need in the lab to do that? Cuz you need people to collect data, but you have to understand what you're defending, you know, like at Berkman, I kind of would understand, you know, the root, but what are the jobs you need to actually be active about this? That's a good question. So we are truly a mixed methods laboratory. You know, it started out with myself as a social scientist, with international security as my background. And I hired somebody who was actually not even an engineer so much as a self-taught hacker, and that's how it got started.
I then went out and hired a computer science student, Michelle Levesque. Those were my first two hires. In the early days, we really relied on technical methods. But over time, the lab has grown, so there are quite a few people who have expertise in different slices of computer science and engineering science that I could talk about if you want. But then we've really broadened out to include people who have legal background. We have probably about five lawyers on the CitizenLab team.
Some of them provide an informal type of legal counsel. We also have proper legal counsel provided by the University of Toronto, and we need it, by the way, because our adversaries tend to be very litigious. But you need people who are legally trained to really understand the law, the companies that you're tracking, and the regulations that impact this space.
And then we have people who are area experts. We work all over the world and it's very useful for us to have people who are either from or knowledgeable about certain regions, whether it's Southeast Asia, Middle East, the Gulf, Latin America, very important to have that kind of core strength. The team is about, I'd say 30 full-time researchers now. You know, it's still a professor's lab, I'm the principal investigator, it's all research grants. We're not like the Berkman Center. We're not an endowed center where people come and go and maybe they'll spend a summer, like it's very much organized, directed research.
So still a project in that sense. I have a question around this idea of rights. I mean, we're really talking about people's rights, but rights when they don't even know that they don't have rights, right? We don't know what our rights are as they pertain to something like the internet or personal information, and one of the first things that you referenced was the Tibetans being listened to. So how is that really changing and getting worse? And of course, right now, at the time of this particular conversation, we're having probably one of the most terrifying, dangerous situations any of us have ever encountered happening across the world. So what is the idea of letting individuals know that they have these rights, and what can be done about Pandora's box? It was opened many, many years ago. Yeah, that's a very good question and a nice way of phrasing the challenge. The mission, really, of the CitizenLab is to expose things that are going on beneath the surface that we feel the public needs to know about.
So we define our mission as doing this careful evidence-based sleuthing in the public interest. And obviously, you know, there's a very wide margin there, and people can debate about which rights matter or not, and in what context and so forth. But I think, you know, most thoughtful people who are inclined towards human rights have a ballpark of values that they all agree on.
And I think that's what motivates us. So for example, you know, there are certain categories of individuals and organizations that we wouldn't work with or study. Somebody whose device has been hacked, even though we might be interested in it, if they're working for a government security service or a big company, that wouldn't really fit our profile. And then, just broadly speaking, the question you raise about rights and the opaqueness of the communication technologies that we depend on, I think that's definitely true. As never before, it seems to me, we're surrounded by so much technology that we rely on that is intimately connected to almost every aspect of our lives. And yet we know so little about what goes on inside of it.
Part of that has to do with things like the, you know, broken, flawed consent process, terms of service agreements that no one reads, which actually allow the companies to appropriate us as their property, is the way I think about it: our data as their property, which we're basically signing away. And we do this not once, but routinely, dozens of times, with all of the applications that we use and depend on.
And then of course the companies hide behind intellectual property protections. I was using this analogy yesterday: I find it really strange that most of the big social media platforms do not allow the equivalent of, like, a health inspector to examine their algorithms, and their internal workings are astonishing to me. If you think about, like, a meat processing plant, you know, it would be shocking if inspectors weren't allowed in; the public wouldn't go for it. But here, you don't have the equivalent sort of thing around social media. Although I think it's changing now, if you look at the Digital Services Act in Europe that's being proposed. Part of that would involve precisely what I'm talking about here.
Some kind of independent mechanism to inspect what's going on inside the companies. I feel like you just painted the most frightening analogy though of, you know, all the people. I have not got an Alexa, okay? Cuz I don't wanna have one. But all the people that have something like Alexa, I don't know what the other equivalents are, and they don't know what's going on inside. So who is in your house with you? It is, you know, great for a thriller, like a terrifying story, because who's in there with you is just beyond. But we're already in a crisis. It's not like, right? It's not like, okay, we're getting ready for the crisis. It's already upon us.
So do you also focus on how we're gonna dig our way out of this, or just try to, you know, take little pieces, cause it's such a vast, as Jesse was saying, problem or problems? Yeah. Personally I'm motivated to not only do the uncovering and exposing, but also point towards what I think are viable paths of reform and ways to mitigate some of the harms that we're seeing, especially around, for example, the largely unregulated commercial surveillance industry, which is not only causing widespread harm, I truly believe it's the single most important threat to global civil society right now, and not quite properly recognized for what it is. But even with respect to something less immediately harmful, like social media and surveillance capitalism, I'm among those people who have been advocating for the application of restraints, legal restraints, regulatory restraints, on both the private sector and governments. If you look at policing capabilities, for example, and, you know, you mentioned Amazon or Alexa, you know, the Ring security system, how closely integrated it is now with local law enforcement in the United States.
And that's just one example of a suite of technologies that are now available to law enforcement that have really astronomically propelled forward their capabilities in a very short period of time. If you look at just facial recognition, you know, I'm old enough to have appreciated shows like Barney Miller or The Rockford Files, where a detective would have a photograph and they'd go into a bar: hey, do you recognize this person? Were they in here last night? And now they can, thanks to a company like Clearview AI, push a button and have a facial recognition match in an instant. That's just one example. So we live in an age where we have 21st-century policing yet governed by something like Victorian-era safeguards and restraints. And it's in the gap between the two where the prospect of the abuse of power lies. So we really need to double down on restraint, which is basically oversight mechanisms, and undersight of the sort that CitizenLab does.
You know, I, so I'm on TikTok. I have a lot of things of Ukraine coming through, you know, you watch 'em and it's, obviously I support Ukraine, you know, but I wonder why there's no Russian imagery coming through. I mean, it just seems like an obvious demarcation point where you go, well, wait a minute, why is this algorithm only returning the Ukraine things? And that gets into, well, how can you judge what these algorithms are gonna, you know, even if we put somebody in to guard them, you know, how do we know which way is up within the world of social media? Well, it's sometimes difficult. I don't know, I haven't studied it closely enough to know about the proportion of Ukraine-generated content on TikTok versus Russian.
I do know that there are some pretty extensive Russian disinformation and influence operations that are run through TikTok and Telegram that are targeting, latest I heard, Spanish-speaking populations around conspiracies having to do with biolabs and things like that. But stepping back away from that particular conflict and answering the general question that you ask, you know, we have to approach those platforms in a somewhat adversarial manner, the way that we examine applications. We did this just recently with the MY2022 Olympics app.
We did it with the platform that we're on right now, Zoom. We just kind of look at it - and when I say "we," it's really the talented engineers in the CitizenLab - they examine it from the perspective of risks to users and look for anything that may suggest some kind of hidden control or mechanism that's happening that really should be disclosed. So those are different things, right? With Zoom, what we discovered was, first of all, that the encryption that was used to secure the video conference that we're having was poorly engineered, such that we could actually intercept any Zoom call if we were controlling the network. And that was part of the disclosure that we made to Zoom. The second part was, I think, slightly more disturbing. In about ten of our sessions, we observed, in I think two or three instances, the keys that are used to do the encrypting coming from servers in mainland China. So, you know, you just kind of scrutinize, take apart, reverse engineer where you need to, experiment. Sometimes you come up with nothing. But usually the way that we kind of triage this is somebody will come to us. You know, a journalist came to us prior to the Olympics saying, hey, we're being required to install this app when we go to Beijing to cover the Olympics, might CitizenLab wanna take a look at this? And we did. And we just found it was very poorly engineered, nothing nefarious, just bad design. But poorly engineered can be nefarious, no? Yeah, it can.
But I think everyone in this particular case jumped to the inference that the problems that we identified were put there by the authorities, when in fact that's not the case, as far as we could tell. It was just poorly engineered. And that doesn't decrease the risk per se. It means that, you know, it's a different threat profile than it would be if the government had introduced some backdoor surveillance.
So North Korea. You know, the ones we always hear about are Russia, North Korea, and China. Are they actively working against their populations? I mean, not in the case of North Korea, because most people there don't have computers. But what's North Korea trying to do, you know, with us? I mean, they obviously had those successes with Sony, but it seemed like those were, you know, silly password things. Are they actually actively doing things against citizens? I think the way that I would answer this is comparatively speaking. It's very interesting to kind of look across different countries in terms of what we're talking about here, which is really cyber espionage. And if you just zero in on that, of course, we could talk about other things as well, like disinformation or the control of information domestically.
But if you're just looking at cyber espionage, each country has a slightly different flavor. So both Russia and China have pretty extensive, like, world-class capabilities, but they do things differently than, say, the United States does, or North Korea. In North Korea's case, it's been a while since I've looked at it closely, but my impression is most of the cyber espionage they undertake is financially driven. They want to extract funds through ransomware to enrich the regime. And a lot of their revenue comes from the black market.
And so that's kind of their motivation. I don't think there's a lot of political espionage in the case of North Korea, probably. I could be wrong about that. Whereas with Russia, with China, of course, it's a combination: economically driven, but also politically driven. You know, it's a gather-as-much-as-you-can mentality, and sort it all out later, much like the NSA does, as far as I understand it.
And then you have other countries that are following the model of, hey, my cousin can hack. This is what you see in, like, Iran and Syria. You know, it's kind of like the equivalent of a garage band for cyber espionage, but that can be very harmful. We published a case about a massive global cyber espionage campaign that was targeting all sorts of NGOs in many different sectors. And we couldn't figure it out at first. There were, like, lawyers being targeted, environmental NGOs, even short sellers, so there was a financial angle, and we knew this was all united in some manner.
And it wasn't until we cracked the case that we realized the common link was a single firm in Delhi, India, called BellTroX, which is basically a hack-for-hire firm. I think their slogan was "you desire, we do". And they basically would hack for any client. And that's why we were seeing this odd collection of victims and targets. Some attacks were presumably commissioned by, you know, energy companies, others by companies that were being affected or impacted by short sellers. So it just goes to show there are just so many ways to accomplish the same thing.
So on a case like that, do you put forth recommendations about what to do? Or, you know, as a lab, are you just saying, this is what's going on? What's at the end of your paper? Does it say what to do about it, I guess? Well, in this particular case, some of the victims asked us, well, what could be done about this? And we said, well, you could go to law enforcement. And so there is an active DOJ Southern District of New York investigation into BellTroX as we speak. And we, of course, are at arm's length from that. We will cooperate with it as long as it's a legitimate lawful investigation.
In terms of recommendations, it depends on the report. Sometimes there are specific recommendations. Like, we've done many reports on the hacking of news organizations or journalists, and for a while we felt it was important to recommend to news organizations that they prioritize digital security and that journalists be given the proper training and equipment and resources. So there are times when we make those sorts of recommendations. Typically we see ourselves as not an advocacy group, and we definitely don't do training. People are always asking us, can CitizenLab do this and that? Well, no, it's outside of our mission, and we're not out there campaigning. But as individuals, of course, you know, I'm a political scientist and I try to engage publicly on these matters.
And we advocate for solutions to various things, whether it's governing social media or regulating spyware. You know, I have a whole series of recommendations; I just published an article on, you know, what could be done about this horrible problem of out-of-control spyware, listing like four or five recommendations. Now, do you ever talk to the bad guys? Do they ever make contact? And do you ever have a discussion in some, you know, park in the middle of the night, the two of you exchanging views, that kind of thing? Yeah, for sure. I mean, it depends on who the bad guys are in which circumstance. So, you know, as a professional matter, whenever we're investigating or researching or publishing on a company, we typically will write them a letter in advance asking them questions.
And it's just a good, proper way to go about things, and also to protect yourself from some potential liability. Sometimes you can get some useful information back, and even if they don't answer, that can be informative. We've also made data access requests or access-to-information requests.
Those are useful. So you get things back, maybe redacted, that tell you a little bit about what a government may be thinking in a particular case. We've also received threats, so sometimes the bad people try to intimidate you. We were, of course, perhaps you know this, targeted by Black Cube, the same company that was hired by Harvey Weinstein, a convicted rapist, to go after those who made allegations against him. That company was pointed at us. And they tried to insert themselves clandestinely into the lab's operations by secretly recording a couple of our staff, but we actually managed to turn the tables and expose them. Now, how did you get tipped off, or did your own programs detect that? Like, how did you realize that was even going on? You know, as a matter of fact, Ronan Farrow has an HBO series called Catch and Kill, and the last episode is all about the targeting of the CitizenLab and tells the story.
So you might wanna look at that. I did, I saw that, it was incredible. Yeah. But, you know, my view of it, from my vantage point: first, there was a staff member who was approached about something unrelated to the CitizenLab, was invited to a very fancy lunch at an expensive downtown hotel in Toronto, and just came away from it feeling it wasn't right, there was something wrong about it, just the way the person was behaving.
Some of the questions initially weren't about the lab, but it ended up progressing towards asking about our funding and things like that. So he came back and reported it, and he had a business card. And we just started investigating: what is this person, who are they? We looked at their LinkedIn profile and immediately noticed that, geez, this doesn't look right. This looks like a fictitious identity. So we had our antennae up, and then, separately, Black Cube reached out to a second staff member, John Scott-Railton. And I've said this many times: if there's one person in the lab that you wouldn't want to target, it would be him, because he's like a dog with a bone when it comes to that sort of thing.
So he just simply turned the tables. He let this go on for quite some time, and we worked with Associated Press journalists to record conversations and get them in a Manhattan restaurant where we were recording them. So it was very much a kind of spy-vs.-spy thing. Wow. Amazing. And at a certain point, you know, the cameras come out and the guy had to scramble, knocking over chairs to get out of the restaurant. It was humorous, but also,
you know, scary. It's scary. You had a burning desire, there was something that you've referenced twice that you really wanna talk about right now. What, you know,
I think it's like commercial surveillance of companies. What is the one topic that you would like to just kind of get a few minutes on? Well, the CitizenLab does research in many different areas, but one that I think we're most well known for probably, and we're very active in, is tracking the mercenary spyware market. So there are companies that sell to government security agencies the ability to hack into devices. This is a very lucrative industry. Companies are valued well over a billion dollars, they're owned by private equity firms.
It's a space that's almost entirely unregulated. So you have companies selling to government clients that routinely use those technologies not in the way they're marketed. They're marketed under the auspices of helping law enforcement investigate pedophiles or terrorists, when in fact, as we are showing - this is the proof of our research over the last six years in this space, or longer actually, ten years - you sell that very powerful technology to a government like El Salvador or Saudi Arabia, and they're gonna use it to spy on journalists, dissidents, human rights activists, lawyers, whatever presents a risk to the regime.
And I do believe that's the vast majority of this marketplace right now, because the world is full of crappy autocrats and despots, right? Unfortunately. And it's really having an insidious effect on civil society. So you both, no doubt will recall the, it was almost a truism a decade ago, people would say the internet is empowering global civil society. People, there's a new type of people power. Well, what I'm seeing is the opposite. That thanks to these technologies, governments are able to intimidate people through surveillance into self censorship and there's a real climate of fear. People are being murdered on the basis of data that's being collected. You know, we investigated everyone around Jamal Khashoggi, just to give one example: his fiancée, his close confidant here in Canada, Omar Abdulaziz, all had their phones hacked. No doubt,
the information gathered from that was instrumental in the decision to execute him. And there are, of course, many other examples we'd never know about: people disappearing. Rwanda is a client of NSO Group. Paul Kagame is known for sending out death squads. So, you know, this is a crisis as far as I'm concerned. And we could publish probably a new report on this topic every week, because there's so many victims out there in the world right now.
Obviously CitizenLab is unambiguously for civil society, for open society. It's, let's say they call that a white hat. Right? You know, the black hats are people that would be considered completely, you know, autocratic, you know, in some sense, probably, maybe too strong a word, but evil. Right? Do, do you see the world at this moment in time unambiguously one or the other, or are there shades of gray in the middle of all of this? Yeah, of course there are shades of gray. I think there are different interpretations of just about everything and, you know, even core values can be debated. I'm just one of those people that believes strongly in basic human rights and principles around a liberal democratic society.
I think it's the best of all possible options, and the alternatives look pretty bad to me. And so I think where it comes up mostly for us, to answer your question, is around ethical decisions that we have to make, because that's where we really see these gray areas. Like, oh, you know, there's this data that was hacked by somebody. Can we use it in our research? Or would we be condoning the illegal act that brought the data forth in the first place? We've had, you know, discussions around things like that. And where did you come down on that? Because, obviously in science, that's been a question forever. A lot of the things that we use came out of research done by Nazis on, you know, people in concentration camps, and we're still using it. So this is an extension of that into this area. Where do you,
where do you come down on something like that? I don't think we can even generalize about it. We have to look at it on a case by case basis because there might be different circumstances and we have certain principles that we follow. And I'm really proud of my team actually, that we spend a lot of time on ethical questions and risk analysis. You know, I'm often asked what does it matter that the CitizenLab is at a university? It could be an NGO, you could be doing exactly what you're doing, which is not actually correct. A major reason I believe for our success is the fact that we have to follow strict research ethics protocols. Every human subject that we deal with is guaranteed confidentiality, is read into an agreement, just in the same manner as someone who would sign up for a psychological study or a medical study at a university would be granted certain rights.
And that, I think, forces us to be really reflective and thoughtful about our methods and how we handle data. And the reason I think that's been a contributing factor to our success is, we don't cut corners and we don't get sloppy or do things that might call into question the integrity of the lab. I mean, it is quite remarkable, if I step outside of my own shoes and look at our organization from the outside, that we've gone this far and have had such, you know, really impactful reports over, you know, 20 years, and not had some slip-up or a scandal around something that we're doing. As I said before, we've been sued, threatened with suits,
and we've managed to get through that successfully. Thanks again, to the university supplying us with legal counsel. It's very important to think these things through, if you're gonna take on the bad actors in the world, because they're gonna try every possible angle to get at you and prevent you from doing what you're doing.
You are brave. You are a brave man. I dunno. You know, in me and Priscilla's business, you know, at Wondros, you know, one of the things is just never to do anything that could be questionable, ever, you know. Just always, even if it's not advantageous to the company, just choose to work on things that, you know... Is that something that you've had to really focus on within the CitizenLab? Yeah. It's very much a leitmotif of the lab, is that, you know,
if you have this kind of core set of principles and you're thinking ethically about what you do, I find it actually very liberating, because you're never worried, lying awake thinking, oh, if somebody finds out about that, you know, we're in trouble. I agree. A hundred percent. It's a value. Makes your life easier. You can sleep better at night,
but it doesn't make it easier because sometimes there are like really interesting debates where there's no right or wrong that's immediately apparent. It's like, okay, what do we do here? Geez, I don't know. I gotta consult somebody about this. Well, thank you so much for coming on and talking to us about this stuff. So it was really fantastic. Really appreciate it.
And to everybody who's paying attention, please check out the work that CitizenLab is doing, 'cause they're keeping us sort of safe. I'm gonna end on an optimistic note because we have to. We have to be optimistic. You wouldn't be doing it if you weren't,
if you didn't have some belief. That's very true. Thank you so much for having me on your show. Thank you so much for coming on. It was great.
Thank you very much. Stay safe. Take care of yourselves. You too. See you later everybody.