Cybersecurity Thinking to Reinvent Democracy

>> Please welcome Bruce Schneier. [Applause.] >> BRUCE SCHNEIER: Nice to see everybody back again. There has been a lot written about technology's threats to democracy. Polarization, artificial intelligence, concentration of wealth and power.

I have a more general story: that the political and economic systems of governance created in the mid-18th century are poorly suited for the 21st century. They don't align incentives well, and they're being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater across all of human history. We have become too powerful as a species, and our systems cannot keep up with fast-changing disruptive technologies. I think we need to create new systems of governance that align incentives and are resilient to hacking at every scale, from the individual all the way up to the whole of society.

So, for this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It's not about democracy versus autocracy. It's not even about human versus AI.

It's something new. It's something we don't have a name for yet. And this is blue sky thinking.

Not even remotely considering what's possible today. Throughout this talk I want you to think about democracy and capitalism as information systems. Sociotechnical information systems. Protocols for making group decisions, ones where different players have different incentives.

These systems are vulnerable to hacking and need to be secured against those hacks. So, we security technologists have a lot of expertise in both secure system design and hacking. And that's why we have something to add to this discussion. And finally, this is a work in progress.

I'm trying to create a framework for viewing governance. So, think of this more as a foundation for discussion rather than a roadmap to a solution. I think by writing, and what you are going to hear is the current draft of my writing and my thinking. So, everything is subject to change without notice.

Okay, let's go. We all know about misinformation and how it affects democracy, and how propagandists have used it to advance their agendas. This is an ancient problem amplified by information technologies: social media platforms prioritizing engagement, filter bubble segmentation, and technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve critical problems, and then collects feedback on how well those solutions are working. This is different from autocracies, which don't leverage collective decision making or have reliable mechanisms for collecting feedback from their populations.

These systems of democracy work well but have no guardrails when fringe ideas become weaponized. That's what misinformation targets. The historical solution for this was supposed to be representation.

This is currently failing in the U.S., partly because of gerrymandering, safe seats, only two parties, money in politics, and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It's hard to organize.

Now, to be fair, these limitations are good and bad. In any case, current technology, social media, breaks them both. So, this is the question. What does representation look like in a world without either filtering or geographical dispersal? Or how do we avoid polluting 21st century democracy with prejudice, misinformation, and bias? Things that impair both the problem-solving and feedback mechanisms. That's the real issue.

It is not about misinformation. It is about the incentive structure that makes misinformation a viable strategy. So, this is Problem Number 1.

That our systems have misaligned incentives. What's best for the small group often doesn't match what's best for the whole. And this is true across all sorts of individuals and group sizes. Now historically, we have used misalignment to our advantage.

Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive, while individual self-interest leads to local optimizations, which results in optimal group decisions. But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, Senate, and congressional races. I don't even want to know how to calculate the cost in attention. And that sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that's how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. $780 billion is spent worldwide annually on advertising. More billions are wasted on ventures that fail.

And that's just a fraction of total resources lost in a competitive market environment. And there are other collateral damages which are spread nonuniformly across people. So, we've accepted these costs of capitalism and democracy because the inefficiency of central planning was considered to be worse.

That might not be true anymore. The costs of conflict have increased, and the costs of coordination have decreased. Corporations demonstrate that large centrally-planned economic units can compete in today's society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the 8th largest country on the planet.

Microsoft would be the 10th. Another effect of these conflict-based systems is they foster a scarcity mindset. And we have taken this to an extreme.

We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product's success depends on my competitor's failures.

We think zero sum internationally. Arms races and trade wars. And finally, conflict as a problem-solving tool might not give us good enough answers anymore.

The underlying assumption is that if everyone pursues their own self-interest, the result will approach everyone's best interest. That only works for simple problems, and it requires systemic oppression. We have lots of problems, complex, wicked, global problems, that don't work that way. We have intersecting groups of problems that don't work that way. We have problems that require more efficient ways of finding optimal solutions. Now, note that there are multiple effects of these conflict-based systems.

We have bad actors literally breaking the rules and we have selfish actors taking advantage of insufficient rules. So, the latter is Problem Number 2. This is what I refer to as hacking in my latest book, A Hacker's Mind. So, democracy is a sociotechnical system, and all sociotechnical systems can be hacked. And by this, I mean that the rules are either incomplete or inconsistent or outdated.

They have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, must-pass legislation, tax loopholes, financial loopholes, regulatory loopholes.

In today's society, the rich and powerful are just too good at hacking. And it's becoming increasingly impossible to patch our hacked systems, because the rich use their power to ensure that the vulnerabilities don't get patched. This is bad for society, but it's basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy.

Hacking is not a new problem. But today's hacking scales better, and it is overwhelming the security systems in place to keep hacking in check. Think of gun regulations or climate change or opioids. And complex systems make this worse. These are all nonlinear, tightly coupled, unrepeatable, path-dependent, adaptive, coevolving systems.

Now add into this mix the risks that arise from new and dangerous technologies like the internet or AI or synthetic biology or molecular nanotechnology or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society. This is Problem Number 3. Our systems of governance are not suited to our power level.

They tend to be rights-based, not permissions-based. They are designed to be reactive because traditionally there was only so much damage a single person could do. We do have systems for regulating dangerous technologies. Consider automobiles. They're regulated in many ways.

Driver's licenses and traffic laws and automobile regulations and road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing, and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding researching, developing, producing, and dispensing.

We have all these regulations because this stuff can kill you. The general term for this kind of thing is the precautionary principle. When random new things can be deadly, we prohibit them unless they are specifically allowed. So, what happens when a significant percentage of our jobs are as potentially damaging as a pilot? Or even more damaging. When one person can affect everyone through synthetic biology.

Or where a corporate decision can directly affect climate or something in AI or robotics. Things like the precautionary principle are no longer sufficient because breaking the rules can have global effects. And AI will supercharge hacking. We have created a series of noninteroperable systems that actually interact. And AI will be able to figure out how to take advantage of more of those interactions. So, finding new tax loopholes, finding new ways to evade financial regulations, creating micro-legislation that surreptitiously benefits one particular person or group.

And catastrophic risk means this is no longer tenable. So, these are our core problems: misaligned incentives leading to too-effective hacking of systems, where the cost of getting it wrong can be catastrophic. Or, to put more words on it, misaligned incentives encourage local optimization, and that's not a good proxy for societal optimization. This encourages hacking, which now generates greater harms than at any point in the past, because the amount of damage that can result from a local optimization is greater than at any point in the past. Okay. Let's get back to the notion of democracy

as an information system. It is not just democracy. Any form of governance is an information system. It is a process that turns individual beliefs and preferences into group policy decisions, uses feedback mechanisms to determine how well those decisions are working, and then makes corrections accordingly. Historically, there are many ways to do this.

We can have a system where no one's preference matters except the monarch's, or the nobles', or the landowners'. Sometimes the stronger army gets to decide. Or the people with the money.

Or we can tally up everyone's preferences and do the thing that at least half the people want. That's basically the promise of democracy today at its ideal. Parliamentary systems are better but only in the margins. And it all feels kind of primitive.

Lots of people have written about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives. Now, I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes.

I'm taking it for granted that democracy is good for all of those things. I'm focusing on how we implement it. Modern democracy uses elections to determine who represents citizens in those decision-making processes. And all sorts of other ways to collect information about what people think and want and how well policies are working. These are opinion polls, public comments to rulemaking, advocating, lobbying, pressuring; all those things. And in reality, it's been hacked so badly that it does a terrible job of executing the will of the people, which creates further incentives to hack those systems.

Now, to be fair, the democratic republic was the best form of government that mid-18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness, our ideas about legitimacy and authority, have evolved a lot since the mid-18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out. But democracy is not a static system.

It's an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have.

And blocking progress in democracy is itself a hack of democracy. Today, we have much better technology that we can use in the service of democracy. Surely, there are better ways to turn individual preferences into group policies now that communication and travel are easy. Maybe we should decide representation by age or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone's preferences.

Whatever we do, we need systems that better align individual and group incentives at all scales. Systems designed to be resilient against hacking and resilient to catastrophic risks. Systems that leverage cooperation more and conflict less and are not zero sum. Why can't we have a game where everybody wins? And this has never been done before.

It's not capitalism, it's not socialism, it's not communism, it's not current democracies or autocracies. It would be unlike anything we've ever seen. Some of this comes down to how trust and cooperation work. When I wrote Liars and Outliers in 2012, I wrote about four systems of enabling trust. Our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior.

I wrote about how the first two are more informal than the last two and how the last two scale better and allow for larger and more complex societies. They enable cooperation amongst strangers. What I didn't appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They are person-to-person based on human connection and cooperation.

Laws, and especially security technologies, are newer systems of trust that force us to cooperate. They're sociotechnical systems. They're more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country's most dangerous professions.

Uber changed that through pervasive surveillance. My Uber driver and I don't know or trust each other, but technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, and that aligns local and global incentives.

In today's tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms and enforced compliance, and innate trust in people with compelled trust and process in institutions. That scales better, but we lose the human connection. It's also expensive and becoming even more so as our power grows.

We need more security for these systems, and the results are much easier to hack. But here's the thing. Our informal systems of trust are inherently unscalable. So, maybe we need to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to eat. One is vegetarian, one is kosher.

They would never use a winner-take-all election to decide where to have dinner. But that's a system that scales to a large group of strangers. Scale matters more broadly in governance as well. We have global systems of competition, both political and economic. On the other end of the scale, the most common form of governance on our planet is socialism.

That's how families work. People work according to their abilities, and resources are distributed according to their needs. I think we need governance that is both very large and very small. Our catastrophic risks are on a planetary scale: climate change, AI, the internet, biotech. And we have all the local problems inherent to human societies.

We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well at a local level but don't scale to larger groups. But now that we have more technology, we can make other systems of democracy scale. This runs headlong into historical norms about sovereignty. That's already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy.

But constituent boundaries are now larger and more fluid and depend a lot on context. It makes no sense that decisions about the drug war or climate migration are delineated by nation. The issues are much larger than that. Right now, there is no governance body with the right footprint to regulate internet platforms like Facebook, which has more users worldwide than Christianity. We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant.

Growth is often extractive at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it's okay that we waste some of the pie in order for it to grow. That doesn't make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size.

Sustainability makes more sense, and is a metric more suited to the environment we find ourselves in right now. Finally, agility is also important. Back to systems theory. Governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And catastrophic risk raises the cost of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn't captured by those models. And those complex models turned out to be very hackable, increasingly so at larger scales. Lots of people have written about the speed of technology versus the speed of policy.

To relate it to this talk, our human systems of governance need to be compatible with the technologies they're supposed to govern. If they're not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech in the United States. This means that governance needs to be agile, and to be able to quickly react to changing circumstances. And imagine a court saying to Peter Thiel, "Sorry, that's not how Roth IRAs are supposed to work.

Now give us our tax on that $5 billion." This is also essential in a technological world. One that is moving at unprecedented speeds. Where getting it wrong could be catastrophic, and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking.

And also, red teaming. In this context, both journalism and civil society become important checks on government. So, I want to quickly mention two ideas for democracy, one old and one new. I'm not advocating for either. Just trying to open you up to possibilities. The first is sortition.

These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy and are increasingly being used today in Europe. The only vestige of this in the United States is the jury. But you can also think of trustees of an organization.
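To make the mechanism concrete, here is a minimal sketch of a sortition draw modeled as stratified random sampling, so the assembly roughly mirrors the population it is drawn from. The roll, the strata, and the panel size are hypothetical illustrations, not anything specified in the talk.

```python
# A minimal, hypothetical sketch of sortition as stratified random sampling:
# draw a citizen assembly whose demographic mix roughly mirrors the population.
import random
from collections import defaultdict

def draw_assembly(voter_roll, assembly_size, stratum_of, rng=None):
    """Select an assembly whose strata proportions roughly match the roll.

    voter_roll: list of dicts, one per eligible citizen
    stratum_of: function mapping a citizen to a stratum label (e.g., an age band)
    """
    rng = rng or random.Random()
    strata = defaultdict(list)
    for person in voter_roll:
        strata[stratum_of(person)].append(person)

    assembly = []
    for members in strata.values():
        # Seats allocated in proportion to the stratum's share of the roll.
        # (Rounding can leave the panel a seat or two off; a real draw would reconcile that.)
        seats = round(assembly_size * len(members) / len(voter_roll))
        assembly.extend(rng.sample(members, min(seats, len(members))))
    return assembly

# Hypothetical usage: a 20-person panel stratified by 20-year age bands.
roll = [{"name": f"citizen{i}", "age": age}
        for i, age in enumerate(random.choices(range(18, 90), k=1000))]
panel = draw_assembly(roll, assembly_size=20, stratum_of=lambda p: p["age"] // 20)
print(len(panel), sorted({p["age"] // 20 for p in panel}))
```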

The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to somebody else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have.

We have something like this in corporate governance. Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both are made easier by 21st-century technologies. They are both democracies, but in new and different ways.
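Liquid democracy in particular reads like an algorithm, and a minimal sketch helps show what "vote strength proportional to proxies held" means in practice. The names, and the rule that delegation cycles count as abstentions, are illustrative assumptions rather than a specification from the talk.

```python
# A minimal, hypothetical sketch of liquid democracy: each voter either votes
# directly or hands their proxy to someone else, and a representative's vote
# strength is the number of proxies that ultimately resolve to them.
from collections import Counter

def tally(direct_votes, delegations):
    """direct_votes: voter -> choice; delegations: voter -> who holds their proxy."""
    results = Counter()
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until reaching someone who votes directly.
        while current in delegations and current not in direct_votes:
            if current in seen:      # delegation cycle: count as an abstention
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            results[direct_votes[current]] += 1
    return results

# Hypothetical usage: Alice and Bob vote directly; Carol and Dave delegate to Alice,
# so Alice effectively casts three votes.
print(tally({"alice": "yes", "bob": "no"}, {"carol": "alice", "dave": "alice"}))
# Counter({'yes': 3, 'no': 1})
```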

And while they are not immune to hacking, we can design them from the beginning with security in mind. This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal, biological kind and the formal compliance kind.

We know how to use technology to help align incentives, and how to defend against hacking. We talked about AI hacking. AI can also be used to defend against hacking.

Finding vulnerabilities in computer code. Finding tax loopholes before they become law. Uncovering attempts at surreptitious micro-legislation. Think back to democracy as an information system. Can AI techniques be used to uncover our preferences and turn them into policy outcomes and get feedback and then iterate? This would be more accurate than polling, and maybe even elections. Can an AI act as a representative? Could it do a better job than a human at voting the preferences of its constituents? Can we have an AI in our pocket that votes on our behalf thousands of times a day based on the preferences that it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren't swayed by misinformation.

It's just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying enough attention to politics. But slow down. This is rapidly devolving into technological solutionism, and we know that doesn't work. A general question to ask here is, when do we allow algorithms to make decisions for us? Sometimes it is easy. I'm happy to let my thermostat automatically turn my heat on or off or let AI drive a car or optimize the traffic lights in the city.

I'm less sure about an AI that sets tax rates or corporate regulations or foreign policy. Or an AI that tells us that it can't explain why but strongly urges us to declare war right now. Right? Each of these is harder because they are more complex systems. Nonlocal, multi-agent, long duration, and so on. I also want any AI that works on my behalf to be under my control, and not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We are probably okay with no longer needing to know how to drive a car, but we don't want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the processes of democracy, not replace them. So, while an AI that does all the hard work of governance might generate better policy outcomes, there is a social value in a human-centered political system even if it is less efficient.

And more technologically efficient preference collection might not be better, even if it is more accurate. Procedure and substance need to work together. There is a role for AI in decision making. Moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in, and in charge of, the process of governance. And that value is critical to making democracy function.

Democratic knowledge isn't something that's out there to be gathered. It is dynamic. It gets produced through the social process of democracy. There's a term of art for this: preference formation.

We are not just passively aggregating preferences. We create them through learning, deliberation, negotiation, and adaptation. Some of these processes are cooperative, and some of these are competitive.

Both are important, and both are needed to fuel the information system that is democracy. We are never going to remove conflict and competition from our political and economic systems. Human disagreement is not just a surface feature. It goes all the way down.

We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested. Optimal for whom? With respect to what? Over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information.

And it is the process of making all this work that makes democracy possible. So, we can't have a game where everybody wins. Our goal has to be to accommodate plurality. Harness conflict and disagreement; not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

So, there is a lot missing from this talk. Like what these new political economic systems of governance should look like. Democracy and capitalism are intertwined in complex ways.

I don't think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven't even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them.

We haven't talked about rights or responsibilities. What's off-limits to democracy is a huge discussion. And Buterin's trilemma also matters here. That you can't simultaneously build systems that are secure, distributed, and scalable.

I also haven't given a moment's thought to how to get from here to there. Everything I have talked about, incentives, hacking, power, complexity, applies to any transition system as well. But I think we need to have unconstrained discussions about what we're aiming for, if for no other reason than to question our assumptions and to imagine the possibilities. And while a lot of the AI parts are still science fiction, they are not far-off science fiction.

I know we can't clear the board and build a new governance structure from scratch, but maybe we can come up with ideas that we can bring back to reality. Okay, to summarize.

The systems of governance we designed at the start of the industrial age are ill-suited for the information age. Their incentive structure is all wrong, they're insecure, and they're wasteful. They don't generate optimal outcomes.

At the same time, we're facing catastrophic risks to society due to powerful technologies and a vastly constrained resource environment. We need to rethink our systems of governance. More cooperation and less competition at scales that are suited to today's problems and today's technologies with security and precautions built in.

What comes after democracy might very well be more democracy, but it will look very different. This feels like a challenge worthy of our security expertise. Thank you. [Applause.]

All right. I think you just won the most non-RSA RSA talk. >> It was fabulous. >> BRUCE SCHNEIER: Thank you. I'm not signing books now. I'm taking questions for another 20 minutes, so you do want to sit.

Trust me, I will sign your book. I promise. Just not any time soon. I'm happy to take questions. Rumor has it they will show up on Twitter somehow. I don't know how this works.

But there are microphones somewhere. There's one over there. Yes, sir? >> Yes, there is one here. I liked the talk. Some of the ideas are hard to wrap your head around if you've, you know, been doing this for 60 years already. But one of the things that I think has surfaced that is the worst for democracy, in my point of view, is this contentiousness between groups of people who 30 or 40 years ago were cooperative and aligned with each other, but where, through the hacks in the system that you referred to, the rich and the powerful have seen it opportune to set parts of the populace against each other.

So, they ignore what's going on outside the scope of that conflict. What do you think about that thought? >> BRUCE SCHNEIER: So, I think it is right. And I think what we are seeing is what's an optimal strategy in our current media environment. And this happens because it works.

Tribalism is a powerful, powerful force. And, you know, the notion of politics as sports is relatively new. And we can look at the technologies that enable that. A lot of people are writing about technologies that could diminish that. Some of it is the for-profit nature of the platforms. That they make money based on engagement, not based on, like, any other metric.

So, things that piss you off or keep you on the platform are things that are going to be prioritized. So, some of it is how the market interplays with governance. But there are other things going on as well. But definitely, I agree. That is one of the problems we have to sort of solve for, here.

That the way information flows today leverages these tribalistic tendencies, which I think are very dangerous. Yes? >> All right. So, I do see your point about democracy and capitalism being intertwined. My concern is not about an AI that uncovers our preferences to make policy that's good for all of us.

My concern is about an AI that actually influences our preferences, not just uncovers them, toward policy that's good for whoever is willing to pay more for the AI. >> BRUCE SCHNEIER: And I think that's right. To be clear, it is not the AI influencing you.

It is the humans who are configuring the AI to influence you. It is very, very important not to lose agency. I think when we talk about AI, we are very quick to use human terms.

So, I mean, and I agree. I want an AI that's going to work for me to be under my control. I mean, if an AI suggests a resort for me to vacation at, how do I know the people behind it aren't getting kickbacks from the chain? Or if I'm using AI to learn about a political issue, which I think will be a thing that will happen, how do I know that the people behind it haven't biased it in a way that benefits them? So, yes.

As these technologies start being used in our lives as assistants, knowing the agendas of the humans behind them becomes extremely important. If you were at the last panel, it is really hard to understand what's going on. But we do know the humans who are behind them. So, I'm really interested in AIs that are being built in the public interest. The best one we have right now is BLOOM, coming out of the EU. Hopefully, we'll do better.

Yes? >> Yes. You talked about how one of the main features that you are striving for with these systems is to create a system where people that are not like each other and do not think like each other still work for the benefit of the whole and for each other. I've heard it both ways.

That the internet can make this easier, and I've heard that it can make it harder. Because on one hand, you're more connected, and on the other hand, you see that, my God, that person really is insufferable over there. And I know from historical texts that the Greeks thought that democracy couldn't survive outside of one city, because it wasn't homogenous enough. And that other people have tried various things, from patriotism for a political ideal to patriotism for our country to patriotism for less good things, I would say, without getting into it. What's your opinion? How do you - >> BRUCE SCHNEIER: You know, so, it is interesting.

And a lot of that scales with technology. And in ancient Greece, it probably couldn't scale outside of a city. Like, Rome tried it, but it was actually a mess of a democracy.

Read how it worked. It really wasn't very democratic. >> I did. It was - >> BRUCE SCHNEIER: Yeah, it was a kind of - yeah.

It's only a democracy in name. But we tend to think larger now. I think over the course of the centuries, we have brought a lot more people into "us." You know, 200 years ago, we kept slaves. 100 years ago, some of us in this room couldn't vote. But we are now increasingly including other humans.

We live in a world where nonhumans have some rights. So, we are getting better slowly. My guess is that, you know, we will never all be one until we discover aliens, in which case we'll certainly all be one.

Because there will be another "them" that'll be even more different. We might be able to do better. When you get to issues like climate, we are literally all in it together.

Like, there is no other way to think about it. So, how do we make that work? I think some of it will happen naturally due to technology, but it is going to take a generation or two, which might not be long enough. And that's some of the issue here. That things are moving so fast that the normal human pace of societal evolution might not work given the pace of technological evolution. Right? They are outstripping each other in a way that might be dangerous. But I'm kind of just making this up at this point.

So, more to come. Yes? >> DEEPAK PAREKH: Deepak Parekh, Democracy Labs. One of the - >> BRUCE SCHNEIER: You actually know something about this then. >> DEEPAK PAREKH: One of the things that you brought up was that democracy is based on feedback from the people who are being governed. Now, if bad actors are compromising the feedback mechanism, whether it's the election process or the voting machines or however you vote, what are your thoughts about strengthening that? >> BRUCE SCHNEIER: Well, I mean, we certainly need to strengthen the voting systems. And there are a lot of groups doing that.

But think of it more generally. Voting is a very narrow slice of the feedback process. Like, us getting all angry and marching on the State House is another one.

And if you think about the way democracies work, it is not feedback once every two years, every four years. It is constant feedback from all sorts of mechanisms, and those are also being compromised. So, we should think of it broadly. And yet, I'm not big on answers here. This is the time to come to RSA with the questions.

So, maybe next year I will have better answers. But, yes. I mean, I think those are definitely things to think about and worry about. >> Very interesting talk. And very heavy, complex topics. I have a very simple question.

>> BRUCE SCHNEIER: All right, I'm ready. >> What gives you hope? >> BRUCE SCHNEIER: You know, it's actually a good question. I get asked this kind of a lot, especially with some of these, you know, catastrophic risks. That we as humans have always managed to muddle through.

Like, as bad as things get, we have - sometimes it has taken us 50 or 100 years, a World War, or something. Right? I mean, it's not always been pretty. But we have figured it out. And it just seems unlikely to me that this is the thing that ends human civilization as we know it. It feels like it's not the way to bet.

So, I have faith in our ability, you know, not to preplan and do it right the first time. We never do that. But in our ability to figure it out as we go. Now, that might run headlong into these catastrophic risks. We have never lived in a society before where the costs of getting it wrong are so great. So, that's the downside of that way of viewing hope.

But I still do. I mean, I don't think we're done. That just seems kind of silly. >> Thank you.

>> BRUCE SCHNEIER: Yes. >> I guess that's encouraging. >> BRUCE SCHNEIER: Doing my best.

>> Thank you. >> One of the - the difficulties I think that we can all relate to as security-focused people is the difficulty of getting non-security people on board with certain ideas. I can't get my grandmother to use the password manager, even though they are very simple nowadays.

Do you believe that one of the things that will be necessary, or at least very helpful, in moving along and evolving democracy, as you say, will be to increase public awareness, increase education concerning these very complex topics, and kind of distill them into maybe simpler forms so that people can understand them? >> BRUCE SCHNEIER: I think so. I mean, I don't know. I was trying to be simple. I guess I failed. I think we lose a lot now that we no longer have civics education.

Knowing how government works is something that isn't really taught, and I think it needs to be. Because these things don't work unless we participate. So yes, I mean, you know, I think we need to start thinking and talking about this.

And really what I'm doing here is trying to see, like, what do we have in our community that is unique? That is valuable? And that is systems thinking in an adversarial environment. That is what we all do. And that directly translates into this new area. Now, we probably have to, you know, pick new language because no one wants to hear our techie terms. But I think we can do that.

So, yeah. I - I do think we need to start talking about this more simpler - more simpler? Terrible phrase. More publicly. How that looks, I don't know. It's really hard. Right?

You know, in a - in a world where what matters is this week, next week, next month. To have a discussion that says okay, like, pretend we have all landed on an alien planet. We need to form a government from scratch. What would we do? So, you know, I think this is going to always be a niche way of looking at things. But I think there is value in doing that, because we will come up with ideas that we could bring back to reality. >> Thank you.

>> BRUCE SCHNEIER: Yes? >> Do you believe that there has to be some sort of arbitration as to the kind of AI that's used? That would essentially enforce the kinds of principles that you are talking about here? And for which the arbiters could be held accountable if they approve something that potentially goes awry? >> BRUCE SCHNEIER: You know, so, I don't know if we need arbitration. So, all right. As a system, arbitration means there is a set of humans who are the arbiters. Right? They are the ones, if there's a problem, you and I have a dispute, we go to the arbiter, who makes a ruling. So, that is an authority-based system of governance. Very useful in a lot of systems, because they are incomplete. Right?

All contracts are incomplete. You cannot write a complete contract, and you wouldn't even try. But when there is a dispute, you go in front of an arbiter.

That is one mechanism. I'm not sure it is the only mechanism that will work in this case. You know, we can have a mechanism that involves liabilities, which really doesn't have that same kind of arbiter role. It has another adjudicator, but that's kind of different.

So, we do need something that has an accountability mechanism. But what it looks like, I'm not sure. The question is, when do we move into a world in tech where we can't just do anything? Right? Where everything new is automatically allowed unless it is forbidden.

I mean, that's been fun. We have done that for decades. And that works as long as the mistakes don't have catastrophic consequences. You can't do that in aircraft design. You can't do that in pharmaceuticals. There will be a time when you can't do that in software.

I mean, it might not be for 20, 30 years, but that time will come. And it will be no fun for any of us. But I think that time will come. Because this stuff is so powerful. And it is increasingly getting physical agency. And that physical agency turns it into a medical device, a car, an airplane, all the things that can traditionally kill us.

All right. You are next. >> Thank you for your thoughtful talk, sir. >> BRUCE SCHNEIER: You say that, but they are all, like, leaving slowly. >> In your new book, you talk about how applying the hacker's mindset can help improve different sociotechnical systems. >> BRUCE SCHNEIER: That's what I'm trying to do.

>> My question is, right now there are different public health issues. For example, domestic violence. Do you think this community, the security community, needs to work more on, for example, this particular public health issue, to help victims of technology abuse and domestic violence? >> BRUCE SCHNEIER: Yeah. What I want us to do is to consider varying use cases of the technology we develop. You know, very often, we develop technology for the average.

Where the average is, you know, right, the white middle-class Silicon Valley male. Like, big libertarian. We all know the type. And we need to be much broader than that, to really engage the users of technology in the development.

Now, I have done some writing on not exactly that, but on trust relationships and intimate partner technologies. And I looked at romantic partners, and I looked at child and parent. Both ways, right? There is abuse and trust.

And the elderly and their caregivers. In a lot of these cases, the technologies we build don't work in those situations. I mean, the authentication technology of the secret question. Your spouse knows the answers to all your secret questions. Or technologies that assume that physical possession of an object is an authentication mechanism.

Right? Your intimate partner has access to your physical object. All those things don't work. So, yes. I want us to really think a lot about these special use cases. Because people are getting harmed.

And they are, you know, not the average case, but they're important cases. And I think we are doing a better job at that. But in security, it is really hard.

And we see this in, like, Facebook account takeovers. Not by strangers but by a relative. Someone who knows you. Where that, you know, that nice, great authentication mechanism of, here's a bunch of pictures, click the ones that are your friends. Your intimates can do that, too.

It doesn't help. So, yes. Yeah? >> Sort of as a follow-up to the question two questions ago about sort of checking who makes the AI and having some sort of accountability for it.

It is a somewhat unfortunate truth that the people who make AI tend to oftentimes not really represent the common perspective, especially when it comes to things like Machiavellianism, which has been sort of a boogeyman of AI ever since AI was first conceptualized. And one can easily imagine a short story along the lines of "The Ones Who Walk Away from Omelas," where the AI suggests something that, to fix a problem, is not necessarily considered worth it by the majority of people. So, the short story I was mentioning would be a variant where, to fix climate change, you had to commit mass atrocities upon human society. So how, in that case, would there be arbitration? Because you said that it was not necessarily needed to be arbitrated fully.

Would there need to be arbitration in that case? Or would you just trust the people that were making the AI to solve this problem? >> BRUCE SCHNEIER: You know, I mean, I think transparency really is going to be important. These systems are super opaque, and that's going to become increasingly not okay. And even transparency is going to be hard. Because we know they are robustly opaque. Even if we know the training data, even if we understand how the model works, there are all these emergent properties that just make no sense. So, I don't know how to solve this.

But this feels like, again, something that we as security people can help the AI community with. Because we - I mean, these are the sorts of things we think about and the sort of ways we think. So, good question. I don't have an answer. >> Okay.

Sort of as a follow-up to the follow-up, then. In some cases - >> BRUCE SCHNEIER: I don't know that you get one of those. We only have a minute and a half left. Two more.

All right. Go. Sorry. >> So, we talk about some of these kinds of wicked problems, problems that seem like we can't really solve them and that don't have an owner: gerrymandering, incentive structures, voting systems. But do you see an opportunity to apply startup mentalities, or the option to build structures of government and then, if they appear to be successful, apply them at a larger scale? >> BRUCE SCHNEIER: I mean, I like doing things at small scale and then scaling them up if they work. I like it if local governments try different things.

They're not really good at that. So, yes. I mean, we're going to need to figure out agile governance. Governing at the speed we do isn't working.

And if you think about how tech platforms are governed, you can push down patches every day if you like. We cannot patch laws every day. We don't have that system. And we're going to need something.

Some way to govern at the speed of tech, and we don't have that. I think that's going to be hard. Because it's very different than what we're used to. And whether it is, you know, trying things small and scaling them up, or being able to, you know, have regulations that are continuously updated. I mean, the best we have in our society that does that are the regulatory bodies, because they can issue regulations at a speed approaching what we need. They are still not very good at it.

We don't really give them that authority. You know, maybe we don't want to. But agility of governance, everyone should think about that. Because we need to figure that out, like, really badly. All right. You are my last question.

>> So, going back to - I agree that - >> BRUCE SCHNEIER: Weren't you my first question? >> Yes. >> BRUCE SCHNEIER: How bookend-y of you. >> Apropos.

So, going back to your point about civics education. Right? I think that gets at your larger theme of, if you don't know how the system works, you should probably run away from the system. At least, that's my interpretation. But to your point about zero sum, what I have noticed in education a lot in the United States in the last 20 years is kind of the zero-sum thing. In order to put in music education, sports has to suffer. In order to put sports in, the wood shop has to go away.

In order to put the wood shop in, we have to stop teaching math. And I think that's an absurdity. And I'm wondering if you have a thought about how we get rid of absurdity in - >> BRUCE SCHNEIER: I'm trying to work on an eight-day week. Failing that, you know - I mean, you're right.

And that is, I mean, you will hear that discussed at all levels. Adding security education to computer science degrees in college means we take out, like, data structures. You hear the same thing. I agree. I don't think it has to be either/or, zero-sum. I mean, yes, there are only a limited number of hours in a day.

But there are ways to teach effectively. Some of this is that, at least in our society, we do not value teaching as a profession. It is not a well-paid profession. It is not a well-respected profession. That seems incredibly stupid. I mean, like, I don't want to teach.

If someone wants to do that, pay them a lot of money so I don't have to do it. Because we need that. So - but yes. I think we need to really rethink how we do that as well. And this is hard.

I mean, you know, I'm tossing out a lot of really basic things in society in trying to fix it. And now the question is, could we possibly do that? I think you are right. I think we can put it all in.

You know, maybe at different - at different degrees, different levels. But I think civics is extremely important. And I think there is a - there is a movement to get rid of civics education.

Because - right? Because if you don't understand the system, then it could be used against you. Now, you don't have to understand the system to use it. I have no idea how a car engine works, and I drive a car all the time. So, we are - we are okay with using systems we don't understand. But we know that somebody does.

We know that there is someone who is watching this. I'm going to get on an airplane today. Like, I have no fricking clue how that thing works. But there is a government body that has my back, making sure I don't have to know. Just, like, pick your seat. That's valuable. In a complex society, we do use these proxies.

And that we trust. We trust these processes. All right, I need to get off the stage. Thank you all for putting up with this very non-RSA RSA talk.

If you come back next year, who knows what else will happen? [Applause.]
