Cybersecurity: Trends, Risks and Innovations
But now I'd like to introduce the speakers here this evening. First, I'd like to introduce Professor Terry Kramer, who is a UCLA alumnus, an adjunct professor of technology management, and the faculty director of the Easton Technology Management Center. Some Anderson students may have already taken his core technology management course, his tech and society course, or even the global immersion course he teaches for the Center, which focuses on technology-driven transformation and innovation in the greater China region; he'll be teaching that course for us again next year. Professor Kramer has extensive technology and leadership expertise in the domestic and international telecommunications industry, where he enjoyed a 30-year career, 18 of those years at Vodafone in a variety of executive roles both domestically and internationally. While serving as an entrepreneur in residence at his alma mater, Harvard Business School, in 2012 he was appointed by President Barack Obama to serve as Ambassador and Head of the U.S. Delegation to the World Conference on International Telecommunications. That delegation formulated and communicated U.S. policy regarding the criticality of a free and open internet, inclusive multi-stakeholder governance, and the need to proactively address cybersecurity issues. Fortunately for us, he is now a full-time adjunct professor at Anderson. Please join me in welcoming Professor Terry Kramer.

Before we hear from Professor Kramer, I'm next going to introduce Professor Andrew Selbst, who is an assistant professor of law at the UCLA School of Law. Prior to joining UCLA, he was a postdoctoral scholar at the Data & Society Research Institute and a visiting fellow at Yale Law School's Information Society Project. Professor Selbst's research examines the relationship between law, technology, and society, and over the last several years it has focused on the effects of machine learning and artificial intelligence on varied legal regimes, including discrimination, policing, credit regulation, data protection, and tort law. His research has appeared in many preeminent legal publications, including the California Law Review and International Data Privacy Law. He worked as a design engineer before attending law school at the University of Michigan, and following law school he was a privacy research fellow at the NYU School of Law's Information Law Institute and an Alan Morrison Supreme Court Assistance Fellow at the Public Citizen Litigation Group. So please join me in welcoming Professor Terry Kramer first, and then we'll hear from Professor Andrew Selbst.

Thank you, Lucy, and I hope you can all hear me okay. I want to first thank Lucy for bringing all of us together, because I can't think of a more timely and relevant topic, something that's affecting all of us today in real time and something that's significant in impact. Eric Schmidt, the former CEO of Google, was recently talking about the number of antitrust cases being brought against U.S. tech companies, and certainly in China you're seeing a massive focus on tech. One of the comments he made that was so interesting is that a lot of the issues we're seeing today are a manifestation of the growing penetration of technology. All the services that we use, whether in e-commerce or social networks or AI for healthcare, are penetrating further and further, and as a result you're going to have more and more issues get created.
So when Lucy was talking about this balancing act, that to me is the most important element. It is easy to get ideological and absolutist on one side or another, but the real work is figuring out how you create the right outcomes that benefit technology and its users. Traditionally technology is cheaper, better, and faster; that's obviously the promotional line people use, but you do see a lot of companies able to deliver it. At the same time you see a growing set of techlash issues: data privacy issues, antitrust issues, cybersecurity issues, future-of-work issues, and so on. How do you balance those matters? So I want to share a few thoughts with you here; let me see if this will advance.

To the point about something being timely, Friday's Wall Street Journal covered the latest in ransomware: it is on pace this year to double over the previous year, based on monitoring of suspicious payments reported by banks. Again, the numbers are up twofold, and this is a big area of focus in the Biden administration. Next slide.

What I'd like to do in just a few minutes is level-set on what is going on. Whenever I think about business problems or opportunities, and for those of you who take my classes you already know this, everything in my mind is wired around three things when you're looking at an opportunity or challenge: one, what's the problem you're trying to solve; two, what do you know about context, about what's going on with technology and consumers and public policy and a variety of things; and then ultimately, what would you go do. Otherwise we're shooting from the hip about what we think the problem is and what needs to get solved. In this environment the problem very much is: how do we reduce the cyber attacks and the cybersecurity challenges while not creating the unintended consequence of losing the benefit of technology and technology-based businesses? How you strike that balance, to me, is the problem. What I'd like to cover briefly is the state of the union in cybersecurity: the types of attacks, the targets of the attacks, because they are very different, recent trends, and the costs. Obviously if the costs are small we don't need to worry about anything; if the costs are large, you've got to worry a significant amount. The next issue, once we have all of that, is who owns the problem: there are a lot of players involved in cybersecurity-related issues, businesses and individuals and governments and regulators. And the final area I'd like to cover is areas of innovation and potential solutions.

So first of all, types of cyber attacks. They are very, very broad; there's not one single type. Denial of service: any time an outside player wants to stop the workings of the internet, you have a denial of service, and it doesn't even need to be for ransom; it's just somebody who wants to see an interruption of service. Phishing attacks: people who mislead others to get them to provide confidential information, their passwords, their birth dates, other identifying information. Password attacks are a similar idea: somebody gets you to share what your password is, then uses it in some other context and is able to capitalize on that information. Malware: probably one of the most worrisome events, which occurred just a few months ago, was the SolarWinds attack that affected our U.S. government; basically, unbeknownst to others, a software upgrade carried malicious software that then caused huge problems. The other one is ransomware, and by the way, if you remember the Colonial Pipeline, I think it was about a year ago: it provides fuel for about 45 percent of the East Coast of the U.S., and it was attacked, interrupting all of those energy capabilities. So there are lots of different types.
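To make the password-attack point a little more concrete, here is a minimal, hypothetical sketch (Python standard library only, not any particular company's implementation) of the basic server-side defence: a service never stores the password itself, only a salted, deliberately slow hash, so a stolen credential database is much harder to turn back into reusable passwords. The iteration count shown is illustrative.

```python
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative work factor; real deployments tune this


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using a salted, slow key-derivation function."""
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)


# Example: the plaintext never needs to be stored anywhere.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("guess123", salt, digest)
```

Note that this only protects the stored copy; a password phished directly from a person, as in the attacks the speaker describes, bypasses it entirely, which is where the two-factor authentication discussed later comes in.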
What are the targets? Individuals, businesses, and government, and they face very different types of attacks from different players. If any of you want to see an interesting interview, President Obama was interviewed about a month before he left office and asked about the biggest worries he saw for the nation. The number one issue was cyber attacks. It wasn't conventional warfare and battles with some other country; it was cyber attacks that would affect the infrastructure of the United States: the stock exchange, energy, water, and so on. So as much as we all feel the impact of individual cyber attacks, think about what it would be on a national basis. It's huge.

As if all that weren't bad enough, there are two key trends to overlay on top of this. The first is the number of connected devices: wearable devices, connected homes, connected cars, and so on. That number is going to increase threefold in the next ten years, from about 25 billion connected devices in the world today to about 75 billion. They provide a whole range of great services, so again the answer isn't to stop all the connected devices; it's to figure out what you do when every single connected device is a potential breach point. The second trend is remote working. Remote working is creating lots of nightmares for big companies because of the lack of security when people work in remote locations and access company information. And by the way, for those of you who would say we're going back to the old environment of being in person: every study done recently says that's not true. There's a McKinsey Global Institute study that surveyed CEOs on how many of their employees want to go back in person; they assumed 60 to 70 percent wanted to go back four days a week, and the number was the inverse, maybe 25 to 30 percent actually wanting to go back fully in person. So in general people want to work remotely, and that is going to increase the challenge.

So what are the costs of all this? There are huge financial costs: loss of money from attacks, and loss of productivity, because when companies are trying to deal with these issues they're not developing products or serving their customers. You have reputational harm: think of some of the big organizations, the Social Security Administration, even Target stores, that now face the question, "Wait a minute, you didn't protect my information, I don't trust you," and in a corporate environment that trust is key to future business and to more information being shared, so reputational harm is big. Legal liability is large: Target, and I think it was four or five years ago now when they had their breach of all the credit card information, faced a class action suit saying they didn't manage data well, and they're settling it all now, at huge cost and, again, with reputational impact. Then there's business continuity, and finally national security, as I mentioned with the Obama interview.
Who owns the problem? Again, because of all these different types of attacks: individuals own it; there's an argument that says caveat emptor, individuals need to take the right actions to protect their own information, so that's one argument, one player. Businesses own it: if they're viewed as being irresponsible here, they're going to have a huge problem. Industries: you can argue that an industry gets tarnished if one player does something wrong, so industries may have a significant interest. And then regulators and, ultimately, governments and elected officials.

So what are some of the solutions? There are a bunch of technology-based solutions that are very interesting, using fire to fight fire. Biometrics and two-factor authentication: the ability to recognize someone via image recognition or a thumbprint on a device, or two-factor authentication to protect your identity and your interactions. By the way, many of these raise their own data-security concerns, especially image recognition: a very effective way to identify somebody in most cases, yet it creates a separate problem if it's not administered and developed in the right way. Artificial intelligence: I'm interviewing Roger McNamee later this week, which is going to be a fascinating discussion because he has some very clear views about how Facebook has fouled up, and one of the articles that just came out this week says that while Facebook claims it can use artificial intelligence to identify misinformation and disinformation, several of their own engineers have come out and said it doesn't work, it's not effective enough. Yes it sounds good, yes it's the future, but no, it doesn't work right now. An interesting dilemma, but if you could get it to work, you could fight fire with fire. Behavioral analytics, down to an individual: if you could understand regular patterns of activity, what somebody buys and when they access the internet, you could use that information to say, "This is unusual; this looks like somebody else accessing the account." Again, there are some data privacy issues there, but it's also being used to help eliminate cyber attacks. Blockchain: digital registries that keep track of transactions and ownership of content, viewed as more secure than many alternatives; that could be a solution. And then zero-trust models overall: designing your networks assuming you can't trust anybody. That may be a potential solution.
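As a concrete illustration of the two-factor authentication item above, here is a minimal sketch of how a time-based one-time password (the six-digit code in an authenticator app, per RFC 6238) can be computed with the Python standard library. This is an illustrative toy, not any particular vendor's product, and the shared secret shown is a made-up example value.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HMAC over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Both sides hold the same secret; a phished password alone is no longer enough.
SECRET = "JBSWY3DPEHPK3PXP"   # example base32 secret, not a real credential
print(totp(SECRET))           # e.g. '492039'; the value changes every 30 seconds
```

The server verifies by computing the same code, usually allowing a step of clock drift, so a stolen or phished password on its own no longer grants access.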
So what are some of the considerations? This is my last slide, and I hope it will engender an interesting conversation. First of all, what are the roles and responsibilities here? How much of the solution lies with individuals, how much of it actually lies with businesses, how much of it lies with government? At my ripe old age I've realized the world is a very complex place; there are no easy answers on a lot of this stuff, and the challenging issue is to look at what the trade-offs are and how you strike the best balance. One of the questions I'm going to pose to Roger McNamee, even though I agree with many of the things he's saying, is about Facebook. We tend to have a U.S.-centric view of it, but 80 to 90 percent of their users are in developing markets where the willingness to pay is very low. They're offering a free model, and for a long time they were helping democracy in many places, the Arab Spring as an example. So simply saying "I want a paid model and I'm okay with fewer users" is not necessarily a good outcome. You want to look at the roles of the different players. When I was in the State Department, the interesting question was always the role of government versus a multi-stakeholder model, which is government, industry, and civil society, versus individuals. We may say government is great, but remember not all governments look like our government; they may not be democratic. So when you allow a government that's not democratic to access information, you need to ask what they are accessing that information for. Not to judge one thing as good or bad, but to think about the trade-offs and where these solutions work and where they don't.

Can businesses self-police? I love to challenge my students on this. It is absolutely easy to say "bring on the regulation." Mark Zuckerberg is saying this right now, and he's very shrewd: he says bring on the regulation because he knows we're in a politically divided environment and no substantive decision is going to be made on areas of cybersecurity, so "bring on the regulation" is a complete punt. If you can self-regulate, you've got a much better outcome: can businesses self-regulate and eliminate the chance of unintended consequences of regulation? Is there enough capital going into innovation? Hopefully the answer is yes; if it is, you have a better chance of good solutions. Are regulations and penalties necessary? This is where Andrew and I are going to have a great conversation. Is it necessary, and if you say yes, what are the unintended consequences? Roger McNamee just said he believes there should be an FDA for all technology products that approves every product, and I just about fell off my chair: when you think of all the iterations and the number of products technology companies develop, and that all of them would need to be reviewed the way a vaccine is reviewed, think about what would happen to innovation. So thinking about these trade-offs, to me, is a huge one. And the final and ultimate one, and Lucy said this at the very beginning: how do you balance all of this? How do you get the best of technology, which has mostly changed people's lives for the better, and still manage the bad stuff, which can create bigger bad stuff? Balancing all of that together, to me, is the holy grail. So let me stop there and turn it over to Andrew. Lucy, thank you for bringing us together, and I look forward to a good conversation.

Thanks, Terry, that was a great presentation. I'm going to focus on one little aspect of the "who owns the problem" question and ask what regulators, what regulations, what governments are even supposed to do about this. First I'll ask what they do, and then try to think through what can be done: what's the role of law here? The first thing to think about is that when we think about regulating information issues, people tend to lump privacy, data security, and cybersecurity all together, and the law itself tends to lump a lot of these things together.
A lot of the law is based on this idea called the Fair Information Practice Principles, from 1973, adopted by the OECD in 1980, which basically became all of our data-everything law. In there are issues of transparency, issues of data minimization, which is one way to deal with a data security problem before it becomes a problem, and then security issues, so it all gets lumped together. The first thing, just as a matter of terms: when people talk about cybersecurity, it's often more the infrastructural denial-of-service attacks and things like that, while data security is often about whether your data escaped, whether you got hacked. I know more about the latter, but the regulations tend to be the same, and I think what I'm going to say applies to both.

So here's the real challenge: everyone gets hacked, we have data breaches all the time, and we can't get at the people doing the breaches. Often, particularly in the U.S., the people who are actually stealing data or running the phishing attacks are not in the U.S., so they're not necessarily subject to U.S. law, they're hard to find, you have no idea who they are, so how on earth do you go after them? That's the problem we're starting with: everyone's getting hacked, lots of things are going down, and we can't actually deal with the person responsible. The reason we're asking what to do with businesses is that we're faced with this moral quandary of how much blame to lay at the feet of a business that in some sense was a victim: they're the ones that get hacked, get taken down, lose people money as a result, lose data. What exactly is their fault? That's the real challenge in regulating data security and, to a degree, cybersecurity.

The way we do this is kind of a hodgepodge. We don't have any sort of overarching regulation, but there are a whole bunch of sources of this kind of regulation. Consumer protection law is one; in the U.S. especially, that's where data is protected, and the Federal Trade Commission is the primary regulator of data security and privacy in the U.S. State attorneys general also work with consumer protection law and can go after companies for letting themselves be breached; I'll talk about that more in a minute. There are some federal sectoral regulations in healthcare and banking, very specific sectors where we expect incredibly sensitive data and have much more targeted laws. There are data breach notification laws; most states have them, and in Europe the GDPR has data breach notification requirements, so that's one of the ways they approach this. It's actually not that interesting legally, if the laws tell you that you have to notify after a breach within 24 or 72 hours and what size font you need on the form, but it turns out to be something a lot of businesses care about, because it's really expensive and hurts the reputation of the company, so these laws are really important. There are a lot of private ordering agreements, where contract law comes into play; I'll talk about that more in a second. And then there's a law called the CFAA, the Computer Fraud and Abuse Act.
It actually regulates hacking; it's the one law that goes after the hackers themselves. Again, I'm not really going to talk about that much, because it turns out it's also used to go after security researchers who are claimed to be hackers, and that's what's in the news more, and that's largely what the Supreme Court case last year was about. Anyway, the point is there's a huge distribution of ways we regulate this, so what do you make of all that? What is the overall approach? I would say the overall approach, as a result, goes back to this point that we can't necessarily always blame the business, so we need to figure out how and when we're allowed to blame businesses. That gets to two ideas. One is reasonableness. On the first day of my torts class I tend to put up this slide, and I wish I had the meme up here, I didn't prepare enough in advance, but there's a meme some of you might have seen of a building falling down and being propped up; the building is labeled "the entire American legal system" and down at the bottom is the word "reasonable." It's true, though: the entire American legal system rests really, really hard on the word "reasonable," and this is no different. So first we want to ask what is reasonable, what is reasonable in protecting people against hacks, against data security breaches, against cybersecurity breaches. The other idea is a compliance framework: maybe it's too fact-intensive to say whether you were being reasonable in this particular instance, maybe we just don't have a way to evaluate that, so instead we're going to say here are the practices you should be following, and if you're in compliance with them, fine. Those are really the two overall approaches, and they get mirrored throughout a whole bunch of different types of law.

So let me tell you about some of those specific areas of law. The biggest one, and when we talk about innovation, hey, the government innovates: for the last 15 or so years the Federal Trade Commission has been using its authority to police unfair and deceptive practices to go after companies that don't do anything on data security. So they've become a de facto privacy and security regulator; we don't have one in the U.S.
In Europe and in a lot of the rest of the world there are data protection authorities charged specifically with doing this kind of work; in the U.S. we do not have that, so the FTC has taken on that role. Now, the FTC gets to do this because its remit is to regulate unfair and deceptive practices that affect commerce, which is basically as broad as it sounds. What they've taken that to mean is that these data security practices, when you come in, do nothing, and then get hacked, are unfair to the consumers who gave you their data on the assumption that you would do something to protect it: in this day and age we know this happens, so you've got to do something. Their approach, and I'll get technical for a second, is shaped by the fact that they lack really good rulemaking authority, so they use enforcement: they bring cases against the worst of the worst actors. One famous case got litigated, but actually, I'll say first that most of the cases settle. They settle, and there's a consent decree, an agreement under which the FTC retains jurisdiction for 20 years; the company makes reports for anywhere between two and five years, implements a security program, appoints a data protection officer. There are all these pretty standard things that go into an FTC settlement agreement, and the company doesn't admit liability. So there's this thing that keeps happening: the FTC lays out what the company did wrong, the company admits to none of it, but over time we get an idea of what the FTC expects you to do, and some people have called this a quasi-common law, which makes sense. At the same time, these are the things you don't do, the worst of the worst. The FTC also has a guideline document that says here's what you should do, please pay attention to this, and then nobody does. Anyway, some of these cases get litigated, rarely. There's one called FTC v. Wyndham, where Wyndham Hotels challenged the FTC's authority to do data security regulation at all, because, again, no one said they could do it; the FTC just said it's unfair to consumers and we get to say what's unfair to consumers. So Wyndham came in and said, hey, we had no notice this was unfair, this is beyond your remit, what are you doing, you can't possibly hold this against us. And the Third Circuit came back and said, what the hell are you doing in court? You got hacked three times, you didn't have firewalls, your passwords were "one two three four"; not literally, I don't actually remember what they were, but basically you just didn't change your passwords. And that's the thing: the FTC is weird here. They've innovated a lot on this, but they've also not gone very far out on a limb; they'll only go after the worst of the worst actors because they don't want to lose cases. So what we have from these cases is a great description of things you absolutely must do, bars you should clear that are somewhere below the floor, plus a guidance document, and for everything in the middle the message to businesses is: go figure it out, go do what's reasonable. Their baseline is reasonableness; that's their whole idea. Then there was another case that came out a couple of years ago called LabMD, where LabMD said, I don't know what's reasonable, how the hell am I supposed to know what to do? It's a weird case; they also challenged the FTC's authority.
The judge kind of dodged that issue but came back and said that by not telling the business how to improve its security, these broad remedial orders effectively amount to micromanaging the business. Anyway, nothing seems to have changed after that opinion, and nobody really knows what its effect is. It was 2018, the FTC seems to be proceeding on its course, but these are all recent developments, so we don't exactly know where this is going.

So that's one way. Reasonableness also comes out in negligence lawsuits; reasonableness and negligence are basically synonyms. These suits, like the Target class actions where people are potentially settling for billions of dollars, are essentially negligence suits, and you can claim negligence with any kind of injury, financial injury usually being the big one. So negligence suits are another way the law tries to regulate these things. Individual suits face difficult problems, though, and this is where we run into the Supreme Court's doctrine of standing. For those of you who are not law students and may not know what standing is, it essentially means you have to have what is referred to as a concrete and particularized injury in order to be heard, at least in federal court; states have similar requirements, though not identical ones. What is a concrete and particularized injury? The Supreme Court came back last year with another case that tried to tell us a little more. It was a case where the Fair Credit Reporting Act made it a violation to have an incorrect piece of information about you in a credit report and not be able to fix it. The plaintiffs in that case, a class action, had credit reports claiming they were terrorists, and they said, that's not okay, please fix it, and the company essentially laughed at them. The Supreme Court came back and said that unless the credit report saying you were a terrorist was given to a third party, you don't have standing, because even though Congress said you have standing to the tune of 1,500 dollars of damages for every violation, that's not a concrete enough injury. Now, how does this bear on data security? It turns out that if people are hacked, it's not clear whether your data going to someone else is necessarily an injury. If your Social Security number is out there in the world, well, first of all it's probably already out there, so one more time, I don't know, but if it's out there and you don't get your identity stolen, have you suffered an injury? It turns out the circuit courts are split on this. Some say the increased risk of harm is itself a legally cognizable injury: you have to protect yourself, you have to get credit monitoring services and things like that, you have to spend money on it. Other cases have said no, this is just speculative, just a risk of harm. This is an issue that's teed up for the future with the case that just came down, and nobody really knows what's going on with individual lawsuits.

Let's see. The last substantive thing I want to talk about are these private ordering schemes, and this goes to the question of whether industries can self-police. There are lots of individual private ordering schemes out there.
The payment card industry has the PCI Data Security Standard, which is enforced by the collection of major credit card companies, and ultimately every contract that a payment system uses and adopts has language opting in to the PCI data security scheme. So it's not legally enforceable from without, but the industry has created a situation in which every contract says you must abide by the existing standard. It uses law to help an industry self-police, which is a really interesting, different kind of model that gives industries much more agency over it. There are also vendor management frameworks: standardized ideas that say if you hire a vendor, you create clauses in your contracts that make them liable for any breaches they cause. When you actually get employed as a lawyer doing privacy and data security work, vendor management is a lot of what you do, because it's such an important part of how this gets regulated: through private law. Private law is another way of addressing some of these issues.

I guess the ultimate question is this: given that what's needed is so different in each data security and cybersecurity context, and we need flexibility to figure out what standards should apply and how we even think about the fault of the intermediate hacked player, we need to think about all the different ways law might support business in essentially self-policing part of it. I haven't even gotten to the political reality that we can't pass a lot of laws, so maybe we just need to come up with other ways to allow self-policing. The big question, at least for me, is whether the market incentives are there. A lot of this discussion of reasonableness, of what the content of reasonableness is, comes from existing industry standards; the law, even when it does do enforcement, draws on existing industry standards. And yet hacks are happening all the time; there are a million data breaches every day. So if market incentives can get this to work, if we don't need outside regulation, why haven't we done it yet? I don't know the answer. It was said for a while that people didn't take data security seriously until it became a C-suite problem, and now it is, but how much can a business just get away with not caring? I don't know. So I think these are a lot of open questions: what form of legal regulation can this take, law versus markets, what can work, and can reputational harm even do anything if everybody's getting breached, does it really hurt that much? I don't know. I think the questions you're going to end up having to answer are related to these, so I think that's a good time to leave off. Thank you, everyone.

So for the next 15 or 20 minutes we'd like to open this up to you, too, for questions for the two professors. Let me invite you to do that, and I'll start with the first question while you're thinking.
The question I have, after listening to both of you and this wonderful controversy over who owns the problem and who should be responsible for resolving it, is this: what if we moved away from the concept of negligence and moved to strict liability, as an experiment? Say for the next five years or the next decade, any retailer who is the subject of a breach, and of course you could put some qualifications around that, is strictly liable. There's no more controversy, you don't have to go to court, there are no defenses. Would that create the incentives that businesses need, so that, per Professor Kramer's view, businesses might be the better place to find a solution than government? Or would it not?

My initial reaction is that it's a blunt instrument. You've got to be able to have a business see the impact on its consumers, and you have to have a business feel as if there's a reasonable effort they can take to solve the problem. In this situation, in cybersecurity, the challenges, the risks, the cleverness of the hackers are getting pretty notable, much greater, I would submit, than the capabilities of companies right now. So there's got to be a reasonableness range, to use your term again, for what a company can do and what actions it takes. When I was in the State Department we always talked about the unintended consequences of policy, because policy in the abstract always seems great, but you have to ask how the chess game plays out. If companies say, listen, based on that liability I'm not going to get into this business, I'm not going to provide this service, I'm going to raise pricing to accommodate all this, then you've harmed the consumer, in essence. So I worry about the blunt instrument. I would rather go back to looking for naughty behavior by business people. I remember when we used to sign contracts, we'd always say, if in doubt, add the terms "willful misconduct" or "gross negligence"; those are the two most basic things I would remember as a business person. If that's happened, you've got a legitimate case. That would be my initial reaction to the scenario you posed.

Yeah, I actually mostly agree. I mean, random executions are also something: if you could decrease the probability that you get randomly executed, it would create an incentive not to do certain things. (Oh, I'm in favor of those, too.) Right, but fundamentally I feel like strict liability for data breaches amounts, to a degree, to a probabilistic execution: the kind of damages we're talking about could make or break companies, and if they can't do anything about it, then it's so bad that you kind of have to give in. It doesn't create an incentive to try to do anything about it; you just recognize that there's a five percent chance you're simply not going to exist next year, and you have to live in the shadow of that. Maybe it increases cyber insurance and makes that a much more applicable thing.

Has any company, any significant company, been broken by these kinds of lawsuits? I don't think so, because there's cyber insurance.

Cyber insurance is not, as far as I understand, very easy to get, and I don't know, I might be out of my depth on that one; you can tell us more about cyber insurance.

Well, you historically have been able to get it, so I don't know if that's changed.

All right.
Yeah, opening it up. My question is based on recent experience. I worked for a law firm over the summer that's in the tech industry, and of course we had training about how their first, second, third, and fourth concerns were phishing attacks: you get an email that says "hi, it's me, your boss, what's your password?" and maybe you should take a second look at that email. I think we also see every day that a pretty visible source of cyber concern is misinformation by hostile state actors. So this leads to the question: is our problem really technical barriers or vulnerable machines, or is it more exploitable people, and are those two problems actually that closely connected?

I think if you talk to security professionals, people are always the answer; they're almost always the weakest link. There's only so much training can do, and ultimately social attacks are just much more common and harder to deal with. I think defenses have probably gotten better over time as general awareness has increased, but one half-hour cybersecurity training every year maybe does something; it doesn't do enough.

No, I agree, and companies have a responsibility to know that most consumers are not well educated on a variety of these issues, so they have to assume that people aren't trained to that level, and to test and follow up on it. So I think there's a fair amount that companies do own, and would be willing to own, on that.
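The "hi, it's me, your boss" example is exactly what automated mail filtering tries to catch before a person ever has to judge it. Here is a minimal, hypothetical sketch of the kind of heuristics involved, assuming a made-up internal domain (example-firm.com); real filters are far more sophisticated, so treat this as an illustration of the idea rather than a usable defence.

```python
import re
from email.message import EmailMessage

TRUSTED_DOMAIN = "example-firm.com"   # hypothetical internal domain
SUSPICIOUS_PHRASES = ("password", "wire transfer", "gift card", "urgent")


def phishing_indicators(msg: EmailMessage) -> list[str]:
    """Return rough red flags of the kind awareness training teaches people to spot."""
    flags = []
    sender = msg.get("From", "")
    match = re.search(r"<([^>]+)>", sender)
    addr = (match.group(1) if match else sender).strip().lower()
    if not addr.endswith("@" + TRUSTED_DOMAIN):
        flags.append(f"external sender address: {addr}")
    reply_to = msg.get("Reply-To")
    if reply_to and reply_to.strip().lower() not in sender.lower():
        flags.append("reply-to differs from the visible sender")
    body = msg.get_content() if not msg.is_multipart() else ""
    if any(phrase in body.lower() for phrase in SUSPICIOUS_PHRASES):
        flags.append("asks for credentials or payment")
    return flags


# Example: a spoofed "boss" email sent from a look-alike outside domain.
msg = EmailMessage()
msg["From"] = "The Boss <boss@examp1e-firm.net>"
msg["Subject"] = "quick favor"
msg.set_content("Hi, it's me, your boss. What's your password? It's urgent.")
print(phishing_indicators(msg))
```

As the panel notes, tooling like this only narrows the problem; the simulated phishing campaigns a later questioner describes their IT department running are the complementary, human-testing side of the same defence.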
Hi, thanks so much. My question is about complacency. It seems like the alarm bells are getting louder and louder from leaders in business and government about what we need to do now about cybersecurity concerns. At the same time, as you said, everyone's been hacked, and there are all these notices on websites that, to be perfectly honest, I'm just used to ignoring at this point; the UC attack, I kind of ignored those emails for a while, it's so common. So is that going to affect what the responses are, if that sort of mentality becomes pervasive in our society? Do we need a huge, awful event to happen before people take this more seriously, or maybe I'm just lazy about that kind of thing?

You know, it's interesting, maybe it's glass half full or half empty, but I actually think people are more concerned and more aware now on a variety of these issues than they ever were. If you look at social media as an example, the concern about your information and what's getting shared, I think people are pretty worked up about that. With what happened with Facebook and the Russian interference in the elections, and what happened on January 6th, I think people are pretty aware. I just think the challenges are getting greater, but I don't think people are complacent. I think companies are more aware: I sit on a variety of boards, and the concerns and the liability around cybersecurity issues are now a boardroom discussion. It's just that the number of incidents and attacks is getting greater, and that's why I'm a bit more excited about the technology-based solutions that can be scaled and used to understand all the nuance at lower cost. I think that ultimately is going to be the best protection, even more than training customers and employees.

Yeah, I mostly agree, though I could see it going in either direction. I think that's right, that right now there's a lot more awareness and concern than there used to be. In industry you often have companies moving in lockstep, not wanting to be the first out there, not wanting to stick their neck out, but that goes in both directions: if nobody's doing anything in an industry, there's cover to not do anything, but if certain people are out there really concerned, then your industry potentially moves along with them, and at least the trend seems to be that people are more concerned. But if companies are not paying more financial penalties over time as these hacks happen and increase, it's not clear to me that that trend will necessarily continue. So I don't know.

I have a bit of a follow-up on Adam's question. Our firm similarly had the trainings on that, but something our IT group did was randomly send out phishing emails to us, to test us basically, and it kind of kept us sharp, so maybe that speaks to the people-as-weakest-link aspect of it. I'm curious about two different questions, one maybe more for Professor Kramer and one more for Professor Selbst. It seems like there's been a shift away from minimum viable product testing lately; we've been trying to get things to market as fast as possible, and because of that I feel like we could be doing some of this testing early on, and perhaps that would be a way to use tech to create this self-regulating aspect. But I'm also curious, at the later end of things, even if it doesn't require FDA-type disclosure, couldn't there be some sort of regulatory solution at the later stages that runs these tests, with people who are actually trained to do that, kind of the way my firm's IT department did? So I guess the first part maybe on the minimum viable product, and then the second part to Professor Selbst.

I'm sure the speed of product releases is part of the problem; I wouldn't rate it as the number one issue. I don't think people say, "I'm rushing this through, so I'm going to skip over the cybersecurity risk." I think most innovators don't know what they don't know, because the types of attacks that are happening are changing so rapidly, with new means; look at some of the most quote-unquote conservative organizations, like our U.S. government, and they're getting breached as well. So I don't know if it's the speed of things. And just to be provocative for a second, I worry that some of these actions again create negative outcomes. Let me give an example, not cybersecurity, but general data and technology. Look at autonomous vehicles. I always remember we had Raj Kapoor, the chief strategy officer at Lyft, speak a year or so ago, and he made the case for autonomous vehicles with ride sharing: the average person only uses their vehicle four percent of the time, spends 750 dollars a month to support that car, and cars in the U.S. cost, in aggregate, 40,000 deaths a year, almost all of them human-related. So to the extent that you say, hey, listen, I want to make sure autonomous vehicles are picture perfect, it's very noble and very exciting, but every day and every week and every month that you're waiting,
40,000 people a year die. So I think it's a bit dangerous to slow the wheels down too much, excuse the pun, because you're creating, again, the loss of the benefit of the innovation. I would go back to whether you're seeing something that's egregious in nature. If I look at tech companies as an example, we tend to lump them all together, and I think that's what's so unfair; it's a little bit of bias in its own form, because we assume they're all kind of evil and they all do this and that. To me there's a spectrum, with some players much more societally focused than others. Candidly, I put Facebook at the worst end. They're trying to do things, but they've had multiple warnings. It's like an employee who isn't performing well: you've had them on a performance plan, and then you look the other way and keep them on, and you argue your way into never doing anything. That's been Facebook: they've had Cambridge Analytica, they've had the Russian interference, they've had January 6, and so on. Then you have a bunch of companies that are really trying to address issues. Marc Benioff at Salesforce is all about the North Star, and that flows into all the product design for what they're doing with Einstein and their use of AI; to me they're a very, very good company. Even Google, which gets a lot of criticism: I had Susan Wojcicki, the CEO of YouTube, speak, and she talked about content moderation on their platform. I was very impressed; she went through the criteria they use, the people in the organization who own it, the statistics of what's been taken down, and so on. So this is why I worry a little bit about this attitude of "let's slow things down, everybody's been irresponsible." I worry that creates more net negative and doesn't acknowledge some of the good acts we're seeing now.

So here we're finally going to get to some disagreement. Okay, the first thing I'll say on security: there's no perfect security. That's just a baseline; it's a defensive strategy, so no matter what, you're going to get hit. A lot of the hacks we've seen of, say, Apple recently are zero-day attacks: they're doing their best, and still you're going to find things, which we could argue they should have caught, but they don't, and the question is what they do to respond, how quickly they patch it, and things like that. So there's a question of how quickly you move things and whether you test beforehand, but what might look like shipping things too early might also just be that hackers have gotten very good at this at the same time. And, I'm going to jump on a little soapbox here, we don't spend enough time talking about whether stuff works, just works in a basic sense. We're writing a paper right now with some computer scientists about AI and ML, and a lot of people are talking about bias and discrimination, and literally, if you'll excuse me, "this [bleep] doesn't work" is the title of the paper at the moment, because there's a very basic question of whether this stuff is functional, and security has to be part of the conversation about what it means for something to be functional. So, I don't know, one way to deal with that is actually to move to a strict liability framework, maybe. But if we're thinking about what negligence is, or I guess I should say,
in law, we're used to thinking about whether something is functional in product liability regimes. So for something like an autonomous vehicle we could potentially talk about that, but in a case where we are not actually suing in product liability, the idea that a judge will go under the hood and ask what you designed for, what you tested for, and things like that, is really hard, because lawyers are afraid of tech. They think these tech wizards know what they're doing; they see a device that's a black box; they don't want to look under the hood; they don't think they have the ability to do so. In a lot of ways product liability is the one area where we ask about the design, but in a lot of other areas we don't ask what kind of testing you did, whether it was enough, what the standards are. That's the approach we have, and we need more of the basic question of whether you tested enough.

These should be the last three questions, so we can try to get through them.

Hi, it's kind of a follow-up to that. California tried to go at this on the law side a little bit, because with the California Consumer Privacy Act they tried to come up with, like you said, a framework of here's my data, here's what you can do with it, here's what you can't do with it, here's how long you keep it, things like that. California a lot of times leads: do we see that going federal, do we see that going to other states? It's kind of GDPR-light, from what I understand, and it kind of falls right in the middle between both of you, so I'd love to hear your thoughts.

Yeah, go ahead.

Over the last two years or so a bunch of states have passed various data privacy laws. California passed two of them, somehow, in the span of two years, and then Washington, Virginia, and Colorado have all passed these laws, and they're all slightly different. Washington looks most like the GDPR, because Microsoft already did its best GDPR implementation and wants that to be the regulation, and other states have had other approaches. The more individual state laws there are, the more businesses are going to push for federal regulation that preempts state law and makes it all null. Preemption is the big fight: the degree to which federal law overrides and nullifies individual state laws is going to be the big fight over this question, and for public interest advocates, for civil society, that's a sticking point; to the extent they have any pull at all, progressives are fighting against this preemption idea. And sorry, if there's no preemption, it's possible we don't get a federal law at all, because businesses don't want just another kind of regulation. One possibility: in environmental law they actually grandfathered California and preempted the rest when they passed the federal environmental laws, and maybe there's something like that here, but it seems unlikely. So the future's uncertain, but I would say it's more likely than ever that we get some sort of federal law; what it looks like, and whether it actually happens given that we can't get any federal law passed right now, I don't know.

So, Terry, I'm going to stop you for a moment, because I want to get these other questions in. Sure.

Right, so as everyone was asking questions I came up with five more, but I'll stick to the one I had when I came up.
Within a technology-based company, especially one looking to target cybersecurity as one of its branches, there are a lot of different things they need to follow, which go through lawyers, tech specialists, subject matter experts, and so on. And of course the government is doing its fair share to combat hackers and the other breaches companies are facing. So how much of this growth in cybersecurity, especially over the last five to ten years and into the future, do you feel is going to be driven by the government versus by customers and consumers of technology?

I believe it's going to be driven more by customers and the people who are adversely affected, because at the end of the day, if people feel their data is at risk, they will vote with their feet: they'll leave and they'll stop using products and services. I think that's going to be the biggest driver of concern out there. The related question you're asking is what companies should be doing in this area, and to me, when you look at the most impressive companies, their core product design teams are very much focused on how the product should work and what the unintended consequences are. You see this a lot with bias and AI: they're looking at how the product actually works in practice, they're looking at target user groups, and so on. I think there's going to be more of that, and that is going to drive the future. And to Andrew's point, though I won't speak for him, there's a certain amount of limitation to what legal and regulatory action can do, and at some point you have to assume companies are going to get the message and take action. Thank you.

Final question. Sorry, this is probably a bit of a philosophical question for the last one, but Professor Kramer, you mentioned several times that innovation is always good, yet we've been on a long-term trend of rising inequality, so I wonder how you'd reconcile that, for example with autonomous vehicles and cars. Yes, cars create a lot of accidents, and autonomous vehicles address that, but we had solutions: public transportation, which obviously has had its issues, Amtrak and so on, but that would have been a way to have fewer people in charge of vehicles and probably mitigate a lot of those accidents. Instead, and this is very famous in LA, business came in and got rid of the rail cars. So innovation doesn't always mean progress, it would seem; how do you reconcile that? And to go to the law aspect, it seems like a lot of tech companies, your Airbnbs and your Ubers, basically create businesses that try to leapfrog over regulation with tech in order to circumvent existing consumer protections. So how do we get to a point where people are being protected and innovation is actually progressing?

Great question, and it is a philosophical one, so I'm going to give you my own view. First of all, don't take away from this evening that I'm saying innovation is all good. I'm saying innovation does a lot of good things, and in aggregate does a lot of great things, but there are a bunch of negative externalities. My message is that in most cases technology is going to be able to drive costs down; it automates and gets rid of jobs, and that's partly how it drives costs down. It often will improve outcomes when you think about the use of data and judgment on data.
Think about diagnostic tools in health care, being able to detect early onset of Alzheimer's, being able to understand that somebody is likely to have cardiovascular disease; you need a fair amount of data in a technology environment to really drive better outcomes there. So I'm saying that in aggregate it creates good things, but not in every case. Then this gets to the issue of what you do with the inequalities, because most of these technology-based solutions are job killers. I tell my students, when I look at their team projects, that eighty percent of their projects are job killers; it's basically how they get to the ROI that would allow a customer to buy whatever it is they're offering. But to say, okay, don't advance the technology, don't advance the solution, to me is throwing the baby out with the bathwater. The next question is what you do to ensure the inequity isn't out of control, and that gets into the job of a business when there are job losses: how much retraining do they do? When you look at autonomous vehicles, take truck drivers; a fascinating stat, by the way: there are four million truck drivers in the U.S., and when you look at the percentage of the population involved in truck driving, in Joplin, Missouri the number is ten times higher than it is in Los Angeles. So the net job loss is going to be in Joplin, Missouri; it's not going to be here. What is the obligation of businesses to create jobs in those areas, to retrain people, and ultimately, what do you do about income disparity? If everything works in the future model, what you should have is a society with lower costs of health care, which is one of the biggest expenses, and lower costs of transportation; the cost of living our lives should go down, and our income in aggregate is probably going to go down too, again because of automation, and the question is how you create equity in compensation. Now this gets more political and philosophical: I personally believe we're going to have to get to universal basic income or some solution like that. So my aggregate answer is: don't stop the technology, it's going to do some great things, but you had better be ready to step up to manage what are going to be the inequalities, including redistribution of wealth, including hiring people, and so on. That would be the final, most important message I could leave you with. It's so easy to take the absolutist position again: stop the technology. We have a bad health care system, we don't have great outcomes, we have a lot of people who don't have insurance; don't stop the technology solutions, and in the process figure out how to deal with the negative externalities. Thank you.

I'll give you one minute. Yeah, just one quick thing. The way you moved from asking a question about autonomous vehicles to a question about transportation and inequities: the answer, to bring it full circle to what Terry started with, is what's the problem and how are you going to solve it. The problem framing is the answer. Widening the lens, thinking about context, how you frame problems, determines what solutions you end up with. And I think the battle for equity, the battle for justice, is never-ending; it's a battle you either commit to or don't, you fight it forever, and it's constantly a question of trying to get the most good, trying to get equitable outcomes, and often it involves asking different questions than the ones being asked.
Just the way you posed the question, moving from a question about autonomous vehicles to a broader "we need to solve the transportation system in a way that advantages all people," that's a great move, so I would say keep doing that: reframe problems, ask different questions.

Thank you both. We have some questions. One person was wondering whether we can bring the pieces of regulation under a single umbrella, in other words have one single regulatory body owning the problem end to end, such as the SEC. Professor Selbst?

I don't think so, and the reason is context. I'm looking over here, but I guess I should look at the camera. The answer is context: every sector is so different; the kinds of problems, the kinds of threats they face, the kinds of data they work with, the sizes of the organizations create very different approaches, what they can afford to spend, how much data they need to guard. You could have a combination of an overall...