CSINT Conversations: Stopping online abuse of children - Could Apple have the answer?


Thank you all for being here for today's event, where we're going to be discussing Apple's proposed CSAM policy and a bunch of other fun questions related to that new tech. So I'm going to go ahead and introduce our wonderful panel today, starting with Laura Draper. She is a Senior Project Director at the Washington College of Law's Tech, Law and Security Program, where she manages research focused on identifying the tools, best practices, and legal and policy options available to tech companies and law enforcement as they combat online child sexual exploitation. Prior to joining the College of Law she served as an Assistant General Counsel with the Federal Bureau of Investigation and as a judicial clerk in the Southern District of New York.

She also previously worked at the Council of State Governments Justice Center, where she focused on law enforcement matters. She earned her BA from Case Western Reserve University, an MS from the University of Pennsylvania, an MPhil from Cambridge University, and a JD from NYU School of Law. We also have with us today John Verdi, who is the Senior Vice President of Policy at the Future of Privacy Forum, where he supervises the organization's extensive policy portfolio on a broad range of issues, including: Artificial Intelligence & Machine Learning; Algorithmic Decision-Making; Ethics; the Internet of Things; De-Identification; Drones; and many more.

John previously served as Director of Privacy  Initiatives at the National Telecommunications   and Information Administration, where he crafted  policy recommendations for the US Department of   Commerce and President Obama regarding technology,  trust, and innovation. Prior to this work, he was   also General Counsel for the Electronic Privacy  Information Center also known as EPIC, overseeing   their litigation program. He earned a B.A. in  Philosophy, Politics, and Law from SUNY-Binghamton   and a J.D. from Harvard Law School. And last but certainly not least we   have Dr. Hany Farid who is a Professor  at the University of California, Berkeley  

with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. He previously served as the Dean and Head of School for the UC Berkeley School of Information. Dr. Farid's research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception, and he has consulted for various intelligence agencies, courts, news organizations, and scientific journals seeking to authenticate the validity of images. He's also the recipient of the prestigious Sloan Fellowship and Guggenheim Fellowship for his work in the field, as well as a lifetime fellow of the National Academy of Inventors.

He earned an undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester and a Ph.D. in Computer Science from the University of Pennsylvania. So thank you to all of our wonderful panelists who are here today. As a kind of intro to what we're going to talk about today - in August Apple announced a planned suite of new child safety features, including scanning users' iCloud Photos libraries for Child Sexual Abuse Material, also known as CSAM; a Communication Safety feature to warn children and their parents when receiving or sending sexually explicit photos; and expanded CSAM guidance in Siri and Search. Following their announcement, the features were criticized by a wide range of individuals and organizations, including security and privacy researchers, the Electronic Frontier Foundation, also known as EFF, politicians, various policy groups, and even some Apple employees themselves.

The majority of this criticism was leveled at Apple's planned on-device CSAM detection, claiming that it was dangerous technology that bordered on surveillance and was in itself ineffective at identifying images of child sexual abuse. Apple went ahead with the Communication Safety feature's rollout in Messages, which went live in December of last year, but decided to delay the rollout of the other detection tech. They have stated that they still intend to move forward with their detection technology; they're just taking some time to gather information and think about the feedback from everybody who had comments and concerns. So to start us off, Laura, could you give us a little bit of background information on what it is that we call CSAM? Sure. So, I think of CSAM as actually a subpart of a larger conversation around online child sexual

abuse and exploitation. So to the extent that that is our umbrella term, there are within that three primary patterns, which are neither mutually exclusive nor exhaustive. The first of those is what we traditionally think of when we think about this issue, which is that a child is abused in real life, images are taken of that abuse and uploaded to the internet, and then shared among like-minded individuals. Those images are referred to as CSAM - child sexual abuse material. In the U.S. criminal code, they are also referred to as child pornography. Most people who work in this space prefer the term CSAM because children cannot consent to their own abuse, and so CSAM better captures the gravity of the situation. So that's our first kind of pattern of behavior. The second I would refer to as perceived self-generated content. So that's

children, that is people under the age of 18, who take photos of themselves in sexually compromising or explicit ways - those images are then shared knowingly or unknowingly by the child, and in some instances they can then be the basis of sextortion, which is where the recipient extorts the person who created the images in the first place to produce increasingly explicit images under threat of public release of those images. The third category of online child sexual abuse and exploitation that I think of is internet-facilitated prostitution of children, which follows what we would typically think of as the pattern of prostitution of children, but that is aided by the internet in some way or fashion, through either communication technology or through online advertisements and other things of that sort. So those are really the three primary patterns. Of course, there's overlap. They're not exhaustive. There are things I've not mentioned, but so we can think about both the umbrella and then more narrowly into the CSAM. Great. Thank you so much for that overview. I think obviously we have Hany on the panel today,

who perhaps is our technical expert here with all those computer science degrees, and so maybe you could start us off and get us going on thinking: how effective would Apple's proposed photo checking tech strategy really be, given the current use of this technology to distribute the content, as Laura had mentioned? Good, thank you Divya. So there's… if we acknowledge that we would like to develop and deploy technology to limit the spread of this - so let's put aside the privacy argument and all the arguments that I'm sure we will be having, just put that aside for a minute - there are sort of two problems, if you will, that you could try to address. One is - for every image, every video that is uploaded or sent, try to determine if there's a child in the image or video, if it's sexually explicit, and if the child is underage - that's a really, really hard problem. Even with all of the advances of today of machine learning, of AI, the technology is not there to work with the accuracy and speed needed to analyze literally billions of images and videos a day. So we don't have that artificial intelligence ability to reason about the underlying content the way our human visual system is capable of doing. Now another way to think about this problem - and for this I have to explain one thing that you should understand about CSAM - which Laura did a very nice job of giving us some scaffolding to think about - is that the same content is distributed year after year and decade after decade. So once the content is created, the original exploitation of a child

and that exploitation is recorded - that content lives on, in many cases for decades, and in many cases that image is at some point identified by law enforcement, the National Center for Missing and Exploited Children, the Canadian Centre for Child Protection, Interpol, whoever it is. And so a different way to think about this problem of limiting the spread of CSAM is to say: we have previously identified these pieces of material - today at the National Center for Missing and Exploited Children that number is in the tens of millions - and we are going to limit the spread, the redistribution. So for content that is newly created that we have not previously seen, we will be blind to it, but the reason we think about the problem this way is that technically it is a little bit easier. Because I'm handing you an image or a video and I'm saying

this has been identified as a child underage and we know it is sexually explicit, we know it is categorized as CSAM - we would like to stop the redistribution. And that's the category that we are talking about with the Apple technology and PhotoDNA, which is the technology I worked on with Microsoft many years ago. A couple things I want to say about this. So one is, it is technically feasible. We can scan images with very, very high accuracy and very, very high

efficiency to determine if it is previously identified CSAM, and I think the most important thing to understand here is that when we developed PhotoDNA, which is this type of technology for stopping redistribution - it was never meant to be a law enforcement effort. It was meant - it was a victim-centric technology - because if you talk to the children who are abused, what they will tell you is that was probably the worst day of their life, but what they'll also tell you is that day in and day out, knowing that the images and the videos of their abuse are being shared online is horrific for them. So if we can disrupt that global distribution of past abuses, it is very much in the interest of the children - and in their privacy as well. And so, very briefly, the way the technology works is it extracts from a piece of content, an image or a video, a distinct digital signature, and it tries to match that signature on upload. That's how PhotoDNA works. That's the core of the way Apple did it. Apple

had some nice features that were more privacy preserving - which ironically got them in trouble with the privacy groups, even though it was a more privacy-preserving technology than PhotoDNA. I'm sure we'll talk about that in more detail. And the last thing I'll say about this is that this technology is very similar to other types of cyber security technology.
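The extract-and-match flow described here can be sketched in a few lines of Python. This is a toy illustration only, not PhotoDNA or Apple's NeuralHash: the `signature` and `matches` functions and the Hamming-distance threshold are made up for the example, and real perceptual hashes are far more robust to resizing, cropping, and re-encoding.

```python
# Toy perceptual-hash sketch (illustration only; NOT PhotoDNA or NeuralHash).
# An image is represented as a 2D list of grayscale pixel values (0-255).

def signature(image, grid=8):
    """Downscale to grid x grid by block averaging, then threshold each
    cell against the global mean to produce a 64-bit signature."""
    h, w = len(image), len(image[0])
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * h // grid, (gy + 1) * h // grid)
                     for x in range(gx * w // grid, (gx + 1) * w // grid)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for cell in cells:
        bits = (bits << 1) | (1 if cell >= mean else 0)
    return bits

def matches(sig, known_sigs, max_hamming=4):
    """On upload, compare the signature against previously identified
    material; a small Hamming distance tolerates minor re-encoding."""
    return any(bin(sig ^ known).count("1") <= max_hamming
               for known in known_sigs)
```

The key property is that near-duplicate images produce signatures within a small Hamming distance of each other, so previously identified material can be recognized at upload time even after mild alterations.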

Every time you send an email, every attachment is compared against the signatures of malware, viruses, spam filtering… every time you upload a file, those are scanned against known harmful content. So that mechanism, that scaffolding that Apple is using, that we have used in the past - they already exist in cyber security. We're not really getting very far out of accepted best practices. Now how effective is it? It's a great question. Is it going to stop the abuse of children? No, of course not. Does it protect children? Absolutely. Just to give you an idea: last year alone - in one year - the National Center for Missing and Exploited Children's CyberTipline received 20 million reports of CSAM, the vast majority of which were made with PhotoDNA, this type of technology that we're talking about. That's 20 million images of kids being

abused that did not get distributed. Does that solve the problem in its entirety? Of course not, but nothing is going to solve the problem in its entirety. The way you solve this problem is you chip away at it - and I consider this type of technology the absolute bare minimum that we should be doing to protect children online and to protect their privacy. Thank you so much for that. I will say, before I move on, that I meant to mention

this in the beginning, but if you do have any questions for our panelists, please be sure to write them in the Q&A section and we'll definitely have time at the end of the panel to have our panelists answer your questions. So, building off of what you've said - obviously you've mentioned that privacy advocates have had issues with what Apple's put forth. There was this idea that Apple would integrate privacy features, but of course perhaps not to the liking - or to the level - that privacy advocates would like, and so I'm going to ask now, John: is there really valid concern about the privacy of people's personal sensitive photos with the tech that Apple has proposed? Sure, so let me first say that my views on this matter are probably more aligned with my fellow panelists than some others in the privacy community. And I very much appreciate the history

on PhotoDNA and I appreciate the context here. I'm actually sort of glancing down at my notes, and my colleagues here have made some of the points that I was going to tee up to frame this technology. So thank you so much for doing that. That gives me the fun part, right? Where I can delve into the controversy. So let me say this - the concerns that have been expressed by cyber security experts and privacy experts about Apple's proposed technology are no doubt real. The question in my view, and I hope we can drill down on this, is whether or

not those concerns are greater, equal to, or less  than alternative models. Models that are used by   industry every day today, not by Apple, but by  others in industry - other big players, right?   Communications companies, cloud storage companies,  etc. in order to detect and mitigate this terrible   material. And I think that's a really robust  conversation that I would love to have. So in   order to perhaps tee up some of that conversation,  let me just flag a couple of things. I appreciate   the kind introductions - the one thing that I  have in my notes that we have not covered so far   is the legal status of this material - and I  think it probably goes without saying for my   colleagues up here - but I just want to flag it  for attendees and for anybody who's watching us   on recording - CSAM material is unique when  it comes to moderation and law enforcement   and the desire of platforms to have this material  off of their servers. Unlike other material - even   other unlawful material - CSAM material is in  many jurisdictions a strict liability offense.  

That is, no intent is required in order to prosecute. There are regional differences, but anyone who would say that the legal status of CSAM is equivalent to the legal status of, say, material that infringes copyright does not understand this issue. Right? Intent matters when it comes to copyright infringement. Fair use analysis matters when it comes to copyright infringement. Commercial versus non-commercial use matters when it comes to copyright infringement.

There's lots of different things that matter  there. CSAM material is illegal to possess.   It's illegal to create.  It's illegal to distribute.   It's illegal full stop. And there are lots  of good reasons beyond the legal reasons   for why platforms want this stuff off  their systems. So if you're a platform   you have a few choices. You can do nothing - which  virtually assures that your platform will become   a haven for this unlawful and terrible material.  You can scan content on your platform - which is  

effective to some degree, as my colleagues have described, but necessarily requires that you as the platform developer and owner have access to all that material on your platform - in the clear. Or - and this is the novelty of Apple's approach - you can do pre-upload comparison. Much in the way that services do pre-upload comparison for malware, for hacked data, right? Data that has been released, that has been made publicly available in a data breach, and that others try to upload and distribute on cloud platforms. There are pre-upload checks for that. Some do pre-upload checks for copyrighted material. Some do pre-upload checks for material that is not, technically speaking, malware, but can

be problematic, such as bitcoin mining software or crypto mining software that can take up a lot of compute cycles and a lot of energy and is against terms of service, right? So this kind of pre-upload matching that Apple has developed poses different questions for the privacy community than on-platform scanning does. And in order to tee up what I think is the really interesting discussion here - of how does this compare to other approaches, or to doing nothing - I think it's important to appreciate that the critiques from the cyber security community and the privacy community have in my view largely fallen into two buckets. One bucket is: is Apple's approach to pre-upload matching of CSAM vulnerable to abuse by governments or other bad actors who would like to turn this tool into something for which it was never intended, and instead use it to try to identify other sorts of images, non-CSAM images? That's one bucket of concerns: is the actual Apple approach and technical implementation vulnerable to attack, if you're a cyber security professional? The second bucket of critiques has been more of a legal and policy and global political critique. And that has been

– this is essentially the first instance of pre-upload matching on Apple devices for photo storage, and will this encourage governments around the world, law enforcement agencies, corporations, individuals, whomever, to press for increased surveillance and pre-upload scanning and matching regarding other material? Not a technical attack on the Apple implementation, but a legal requirement, request, or incentive to expand this beyond CSAM. Those are - in my view - the two buckets of critiques. And those are the concerns that were expressed by cyber security professionals and by privacy professionals. And I will say that the manner in which Apple rolled out this feature - regardless of the substance of the feature, right - perhaps could have benefited from pre-rollout vetting by independent security professionals. There was - I want to be clear - there was some pre-rollout vetting by select experts, but I think a broader vetting might have made a difference in the reception - or perhaps not. You bring up some great, great questions. One of which actually was one that I had

myself for all of you to answer - and you know, I'm happy to hear also what our other panelists have to say to John's questions. Specifically, I had the question of, you know, what the valid risk is of abuse by other countries, where personal freedoms and civil liberties are really at high risk of persecution by the government. And so I open this up, you know, to Laura and Hany to provide some insight as well. Sure, so I would say, as to John's two questions - the first question being: is Apple's system, as set up, subject to abuse by other governments - and the second question: does Apple's system open the door to broader abuse, mandating other systems to have that kind of on-device scanning. So,

with respect to the first one, I actually think Apple has been quite clever and deliberate in how they've structured this. They require that the scanning gets matched against databases of these CSAM fingerprints, so to speak, of these hash values, and in order to be a positive match, it has to hit against databases owned and maintained by two sovereign jurisdictions. That is to say that the United States could not make the decision to say that some terrorist video should be identified and rooted out, and so we hash it, put it in our database, and then put it in another one of the US's databases, and then we get two matches and it gets uploaded. It's two sovereign jurisdictions. So that itself is actually quite protective, I think, in a lot of ways - in terms of the ways that governments could or could not manipulate the existing Apple system. Whether or not it opens the door, as to the second question, is a little bit of a stickier issue, I think, in some ways, but on the first one, like I said, I think Apple has actually been quite deliberate in ensuring that the way they structured it would be pretty protective in that sense and pretty resistant to abuse.
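The two-jurisdiction rule described here can be illustrated as a simple intersection check. This is a hypothetical sketch of the policy logic, not Apple's actual implementation; the function name, database names, and hash values below are made up for illustration.

```python
# Hypothetical sketch of the two-jurisdiction matching rule described above.
# A hash is only eligible to trigger a match if it appears in hash databases
# maintained by at least two distinct sovereign jurisdictions, so no single
# government can unilaterally inject a target image into the system.

def flaggable_hashes(databases, min_jurisdictions=2):
    """databases maps jurisdiction name -> set of CSAM hash values.
    Return only the hashes vouched for by at least min_jurisdictions."""
    counts = {}
    for jurisdiction, hashes in databases.items():
        for h in set(hashes):
            counts[h] = counts.get(h, 0) + 1
    return {h for h, n in counts.items() if n >= min_jurisdictions}

# Illustrative data: 0xAAA appears in only one database, so a single
# jurisdiction cannot cause a match on its own; 0xBBB and 0xCCC can.
databases = {
    "NCMEC (US)": {0xAAA, 0xBBB, 0xCCC},
    "C3P (Canada)": {0xBBB, 0xCCC, 0xDDD},
}
```

The design choice this captures is that an image hash inserted by one government alone never becomes matchable, which is the resistance-to-abuse property being discussed.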

I'll say a few things. I think John did a great job of laying out the landscape of the concerns that people have, and I think John is absolutely right that there are legitimate concerns here - and we should talk about them. So is it vulnerable to the so-called slippery slope? Well, if you do this, what are you going to do next? And you know, that argument I always find a little intellectually lazy, because you could say that about every technology. So for example, there is a microphone on my phone, there's GPS tracking - could that be abused? Of course. Do

I know what's inside of iOS, every line of code, how that information is being used? No. Could malware detection, spam detection, virus detection - any number of cyber security technologies - be abused by companies, by hackers, or by repressive regimes? Of course. So if we're going to use the argument "could this technology be abused" - we all need to throw away our phones and turn off our computers and unplug them, because all of these technologies can be abused. And so the question is not can it be abused - and Laura did a nice job, and did a good service for us, in explaining that Apple actually was very thoughtful. And I would say they actually went too far in some cases - I think they actually made it too hard - but I appreciate why they did it. It's extremely privacy preserving. They put the safeguards in place. Is it perfect? Is it open to

abuse? Of course, but then, as John was saying, we have to balance that. What are we talking about on the other side of that? And what we haven't mentioned, by the way - and I think it's worth mentioning without being too graphic - is that the average age of a child involved in CSAM is eight - down to a few months old. We are actually not talking about 15, 16, 17 year olds who are playing with their sexuality. I'm not excusing that, don't get me wrong - we are talking about pre-verbal infants who are being sexually abused and whose content is being distributed online, and I say that because you have to put the risks and the concerns on balance with those issues of protecting the most vulnerable among us, and I would say, as I said earlier, this is the bare minimum technology. And I want to also emphasize what I found really interesting about the pushback: Apple proposed to do the scanning on the device for images that were about to be uploaded to the cloud - and that's what made people crazy. If

they had said "hey, we are going to scan images as they enter the cloud," people would say, "well, that sounds about right." Everything gets scanned when it enters into the cloud for all forms of abuse, as John enumerated. But doing that means you can't store things in an encrypted fashion, so to preserve user privacy, we are going to incorporate this technology on your device - which is the right thing to do for a privacy-preserving technology, but it is exactly what made the privacy groups extremely unhappy. And I think primarily, as John enumerated, because - well, what's next? And the thing that I have not found compelling is - when you make a slippery slope argument you have to ask at least two questions. One - is it possible? And the answer

is yes. And the other question is: what's the incentive? Like, what's the incentive for Apple to actually do something that is nefarious - and that is missing here. So I don't find that those threats, on balance, outweigh the benefits of the technology to protect children online.

So can I add on to that very, very briefly? Definitely go ahead. So, I don't want to step on our kind moderator and friend, but - just because you referenced your development of PhotoDNA and you've been at this for a while - I've worked in this space for a while too, and one thing that I think is helpful is to have an appreciation for the history of how these technologies have been implemented over time. How NCMEC has operated over time as a trusted repository of these hashes, right? And it's not just the technical controls that Apple has put in place, it's not just the process and policy controls in terms of human review that Apple has put in place, but when one talks about the risk in my second bucket - which I call the political and policy risk… And to be clear, I am very, very strongly supportive of strong encryption and encryption technology. Right? Go to my Twitter, check out the resources the Future of Privacy Forum produces. I am probably further over

on being supportive of end-to-end encryption than a lot of people in the field, for a lot of different reasons, right? But one thing that I think is helpful here to understand and appreciate is that we have a decade-plus of experience of how NCMEC operates, how it partners with companies, how it interacts with law enforcement, right? How other similarly situated entities operate. And we have a decade-plus of experience in terms of at least having a track record for whether or not steps to identify CSAM and mitigate and eliminate CSAM lead to government overreach in other content areas. There are a lot of negative stories to tell about privacy out there in the world - in the commercial space, in the law enforcement space, in the government space, generally in the U.S. and around the world - but I don't consider overreach regarding CSAM mitigation to be one of those stories. We simply don't have a political track record, over a decade plus, of lawmakers and regulators and law enforcement agencies around the world successfully compelling global companies to expand their PhotoDNA efforts to include copyrighted material, politically objectionable material, other sorts of material that governments and law enforcement agencies and others around the world would love to curtail, right? We simply don't have a track record of successful attempts there. So when we talk about the slippery slope, I think it's important to ask, based on our knowledge about the world and the political process: how slippery is this particular slope? I am not here to tell you there is a 100% guarantee that implementation of this feature would not lead to renewed calls in legislatures and in regulatory bodies in the U.S. and around the world for expanded scope. I am here to say that we have a

track record on these issues, and it's not a track record of a year or two years - it's a track record of a significant period of time in which the, in my view, common-sense mitigation of CSAM, using PhotoDNA and other similar technologies, has not led to that sort of overreach. I don't know, Hany, if you have anything else to follow up with that. I really appreciate that, John. I think you're absolutely right, and I'll just say… So PhotoDNA was developed by myself and Microsoft back in 2008, so John is right, it's over a decade now. And I will tell you, when we started deploying it in 2009, I heard almost verbatim the exact same objections that I'm hearing today with Apple's technology - this is going to lead to xyz - and that, as John said, has simply not been true. And I think we have to acknowledge that John is absolutely right. Is it

100% guaranteed? No. But again, we have to balance: what are we trying to do on the other side of that? And I think there have been - we put reasonable safeguards in when we developed and deployed PhotoDNA. I think Apple has done the same thing. I don't think anybody wants to overreach here, but I think that, you know, we are in crisis mode about protecting children online, and I will say things have gotten much, much worse over the last two years through the pandemic. The amount of child sexual abuse imagery that is being reported every year keeps going up. We now have live streaming of child abuse on platforms like Zoom and other types of video conferencing, and the problem is getting worse, and we have to acknowledge that some interventions are necessary. And the thing

I'll say too - is I'm always frustrated that, you know, I've heard people say: well, it's fine to put technologies on my device to protect me, the user - malware, virus, spam, etc. - but don't you dare put technology on to protect other people. And I think that is awfully selfish, and we should acknowledge that yes, we are giving up something - I think it's very, very small, but it's for a very, very good reason that we're doing that. Laura, I don't know if you have anything else to add here. Obviously it's a very complicated debate. It is very complicated. Just to Hany's last point - I think, you know, the idea of these trade-offs - you know, don't infringe my privacy even for the protection of children. I think one of the things that gets a little

bit lost sometimes when we talk about privacy as this kind of monolithic term is that we're talking about the privacy of us - you know, frankly, us on this panel, right? The law-abiding citizen, right? But the privacy interest that often gets overlooked and even lost in these conversations is the privacy interest of the abuse victim, of the child who has been abused. They also have a privacy interest in not having the image of, as Hany said, the worst day of their life most likely, continue to be circulated and distributed and shared. And so I think that also just needs to be incorporated when we think about balancing these issues. No, it's a valid point, obviously. I think, as everyone knows, once you're on the internet, you're on the internet. There is no removing that content once it's out there in the world. And so, you know, we have been talking about this sort of photo detection technology, and I want to pivot a little bit and also talk about the Communication Safety feature that Apple has rolled out. And Laura, I know you do some research as well on end-to-end encryption and how that plays a role in this, and so I pose this question to you: you know, we have this constant debate about access to encrypted messages - is Apple's message check tech and strategy really a step in the right direction or the wrong direction? So, I think it's a step in the right direction, and I think that for a handful of reasons.

First of all, I think end-to-end encryption is here to stay. I think we can have as many debates as we want about it, and law enforcement can scream from every mountaintop and hill, and it just doesn't matter. End-to-end encryption is going to be a thing, and it's already here, and it's only going to become more prevalent, I think, across all of our platforms. So with that in mind we

have to come up with strategies and solutions that combat this issue in every way we can. We have to take every bite at the problem we can. As we talked about earlier, there's a whole range of harms that occur against children online in these contexts, and so we have to think about multiple strategies. There is no one fix to this problem. And so in thinking about this as well,

just to give some numbers - Hany pointed out the CyberTipline, which receives the reports made to NCMEC, the National Center for Missing and Exploited Children. In 2020 there were over 21 million reports made to NCMEC. Facebook made over 20 million reports. Apple made 265. Not thousand - 265. And so Apple needs to be taking - and is taking - this issue seriously. They are stepping up their work, and every bite that they can take at this apple is a step in the right direction.

When we talk specifically about the iMessage issue, again, as Hany pointed out, with respect to the tech: when we think about the algorithms that have to identify this, all they're really doing right now is identifying nudity. And that's for technical reasons, I suspect, as much as anything else: it's easy to identify nudity. It's much harder for a machine to identify CSAM, and so I want to be sure that those two things don't get conflated. But within a space like iMessage, where it's end-to-end encrypted, you've got this machine learning to identify sensitive or explicit content. One of the things that's quite useful is that it doesn't actually prevent the sending or receipt of the material; all it does is prompt the user: “are you sure you want to look at this?” “are you sure you want to send this?” “are you really sure that you want to send this?” And that kind of road bump, so to speak, is actually quite useful and provides the user with a great deal of agency about what they do and do not want to view. It is not going to stop the adversarial distribution of CSAM. It's not going to stop bad-guy-to-bad-guy distribution or sharing of content. But as we've talked about, this perceived self-generated material

is very much on the rise, and having that kind of prompt to remind users that this image is going to be shared with somebody can, I think, be a useful pause in the way young people and children think. The Internet Watch Foundation, which is a non-profit based in the UK, reported that, I believe in 2020 or 2021, 44% of the reported images they received involved self-generated content. So this is very much a big and growing problem, and the pandemic has of course made it worse in many respects. So it is something. And again, you're tackling a problem that really is at the core and is growing, and I think anything we can do to chip away at it and minimize the harms that arise from it is a good thing. I would also just say that I would want to

be sure that it is coupled with a couple of things. First of all, it has to be coupled with preventative education for children, so that they understand what it means to be sending this content, these images of themselves - what the risks are. Is this really a person you trust? When we think about sextortion: Thorn, which is a nonprofit based here in the United States focused on online child safety issues, reports that about 60% of sextortion victims knew their offender in person - in real life - and about 40% of them met and engaged only on the internet. And so there really has to be a combination to say: you might know this person in real life, but that doesn't mean that they're not at risk of doing something untoward with your image.

And so I also think it needs to be coupled with a safe and secure user reporting mechanism within the program, so that if somebody is receiving, or being prompted to send, images that make them uncomfortable, they feel safe and secure reporting that to the platform or law enforcement or whomever - and they understand that they have options.
Hany and John, do you have anything else to add? And I’ll also say, we'll be waiting at the end for questions, but someone did ask about the 265 reports that were made by Apple - was that a result of complaints made to Apple, or was it Apple doing anything proactive themselves?
So I would say - I’m not totally sure, but I’m going to bet that it's based on user reporting as much as anything else. The numbers are not broken down that way on the NCMEC site.
Oh, go ahead. Go ahead, John. No, please, John.
I’m not going to speak for Apple either, but I

would say, just looking at the number, my analysis is the same as Laura's - it appears to be a result of reporting and not a result of scanning. I will say, and I’m not going to attribute this to a particular person, but folks who are in a position to know in industry, or folks who previously were in industry in very senior roles and are now in other roles outside, have essentially characterized the differences in report volumes between platforms as largely a function of a platform's willingness to scan and the volume of its scanning, rather than a difference in the prevalence of the material on particular platforms. Which was surprising to me, frankly. With other material, there are certain platforms that are particularly popular for posting music, or particularly popular for posting movies, or particularly popular for posting documents, and they don't always translate across platforms, right? One does not go to Bandcamp to find a Word document, right? You go to find audio. But

what has been shared with me from people, again, who are in a position to know, is essentially that many, many platforms suffer from this problem and are actively combating it, and that when you see differences in reporting numbers it typically has to do with the scale at which they are scanning, rather than the actual prevalence on their platform. And that means that companies like Facebook, which scan a lot, end up in some circumstances getting criticized for reporting a lot, even though it's simply a result of them doing more scanning. Hany, I don't know if you have a view.
I think John, you're absolutely right. And we also have to remember Facebook is Instagram, Facebook is WhatsApp, and so that's a very, very big entity worldwide. The more you look, the more you find - that's what we've learned over the last decade. I’ll add one thing - I want to come back to what you said, just so that John and I don't agree on everything. I want to come back to the question about end-to-end.

You know, I’m not a firm believer in end-to-end, and here's why. First of all, in the offline world there is nothing that is immune to a lawful warrant, right? My physical mail, my communications, my phone calls, my physical body, my office, my car, my home - with a lawful warrant they can all be searched, and there's a reason for that, right? There's a reason why we do that, and so I’m not entirely clear why we should have bulletproof end-to-end communication online that is immune to any type of security or lawful-warrant access. I understand, don't get me wrong, that there are some bad people out there. There are bad governments. There's a

need for privacy and there's a need for secure communication. I absolutely understand that. But in the same breath we have to acknowledge there are really bad people out there doing bad things within these encrypted services - whether that is the sale of illegal drugs (in the United States, opioid deaths last year topped a hundred thousand, and the vast majority of the dealing of those drugs is happening online), child sexual abuse, terrorism and extremism, the sex trade. There are really bad things happening, and we create this bulletproof tunnel that allows all of that. I’m not entirely sure that the benefits outweigh the drawbacks. And again, I see the benefits, I really do, but we have to acknowledge - even Zuckerberg acknowledged - there is a threat if we deploy end-to-end encryption, and I’m not there yet. I’m not sure that I see the benefits outweighing the drawbacks, and I see what Apple did as a very thoughtful entryway into trying to respect and preserve the end-to-end encrypted messaging service while adding some protection. And maybe Laura is right,

that it is here to stay and we just have to figure out how to manage it as opposed to trying to eliminate it. But again, I just want to say - I think there's still a debate to be had as to whether the benefits are going to outweigh the drawbacks. I’m holding off.
John, do you care to respond?
Good. Now we disagree about that.
Now we're getting into it.
Now we're getting into it. So on end-to-end generally - I don't disagree with the global analysis that end-to-end has benefits and drawbacks. No question. What I do disagree with is whether or not this is a fait accompli and the game is already over. And I would say my view is that it is,

end-to-end encrypted messaging, storage, etc. is technically trivial to develop. Criminals of all stripes will have it whether governments around the world want them to or not. The only question is whether the rest of us, the non-criminals - and I’ll assume for the sake of argument today that the panel and our guests are non-criminals, at least in a meaningful way - whether we, and society overall, derive the benefits of end-to-end encryption, because criminals will, whether we want them to or not.
Okay, so let me pose this. Here in the U.S., we have, I think, the world's largest number of handgun

and gun deaths by a spectacular margin. And can I make the same argument here? Bad guys have guns, so let's give guns to everybody - what could possibly go wrong? So is that the same argument, John? By the way, I appreciate the argument - that's one of the most thoughtful arguments. But is that the same argument? Bad guys have guns, well then I want guns too?
If guns could be independently manufactured and developed by a smart undergraduate student in a weekend in their dorm room, maybe.
I see - you're saying - yeah. Although 3D printing of guns is sort of getting us there, but I don't think we're quite there yet.

But genuinely - I see what you're saying. I think this is a principled position that I believe in: the facts about the world are that the underlying math and the underlying software implementations are out there and exist. If you asked me the question “hey, we have a lot of bow-and-arrow deaths here in the United States” - and I’m not comparing bows and arrows to guns in a meaningful way, but a bow and arrow is very simple to create oneself, right? Would we regulate bows and arrows when someone can simply go into their backyard, take down a sapling, and create a bow and an arrow? Look - I’m happy to have the overall conversation about costs and benefits and everything else. My only question is, given the open source libraries that are out there, given the underlying math, given the other underlying technical implementations - my sense is, I know you said you're not there yet, but my sense is that I am there. And these are the same technologies, just to be clear, when you talk about end-to-end messaging, that the Department of Defense recommends U.S. soldiers abroad use to communicate with their families back at home.

Yeah, I get that. I get that. Let me add one more thing, and I don't want to hijack the conversation: you're absolutely right that you can go to GitHub and download the technology for creating end-to-end, but there are still gatekeepers here, right? Because if you want that app to be used, there is still the Apple App Store, there is still the Google Play store, where these apps have to be deployed, and those stores routinely ban apps for various types of reasons. So there's still a little bit of a gatekeeper there. It's not just “hey, I can deploy this and have messaging with

my friend” - there is still a barrier to entry here in terms of getting wide deployment.
Yeah. Yes, I agree there are gatekeepers, and we can go back and forth on this, but I do think we've identified the nub of the disagreement.
Okay, good.
I would also add to this conversation - you mentioned there are victims who are out there. And of course, from a parent's perspective, whether their child has been a victim or not, most would advocate: yes, please, anything that will protect my child, please put it out there - regardless. So I’m curious to know - do those voices then outweigh whatever other conversation is to be had on this subject, in saying that yes, we want more and more protection?
I don't know. I don't know how to balance these things. And I can tell you I’m not the right person to ask, because I have spent more than the last decade talking to young victims of these crimes, and it is gut-wrenching and it is heartbreaking and it happens on a daily basis. And it absolutely - I will admit - colors the way I think about these issues in a way that makes me, I think, not particularly objective.

And I challenge anybody to go talk to some of these young victims and then take a strong position to counter what they say. But I don't think that's actually the right way to argue. I think we have to think more on balance, because John is absolutely right that there are benefits to this technology, and I don't know how to weigh those against some of the horrific things that are happening to kids around the world. I genuinely don't know how to weigh them.
Laura, do you have any comments on what we're talking about?
Nothing additive. More than sufficiently covered by my fellow panelists.
Well then, I’ll get to kind of the final question for today. We've talked about Apple's proposed detection tech and all these different strategies, and we come to this overarching question: should we really be leaving it to these tech companies to determine how our personal information and sensitive photos and things like that get shared and scanned and detected and reported to law enforcement or the government in some form - or does it require external regulation by some sort of government entity to tell tech companies how they should be moving forward?
So I would say it should not be left strictly to tech companies. In an ideal world, Congress would pass some sort of law - not necessarily saying you have to use this technology or you have to scan in these ways, not that level of minutiae, but about the kind of personal data you should be collecting on people and should be able to provide to law enforcement upon legal request.
One part of my research over the past few months has been talking to a lot of law enforcement officials about how they investigate and pursue these types of offenses, and one of the primary complaints that I’ve heard from them is inconsistency in the way companies handle this. What they have basically said is that every company - every

tech company - has a different policy for receiving legal process - which is to say, subpoenas and search warrants. They maintain different types of information about their users for different lengths of time. And when they make their initial tips to NCMEC, they include different categories of information in those initial reports. Knowing what to ask for, how to ask for it, when to ask for it, and how to frame it to which company at which time creates an enormous administrative burden for law enforcement officials investigating these sorts of offenses. And so when we talk about the fact that there are

over 20 million reports to NCMEC in a given year, that is itself a huge ask of the law enforcement officials investigating these offenses. And yes, some of it is old, repeated material, where we know who the victims are and they're not being physically harmed anymore, but that's not always the case. So anything we can do to alleviate some of the administrative pressure on law enforcement is to our societal benefit, and I think that having some sort of industry standard would be useful for that type of thing. But like I said, I would not suggest that Congress get into the weeds about this technology in these ways and these circumstances. I think that's not going to be nimble enough, frankly,

or agile enough, and would probably not keep up with technology in any meaningful sense.
And I open this up also to John and Hany for your thoughts on it as well.
Yeah, I would simply say I appreciate the frustrations that the law enforcement officials Laura has been chatting with have. What's interesting, as we talk about iterating on the existing system and the existing framework, is that some of those frustrations about inconsistency are actually features, not bugs, in the original implementation of how folks got companies to sign on to do this reporting at all, right? By saying to folks: “it's not going to be a prescriptive top-down reporting system”, “yes, you can still have your own retention periods, you can still have your own stuff and still participate and still make reports”, “yes, maybe you collect different information than your peer company, but as long as you provide account information to NCMEC in the report, that's okay”. The question is, as this framework has evolved, is there an

opportunity for standardization and normalization across those services, now that we've onboarded folks over a decade-plus? I think that's a really interesting question. And I know you folks know the history of this: in terms of getting folks on board at all to do this reporting, there had to be flexibility, and maybe now comes the harmonization.
I’m not sure I like my choices - my choices between Facebook/Zuckerberg or members of Congress to make sure we are safe. This is not a great choice.

I think Laura got it right, though, and I think what we want is Congress to put up guardrails. I don't think we want a highly prescriptive you-must-do-a-b-c-d down to z, but I think some guardrails would be good. For the last 20 years there have been no guardrails, or very, very few, and I think we need to start putting in some guardrails around child safety, terrorism, disinformation, and all the various harms and privacy concerns that we have as well. The thing that I worry about now is that we already have virtual monopolies. We have a handful of companies out there that dominate - trillion-dollar companies that dominate - and one of the tricky things about regulating now is they are going to have a huge advantage over the new companies that are trying to come up under a regulatory landscape that the current giants of the tech industry didn't have to deal with. So we want to be very thoughtful that we don't start regulating in a way that stifles even more innovation than is already being stifled because of virtual monopolies. But I think if you look around the world - here on Capitol Hill, the UK, Brussels, Australia, Canada - everybody is trying to figure out how we rein in the technology sector to mitigate the harms that we've been talking about, while allowing for an open, free internet and for the things, frankly, that we were promised 20 years ago but that have not really been delivered. The internet today is

not the internet that I was promised 20 years ago. And despite my frustration with the technology sector, I still think technology is a force for good, but I think we have to be more proactive in making sure that the harms are not outweighing the good, which I think is what's happening online right now.
Wow, well, thank you all for a really, really interesting discussion. I can't believe it's already been an hour, and there's clearly so much more we could talk about, but I do want to get to some of the questions that the audience has submitted. We already have a good number, and so

one of our first questions is about the detection technology, which is called NeuralHash, and they mention that it triggers a report once it's found 30 files, in an attempt to reduce false positives - which is like saying you need to find 30 pieces of DNA at a real-world crime scene before an investigation starts. So could there be an argument that this is overly cautious, and NeuralHash should be more sensitive in detecting this type of content?
Yes. I was not a fan of that. I think Apple probably went too far. Here's my guess - I don't know why they did that, but here's my guess. If you assume that every image you scan is independent of the next image, then if the false alarm rate for a single image is, let's say, one in a million, the false alarm rate for two images is one in a million times one in a million, the false alarm rate for three images is - et cetera, et cetera - and so by the time you get to 30 it is an astronomically small number, and I think probably an absurdly small number. Just to give you some context: when we were developing PhotoDNA our false alarm rate was on the order of one in 50 billion - so that means you'd have to see 50 billion images before you had a false flag. And please understand, when there's a false flag people don't go to jail, right? A human moderator steps in, looks at the image, and makes a determination, so it's not a huge lift. I think that was probably overly cautious and I

would like to see that number come down to a much more modest number - like five, let's say. But the question is correct. You're basically giving people a bye for the first 30 pieces of CSAM, and that doesn't really sit very well with a lot of us, I think.
I’m guessing John and Laura, you agree, from the vigorous head shaking I saw?
Yeah. I don't know that there are statistics on this one way or the other, but I think a lot of law enforcement officers who work on these sorts of investigations would tell you that it was just one piece of known imagery that resulted in them identifying an offender who was actively abusing children. And so when you take that into account, and we're talking about live victims who are currently being harmed, I think setting that kind of threshold, as Hany said, is perhaps too much.
So I will jump in and agree to disagree on this one. I was nodding vigorously because I

agree with 99% of Hany’s framing and I agree with 99% of Laura's framing. Here's what I would flag: the idea that this gives a free pass to folks for their first 30 images - I get it. I completely understand the point and I take the point, right? Should the number be 30? Should it be 5? Should it be 8? Should it be 12? I think we should have a healthy conversation about that, and I think that conversation should be informed by data, right? The data from Laura's work, about law enforcement officers who use a single piece of known CSAM to identify an active abuser - that should be taken into consideration. We should also take into consideration, though,

what the spread of the data looks like when we have existing data on CSAM reports at companies who scan in the cloud today, right? What is the average number of offending CSAM photos or videos when a user is reported? Is it one? Is it 100? Is it 1,000? Based on some conversations I’ve had with folks, those numbers tend to be really high. Folks tend to hoard this material, right? Now, I don't have a clear view into this, so I take Laura's research and her work with officers very seriously, and Laura says, “well, one is often a key to cracking these cases” - fair enough, right? I would share one case out of Texas involving a large cloud company which scans its servers all the time using PhotoDNA to try to identify known CSAM. They are a household name for folks in the tech industry, they are a good actor in this world, and they make tens of thousands, hundreds of thousands of reports every year. They made a report to a police department in Texas regarding a single piece of CSAM that was detected on an upload to their servers. The defendant in that case was a young man who stated he was in a video game chat on a well-known video game platform, and someone pasted a URL into the chat and said: go here for more texture packs and plugins for this game. He clicked on it and got an error page.
Oh, interesting. Okay.
Turns out that that error page was masking a surreptitious

download of known CSAM, and that surreptitious download of known CSAM ended up in the downloads folder on his computer, and that downloads folder was set to automatically back up to the cloud. Everyone involved in this particular case agrees that it was a single piece of CSAM, and, as I said before, on a strict liability basis this individual was no doubt in possession of unlawful material. But I think what Apple is trying to do is turn the dial a little bit to avoid situations like that. Not to give this individual a pass, but to separate that situation from an individual who actively seeks out this material and downloads five or 15 or 50 or 500 pieces of it - so that's just to articulate a little bit of the balance they're trying to strike here. If we had a law about CSAM - and I’m not recommending this at all - that had an intent requirement, I think Apple would be far less worried about the scenario I described. But since it's rightfully a strict liability regime
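To put numbers on the threshold discussion above, the false-alarm arithmetic Hany describes can be written out as a quick sanity check. This sketch assumes, as he does for illustration, that each image's hash comparison is an independent event with a per-image false-positive rate of one in a million - an illustrative figure, not Apple's published one:

```python
# Illustrative only: assumes independent per-image events and a
# hypothetical per-image false-positive rate of 1 in 1,000,000.
per_image_rate = 1e-6

for threshold in (1, 2, 5, 30):
    # Under independence, the chance that `threshold` images ALL
    # falsely match is the per-image rate raised to that power.
    combined = per_image_rate ** threshold
    print(f"threshold={threshold:2d}  combined false-alarm rate ~ {combined:.0e}")
```

Under those assumptions, a 30-match threshold drives the combined rate to roughly 10^-180 - the "astronomically small number" in the discussion - while even a threshold of five already yields about 10^-30, far rarer than PhotoDNA's cited one in 50 billion.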
