Standard Technical Measures - Current Technologies and their STM Potential (September 23, 2020)


>> Hello, everyone. My name is Steve Andreadis, and I'm a Public Affairs Specialist with the US Copyright Office. Welcome to today's session, the second in our Standard Technical Measures discussion: Current Technologies and Their STM Potential.

I would like to go over a few housekeeping items before we start. First, as an attendee, you are automatically muted. However, you can submit questions to the group through the Q&A button at the bottom of your screen. We have set aside time after the discussion to answer your questions, but we can't guarantee that we will get to every question that is submitted.

If you are having technical issues, please send me a question directly or a chat message, and I will try to assist you. And finally, this session is being recorded.

We are planning to post it to our website and to our YouTube channel once the recording is ready. So with that, I'd like to pass it off to today's moderator, Aurelia Schultz, Counsel with the Office of Policy and International Affairs. Aurelia? >> Thank you, Steve. As Steve said, welcome, everyone, to the US Copyright Office's discussion on Standard Technical Measures, or STMs.

As Steve said, my name is Aurelia J. Schultz. I'm Counsel in the Office of Policy and International Affairs, and I'll be moderating today's session. As Emily mentioned yesterday, the Copyright Office believes that identifying STMs is one way creators, service providers, and users can collaborate to improve the efficiency and effectiveness of Section 512 for all stakeholders. So we're building on the Office's report on Section 512 of Title 17 by hosting these discussion sessions, which will lay the groundwork for sustained engagement on STMs. Yesterday, the first session had a very lively discussion on the legal foundations of STMs, and we'll probably be touching back on some of its themes today as we discuss existing technical measures and their potential as STMs.

The conversation will also continue next Tuesday, as we take a look forward. As Steve said, please put your questions in the Q&A; we definitely welcome them. If several questions are very similar, I will probably aggregate them and ask our participants together. We will also have the opportunity to call on you and unmute you so you can ask your questions yourself. I would like to introduce our distinguished participants for today.

Their longer bios are available on the website, and of course you can look them up online as well. We have Rasty Turek, Founder and CEO of PEX; Kit Walsh, Senior Staff Attorney and Assistant Director at the Electronic Frontier Foundation; Noah Becker, President and CEO of AdRev; and Jeff Sedlik, President of the Plus Coalition. So thank you all for being here. We're glad to have you, and I know we're going to have a great discussion that will probably rival yesterday's.

And as we jump in, I wanted to give us all a common starting point by reviewing the definition of standard technical measures from Section 512(i)(2). "Standard technical measures" means technical measures that are used by copyright owners to identify or protect copyrighted works; have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process; are available to any person on reasonable and nondiscriminatory terms; and do not impose substantial costs on service providers or substantial burdens on their systems or networks. So that's our starting point here.

And an initial question to jump into the conversation, for anyone who would like to answer: how does the law impact the measures? We're looking at measures that currently exist, both in terms of their development and their use, and as you think about that, please feel free to include Congress's policy intent in the concept of law. So how do the statutory law and Congress's policy affect the development and use of measures? >> I can take the first shot. In enacting all of those requirements that you listed off, Congress was sensitive to the balance that's struck in copyright law, and wrote in procedural protections to make sure that any technical measure that rises to this level isn't going to be something that undermines or alters the traditional contours of copyright law by de facto replacing the legal process. So the statute mentions the broad consensus of copyright owners, and there are very different kinds of interests among copyright owners.

There are authors' alliances and political speakers, whose interests are in getting their message out to as many people as possible. Uploaders, who are making either non-infringing uses or not using others' copyrighted works at all, have a different view about what level of automatic content moderation is going to help them express themselves and make a living as creators.

The consensus requirement is very important, because that's part of making sure, again, that it's going to be fair. There are also privacy-protective rights holders: people who are uploading sensitive videos of their activism, or people who are vulnerable if the metadata associated with their photos is used to track or locate them. And then there are people who use open licenses because they have an interest in others being able to use their materials unhindered. All of those copyright owners are part of the hypothetical consensus.

And then service providers are similarly varied. There are big companies, little companies, and nonprofit service providers; there are service providers whose mission is explicitly about promoting fair use; there are creator-owned video networks like Nebula that are leaving YouTube because Content ID is too restrictive and keeps them from talking about the subjects they want to talk about. And there are platforms that have privacy as a selling point, and so wouldn't necessarily be on board with a lot of intrusive scanning of what's going on, like ISPs, or social media platforms that are adopting a more privacy-protective model to try to compete with Facebook. I think the congressional requirement that the process be open is really critical, so that any proposed measure can be deeply and well understood by all the people who are going to be subject to and impacted by it, and particularly so that we don't recreate old mechanisms where revenue sharing is done in an obscure, unfair way. The fair and voluntary requirements are also part of the procedural protections that Congress wrote in to make sure that this process isn't a backdoor to undermining the law.

Similarly, the part about substantial costs and burdens is going to make sure that this isn't something that winds up locking in the very large players, where the large copyright holders and the large platforms come around to a deal that maximizes their profits at the expense of competition, at the expense of user speech, and at the expense of copyright law's interest in promoting a diverse and participatory culture and the opportunity to create and make a living off of your creations. >> I would just agree with probably the early part of that monologue. As soon as you get into user behavior and user understanding, you're sort of losing the point of the existence of copyright law in the first place.

User groups don't understand copyright law. One of the fundamental issues behind their innate frustration with copyright identification systems is that, de facto, they don't understand what music licensing, music use, or the use of third-party IP involves from an actual, real-life legal perspective. Most of their understanding is derived from OSP platform policy, which is derived from a safe harbor statute that is skewed heavily in favor of OSPs and user groups in the first place, while copyright owners of all shapes and sizes are left in the wind, fighting a completely impossible uphill battle without the existence of STMs. So, with all due respect to the user groups, I can give you a few examples of general user group understanding that we see every day at AdRev. At AdRev, we administer about 15 million copyrights inside of the Content ID system. We also provide proprietary technology and scanning on other platforms.

Inside of YouTube Content ID, administrators such as AdRev, or other companies like Sony Music Entertainment, whose Jeff Walker you heard from yesterday, have access to what's called a claim dispute queue. This is where users are able to dispute a copyright identification in one of their videos, whether that's a use of third-party visual media, publishing (i.e., a cover song), or a master recording use.

I think it's important for everybody listening to understand that if you want to use a third party's music inside your video, and it does not constitute a clear fair use, you need a synchronization license to use that music in a video. However, no OSPs are doing any work to educate their user groups on these sorts of laws in the first place.

And as such, when you contend that a user group is being negatively impacted, or could potentially be negatively impacted, by the implementation of STMs, the impact they're perceiving is based on a false interpretation in the first place. So again, I have a few examples of the types of disputes that we see from the quote/unquote user groups that would supposedly be negatively impacted by STMs. These are literally taken from our account. One user claimed fair use for their video: "My girlfriend and I kiss at the beginning of the video and say how much we love the song. This is fair use."

After their kiss and the statement that they liked the song, the song plays in its entirety over a background image of the cover art for the song. Another example: "My friend downloaded this song from Amazon Music and sent me this track and said I could use it in my video." And another: "This is my own video, [expletive] you and your copyright fraud company for claiming my own property."

That video was a mash-up of other people's visual works and contained a popular recording from a major label that was not properly licensed. So with all due respect to the user groups, I think they need to fundamentally understand what copyright law is in the first place to have a seat at this table. >> We've heard from a lot of YouTube creators who are very versed in the law of fair use, obviously, as are we.

And what they say is that fair use doesn't enter into it, because the technological restrictions are so much more narrow than what the law would allow them to do. And that makes sense, because it's very difficult, maybe impossible, for an automated system to identify a fair use, which is very fact-dependent, if it's based on matching things that appear again and again. Ten hours of static got five copyright takedowns because the bots are so sensitive, and it makes sense that that would be the case, because there's a financial interest in the rights holders being able to monetize anything that makes any use of their works, even if it's fair. The PEX website talks about finding matches that are less than a second long; the COO has talked about matching fan videos, many of which are going to be fair uses, as well as uses that are under a second, and there isn't a countervailing economic interest to keep that from happening. Similarly, with AdRev, there are examples of people's own recordings of music compositions that are in the public domain getting taken down, and of CC-licensed content being taken down through AdRev.

And again, there's a financial interest for that to happen: someone participates in AdRev, makes a claim, and shakes down someone who is making a fair use, and either that person doesn't bother to contest it, or they eventually do, but only after a period of time that has imposed costs on them. This is why it's so important to make sure that the STM process doesn't become a backdoor for changing the contours of copyright law. There are examples of people who infringe copyright on platforms. There are also lots of examples of people who are trying to engage in speech, especially media criticism, or remix music, or their own compositions, who wind up swept in, in part because of the economics of the situation, without a countervailing protection in law.

It's not clear how, absent these procedural protections that are going to make sure this process actually is fair and voluntary, fair use and merger and [inaudible] will be preserved in a world where automatic identification of a content match is enough to impose significant costs on users, or even to frighten them with de-platforming, which is the case with Content ID. If you get enough claims, you're at risk of losing your platform, and because of the concentration of that market, being booted off is a very significant economic risk to a creator. And so even bad-faith claims are able to extract some money from content creators.

>> Speaking for photographers, illustrators, and other visual artists, what we want and what we need is to have our rights information, the information associated with our work, be available to the public, so that anybody who comes into contact with our work can make an informed decision about using it. That includes people and machines making informed decisions, and machines cannot make fair use decisions. You know, the most common defense to infringing use of a visual work, and probably other works, is, "Well, I think this use is fair."

Well, there are four factors to fair use. There are scholars who disagree as to their application, and it's different in different circuits. Part of the challenge is that an individual, especially a layperson, but also attorneys, are often ill-equipped to make that determination. At least as a baseline, we can make sure that the information visual artists put into their work is not removed by the OSPs and others, so that anybody who comes into contact with it can make an informed decision.

And that type of work has been going on for 50 years. Fifty years. It's not something that started after the DMCA. The IPTC was founded 55 years ago, and developed standards 30 years ago for embedding rights information into photographs. The Joint Photographic Experts Group, founded in 1986, developed a way to put information into photographs using a technical measure so that when they're distributed, people can get to that information. And then my organization, the Plus Coalition, developed standards for communicating rights information between people and machines; that was 13 years ago. And the Metadata Working Group, with Adobe, Apple, Canon, Microsoft, Nokia, and Sony, that was 12 years ago.

These STMs were developed through open, broad, and fair standards processes, and are broadly available. The major issue we face is that OSPs delete the information that photographers, illustrators, and other visual artists put into their work, and then nobody can access it and make decisions about using the work. This often happens on their content delivery networks: when users or creators upload images to these platforms, the metadata is not immediately stripped, but when the images are resized and distributed out to localized caching servers in the content delivery networks, the metadata embedded in the photograph is wiped, so nobody out there has access to it.

And nobody can make informed decisions about making use of the work. We feel that fixing that, as a first step, would be a fantastic development. Now, Google, to their credit, at the end of August launched a new tool called the licensable badge. It provides a way for creators to expose a link to their rights information just by embedding it in their photographs. So Google has adopted that STM to surface, and make publicly available, the rights information that's in the photographs.
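The stripping Sedlik describes happens at the file level: a JPEG carries EXIF and XMP metadata in APP1 marker segments and IPTC/Photoshop data in APP13, so any pipeline that rebuilds the file without copying those segments silently discards the rights information. A minimal sketch of that mechanism (the function name and the toy file below are illustrative, not any platform's actual code):

```python
def strip_metadata(jpeg: bytes) -> bytes:
    """Return the JPEG with APP1/APP13 (EXIF, XMP, IPTC) segments removed."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Not at a marker: copy the remaining bytes verbatim.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker in (0xD8, 0xD9):         # SOI / EOI carry no length field
            out += jpeg[i:i + 2]
            i += 2
            continue
        # All other segments: two marker bytes, then a big-endian length
        # that counts the length field itself plus the payload.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + length]
        if marker not in (0xE1, 0xED):     # drop APP1 (EXIF/XMP) and APP13 (IPTC)
            out += segment
        i += 2 + length
        if marker == 0xDA:                 # start-of-scan: rest is image data
            out += jpeg[i:]
            break
    return bytes(out)
```

A real resizing pipeline usually decodes and re-encodes the image entirely, which has the same effect unless the encoder is explicitly handed the original metadata segments to write back out.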

In the visual arts community, we do believe in fair use. But we also believe that our rights should be respected with respect to usages that don't fall within the boundaries of fair use. >> And that speaks to my point earlier. I'm sorry, Kit, but most of the people who claim fair use are claiming it on a basis that is completely absurd.

And given the volume at which we review these sorts of claim disputes, I can tell you that the vast majority, to the tune of 80 to 90 percent, of disputed claims are founded on a lack of knowledge of copyright law in the first place. So we can continue to flog fair use and talk about user groups being potentially negatively impacted by the application of STMs. But as Jeff was just intimating, it sounds like the only way forward to satisfy every intellectual property owner's needs would be to establish some common STMs, and probably to start stratifying them across different content rights and developing different STMs for different types of IP. I'm certainly sympathetic to the idea that Jeff and other photographers and visual artists want their works to be able to be seen online; I fundamentally understand that. I'm a musician and a producer and a songwriter myself. I also happen to be a technologist with direct experience with the user groups who are claiming fair use

at high volume, high velocity, and incredible scale, displaying a complete and utter lack of understanding of what copyright law is and what the intent of fair use was in the first place. So I know we're here to talk about STMs, and we can battle against STMs on some legal, foundational argument about the language of Section 512 or the definition in 512(i)(2), et cetera.

But it's all sort of moot, in my opinion, because the law is flawed and the user groups don't understand copyright. We need to educate the American population on copyright law in some way, shape, or form; whether that onus should fall on the OSPs or on the Copyright Office, I'm not here to dictate, and I'm not a policy maker. But it all starts, and the decimation of the value of copyright in general starts, at the fact that the user groups do not understand the value of copyright in the first place.

And I'd also like to note that, to me, this is about job creation and a thriving middle class and American economy. A little anecdote: 15 to 20 years ago, an independent film creator could raise a budget of maybe a couple hundred thousand dollars and put a few production staff members on staff for six to 12 months while they produced that movie. When the production wrapped, they could employ a few editors and colorists to do the post work. And 15 or 20 years ago, there might have been an actual sales marketplace for that movie somewhere online, whether on the producer's website or through a film distribution company they worked with, for them to extract the requisite income to perhaps go produce another movie, which might lead to the dawning of a production company that might create hundreds of sustainable jobs.

So we sit here and we talk about law, and we literally parse language inside a law that was written in 1998, when Internet connection speeds were less than 50 kilobits per second, only a year after the invention of the MP3 and a decade and a half before the explosion of social video. And there's this thing that I will continue to flog: the complete and total lack of general understanding among the public user group of what copyright law really is and means, and what you can and cannot do with third-party intellectual property, be it a fair use claim or a licensed use claim, et cetera. So it's just really, really difficult to logically argue against anybody who's taking a literal interpretation of Section 512 and the definition of what STMs are. For example, I think it's 512(i)(2)(C), the final one, that says there shall be no substantial cost burden on the OSPs from STMs.

I mean, if you look at Google, Facebook, and Amazon, you're talking about combined net income in 2020 projected to be $100 billion. That is $190,000 per minute of net earnings between the three biggest tech companies, as far as I'm aware, in America. Certainly, they can bear the burden of the cost to implement STMs. And certainly a bunch of intelligent people, like the ones from yesterday, today, and the ones we'll hear from tomorrow, can somehow figure out an elastic and dynamic construct for STMs that will encourage creativity, encourage the creation of copyrights, and uphold the value of copyright and creation for an American economy that desperately needs it. >> I must say I'm a little bit disappointed by everybody using YouTube as the standard of the market. Kit, you flagged that we can do [inaudible] identification. I will urge you to find the cases where we made a mistake.

Any cases, in fact. I'm not saying that the technology is perfect, but we operate at a way, way higher standard than Content ID, and at a much higher scale than Content ID. And one of the disappointing things, as I've testified in the past, is that everybody talks about innovation, about having all of these technologies that will be perfect, but you do it in isolation.

Essentially, there were no markets until recently, and the fact that the platforms are able to extract revenues from the market at scale, while nobody else can participate in it, will never force any kind of new technologies into this market. Our own existence here is almost by mistake; we never even intended to be in the copyright business.

It just happened to be the only business that we could make some money on as we started deploying the technology, and then we grew with it. But the reality is, for the last 15 years or so, there was no innovation when it comes to the technology itself. YouTube has been able to [inaudible] the same lengths of content for the last 15 years; they slightly improved the audio identification, and that's kind of it.

Fourteen percent of videos, 14% of the six billion videos on YouTube, are [inaudible] to date. What about the rest? Are you saying nobody else owns the rest? And then, you know, fair use. I'm all for balance, but the balance can be found through proper processes. You don't have to have technology making those decisions. In fact, civil society groups, out of the European community and elsewhere, have spent an incredible amount of resources to create new processes for alternative dispute resolution. And I think those kinds of processes work, at least on paper so far, and they can be deployed at scale and can actually bring balance to the market.

The problem is, the way the law is written, the burden is essentially on rights holders. That means only very few rights holders can afford to make those identifications. And then the platforms essentially do whatever they want with them.

So for instance, YouTube decided, because it's not able to identify matches below ten seconds, simply not to. To give you an understanding of what that means: the average video on TikTok is 14 seconds long. YouTube, by default, is not able to identify almost any of them. So if someone takes such a video and puts it on YouTube, it's essentially a wash for whoever created it and whoever owns the rights.

It's just gone. And because YouTube is not technically able to do it, they just said, "You know what, under ten seconds, identifications will no longer be applied," and that's it. So that's now the standard for one of the largest UGC markets. And that's just disappointing, because the technology is there; the technology can do these things. Not just because PEX can do it, but because it can be done in other sectors. The financial sector can do this at scale, at, you know, the speed of light, across billions and billions of transactions, with almost perfect identification, and it works. It works in, I would say, a much higher-stakes market.

So why would it not work in UGC? Yes, expression is important, but it's definitely not as high-stakes as when someone's transaction is being blocked while they are sending money to their family somewhere else. And so, you know, if there is a standard of some sort that brings together all of the parties, we can find the middle ground. But it will not be found by saying, "Let's keep it as it is." Because at this point, there are 20-plus billion videos and songs on the Internet, growing by roughly 50% every single year.

That is very, very costly to process for everybody from the outside, especially when the platforms are fighting you on it and doing everything in their power to stop you as a rights holder, or as anyone else. And then, when you finally make the identification, when you finally have a real case where you can show, here is a problem, here is someone abusing my rights, what they do is build tools that allow the creator, essentially with the click of a button, to wipe it off. So all of the cost falls on the rights holders, and nothing else. You may think of rights holders as these vampiric organizations that take money and do nothing, but don't forget, they represent the creators.

At the end of the day, if the creators are not going to be paid, they will not produce. I never understood how people don't see that every single person who creates a piece of content, on the Internet or anywhere else, is automatically a rights owner, but almost none of them have access. YouTube has 9,000 Content ID accounts: 9,000, out of 110 million creators. Every single creator is a rights holder, and 9,000 of them have access. Do you want to know why there is abuse? Because those 9,000, selected by YouTube somehow, have different access than everybody else.

But that's not a standard of any kind. That's not even a good market, and I will agree with that any day. That doesn't mean it has to be that way for everybody else. I do think that we can do significantly better, and we should, but not by saying, well, YouTube tried, YouTube failed, so let's essentially do nothing, because that's the standard. >> Rasty, you touched on something important that came up a lot in yesterday's discussion too, which was this debate over whether there are incentives for organizations to create standard technical measures. There was some argument that there are no incentives, and there were arguments that there are market incentives.

There were also arguments that there need to be legislative incentives that currently don't exist. Could you talk a bit about the incentives behind PEX in creating what you've created? And I think we'd also like to hear from Jeff about the incentives behind the Plus Coalition, and then anybody else who wants to chime in as well. >> Well, for us, originally, you know, we had a technology in search of a problem. And that's kind of how we ended up in the copyright space in the first place.

But once we found out that copyright is so strongly abused in the market, obviously the financial incentive was very well aligned. And I do see how this is perceived by civil society groups, because obviously you do not want more abuse of fair use. And that's where everything starts, right? New markets attract new behaviors, and the behavior has to settle into certain patterns. So I'm not going to claim that we, or anyone else really, were perfect or are perfect at any given point. But as time evolves, and we are almost seven years old, I do see that there is a potential for a really well-balanced market.

And I think one of the things that generally escapes everybody is that one side or the other always thinks that more deployment of what they call content filters, or less deployment, will somehow create more production of content. But I think that at the end of the day, it's very transparent and obvious rules that will create a proper market. Think of real estate: if you bought a house but didn't know whether you owned it or not, you would just not buy it. It's the same thing here: if you want people to actually produce and have an upside from it, you have to have clear rules, whatever they are. And I do think part of that is legislation. You cannot just stick to legislation that says "we will figure it out," because 22 years after the DMCA passed, we have not figured it out. YouTube is somehow considered a standard, even when YouTube abuses both sides, from creators to rights holders.

And nobody really addresses that. And no other platforms do it, because YouTube once said it cost them 100 million dollars to build, so supposedly nobody else can. I can promise you, it didn't cost 100 million dollars to build PEX, and we outperform YouTube by orders of magnitude.

So something must be off there. I do believe there are possibilities to build really well-structured rules for the market. And it's also what Noah touched on a lot, right? Education. I know YouTube tries its best.

But at the end of the day, it tries with auxiliary systems: if you are interested, you can learn more. Why not instead say, if you want to participate in the market, you need to understand the rules, and build around that? You don't see that on any of the platforms. Every platform rushes to get people to produce content as quickly as possible. I get it; that's their incentive. But then you cannot be surprised there is so much abuse, right? So I do think that balance lies in all three.

It should be market-driven, obviously; it should also be driven by civil society and by free speech. We should protect free speech; it's one of the greatest things about the US. You can tell I'm not American, and I love living in this country, and a huge part of that is free speech. I grew up in a communist country, and I do not wish to live in anything like it. At the same time, I do want property to be protected, especially as it's being given to more and more small creators, because at the end of the day, we can argue back and forth about all of this, but it's the small creators, the small rights owners, that are hurt the most.

The largest companies will always figure it out. The largest platforms and the largest rights [inaudible] will always have the financial capability to figure it out between each other. What about the rest? Take a look at Facebook, for instance.

Facebook licensed music from a few of the major rights holders, but what about the millions and millions of musicians that don't get anything? Why? Because there's no standard for anything: no technical standard, no business standard. They don't have to do anything. So it's not that the largest rights holders are hurting now, they got paid, but everybody else has kind of lost out, right? They've missed out on payments that they are, in my opinion, owed. And without a real legislative push to balance this, without skewing to one side or the other, this will never improve, no matter how many companies like PEX are out there. We cannot solve this, because the platforms, or the rights users, will never be interested if the balance is skewed too far to their side.

>> Thank you. Jeff? >> Well, if we look all the way back to the legislative history, we can see that the legislators understood, in a way, even way back then, the problems that were likely to arise. They didn't get it exactly right.

But they did a remarkably good job. If we look into the Senate report, it says, "The committee believes that technology is likely to be a solution for many of the issues facing copyright owners and service providers in the digital age." And it goes on to say, "The committee strongly urges all the affected parties expeditiously to commence voluntary inter-industry discussions to agree upon and implement the best technological solutions available to achieve these goals." So this was 1998, probably written in 1996. Did they get it all right? No. Are we having some issues with implementation?

Yes. But for example, four years after that happened, the Copyright Office approached the visual arts community and suggested that the visual arts community needed to get off its butt and develop some way to identify its copyrighted works. Music has identifiers, books have identifiers, every product in every store has an identifier, and visual works do not. And so, well, Mary Beth Peters approached me; I met with her and others at the Copyright Office, and they suggested forming a coalition. We formed the Plus Coalition. It took us a couple of years to get it together.

But we made it completely open. And we didn't let anybody buy their way in. We asked the museums to participate, and we gave the museums a seat on our board. The book publishers, the libraries, the creators each have one seat on our board.

Educational institutions, advertising agencies: it's a board of nonprofit organizations representing their sectors. And then we set about developing a way for people and machines to talk to each other about the rights associated with visual works. And everybody had to leave their baggage at the door. And I'm describing this because, you know, people say there hasn't been a successful STM. Well, our STM is in every Adobe product. It's been adopted by Google now; one of our fields is in their Licensable badge.

It's in every visual arts product: where you can look at the metadata for an image, we're in it. We're adopted by other standards bodies as well. And we did that by saying, "Let's not talk about access, limitation of access, the value of works, fair use. Let's just talk about how to communicate rights information so that everybody can understand what's being communicated." And so we had 1,500 volunteer participants from all the different sectors. We brought Creative Commons in; Yahoo participated, Google participated, Microsoft participated.

You can't say that the OSPs have not participated in such a process. You know, by the time we developed this way to communicate rights information, Yahoo, which owned Flickr at the time, adopted it and displayed all that information on their site within two years of when we had it done. And currently, what we're doing is addressing a situation in which we've got a way to communicate rights information. And that way has been around, as I mentioned earlier, for like 20 years. You know, it is a technical measure to communicate information embedded into visual works when they're distributed in digital form.
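The "information embedded into visual works" Jeff describes is rights metadata carried inside the image file itself, typically as an XMP packet. As a rough illustration only: the property names below, such as `xmpRights:WebStatement` and `plus:CopyrightStatus`, are real XMP/PLUS fields, but the string-scanning approach is a simplified sketch and not how Plus, Adobe, or any production tool actually parses metadata.

```python
import re

def extract_xmp_rights(image_bytes: bytes) -> dict:
    """Pull a few rights-related fields out of an embedded XMP packet, if present."""
    start = image_bytes.find(b"<x:xmpmeta")
    end = image_bytes.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return {}  # no XMP packet embedded in this file
    xmp = image_bytes[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")
    fields = {}
    # XMP properties may be serialized as child elements or as attributes; try both.
    for field in ("dc:rights", "xmpRights:WebStatement", "plus:CopyrightStatus"):
        m = re.search(rf"<{field}[^>]*>(.*?)</{field}>", xmp, re.DOTALL)
        if m is None:
            m = re.search(rf'{field}="([^"]*)"', xmp)
        if m:
            fields[field] = m.group(1).strip()
    return fields
```

A real reader would use a proper XMP parser; the sketch only shows the key point of the panelist's argument: the rights statement travels inside the file and survives ordinary copying, which is what makes it usable as a communication measure rather than an enforcement one.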

So the next piece is, rights holders are micro businesses, and they're being forced to spend a huge amount of their time enforcing their rights, sending out DMCA notices, dealing with people who are stripping their rights information. One of the big things we face right now is different companies building their own registries. Facebook, just within the last two days, announced they're adding a rights management tool within Facebook for people to register the rights associated with works that are uploaded to Facebook, and they're going to recognize those works using various recognition processes. Well, if I as a creator, a photographer, have to register with Facebook, and I have to register with dozens of other entities, perhaps hundreds or thousands of other entities with proprietary systems throughout the world, and none of those systems can talk to each other or share information, it's a real problem. It's a blessing to finally see OSPs adopting ways to identify works and to let creators get their rights information out there.

But now we're forced to start registering all over the place. Now it's a full-time job, and most visual artists are just a micro business with themselves as the only employee. So it's a blessing and a curse, right? So what's really needed, and what we're working on now, is a way to let all those systems talk to each other, and to operate that information hub in a way that is open and fair and controlled by the stakeholders: the users, the creators, the distributors of work. And just to connect all the silos.

We call that the Plus Registry. It's not really a registry. But it is a way to connect all the different systems so that they can communicate. Yesterday, on the panel someone mentioned, well, we could just use the Copyright Office's registration records.

Well, for decades, people have been registering their works. In particular, visual artists registered their works by uploading contact sheets: a scan of a contact sheet, which could be 36 images on a single print. Scan it, upload it. These works are not in any shape to be incorporated into any kind of recognition system, or into any means of looking up information about individual works. Until 2018, photographers and illustrators and other visual artists were registering up to 100,000 images on a single registration, with no real way to determine which work is associated with which registration except by looking at the deposits. So that is a pipe dream with respect to the majority of registrations made before the Copyright Office's electronic copyright registration system. Now it becomes possible, and yes, we should put a stake in the ground.

But it would be uninformed to suggest that the ability to search that system is going to find all these works that are, let's say, more than 10 or 15 years old. But right now, if you'd like information about what we're doing at Plus, we would welcome any groups, including user groups, to participate. You can email us at >> Yeah, I would like to wholeheartedly agree with this.

So we identified exactly this as one of the biggest pain points: people have to register in every system that is popping up somewhere. There is no interoperability between the systems, there are no standards of any kind, and so we introduced a service that is essentially trying to do exactly that.

The interesting thing about all of these discussions is the identification engines. But at the end of the day, they are the most understood part, I think, because they are now decades old, at least logically and philosophically. And so you can improve here and there how they identify a piece of content, but it's also what Kit touched on: computers cannot make decisions about fair use, because it's complicated even for humans to make. But that doesn't necessarily preclude them from feeding in information that can help either side make a better-informed decision. And this is a huge part, right? Having the information that this is actually a registered work by someone, you can be sure that this is a work that you want to utilize in your own work.

And these are the consequences of saying so. And so the registration, and the interoperability between the systems, is more important than the identification itself. And this is one of the other things. Most of the large platforms at this point are building everything in isolation. Not only do they have varying quality of services, but they also open them to just a certain number of folks. So for instance, there are Congressional hearings about why Content ID is not broadly available, which I think is a fantastic effort on its own. And a legislative change is required here way more than for the standard technical measures themselves: when it comes to identification, it's about how the systems have to cooperate, how they have to communicate, what they communicate, and who can join in, because it should be open to everybody.

It should allow everyone to participate in this system. And one of the other things is, you know, I love the [inaudible] of the Copyright Office, but it's still $40; it's quite expensive to register hundreds of works. Like, imagine you are a TikToker, and you wanted to register every video you produce that is ten seconds long, and you produce 30 a day. That would just not be scalable, and the technology is there. It's truly possible, and this is the area where we as a company are spending a lot of money, but we hope the legislators pay much more attention to this than to the identification system itself. So the ACR systems are not as important as what is in those databases, how they communicate with each other, and how they propagate to different platforms.

>> Now [inaudible], did either of you want to add anything? >> Oh, yeah, sure. Really well said, Rasty. I just want to jump back to something Rasty said earlier in regards to OSPs developing in isolation. And you know, I do differ from Rasty in the sense that my business, UpRamp, is predicated primarily on our ability to help rights owners monetize YouTube.

I will simultaneously just touch on Kit's earlier comment: we don't shake down anybody for anything. We are a company that is full of creators, rights holders, thespians, screenplay writers, musicians, producers, and songwriters. Any fair use claim or claim dispute is reviewed by a human with utter and absolute expertise in fair use and copyright law.

So I will contend that YouTube's done a pretty darn good job. But I will also say, to Rasty's point about Google and YouTube working in isolation, and my earlier points about education: you know, YouTube initially had a match segment threshold of 30 seconds of use to create an automated identification of a visual or audio work in their system. That's been systematically tuned down over time through the existence of companies like PEX, a sister company that AdRev owns and operates called Symbols, and other companies in the space like Audible Magic, DMAT, and Tune Set. So the technical measures are all here.

And again, it is a problem when they operate in isolation. For example, now the match segment threshold is ten seconds. Now all YouTubers think that if they use a music file for less than ten seconds, it's fair use. So these are the sort of transitive effects that OSP policies have on user group understanding. And wow, I'll just sit here all day and lambaste the concept: we really shouldn't be talking to fair use fanatics and user groups about these issues; we should be talking to rights holders and OSPs, who actually understand copyright law in the first place.
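The "match segment threshold" described here can be pictured as a rule applied to sequences of fingerprint frames: a claim is only raised when a contiguous run of matching frames is long enough. The sketch below is a simplified illustration under assumed parameters (0.1-second frames, exact hash equality, a 10-second threshold); real systems like Content ID use far more robust fingerprinting, and these numbers are illustrative, not YouTube's actual values.

```python
def longest_match_seconds(upload, reference, frame_seconds=0.1):
    """Duration of the longest contiguous run of identical fingerprint
    frames shared by an uploaded video and a reference track."""
    best = 0
    # prev[j] holds the length of the matching run ending at reference frame j-1
    prev = [0] * (len(reference) + 1)
    for u in upload:
        cur = [0] * (len(reference) + 1)
        for j, r in enumerate(reference, start=1):
            if u == r:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best * frame_seconds

def should_claim(upload, reference, threshold_seconds=10.0):
    # Only surface a claim when the contiguous overlap meets the threshold.
    return longest_match_seconds(upload, reference) >= threshold_seconds
```

Tuning `threshold_seconds` down from 30 to 10 is exactly the kind of change the panelists describe: more short uses get claimed, but nothing about the threshold tracks the legal question of fair use.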

And if there are user groups and fair use fanatics who do actually understand copyright law, and respect and value it, then absolutely, we want to hear from them. But for the most part, what we find in the claim dispute queues on YouTube is that the user group does not understand copyright law and has no respect for copyright law. >> I wonder how many people here watch YouTube a lot, or online videos a lot. Because I like to watch videos that are critiquing media. And people don't make content about music, because the technical measure is so sensitive; it's so much more restrictive than fair use would allow.

People who are talking about movies will say, here's a really emotionally poignant scene, and I want to talk about why it works in context. And they'll play the clip, but they won't play the audio, because Content ID is so sensitive. They'll actually, comedically (this is an example from this past week), have this really goofy recorder cover of the theme music, which is sort of a commentary on the automated system itself. >> We don't, we don't claim those uses. We release those claims when they're disputed.

That's my entire point. And the volume of those claim disputes that are valid is quite low in comparison to the volume of the claim disputes that are not valid in any way, shape, or form. Diligent providers like AdRev review claim disputes for fair use and actually have expertise about what fair use is and isn't.

>> Most people aren't experts. We're talking about media critics, we're talking about journalists, we're talking about people who engage with culture as their living. And if tools get deployed that wind up chasing them off of the platforms where they exist, or causing them to no longer cover topics, that's a huge net loss for the goals of copyright law: to encourage public discourse and to support small creators. >> I agree with you, but you're still using YouTube's self-made service, with their own rules and own product, as a standard for the market. That has nothing to do with what can be done, or what the technology is capable of.

Just because YouTube decided that they don't want to deal with anything, they drop all responsibilities on rights holders. I mean, if you look at the reality, YouTube always points to the other side as being at fault: it's the users' fault because they don't understand fair use; it's the rights holders' fault because they overclaim. But it's YouTube creating these tools, it's YouTube creating these products, it's YouTube creating these processes. [ Inaudible ] >> Being able to match something that's half a second, and for rights holders to monetize fan content: how do you make sure that doesn't sweep in lawful speech? >> That's a great question.

So first, we ingest all Creative Commons and public domain content ahead of the rights holders. That means rights holders are not able to claim any public domain content, which is very common on YouTube, especially when it comes to classical music and such. Second, we built an ADR service in three steps, where we have a mediator in between; we don't yet have that mediator decided, but there is a mediator in between.

And so that is a huge part of it: neither the rights holder, nor the creator, nor the platform is making the decision about whether this is fair use; it's a third party. It's a third party that is a nonprofit, with the participation of people, or WIPO if you want. And so, you know, technology will never be perfect, but we can do better.

And one of the big, big differences between us and YouTube is that all of our identifications are returned within five seconds. What this means is that when a user uploads content, by the time they're ready to publish it, they already know what is going to be claimed, and by whom. The technology will never be 100%.

But it gives context to the creator, saying, here's what I believe is owned by other parties, and this is what those parties want. If you disagree, you can immediately start, before you've even published, you can already start with the challenge. And that challenge doesn't go to the rights holder, nor does it go between you and the platform; it goes to a mediator that is completely independent, publishes this publicly, shares this with everybody, and then makes a decision.

And if you disagree, you can go to court, obviously; that's your right. But we want to balance the market, and we want to balance it in ways that the others don't. For instance, our systems do not allow manual identifications whatsoever.

On YouTube, a lot of abuse comes from manual identifications. As Noah says, companies like AdRev that actually do this professionally do an incredibly good job of considering these things, have educated staff, spend money. But you have 9,000 other accounts that are handed out to parties based on YouTube's whim. And they just randomly claim stuff, whatever they decide, because they decide. I remember having a conversation years ago with a rights holder that essentially said, I just randomly confirm everything in the queue, and when disputes come back, I consider them then. And it's the fault of the tool; it's not the fault of the people. You give them a way to abuse it, they will abuse it.

[ Inaudible ] >> Two very different things: technological protection measures and standard technical measures. These are two different things. A protection measure is an enforcement device. And a standard technical measure is a way to communicate information that can be used to facilitate determining the status of a work and acting upon it.

A standard technical measure can be used for things like communicating that a work is in the public domain, or communicating that the rights holder wants to share that work freely and welcomes anybody to make use of it. The museums, the research libraries, and others who want to push information and content out to the public have a problem in that the public actually perceives that there's liability associated with using some of these works. And so a standard technical measure, like what we're doing at Plus, can allow the public to understand: well, yes, I can make use of that video.

Yes, I can make use of that artwork, without their ability to make use of it being stifled by a perceived liability that isn't really there. And I just think it's really important that we take enforcement off the plate when we're talking about standard technical measures. >> But standard technical measures are to identify or protect copyrighted works, and the downstream effect of deploying automated tools has been that they sweep in a lot of First Amendment protected speech, and a lot of fair use: things that, under the traditional balance of copyright law, people were free to engage in without negotiating, or without having the revenue on a video they made critiquing the original work taken for someone else. And it really matters what happens after the identification, right? So are you then presented with a process where it says, you know, you could contest this, but it's really risky?

And if you get it wrong, you'll lose your whole platform, and all your past videos will come down. And if you want to appeal it, the first appeal is going to go to, you know, not judges, but people who are hired by this private arrangement. And then if you want to fight it, fight it, fight it, eventually, you know, there's no obligation to let it go to court. These private agreements can and do restrict lawful speech. So that's why it's so important. You know, to put this in the context of the discussion we're having: we're talking about the point at which something can become essentially government mandated, if you want to maintain your safe harbor and exist as a business that hosts user-generated content.

And rights holders already have an extraordinary tool in the 512 takedown, where they can get speech removed. >> I'm sorry, that's just [inaudible]. >> And now say that not only is there no judicial involvement at that step, but we're actually going to replace the traditional contours of copyright law, effectively, with a private ordering. We need to talk about all these things in the context of 512, and what the consequences are of designating something an STM.

[ Inaudible ] >> I vehemently disagree; we need to talk about all these things outside of the context of 512, and outside of the context of anybody trying to purport that the current notice and takedown system is working for anybody, except for creators and user groups who consistently infringe upon people's works. So the scale at which... Kit, you are making so many wonderful points, but everything really starts to break down when you talk about the scale of legitimate fair use claims versus wild, out there, completely off-base fair use claims. And I'm telling you, urging you, to just trust me on this.

And if you want to sign an NDA and come hang out with us sometime, I can show you. Ninety-plus percent of claim disputes are predicated on a lack of understanding of the fundamentals of copyright law in the first place. And an even higher percentage of fair use disputes are completely just absurd and out there.

I absolutely respect what you're saying about a journalist, or a film reviewer, or a music reviewer, anybody of that cut, being able to critique film or music on YouTube or another platform, and inherently having that right based on fair use policy. I agree with you 120%. What I'm trying to adamantly state here is that the user groups don't even understand what fair use is actually for, what its intent was. And they also don't understand music licensing. Again, I'm coming from a music rights owner perspective. I'm sorry, I'm just kind of rambling on and on here at this point.

But it's really yes to fair use, no to this absurd claim you're making: that the great volume of fair use claims on the Internet are actually fair use, and that somehow someone's attempt to enforce their own copyright ownership inhibits the free speech of the user group. It's a wild conflation that I just vehemently disagree with. >> There is a flip side to the YouTube situation. As a photographer, when my works get taken and put together in a YouTube video, with some music behind it that's probably unlicensed, and the video is not about parodying or commenting on or reviewing my work, but the person has made the video in order to collect advertising revenue on YouTube, and I send a takedown notice and get a counter notice back, I have less than 14 days to actually go file a lawsuit against that person.

Do I want to file a lawsuit against that person? No. You know, I really do not want to have to sue everybody, and I'm not a litigious person; I engage in conversation with people. But when these counter notices happen, if I don't take action, then these platforms will not accept any further notices from me with respect to that piece of work; many of the platforms have developed that policy. So I'm really stuck in situations where anybody looking at it, or most people looking at it, would say it's not fair use. So it's a real issue. I mean, I send out about 200 DMCA notices a week, somewhere in there, around 100 to 200. It takes a lot of time.

Yes, I use automated means to identify where my work is appearing across the web. And yes, I consider fair use, because I have to. If I send a takedown notice on a potential fair use and I move forward, I'm dead in the water; I'm going to be in a lot of trouble and have a lot of liability for damages. >> So this is my point about all this: essentially, platforms do everything the way that benefits their business, right? And their business is advertisement; it's not rights holders, it's not creators. Creators are feeding into that business in most cases.

At least when it comes to UGC, this is true for most of the platforms. So, for instance, if you want to issue a DMCA notice on YouTube, you have to go through a certain form that they have on the platform that has reCAPTCHA on it. So it's very hard to automate.

And they say, this came out in the Congressional hearing, that if you are a rights holder that has issued hundreds of notices in the past, they will consider you for Content ID. So that means you have to first somehow find all of the infringements, then act on them, and then they will give you access to a tool that is built for it. And so this is the reality. Kit, like you're saying, yes, you don't want people to lose revenue from their creations when they critique something.

I agree. But that's not a standard of any kind. That's YouTube's decision. You are literally critiquing YouTube's decisions about how to run their business. I agree that their product, not the technology, the product called Content ID, because they call both the technology and the product Content ID, is horrifyingly bad for everybody.

The fact that some creators or some rights holders have the ability to go and abuse others... you even saw cases where people started blackmailing others. But that, again, has nothing to do with the technology; it has everything to do with how the product is structured. The technology can be done well and can be done in balance for everybody. I love what Jeff is saying about how they run their organization: they have a board that consists of myriad different people. Why not have such a thing for everybody else? The challenge is when every platform builds their own tools however they see fit, and then creates their own rules, and those rules are then perceived as market standards, or even as law.

Noah touched on it. YouTube says no claims below ten seconds. You can ask almost every creator on YouTube, and I believe they will tell you that below ten seconds is fair use at this point. But that has nothing to do with the law; it never has and never will. But somehow that is now considered a standard.

And this is just a multibillion-dollar company making decisions on behalf of every single citizen of this country, without any government stepping in, or anyone else. And this is where we are: large platforms making decisions on behalf of everybody without us having a say. And I'm not saying we should give rights holders the keys to this car right now, because up until now, the platforms have been driving.

So then the rights holders would be driving. What I'm saying is, let's figure out how to do it together, but not in an extreme way where one or the other side is amplified and the other one is essentially pushed down.

And it's disappointing to see that YouTube is used as a standard for everything. I get it, they built it. Even PEX is very heavily inspired by what YouTube did in the first place. But you don't use AltaVista as a standard for search, so I don't see why you would use YouTube as a standard for anything else. >> It's important also to look at what the market incentives are here. So, you know, what is the market incentive to make a tool that doesn't sweep up a bunch of fair use? If you're able to make the claim, if people using the service are able to make some money, then there isn't a countervailing market force, unless there's either a legal remedy or, as we have in 512(i), structural procedural requirements before something winds up being part of, essentially, the mandate to comply with 512.

So maybe, if we were to take your word for it, Noah, that it's 90% infringing and 10% free speech that's getting suppressed: is that acceptable? What's the dollar amount that makes it acceptable to impinge on journalism, to make something a standard measure that all platforms have to comply with in order to get this safe harbor, when it's going to be a measure that excludes some forms of expression [inaudible]? >> Why do you believe that's the case? Why do you think it has to be just a financial incentive on the platform to do this? Why would that not be said explicitly: if something is journalism, it's considered to be in the fair use arena, and by default the tools cannot be used on anything that is called journalism. There is a myriad of ways to solve these things. The one way that is definitely not it is saying, because here is an exception, it applies to everything else. And what Noah is telling you, and I can vouch for it because we have access to the same things, is that of the myriads and myriads of counter notices that come from Content ID, very, very few are essentially correct. And this, I don't blame the users.

I don't think this is the users' fault. I do think that it's the platforms' fault. But I don't see why, because this happens from time to time... You're essentially saying that because, from time to time, there is an infringement of someone's right to free speech, there is a cost to be inflicted on everybody else.

But you are not looking at it the other way, right? The rights are abused constantly by everybody, essentially, at this point. There are literally whole platforms that live in the safe harbor world and are utterly uninterested in complying with anything else about it. They've made a business of that. And, you know, that is a cost to everybody too, right?

So, look, I agree with you: journalists, and critics, and comedy, and satire, and all of these things need to be protected. I wholeheartedly agree with that. But I don't think the solution is to say, because the tools cannot do a 100% good job there, let's not do it at all. I think the solution is: let's create processes that help with that, where they don't put the burden on the rights holder or the creator. Let's put it on the platforms. The platforms make billions of dollars from what? From distributing content to users.

So part of that is also: you will now be the arbiter, it's in your hands, and in return you get the safe harbor. This is essentially the same thing as Section 230, right? You can act on that type of content; you can act on and filter the type of content that is considered toxic on your platform, and you will still have protections. Why not do it a similar way? I know the DOJ and everybody is moving in a different direction from that. But that is an option.

I'm not saying it's the option. I'm just saying, let's not take the exceptions to the rule and then decide on those exceptions as a standard for everybody, because that really has major consequences. And if you want to understand how different this can be, take a look at China. China abused copyrights for a very long time. And this freedom gave birth to incredibly large and expressive platforms. What did the platforms do? Over the last couple of years, they went the opposite direction and started acquiring the rights, including ones like TikTok and others, because they realized that having rules in place makes the marketplace significantly better, and then allows them to grow to sizes that we have never seen before.

But it was the licensing that empowered all of this: the real, transparent rules that are in place, so that the creators know what to do. Not systems that are built like YouTube, where someone can abuse your rights with a single click of a button, regardless of whether you're a writer or creator, or a journalist, or anyone else. That's the problem. You should not advocate for systems that allow such a thing. But I
