Tools for Privacy-Enhancing Technologies

- [Host] Thank you for coming to RSA. Today's session is "Tools for Privacy-Enhancing Technologies". Our first speaker is Joakim Brorsson, and our second speaker is Ismail Afia.

- [Joakim] So, this presentation today is going to be about publicly auditable privacy revocation for anonymous credentials.

It's a paper written with my co-authors Bernardo David, Lorenzo Gentile, Elena Pagnin, and Paul Stankovski Wagner. My name is Joakim Brorsson, and I'm a PhD student at Lund University. If we zoom out a bit, the main thing we're going to talk about here today is a conflict of interest: privacy versus surveillance. And there are loud voices on both sides here.

But these things rarely go together; you can't have both at the same time. We can illustrate this with some examples.

This privacy versus accountability conflict often occurs in payment systems, for example online payment systems.

Most people want transactional privacy, but at the same time you have know-your-customer laws saying that banks need to know who is who, and anti-money laundering laws meaning that you need to be able to trace transactions to see who sent money where. The conflict also occurs in some high-profile legal cases. Here in the U.S., in 2016, there was the Apple versus FBI case, where the FBI wanted Apple to decrypt the phone of a suspected shooter.

Upon Apple's refusal to do so, the FBI somehow managed to do it anyway, without Apple's help. A very similar case is that of EncroChat in Europe, three years ago, I think. That was an end-to-end encrypted messaging platform used by a lot of criminals, and when the French police wanted access to those messages, they managed to infiltrate the organization and actually get access to the messages.

So the takeaway is that, well, there are some privacy protections, but when push comes to shove, a powerful entity can actually circumvent these protections. In light of this situation, how do people propose to build systems? A common solution is conditional privacy. What that means is that we build systems where we give privacy to users by default, but at the same time a party is introduced who can revoke such privacy upon user misbehavior. And then we trust this party to not misbehave in any way, for example by covertly revoking privacy without grounds, even though it has the power to revoke privacy. It's kind of hard to find statistics about the use of such powers, but from what I could find, while the EU Data Retention Directive was active, these powers were used quite heavily.

In France in 2008, almost half a million IPs were traced: who's behind this IP? It's this person. And the same happened a million times in Poland. And even if you would trust such an authority to not misbehave and overuse its powers, we all know that there are countless data leaks every year.

So even if you trust this authority, it might be subject to attack; it might leak its data, right? And that brings us to the menu of the day. We're going to zoom in even further and talk about anonymous credentials, and how we can have trustable conditional privacy for such a system. What does trustable conditional privacy mean? It means that we are going to try to build a publicly verifiable log of privacy revocations, so that everybody can see how an authority uses these powers. And we're going to do it without using any trusted parties, 'cause that would kind of ruin the purpose. But let's start with the basics. Most of you in this room probably know this, but we'll start here just so that we're sure. What is a credential system? A credential system allows Alice to get a credential from an issuer, and then she can show it to Bob saying, "Hey, I have a legitimate credential.

This is an authentication. Please accept my authentication". If it's an anonymous credential system, showing the credential does not reveal her identity; it only reveals that she has a valid credential. And if we then add conditional anonymity, we add a privacy revoker, which can do a kind of inspection and see who actually did this anonymous authentication. But then we need to think about what this means: how is such a system secure? Well, for anonymous credentials, you need unforgeability of credentials, and anonymity, of course.

So you shouldn't reveal who you are; that's the security requirement. When we add the conditional anonymity requirement, we add a security requirement for guaranteed identity tracing. That is basically that a malicious user cannot evade privacy revocation when the privacy revoker wants it. What we have done in this paper is to add an auditability security requirement, so that when we have conditional anonymity, any privacy revocation needs to be publicly announced.

There's a guarantee of that, meaning that a malicious issuer or revoker cannot trace identities without publicly announcing it first. And the question we've got to solve here is: who is this privacy revoker? Can we really trust it? Or is it actually a wolf in sheep's clothing? Of course, people have thought about this before. A very common solution is to just replace the central authority with a committee of authorities.

So we distribute the trust: Alice takes her identity, chops it up into secret shares, little pieces, and sends one piece out to each committee member. And now we instead rely upon honesty in the committee. And if we have some threshold system, we can allow for some dishonesty in the committee. Okay, we're good, right? Well, we have a new problem, and that problem is: how do we find these parties in the committee? How do we find which parties we can trust? Because if we look at people connected to privacy or surveillance, they're often quite divisive figures.

So if you look at these people here and think, how would you select a committee? Personally, I don't know, maybe I would pick Ursula von der Leyen in the top right, right? Some good European Union legislation. Moxie Marlinspike, from the end-to-end encrypted messaging platform Signal; they've got a pretty good track record. Maybe throw in Satoshi Nakamoto, some distributed trust from Bitcoin, right? Those are also the least controversial people there.

There's also Ross Ulbricht. Some people trusted him with their privacy when he ran the Silk Road store on the dark web. But okay, let's imagine you found a committee which you like, and everybody else likes it as well. We managed to get around this problem. We now have a new problem: since we agreed upon a committee, it is known. Everybody knows who it is.

Therefore, it is targetable by a powerful adversary. And as we saw in the beginning, a powerful entity has the power to circumvent protections. So that's the main problem we've got to solve.

In our paper, we propose to use hidden committees. What is a hidden committee? Well, instead of having this public selection, we assume a large set of candidates with an honest majority.

So most people aren't actually trying to attack the system. The candidates could be the users of the system. We can't use all of them, because if we have a large system, this doesn't scale, right? If we have many users, it's a very large committee. So instead, we select a committee at random, and we don't reveal it to anybody. Then we can safely store data with this committee.

Why can we do that? Well, first, we no longer have the problem of finding the committee members; it's just a random selection from an honest majority of users, or parties. And second, if we hide the committee, if we don't reveal it, it's not targetable by powerful entities anymore. And this forces a public announcement if a privacy revocation authority wants to access data, because the data is controlled by this committee: the authority needs interaction from the members, and it doesn't know who they are. It has to publicly announce a request for cooperation.

That's the core idea of how we are going to build this. Now, we are not the first to suggest hidden committees, right? It's a well-known technique.

It was first introduced by J.K. Rowling in Harry Potter and the Philosopher's Stone, and we have an illustration of the technique here, where one key is hidden among a large set of keys, and he needs to find the correct one. Unfortunately, this technique relies on magic, which can be considered impractical. But now for real: there are many constructions for hidden committees, and I've chosen to call them global hidden committees here. What I mean by that is that all parties agree on the same committee.

That is the case in some famous papers. We have it in the Algorand construction, we have it in the ECPSS construction, in the "Can a Public Blockchain Keep a Secret?" paper, and many others. And such a construction could be used to build our publicly auditable privacy revocation system. However, they don't have all the properties we want for a credential system.

A credential system is often composed with other protocols, so we want it to be secure under composition. We want universal composability, which is not provided by many known systems. We don't want to rely on proof of stake, which is also impractical. And most important for us, we don't want any interaction to set up the committee, since if we're going to use the system's users as candidates, we want them to take part as little as possible. To meet these goals, we instead provide a novel construction.

What we can do here is exploit the setting of two non-trusting parties, which we have in a credential system during credential issuance. We have a user and we have an issuer, and they don't trust each other, so we can exploit that. It allows us to build a simpler system, what I call today a local hidden committee system. What I mean by that is that each user learns its own committee: everybody gets their own committee, and they're allowed to learn it, but they cannot affect it. So it's still a random committee, and it's still hidden from everybody else except the user. Such a construction can be proven secure under static corruptions, which means that the adversary is only allowed to corrupt parties before the start of protocol execution.

But we also show in the paper how to do extensions for a mobile adversary. The main benefit here is that we can do non-interactive committee setup. The idea for how we're going to do this is to exploit the fact that the user now knows its committee.

So we can have the user do the hiding, and leave the selecting to the issuer. What we actually do is let the user commit to each public key of the committee candidates in the system, and then randomly shuffle them.

That means we have now hidden the public keys in the commitments, and we've shuffled their order. And we can prove correctness of this in zero-knowledge. We can then give this list, via a bulletin board or similar, to the issuer, and it can publicly select from the list. And since it doesn't know who each party on the list is, it has no advantage when selecting; it can't affect the committee to its benefit.
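To make this concrete, here is a minimal sketch of the commit-and-shuffle step in Python, assuming simple hash commitments; the candidate keys and sizes are placeholders of mine, and the zero-knowledge proof of shuffle correctness mentioned above is omitted.

```python
import hashlib
import secrets

rng = secrets.SystemRandom()

def commit(pk: bytes) -> tuple[bytes, bytes]:
    """Hash commitment to a public key, hidden behind fresh randomness."""
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + pk).digest(), r

# User side: commit to every candidate public key, then shuffle the list.
candidate_pks = [bytes([i]) * 32 for i in range(10)]    # placeholder keys
entries = [(*commit(pk), pk) for pk in candidate_pks]   # (commitment, r, pk)
rng.shuffle(entries)

# Published to the bulletin board: commitments only, in shuffled order.
board = [c for c, _, _ in entries]

# Issuer side: picks committee positions from the shuffled list. It cannot
# bias the choice toward specific parties, since the commitments hide them.
selected = rng.sample(range(len(board)), 3)

# User side: only she can open the selected positions, learning her committee.
committee = [entries[i][2] for i in selected]
```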

Then we can send this selection, an indication of who the committee is, back to the user.

Having such a committee, which the user knows, it is now quite simple to secret share to it securely. What Alice does is she produces secret shares of her identity, encrypts each share for the public key of a member of the committee, and then she can store the shares with the privacy revoker, because they're encrypted with keys that the privacy revoker doesn't have.

So we can securely store the shares with the privacy revoker. And note here that no committee member is involved; they're not even aware that any of this happens. It all happens in Alice's head, or in communication with the privacy revoker. We can also prove correctness of this in zero-knowledge, using the commitments from the shuffle earlier.
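As an illustration, here is a toy sketch of what happens on Alice's side, assuming Shamir secret sharing over a small prime field. The `encrypt_for` stub is mine and only marks where a real public-key encryption scheme would go; as written it is keyed by public data and provides no security.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; toy field for Shamir sharing

def share_secret(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def encrypt_for(pk: bytes, share: tuple[int, int]) -> bytes:
    """Placeholder for real public-key encryption to one committee member.
    NOT secure: the pad is derived from public data."""
    pad = hashlib.sha256(pk).digest()
    pt = share[0].to_bytes(16, "big") + share[1].to_bytes(16, "big")
    return bytes(a ^ b for a, b in zip(pt, pad))

# Alice shares her identity and encrypts one share per committee member.
identity = int.from_bytes(b"alice", "big")
committee_pks = [bytes([i]) * 32 for i in range(5)]  # placeholder keys
shares = share_secret(identity, n=5, t=3)
ciphertexts = [encrypt_for(pk, s) for pk, s in zip(committee_pks, shares)]
# The ciphertexts can now be stored with the privacy revoker,
# which holds none of the committee keys.
```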

Once we have this committee set up and the shares stored with the privacy revoker, a privacy revocation needs to involve the committee members, right? If the privacy revoker wants to revoke the privacy of a user, it needs to publicly announce the act of doing so, to ask for committee cooperation. All committee candidates are expected to watch some public place, a bulletin board.

Whenever the privacy revoker publishes a request with shares, anybody who can decrypt those is expected to do so and send the result to the privacy revoker. Once it obtains enough shares, it will have the identities; we have the guaranteed identity tracing. But what we also have is a public log of privacy revocations, because requests need to go on the public bulletin board to actually get this done. So that is our goal; we've achieved our goal.
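Continuing the toy sketch above: once enough committee members decrypt and return their shares, the revoker reconstructs the identity by Lagrange interpolation at zero.

```python
def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the field used for sharing."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Any t = 3 decrypted shares recover Alice's identity:
assert reconstruct(shares[:3]) == identity
```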

And that's the summary of our contribution. What we've done is define the security of a publicly auditable privacy revocation credential system, with guaranteed public announcement. We've done it using universal composability, and then we provided a construction using our local hidden committees.

So, that's it. The result is that Alice is now happy, since she has an anonymous credential, and she will know if her privacy is revoked.

She will also know the extent of it: how many other people have their privacy revoked as well. And authorities are also happy, since they can still trace the identities of criminals, while giving privacy to the users.

And that's it. Thanks. (audience clapping in the background) - [Host] Does anyone have any questions? - [Audience Member] Is this supposed to work like, you know, traditional banking, with the connections, right? - [Joakim] Yes. - [Audience Member] So the issuer is the controller, where they will be able to point out who is holding the anonymous credential.

Now, is that incorrect? Will issuers still be able to see who is holding the anonymous credentials with them? - [Joakim] Well, yes, the issuer can see who has credentials... - [Audience Member] Okay. - [Joakim] But it cannot see who has used them, or when they're being used. - [Audience Member] Okay. - [Joakim] Thank you. That's a good question. - [Audience Member] How would you respond to the people who are asking about the privacy being...

Not completely secure? Because there's a system administrator, someone who's running the information technology, who could potentially have access to all the same information that the committee works with. - [Joakim] Yes, that's also a very good question, and that's the problem we're trying to solve here. My answer is that there exists no such administrator.

No one actually has this data, because first, the data exists only with Alice, the one obtaining the credential, and then it's spread out among a set of other users, who don't even know that they hold it. The point is to not have this information anywhere in the system; it's zero trust, if you want to use that term. - [Audience Member] Hi, thank you very much for this nice presentation.

But I have some problems with understanding the details of your whole system. First of all, privacy may not require secrecy, and in your protocol, everything depends on the secrecy requirement for the privacy. Because if you look at the security of the credentials, one of the requirements that you have is anonymity. But for privacy, you don't have to be anonymous in some cases. So how can you adapt this protocol that you designed to privacy which does not require anonymity? This is the first question.

And the second question is, it seems that your hidden groups are dynamic, not static. That means that the group members are going to change, depending on whatever environment you are working in. If that is the case, how does Alice know how many secret pieces of the information she has to create? And the third one is, when you are working with the threshold cryptography to recover the whole message, how many pieces of the information do you have to have to actually recover the actual data? Thank you.

- [Joakim] Thanks. Those are three good questions. So, the first one was about privacy. I'm not sure I quite...

Are you referring to... differential privacy? - [Audience Member] Not really. What I'm saying is this. For example, I live in Mill Valley here, and everybody, if they know my name, they can find my address. Okay? But if they're sharing my address with someone else without my permission, they are violating my privacy, even though my address is not secret.

So that's privacy. And I need some kind of regulation or protocol to protect my privacy, based on those conditions. In short, privacy may not require secrecy all the time. - [Joakim] No.

So, no, this is... You're very correct. We only address identification privacy.

When I authenticate to you, you shouldn't know my identity, only that I am a legitimate user of the system. That's all we do. We do not provide any other form of privacy.

That's our scope; we don't try to do anything else. But it's a valid concern, right? Privacy has many aspects.

And now I got so much into the first question that I forgot what the second question was. - [Audience Member] How many pieces of the secret does Alice have to... - [Joakim] Right, right. - [Audience Member] Without knowing the size of the group? - [Joakim] Yes.

These are all balances, right? Everything kind of depends on: how many users do you have in the system? What percentage do you assume are honest? Once you make those assumptions, then you can say, "Okay, then we need a committee of this size, and then we can allow for this threshold here". Every one of these numbers has to go together. We do provide some examples in the paper of what reasonable numbers look like; I don't have them in my head, but if you go all the way up to 50% corruption, then you need almost half the set of users to actually be really secure. And then your third question was about the dynamics of the user shares, right? Once a committee is selected, it is fixed. It doesn't change.

Now, that has a problem with user churn, right? Some people might actually leave the system, so you have to account for that in your threshold parameter as well. All of these are parameters to adjust, and everything needs to go together. We do provide some examples, but you need to really be sure about your parameters when you build such a system.
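To illustrate how these parameters interact, here is a small toy calculation of my own (not numbers from the paper): the probability that a randomly drawn committee contains enough corrupt candidates to reach the reconstruction threshold.

```python
from math import comb

def breach_probability(n: int, corrupt: int, k: int, t: int) -> float:
    """P[at least t of the k committee members drawn at random from n
    candidates are corrupt], when `corrupt` of the candidates are bad."""
    return sum(comb(corrupt, i) * comb(n - corrupt, k - i)
               for i in range(t, k + 1)) / comb(n, k)

# Illustrative numbers only: 10,000 candidates, 20% corrupt,
# committee of 40, reconstruction threshold 21.
print(breach_probability(n=10_000, corrupt=2_000, k=40, t=21))
```

Churn pulls in the other direction: shares held by members who leave are lost, so the threshold must also stay low enough that the remaining honest members can still reconstruct.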

That's an excellent question. All right. Thank you very much. (audience clapping in the background) - [Ismail] Hello everyone.

My name is Ismail Afia. I'm here today to talk about my recent joint work with Dr. Riham AlTawy on unlinkable policy-based sanitizable signatures. For today's agenda, I will first answer the question: why are sanitizable signatures needed if we have digital signatures? Then I will go briefly over the idea of policy-based sanitizable signatures. And finally, I will present our work, unlinkable policy-based sanitizable signatures.

Why sanitizable signatures? Normally, in digital signature schemes, the signer uses their private key to sign a message, and then the verifier uses the signer's public key to verify the signature over the message. However, if a single bit of the message is altered, the signature becomes invalid. In real-world scenarios, some applications require modification of the signed data. For example, in medical applications, a medical report normally contains the patient's personal information, diagnoses, and treatment. This medical report could be used by the accounting department for billing purposes, where they need to know the personal information and the treatment received.

The same report could be used by the research department for research purposes, where they don't want to know anything about the personal information. And the same report could also be used by the administration department, where they need to know the personal information and the basic idea of the diagnoses and treatments. But how do we protect patient privacy and enforce a need-to-know principle without invalidating the medical report's signature? To fill this gap, sanitizable signature schemes were proposed.

In sanitizable signature schemes, the signer is allowed to determine who can modify the message data, which we call the sanitizer, and what can be modified in the message itself, which we call the admissible parts. The basic idea of sanitizable signature schemes: the signer designates a sanitizer by means of the sanitizer's public key.

The signer uses his private key to generate a signature over the message. Then the sanitizer is allowed to modify the admissible parts of the message. And finally, the verifier can verify the signature over the message using the signer's and the sanitizer's public keys. Sanitizable signature schemes define four main security properties, in addition to unforgeability, which is the standard notion of security for digital signature schemes. The first security property is immutability.

Immutability guarantees that the sanitizer cannot modify any inadmissible part of the message. The second property is privacy, where it is impossible to recover any information about the sanitized parts of the message.

The third property is accountability, where the signer and the sanitizer should be held accountable for the signatures they produce. And finally, transparency, where signed and sanitized signatures of the same message are indistinguishable. As with any proposed scheme, there are some drawbacks...

...to sanitizable signature schemes. For example, sanitizers must be exactly identified prior to signing, and schemes are typically constructed in a single-sanitizer setting.

Some schemes allow multiple sanitizers, but come at the cost of losing accountability, and require interaction with the signer after signature generation. To fill the gap in conventional sanitizable signature schemes, policy-based sanitizable signature schemes were introduced. P3S is the first proposed policy-based sanitizable signature scheme, proposed by Samelin and Slamanig at CT-RSA 2020. The sanitization rights are assigned to any sanitizers that fulfill a predefined policy. The policy is defined over the attribute sets held by the sanitizers.

Any sanitizer possessing an attribute set that satisfies the predefined policy can sanitize the message. Sanitizers are not required to be known to the signer before signature generation. To give an idea of policy-based sanitizable signatures: the signer first defines a policy. For example: the sanitizer should be in the finance or accounting department, and hold the manager attribute.

The signer uses his private key to generate the signature over the message. Under this policy, the finance manager and the accounting manager are able to sanitize the message, but the HR manager cannot. P3S's main building blocks are: a policy-based chameleon hash, where a collision can be found only if the policy is satisfied.

And P3S uses a dynamic group signature, together with a NIZK proof, for accountability purposes, where the signer or sanitizer has to prove that the signature was generated by the signer or by some sanitizer.
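For intuition, here is a toy, plain (not policy-based) chameleon hash in the Krawczyk-Rabin style, with deliberately tiny parameters of my own choosing; the policy-based variant used in P3S additionally gates the collision trapdoor behind an attribute policy.

```python
import secrets

# Toy group parameters (far too small for real use): p = 2q + 1.
p, q, g = 227, 113, 4          # g = 2^2 generates the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # trapdoor
h = pow(g, x, p)                   # public hash key

def ch(m: int, r: int) -> int:
    """Chameleon hash: CH(m, r) = g^m * h^r mod p."""
    return pow(g, m, p) * pow(h, r, p) % p

# Without the trapdoor, CH behaves like an ordinary hash.
m, r = 42, secrets.randbelow(q)
digest = ch(m, r)

# With the trapdoor x, the digest can be opened to any new message m':
# we need m + x*r = m' + x*r' (mod q), so r' = r + (m - m')/x.
m_new = 99
r_new = (r + (m - m_new) * pow(x, -1, q)) % q
assert ch(m_new, r_new) == digest  # a collision, found via the trapdoor
```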

As always, P3S also has some drawbacks. For example, a sanitized signature can be linked to the original message; that's because the message hash stays fixed across each sanitization. At least one future sanitizer must be identified prior to signature generation, because of the NIZK proof used.

It requires a group manager, since group signatures are involved, and each sanitizer must be granted sanitization rights in a specific group, again because group signatures are used.

And P3S faces efficiency and scalability challenges. To overcome the aforementioned drawbacks, we introduce unlinkable policy-based sanitizable signatures, which we call UP3S. To give an idea of the unlinkability security property and why it is vital in some applications: unlinkability ensures that associating different sanitized signatures with the same original message is not feasible. In the previous example of the medical report, say sanitizer 1 has modified the report by anonymizing the personal information of the patient, and sanitizer 2 has modified the same report by removing the diagnoses section. Combining, or linking, the two sanitized versions of the report can lead to the reconstruction of the original message.

In a nutshell, in UP3S, signers specify a monotone sanitization access policy, which we call the predicate y, at signature generation. Any sanitizer that holds an attribute set satisfying the specified predicate can sanitize the message. No group signature is required, and accountability is achieved by the traceability feature of the underlying traceable attribute-based signature scheme.

The generated signatures are unlinkable. UP3S provides a more practical and efficient construction, which enables scalability and improved performance, and it utilizes a rerandomizable digital signature (RDS) scheme and a traceable attribute-based signature (TABS) scheme.

To give a simplified idea of the UP3S construction: each message is divided into two parts, mfix, which contains the inadmissible blocks or parts of the message, and madm, which contains the admissible parts of the message. The signer first constructs the predicate y, which should be fulfilled by some of the signer's attributes and by the future sanitizers' attributes. Then the signer uses his RDS keys to generate a signature over the fixed part of the message plus the predicate itself, and uses his TABS secret keys to generate a signature over the whole message under the predicate y. Finally, the signature over the message consists of both the signature over the fixed part and the signature over the full message.

To sanitize the message, the sanitizer first modifies the admissible parts of the message to generate m'. Then he rerandomizes the signature over the fixed part of the message, and, using his own TABS secret keys, generates a signature over m' under the same predicate y. Finally, the sanitized signature consists of the rerandomized version of the signature over mfix, and the fresh TABS signature over m'.
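In symbols, the flow just described looks roughly like this (my own notation; the paper's actual algorithms carry additional inputs and proofs):

```latex
\begin{align*}
\textbf{Sign:}\quad
  & \sigma_{\mathrm{fix}} \leftarrow \mathsf{RDS.Sign}\bigl(sk^{\mathrm{RDS}}_{\mathrm{sig}},\; m_{\mathrm{fix}} \,\|\, y\bigr),\\
  & \sigma_{\mathrm{full}} \leftarrow \mathsf{TABS.Sign}\bigl(sk^{\mathrm{TABS}}_{\mathrm{sig}},\; m,\; y\bigr),\\
  & \sigma = (\sigma_{\mathrm{fix}},\, \sigma_{\mathrm{full}}).\\[4pt]
\textbf{Sanitize:}\quad
  & m' = m_{\mathrm{fix}} \,\|\, m'_{\mathrm{adm}},\\
  & \sigma'_{\mathrm{fix}} \leftarrow \mathsf{RDS.Rerand}(\sigma_{\mathrm{fix}}),\\
  & \sigma'_{\mathrm{full}} \leftarrow \mathsf{TABS.Sign}\bigl(sk^{\mathrm{TABS}}_{\mathrm{san}},\; m',\; y\bigr),\\
  & \sigma' = (\sigma'_{\mathrm{fix}},\, \sigma'_{\mathrm{full}}).
\end{align*}
```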

We instantiate UP3S using the Pointcheval-Sanders RDS scheme, due to its short signature size and low signing and verification costs, and the Ghadafi DTABS scheme, because it offers minimal trust in the attribute authorities and supports decentralization.
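To see why the fixed-part signature can be rerandomized, recall the shape of a Pointcheval-Sanders signature in its simplified single-message form, writing the secret key as (a, b) to avoid clashing with the predicate y: a signature on a message m is a pair of group elements, and anyone can raise both components to a random power.

```latex
\sigma = (\sigma_1, \sigma_2) = \bigl(h,\ h^{\,a + bm}\bigr)
\quad\longrightarrow\quad
\sigma' = \bigl(\sigma_1^{\,r},\ \sigma_2^{\,r}\bigr) = \bigl(h^{r},\ (h^{r})^{\,a + bm}\bigr).
```

The result is a fresh, valid signature on the same m, distributed independently of the original, which is what makes the rerandomized fixed-part signature unlinkable.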

We extended the definitions of the standard sanitizable signature security properties. In addition, we introduced unlinkability, which, again, ensures that associating different sanitized signatures with the same original message is not feasible. UP3S also uses a different approach towards accountability: it uses a separate tracing authority to trace a signature back to its actual signer, and it doesn't use the signer's keys in the tracing process. To wrap up today's session: sanitizable signature schemes allow a signer to designate a sanitizer to modify the signed data without invalidating the signature. Typically, these schemes are defined in a single-sanitizer setting, where the designated sanitizer must be known before the signature is generated. Policy-based sanitizable signatures allow a signer to grant sanitization rights over a message based on a predefined policy.

Existing schemes have multiple drawbacks and do not provide unlinkability, which is a vital security property in some applications. We presented a construction for an unlinkable policy-based sanitizable signature scheme that fills the gap in existing constructions. Thank you.

Any questions? (audience clapping in the background) - [Host] Anyone have any questions? - [Audience Member] Oh, thank you. I just want to ask: in your scheme, how do the verifiers know which public keys they have to use to verify the signatures? There can be multiple signatures. How is that supported in your scheme? - [Ismail] Sorry, I didn't get that. Are you talking about...

...the original sanitizable signature schemes, or our scheme? - [Audience Member] The UP3S. - [Ismail] UP3S, yeah. - [Audience Member] Yeah, because the verifiers would need to use multiple public keys to verify the integrity. - [Ismail] No. Actually, the way TABS, attribute-based signatures, work, the signature is not tied to public keys. It is tied to the attributes that the signer uses in signature generation.

So for the verifier, it only verifies that the signature was generated by someone who possesses attributes that fulfill the policy defined by the signer in the first place. That's it.

No individual public keys are used. The only public key used in TABS is the attribute authority's public key. So it's a single key.

Everyone signs messages under this public key. However, the signature attests only that the signer possesses certain attributes.

- [Audience Member] Okay, so it must be centralized? - [Ismail] Yeah, in some sense. However, we instantiate UP3S using Ghadafi DTABS, which is a decentralized version of TABS schemes.

However, there still needs to be at least one trusted authority in the scheme. - [Audience Member] Thank you. - [Ismail] Thank you. - [Host] Anyone else? - [Audience Member] How do you guarantee the unforgeability of those signatures? - [Ismail] The unforgeability? - [Audience Member] Yeah.

- [Ismail] Yeah, it's simple: the signature consists of two parts, the RDS signature and the TABS signature over the whole message. So unforgeability is guaranteed by the unforgeability of both the underlying RDS scheme and the TABS scheme used. - [Host] Does anyone else have any questions? No.

Okay. Thank you guys very much for attending. - [Ismail] Thank you. (audience clapping in the background)
