Monday, March 29, 2010

Fully Homomorphic Encryption Scheme

The paper this week is about how to construct a "fully homomorphic" encryption scheme.

Homomorphic encryption is a form of encryption in which a specific algebraic operation performed on the ciphertexts has a corresponding algebraic effect on the plaintexts. Let us take a look at the RSA system. Its public key consists of a pair of integers $(n,e)$, and a message $m$ is encrypted by computing $m^e \pmod{n}$. Suppose that messages $m_1$ and $m_2$ have encryptions $c_1, c_2$. We know that $(m_1 m_2)^e = m_1^e \cdot m_2^e = c_1 c_2 \pmod{n}$. Therefore, the product of two ciphertexts corresponds to an encryption of the product of the plaintexts, and RSA is said to be homomorphic with respect to multiplication. However, the homomorphism is not preserved for addition: $c_1 + c_2$ is not an encryption of $m_1 + m_2$ (or of any other simple algebraic combination of $m_1$ and $m_2$).
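Here is a small Python sketch of this multiplicative homomorphism. It is my own toy example; the primes and messages are textbook-sized and completely insecure, chosen only to make the arithmetic visible.

```python
# Toy demonstration of RSA's multiplicative homomorphism.
# The primes below are tiny illustrative values, far too small for real security.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent, coprime to (p-1)*(q-1)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (modular inverse, Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 42, 55
c1, c2 = enc(m1), enc(m2)

print(dec(c1 * c2 % n) == (m1 * m2) % n)    # True: multiplying ciphertexts multiplies plaintexts
print(dec((c1 + c2) % n) == (m1 + m2) % n)  # False for these values: addition is not preserved
```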

Since essentially every computation over the integers can be carried out by a circuit with addition and multiplication gates, people would like to have an encryption scheme that is homomorphic with respect to both addition and multiplication (on the plaintexts). This would allow us to compute an encryption of an arbitrary function of the plaintexts by looking only at the ciphertexts. An encryption scheme that is homomorphic with respect to both addition and multiplication is called "fully homomorphic". Prior to this work, it was only known how to construct encryption schemes that are homomorphic for a single operation (either addition or multiplication), or for "simple" circuits mixing addition and multiplication gates. One famous earlier work, due to Boneh, Goh, and Nissim, supports an unlimited number of additions but only a single multiplication.
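As a small illustration (my own, not from the paper) of why these two gates suffice, at least for bits: working mod 2, AND is multiplication, XOR is addition, and NOT $x$ is $1+x$, so any boolean circuit can be rewritten using only addition and multiplication gates.

```python
# Over GF(2) (bits with arithmetic mod 2):
#   AND(a, b) = a * b,  XOR(a, b) = a + b,  NOT(a) = 1 + a,
# so any boolean circuit can be expressed with addition and multiplication gates.
AND = lambda a, b: (a * b) % 2
XOR = lambda a, b: (a + b) % 2
NOT = lambda a: (1 + a) % 2

def majority3(a, b, c):
    """Majority of three bits, written using only + and * mod 2: ab XOR bc XOR ca."""
    return XOR(XOR(AND(a, b), AND(b, c)), AND(c, a))

for bits in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]:
    assert majority3(*bits) == (sum(bits) >= 2)
```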

In his breakthrough paper in STOC 2009, Gentry constructed an encryption scheme that is fully homomorphic. Its security is based on the worst-case hardness of certain lattice problems. There are two key ideas behind the construction. The first is that if the scheme is homomorphic with respect to (slightly more than) its own decryption circuit, then it can be bootstrapped into a fully homomorphic scheme. Given this, it remains to construct an encryption scheme that is bootstrappable, which requires the decryption function to be simple enough; Gentry's second idea is a clever way of "squashing" the circuit complexity of the decryption function.

As the encryption scheme of the original paper is based on ideal lattices, which are rather complex algebraic objects, we will instead present a follow-up work (by Gentry together with Marten van Dijk, Shai Halevi, and Vinod Vaikuntanathan) which simplifies the first paper. The new paper is available at
http://eprint.iacr.org/2009/616.
In this new paper, encryption and decryption are implemented entirely with addition and multiplication over the integers (instead of using ideals of a lattice). The two key ideas (bootstrapping and squashing) are still used in the new paper.
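To give a feel for the integer scheme, here is a toy sketch (my own, with artificially small and insecure parameters) of the symmetric-key "somewhat homomorphic" scheme at the paper's core: the secret key is a large odd integer $p$, a bit $m$ is encrypted as $c = m + 2r + pq$ for small random noise $r$ and a random $q$, and decryption computes $(c \bmod p) \bmod 2$. Sums and products of ciphertexts decrypt to XORs and ANDs of the underlying bits as long as the accumulated noise stays well below $p$; the bootstrapping and squashing machinery that turns this into a fully homomorphic (and public-key) scheme is not shown here.

```python
import random

# Toy version of the somewhat homomorphic symmetric scheme over the integers.
# Parameters are tiny and insecure; they only illustrate the mechanics.
p = 1000003          # secret key: a large odd integer (huge in the real scheme)

def enc(m):
    """Encrypt a bit m as c = m + 2r + p*q with small noise r and random q."""
    assert m in (0, 1)
    r = random.randint(0, 10)        # small noise
    q = random.randint(1, 10**6)     # random multiple of the secret key
    return m + 2 * r + p * q

def dec(c):
    """Decrypt via (c mod p) mod 2; correct while the noise stays well below p."""
    return (c % p) % 2

m1, m2 = 1, 0
c1, c2 = enc(m1), enc(m2)

assert dec(c1 + c2) == (m1 + m2) % 2   # ciphertext addition = XOR of the plaintext bits
assert dec(c1 * c2) == (m1 * m2) % 2   # ciphertext multiplication = AND of the plaintext bits
```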

There are many applications of homomorphic encryption schemes. For example, we can use such a scheme for voting without compromising the privacy of each voter's preference: every voter encrypts their preference, and the voting system evaluates the voting rule (e.g., the majority function) only on the ciphertexts. Homomorphic encryption is also believed to have wide applications in the context of cloud computing.

However, the current scheme is still quite impractical. To encrypt one bit, it requires roughly $\eta^8$ bits (I think), where $\eta$ is the security parameter. If we take $\eta$ to be 1000, this means that to encrypt one bit we need $10^{24}$ bits. There is also a huge loss of efficiency in converting a general algorithm into a circuit of multiplication and addition gates.

As for the class, I suggest you have a look at the paper at:
http://eprint.iacr.org/2009/616
No other crypto knowledge is needed for this talk.

Monday, March 22, 2010

Intrinsic Robustness of the Price of Anarchy

Everybody loves cookies. Even we, the students in the 15-589U class, love cookies. Thus, for Thursday's class, Ryan, who wishes us all to attend the talk, will bring cookies. Alas, there are 12 students in the class, and Ryan only has 11 cookies. So Ryan is willing to give us the cookies if we all reach a consensus on who gets a cookie. Splitting a cookie is unheard of, and so is sharing one. (Yes, we all love cookies that much!)

So essentially, each of us has two options: "ask for a cookie" or "don't ask for a cookie". If everybody asks for a cookie, no one gets one, as there are only 11 cookies and 12 students. If 11 or fewer people ask for a cookie, each of them gets 1 cookie. If you don't ask for a cookie, you never get one. So, how will we reach a consensus?

Well, that depends on how collegial we are. If we care about getting as many cookies as possible as a class, someone steps down, and the rest get 11 cookies. But what if each of us cares only about how many cookies (s)he gets? It is clear that no matter what other students do, each of us is always better off by asking for a cookie! Therefore, we all ask for cookies, and no one gets any! So, even though as a collective, we could gain 11 cookies, we end up getting 0, and everybody's passion for cookies is left unsatisfied.
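As a quick sanity check of this reasoning (a small brute-force script of my own, nothing from the paper), one can enumerate all strategy profiles of the 12-student game, confirm that asking is a (weakly) dominant strategy, and compare the welfare of the all-ask equilibrium with the best achievable welfare:

```python
from itertools import product

N, COOKIES = 12, 11

def payoffs(profile):
    """profile[i] == 1 means student i asks for a cookie."""
    asking = sum(profile)
    # If between 1 and 11 students ask, each asker gets a cookie; otherwise nobody does.
    gets = 1 if 0 < asking <= COOKIES else 0
    return [gets if a else 0 for a in profile]

# Asking is a (weakly) dominant strategy: switching to "ask" never lowers your payoff.
for profile in product((0, 1), repeat=N):
    for i in range(N):
        ask, stay_out = list(profile), list(profile)
        ask[i], stay_out[i] = 1, 0
        assert payoffs(ask)[i] >= payoffs(stay_out)[i]

all_ask = (1,) * N
print(sum(payoffs(all_ask)))                                    # equilibrium welfare: 0
print(max(sum(payoffs(p)) for p in product((0, 1), repeat=N)))  # optimal welfare: 11
```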

Let's contemplate this example for a bit. Here is a game where acting selfishly is globally worse than playing for the overall good. In fact, we get a ratio of 11/0, which is infinitely worse. What about other games? Can we measure how much worse we will do by acting selfishly in a given game G? In a set G of games?

This is precisely the question that Koutsoupias and Papadimitriou raised in STOC'99. Suppose we model the game by costs rather than payoffs, so that everybody wishes to minimize their cost. How much more do we (altogether) pay for playing selfishly, compared to playing for the overall good? In the words of Koutsoupias and Papadimitriou, who considered routing games: how much faster would the internet be if it had a manager? This is the Price of Anarchy (POA), the ratio between the worst cost of a Nash equilibrium in the game and the cost of the social optimum.

Ever since Koutsoupias and Papadimitriou raised the question, there have been numerous attempts at bounding the POA of various classes of games. The question has even been extended to other models of players (such as regret-minimizing players, myopic best-response players, etc.). In STOC 2009, Tim Roughgarden introduced the notion of smoothness in games (the definition is recalled after the list below) and showed the following:

(i) Smoothness gives an immediate (and very elegant) upper bound on the POA of a game.
(ii) The same bound applies to other models of players.
(iii) This bound gives the exact POA for the class of congestion games.
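The post does not spell out the smoothness condition itself; for reference, Roughgarden's definition for cost-minimization games with total cost $C(s) = \sum_i C_i(s)$ is roughly this: a game is $(\lambda,\mu)$-smooth if for every pair of outcomes $s, s^*$ we have $\sum_i C_i(s^*_i, s_{-i}) \le \lambda\, C(s^*) + \mu\, C(s)$, and the bound in (i) is then $\mathrm{POA} \le \lambda/(1-\mu)$ (for $\mu < 1$). The derivation is a one-liner: at a Nash equilibrium $s$, no player gains by deviating to $s^*_i$, so $C(s) = \sum_i C_i(s) \le \sum_i C_i(s^*_i, s_{-i}) \le \lambda\, C(s^*) + \mu\, C(s)$, and rearranging gives the bound.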

In Thursday's talk, we will survey the definition of the POA and the various models of players considered in the past. We will then go on to state precisely and prove all three of the above results.

As a side note: the first two results are pretty straightforward, so even people with no background in game theory are welcome to attend the talk.

And just to clarify: Ryan will NOT actually bring cookies to Thursday's class.

Monday, March 15, 2010

This week I will present a paper from FOCS'09 by Dinur and Harsha on Probabilistically Checkable Proofs (PCPs), "Composition of low-error 2-query PCPs using decodable PCPs" (link). The PCP theorem [Arora-Safra'98] shows that any mathematical proof can be checked probabilistically by a verifier using only a small number of queries. If the statement being proved is true, then there exists a proof which the verifier always accepts (completeness). If the statement is false, the verifier accepts any proof with at most some bounded probability (soundness).

The groundbreaking result of [Arora-Safra'98, Arora-Lund-Motwani+'98] states that any language in NP has a PCP with a constant number of queries whose soundness is bounded by a constant. To appreciate this result, notice that the proof length is essentially unbounded, yet only a constant number of its symbols are ever read. In this regard, the proof can be seen as a code with very large distance -- one on which even a very small number of changes can be detected. A very good survey of the history of the PCP theorem, together with Dinur's combinatorial proof of the original result, can be found in [link].

Formally, a PCP verifier $V$ for a language $L$ is a polynomial-time algorithm that, given an input string $x \in \{0,1\}^n$ and oracle access to a proof $\pi \in \Sigma^{m(n)}$, reads $x$ and, using $r$ random bits, computes $q$ indices to query (a local window) $I_r = \{i_1, \ldots, i_q\}$ and a local predicate $f_r : \Sigma^q \to \{0,1\}$. It then accepts iff $f_r(\pi|_{I_r}) = 1$. (A schematic code rendering of this definition follows the two conditions below.)

  • (Completeness) If $x \in L$, then there exists a proof $\pi$ which is accepted with probability 1.
  • (Soundness) If $x \notin L$, then for every proof $\pi$, $\Pr_r[f_r(\pi|_{I_r}) = 1] \le \varepsilon$.
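To make the shape of this definition concrete, here is a schematic Python rendering (my own illustration; the names and the toy test are made up, and this is not an actual PCP construction): a verifier is determined by the maps $r \mapsto I_r$ and $r \mapsto f_r$, and its acceptance probability is an average over the random strings.

```python
import itertools

def acceptance_probability(proof, queries, predicate, num_random_bits):
    """Pr over the random string r that the verifier accepts the given proof.

    queries(r) returns the q positions I_r to read, and predicate(r) returns the
    local test f_r applied to the queried symbols. This merely operationalizes
    the definition; it is not a PCP for any interesting language.
    """
    accepted = 0
    for r in itertools.product((0, 1), repeat=num_random_bits):
        window = tuple(proof[i] for i in queries(r))
        accepted += predicate(r)(window)
    return accepted / 2 ** num_random_bits

# Toy usage: a "verifier" that reads 2 random positions of an 8-symbol binary proof
# and accepts iff they are equal (so it accepts with probability 1 iff the proof is constant).
proof = [1, 1, 1, 0, 1, 1, 1, 1]
queries = lambda r: (int(''.join(map(str, r[:3])), 2), int(''.join(map(str, r[3:])), 2))
predicate = lambda r: (lambda window: window[0] == window[1])
print(acceptance_probability(proof, queries, predicate, num_random_bits=6))
```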

We denote the class of languages accepted by such PCP verifiers by $\mathrm{PCP}_{1,\varepsilon}[r,q]_\Sigma$. Notice that (trivially) $\mathrm{PCP}_{1,\varepsilon}[O(\log n), q]_\Sigma \subseteq \mathrm{NP}$. However, the existence of proof systems in the other direction, $\mathrm{NP} \subseteq \mathrm{PCP}$, made it possible to obtain hardness-of-approximation results, starting with the inapproximability of clique/independent set. Even problems with unusual approximation ratios (such as the Asymmetric k-Center problem with its $O(\log^* n)$-approximation) were shown to be optimal; in other words, any slight improvement in the approximation ratio would imply $\mathrm{NP} = \mathrm{P}$.

A central problem in this area is the construction of PCPs with smaller parameters, most notably sub-constant soundness with 2 queries and a small alphabet. Besides being interesting in its own right, the existence of such PCPs provides even stronger inapproximability results. For example, Håstad'01 showed that Max-3SAT is NP-hard to approximate beyond $7/8 + \varepsilon$ for any constant $\varepsilon > 0$; however, this result does not rule out, say, an approximation ratio of $7/8 + 1/\log\log n$.

It has been known for quite some time how to get sub-constant soundness with exactly 2 queries [Raz-Safra'97, Arora-Sudan'03, Goldreich-Sudan'06]. However, these constructions tended to yield proofs whose alphabets are exponentially large in terms of the soundness. Although this may not sound too bad (after all, the alphabet sizes are still at most polynomial in $n$), the hardness reductions take time exponential (or doubly exponential) in the alphabet size. Thus these PCPs make it impossible to get hardness results based on $\mathrm{P} \neq \mathrm{NP}$ (at best one gets results based on $\mathrm{DTIME}(n^{\log n}) \neq \mathrm{NP}$).

A recent paper by Moshkovitz and Raz in FOCS'08 demonstrated that it is indeed possible to get sub-constant soundness with a small alphabet (and a short proof). A major open question in their paper was:

  • Does there exist a shorter/easier proof for this result? (As the paper happened to be 100+ pages.)

The paper we will discuss answers this question affirmatively. To give an overview: the technique reduces the alphabet size considerably through proof composition using list-decodable proofs [alphabet reductions with unique decoding are not possible in this low-error regime]. The decoding is performed by another PCP called [no surprise there] a decodable PCP, D. The most important contribution of the paper is a way to compose V and D without increasing the number of queries. After the composition step, the resulting proof is made nicer by applying local transformations, such as degree reduction using expanders, on the corresponding Label-Cover problem, which is defined as follows:

An instance of the Label-Cover problem is specified by: a bipartite graph $G = (L \cup R, E)$; alphabets $\Sigma_L, \Sigma_R$; and a projection $f_{uv} : \Sigma_L \to \Sigma_R$ for each edge $(u,v) \in E$ with $u \in L$, $v \in R$. A labelling of the nodes, $A : L \cup R \to \Sigma_L \cup \Sigma_R$, is said to satisfy an edge $(u,v) \in E$ if $f_{uv}(A(u)) = A(v)$.
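As a concrete (made-up) illustration of this definition, the following short script checks which edges of a tiny Label-Cover instance are satisfied by a given labelling:

```python
# A tiny made-up Label-Cover instance: L = {u1, u2}, R = {v1},
# Sigma_L = {0,1,2,3}, Sigma_R = {0,1}, with one projection per edge.
edges = {
    ('u1', 'v1'): lambda a: a % 2,               # f_{u1,v1}: Sigma_L -> Sigma_R
    ('u2', 'v1'): lambda a: 1 if a >= 2 else 0,  # f_{u2,v1}
}
labelling = {'u1': 3, 'u2': 1, 'v1': 1}

def satisfied_edges(edges, labelling):
    """Return the edges (u, v) with f_{uv}(A(u)) == A(v)."""
    return [(u, v) for (u, v), f in edges.items()
            if f(labelling[u]) == labelling[v]]

print(satisfied_edges(edges, labelling))   # [('u1', 'v1')]: u2's label projects to 0, not 1
```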

----

Although none of the proofs I will present are complicated or long, the presentation will be a little dense, as we will constantly switch back and forth between different definitions. Hence, I strongly suggest looking at the following (if you are not already familiar with them):

  • Expanders (definition and expander mixing lemma)
  • Basic definitions from coding theory (relative distance, rate)
  • Label Cover, PCPs (basically becoming familiar with the definitions)

Thursday, March 4, 2010

No class today

Just a reminder: no class today, as I am out of town for a workshop.