Monday, February 8, 2010

Twice Ramanujan Sparsifiers

Given a weighted graph G, a natural question to ask is whether there exists another graph H on the same vertex set such that every cut has similar weight in G and H. Since the weight of a cut can be written as $\sum_{uv \in E(G)} w_{uv}(x_u - x_v)^2$, where x is a 0-1 indicator vector, one can ask the same question when x is allowed to take arbitrary real values. A spectral sparsifier of a graph G is a sparse graph H such that the above quadratic forms on the two graphs are close for every assignment of x.
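To make the connection concrete, here is a small sketch (on a made-up example graph) checking that the quadratic form $x^T L x$ of the graph Laplacian equals $\sum_{uv} w_{uv}(x_u - x_v)^2$, and that for a 0-1 vector x it is exactly the weight of the corresponding cut:

```python
import numpy as np

# Hypothetical weighted graph as an edge list: (u, v, weight).
edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.0), (2, 3, 1.5)]
n = 4

# Build the Laplacian L = sum_{uv} w_uv (e_u - e_v)(e_u - e_v)^T.
L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

def quad_form(x):
    """x^T L x = sum_{uv in E} w_uv (x_u - x_v)^2."""
    return x @ L @ x

# A 0-1 indicator vector: the quadratic form is the weight of the cut
# separating {0, 1} from {2, 3}.
x = np.array([1.0, 1.0, 0.0, 0.0])
cut_weight = sum(w for u, v, w in edges if x[u] != x[v])
assert np.isclose(quad_form(x), cut_weight)  # edges (1,2) and (0,2) cross
```

A spectral sparsifier H then asks that `quad_form` computed with H's Laplacian stay within a small multiplicative factor of this value for all real x, not just 0-1 vectors.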

Algorithmically, graph sparsifiers were first studied by Spielman and Teng as a subroutine for their near-linear-time Laplacian solver. Their algorithm partitions the graph into expanders by finding approximately sparsest cuts, then randomly samples edges within each expander.

Follow-up work has explored other methods for finding sparsifiers that give better guarantees. One approach is to sample edges with probability proportional to their effective resistance, but it relies on the near-linear-time solver to achieve good performance. This work uses a similar approach with a different goal in mind: it aims for the best approximation ratio possible with a d-regular subgraph, and achieves twice the ratio given by Ramanujan graphs, which are regular graphs whose spectral gap is as large as possible.
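For intuition, the effective-resistance sampling scheme can be sketched as follows. This is a toy version on a hypothetical graph: it computes resistances via a dense pseudoinverse, which is exactly the step that a fast Laplacian solver replaces in the real algorithm, and the sampling-count `q` here is an arbitrary illustration, not the theoretically required number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unit-weight graph: a 4-cycle plus one chord.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
n = 4

L = np.zeros((n, n))
for u, v, w in edges:
    L[u, u] += w
    L[v, v] += w
    L[u, v] -= w
    L[v, u] -= w

# Dense stand-in for a near-linear-time Laplacian solver.
Lpinv = np.linalg.pinv(L)

def eff_res(u, v):
    """Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v)."""
    chi = np.zeros(n)
    chi[u], chi[v] = 1.0, -1.0
    return chi @ Lpinv @ chi

# Sample q edges with probability proportional to w_e * R_e, and give each
# sampled edge weight w_e / (q * p_e) so the Laplacian is unbiased.
probs = np.array([w * eff_res(u, v) for u, v, w in edges])
probs /= probs.sum()
q = 8
samples = rng.choice(len(edges), size=q, p=probs)
H = {}  # the sparsifier, as a weighted edge dict
for i in samples:
    u, v, w = edges[i]
    H[(u, v)] = H.get((u, v), 0.0) + w / (q * probs[i])
```

A pleasant sanity check: for a connected graph the weighted resistances $\sum_e w_e R_e$ always sum to $n - 1$, so the normalization above is well defined.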

I will use the first half of the talk to cover background topics such as condition numbers and spectral decompositions, then show the approaches used in recent work on sparsifiers. If time permits, I will try to sketch why one cannot do better than the ratio given by Ramanujan graphs. The second half of the talk will go through the proof of the key theorem of this paper, which basically states the following:
Given a set of rank-one matrices that sum to the identity, one can pick a small subset of them (with appropriate weights) such that the ratio between the largest and smallest eigenvalues of their sum is small.
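For the curious, here is the statement as I recall it from the Batson-Spielman-Srivastava paper (the exact constants are worth double-checking against the paper itself): given vectors $v_1, \dots, v_m \in \mathbb{R}^n$ with $\sum_{i=1}^m v_i v_i^T = I$ and any $d > 1$, there exist weights $s_i \ge 0$, at most $\lceil dn \rceil$ of them nonzero, such that

$$I \preceq \sum_{i=1}^m s_i \, v_i v_i^T \preceq \left(\frac{\sqrt{d}+1}{\sqrt{d}-1}\right)^2 I.$$

Applying this to the (suitably normalized) rank-one edge matrices $w_{uv}(e_u - e_v)(e_u - e_v)^T$ of a graph Laplacian yields the sparsifier: the condition number of the sum is exactly the approximation ratio between the quadratic forms of G and H.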
