|
Abstracts |
Friday, 21 October 2005 |
8:30 am |
Continental Breakfast / Discussion
|
9:00 am |
Noga Alon,
Tel Aviv University
The Frieze-Kannan Decomposition
method and Grothendieck type inequalities
Frieze and Kannan developed a general method that
provides efficient approximation algorithms for
various optimization problems for dense graphs.
It turns out that a classical inequality of
Grothendieck can be useful in the analysis of this
method. Moreover, one can define a new graph
parameter, called the Grothendieck constant of a
graph, which measures the integrality gap of a
natural integer program associated with the graph.
The study of this parameter leads to several
algorithmic applications, to various extensions of
the classical inequality of Grothendieck, to an
improvement of a recent result of Kashin and Szarek,
and to a solution of a problem of Megretski and of
Charikar and Wirth.
The lecture is based on two recent
papers with A. Naor, and with K. Makarychev, Y.
Makarychev, and A. Naor.
|
10:00 am |
Break |
10:30 am |
Jeff Kahn,
Rutgers University
Some correlation inequalities
We show positive association (i.e. positive correlation of increasing events)
for some families of random variables arising in percolation, the
random cluster model and the "fractional fuzzy Potts model."
|
11:05 am |
Mike Steele,
University of Pennsylvania
Minimum Spanning Trees and the Zeta Theorems
This talk will survey the results that have grown out of
Alan Frieze's remarkable discovery that for large n the length of the
minimal weight spanning tree of the complete graph on n vertices with
independent uniformly distributed edge weights is approximately equal
to Zeta(3).
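For illustration only (not part of the talk): Frieze's limit can be checked numerically with a minimal Python sketch that builds K_n with i.i.d. Uniform(0,1) edge weights and runs Prim's algorithm; the average MST weight is already close to Zeta(3) ≈ 1.202 for moderate n. The parameters n = 200 and 20 trials are arbitrary choices for the demo.

```python
import heapq
import random

def mst_weight(n, rng):
    """Weight of the minimum spanning tree of K_n with i.i.d.
    Uniform(0,1) edge weights, via (lazy) Prim's algorithm."""
    w = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            w[u][v] = w[v][u] = rng.random()
    in_tree = [False] * n
    best = [float("inf")] * n   # cheapest known edge to the tree
    total = 0.0
    heap = [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if in_tree[u]:          # stale heap entry, skip
            continue
        in_tree[u] = True
        total += d
        for v in range(n):
            if not in_tree[v] and w[u][v] < best[v]:
                best[v] = w[u][v]
                heapq.heappush(heap, (w[u][v], v))
    return total

rng = random.Random(0)
avg = sum(mst_weight(200, rng) for _ in range(20)) / 20
print(avg)  # close to Zeta(3) = 1.2020569...
```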
|
11:40 am |
Boris Pittel,
Ohio State University
Bootstrap percolation on the random regular graph
The bootstrap percolation on a graph is a model of an
interacting particle system, which can also be viewed as a variant of a
growth process of a cellular automaton with threshold k \ge 2. At the start
each of the graph vertices is active with probability p and inactive
with probability 1-p, independently of the other vertices. The presence of
active vertices triggers a percolation process controlled by the recursive
rule: an active vertex remains active forever, and a currently inactive vertex
becomes active when at least k of its neighbors have become active.
The basic problem is to identify, for a given graph, p_1 < p_2 such that
for p < p_1 (resp. p > p_2) the probability that all, or almost all, vertices
are eventually active is very close to 0 (resp. 1). The percolation process is a
Markov chain on the space of subsets of the vertex set, which is easy to
describe but hard to analyze rigorously in general. We study the percolation
on the random d-regular graph, d\ge 3, via analysis of the process on its
multigraph counterpart. This substitution works since the probability that
the random multigraph is in fact a simple graph is bounded away from zero. For the
multigraph (the configuration model), thanks to a "principle of deferred
decisions," the percolation dynamics is described by a much simpler
Markov chain. Its generic state is formed by the counts of currently active and
nonactive vertices having various degrees of percolation capabilities.
We replace the chain by a deterministic dynamic system, and use its integrals
to show -- via exponential supermartingales -- that the percolation process
undergoes relatively small fluctuations around the deterministic trajectory.
This approximation result allows us to show existence of the phase transition
within an interval [p_1(n), p_2(n)], such that p(k,d)= lim p_1(n)
=lim p_2(n) is 1 - min {y^2/R(y); y > 0}, R(y) being a certain polynomial
of degree d. Here the transition width p_2(n) - p_1(n) is of order n^{-1/2} for
k < d-1, and of order n^{-1/[2(d+5)]} for k = d-1.
This is joint work with Jozsef Balogh (University of Illinois at
Urbana-Champaign).
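For intuition, the dynamics described above can be simulated directly. The sketch below is only an illustration, not the authors' analysis: it builds a random d-regular multigraph via the configuration model (the multigraph counterpart mentioned in the abstract) and runs the threshold-k activation rule to completion; the choices n = 2000, d = 3, k = 2 and the two values of p are hypothetical.

```python
import random
from collections import deque

def random_regular_multigraph(n, d, rng):
    """Configuration model: pair n*d stubs uniformly at random (n*d must
    be even). Loops and multiple edges are allowed, matching the
    multigraph counterpart used in the abstract."""
    stubs = [v for v in range(n) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [[] for _ in range(n)]
    for i in range(0, n * d, 2):
        u, v = stubs[i], stubs[i + 1]
        adj[u].append(v)
        adj[v].append(u)
    return adj

def bootstrap_percolate(adj, p, k, rng):
    """Activate each vertex independently with probability p, then apply
    the recursive rule: an inactive vertex with at least k active
    neighbors (counted with multiplicity) becomes active forever."""
    n = len(adj)
    active = [rng.random() < p for _ in range(n)]
    count = [0] * n  # active neighbors seen so far
    q = deque(v for v in range(n) if active[v])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if not active[v]:
                count[v] += 1
                if count[v] >= k:
                    active[v] = True
                    q.append(v)
    return sum(active) / n

rng = random.Random(1)
adj = random_regular_multigraph(2000, 3, rng)
low = bootstrap_percolate(adj, 0.05, 2, rng)   # sparse start
high = bootstrap_percolate(adj, 0.90, 2, rng)  # dense start
print(low, high)
```

Running it with the two initial densities gives a crude picture of the phase transition the talk analyzes rigorously.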
|
12:15 pm |
Lunch |
2:00 pm |
Ravi Kannan,
Yale University
Low-Rank Approximations to Matrices and Tensors
I will describe a line of work started jointly with Alan Frieze ten
years ago in which both traditional and new methods of finding
low-rank approximations of matrices have been used in algorithms for
combinatorial problems. These methods have been extended from
matrices to higher-dimensional arrays (tensors), where traditional
linear algebra is not available. The applications fall broadly into
two categories: numerical (an area which often goes under the name of
Principal Component Analysis) and combinatorial (mainly, Maximum
Constraint Satisfaction problems, Max-CSP).
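To give a flavor of the sampling-based approach, the following sketch uses length-squared row sampling, in the spirit of the Frieze-Kannan line of work, to recover the dominant singular direction of a large matrix from a small sample of its rows. This is only an illustration under stated assumptions: the noisy rank-1 test matrix and all parameter choices are hypothetical, and the code is not the algorithm from the talk.

```python
import math
import random

def sample_rows(A, s, rng):
    """Length-squared sampling: pick s rows with probability proportional
    to their squared norm, rescaled so that E[S^T S] = A^T A."""
    norms2 = [sum(x * x for x in row) for row in A]
    total = sum(norms2)
    probs = [t / total for t in norms2]
    rows = rng.choices(range(len(A)), weights=probs, k=s)
    return [[x / math.sqrt(s * probs[i]) for x in A[i]] for i in rows]

def top_right_singular_vector(S, iters=100):
    """Power iteration on S^T S, approximating the top right singular
    vector of S (and hence, approximately, of A)."""
    m = len(S[0])
    v = [1.0 / math.sqrt(m)] * m
    for _ in range(iters):
        Sv = [sum(r[j] * v[j] for j in range(m)) for r in S]
        w = [sum(S[i][j] * Sv[i] for i in range(len(S))) for j in range(m)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    return v

# Hypothetical demo: a noisy rank-1 matrix; a sketch of s of the n rows
# suffices to recover its dominant direction.
rng = random.Random(2)
n, m, s = 400, 30, 40
u = [rng.gauss(0, 1) for _ in range(n)]
w = [rng.gauss(0, 1) for _ in range(m)]
A = [[u[i] * w[j] + 0.01 * rng.gauss(0, 1) for j in range(m)]
     for i in range(n)]
v = top_right_singular_vector(sample_rows(A, s, rng))
wn = math.sqrt(sum(x * x for x in w))
cos = abs(sum(v[j] * w[j] for j in range(m))) / wn
print(cos)  # alignment with the planted direction, close to 1
```

The rescaling by 1/sqrt(s * p_i) is the key design choice: it makes every sampled row contribute an unbiased term to S^T S, so a small sample preserves the dominant spectral structure.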
|
3:00 pm |
Break |
3:30 pm |
Colin Cooper,
King's College London
The cover time of random walks on random graphs
Let $G=(V,E)$ be a connected graph with $|V|=n$. For $v\in V$ let
$C_v$ be the expected time taken for a random walk on $G$ starting
at $v$, to visit every vertex of $G$. The cover time $C_G$ of
$G$ is defined as $C_G=\max_{v\in V}C_v$.
We investigate the cover time of a random walk on some classes of
sparse random graphs. In particular we prove the following for a
random walk on the giant component of graphs $ G_{n,p}$.
Let $np=c>1$ and $c=O(\log \log n)$. Let
\[
t^*= \frac{cx(2-x)}{4(cx-\log c)} n (\log n)^2,
\]
where $x$ is the solution in $(0,1)$ of $x=1-e^{-cx}$. If $H$
denotes the giant component of $G_{n,p}$, then with high probability
the cover time is asymptotic to $t^*$.
This is joint work with Alan Frieze.
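The formula for t* is easy to evaluate numerically. The sketch below, an illustration rather than part of the talk, solves x = 1 - e^{-cx} by fixed-point iteration and plugs the root into the expression above; the inputs c = 2 and n = 10^4 are arbitrary demo values.

```python
import math

def giant_fraction(c, iters=200):
    """Solve x = 1 - exp(-c*x) for the root in (0, 1) by fixed-point
    iteration; for c > 1, starting from x = 1 converges to it."""
    x = 1.0
    for _ in range(iters):
        x = 1.0 - math.exp(-c * x)
    return x

def cover_time_estimate(n, c):
    """Evaluate t* = c*x*(2-x) / (4*(c*x - log c)) * n * (log n)^2,
    the asymptotic cover time formula from the abstract."""
    x = giant_fraction(c)
    return c * x * (2 - x) / (4 * (c * x - math.log(c))) * n * math.log(n) ** 2

x2 = giant_fraction(2.0)
print(x2)  # ~0.7968, the giant-component fraction at c = 2
print(cover_time_estimate(10**4, 2.0))
```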
|
4:05 pm |
Aravind Srinivasan,
University of Maryland
The Local Lemma for random variables with large support
One of the many areas to which Alan Frieze has made major
contributions is the existence and construction of low-congestion
routing strategies. We present a tight sufficient condition for
the existence of unsplittable flows in graphs. This relies on an
extension of the Local Lemma that we have developed, which often
works better than standard use of the Local Lemma when the underlying
independent random variables have large supports. We will illustrate
this via an extension of a result of Noga Alon on structured
independent sets in graphs.
This is joint work with Tom Leighton and Satish Rao.
|
4:40 pm - 5:15 pm |
Eric Vigoda,
Georgia Institute of Technology
Simulated Annealing for the Permanent and Binary Contingency Tables
I will describe some recent refinements of the simulated annealing
algorithm for estimating the permanent of a non-negative matrix in
polynomial time. The first result is an improved cooling schedule
for the annealing process in the permanent algorithm. The second
is an application to binary contingency tables (i.e., bipartite graphs
with specified degree sequences).
|
 |
Saturday, 22 October 2005 |
8:30 am |
Continental Breakfast / Discussion
|
9:00 am |
Eli Upfal,
Brown University
The Combinatorics of Sequencing by Hybridization
Alan Frieze has also contributed to computational biology. I'll describe
a line of work related to the efficient use of SBH chips that started in
joint work with Alan.
Sequencing by hybridization is a novel DNA sequencing technique in which
an array (SBH chip) of short sequences of nucleotides (probes) is
brought in contact with a solution of (replicas of) the target
DNA sequence. A biochemical method determines the subset of probes that bind
to the target sequence (the spectrum of the sequence), and
a combinatorial method is used to reconstruct the DNA sequence
from the spectrum.
Since technology limits the number of probes on the SBH chip, a challenging
combinatorial question is the design of a smallest set of probes that
can sequence almost all DNA strings of a given length.
We'll present combinatorial designs that improve the performance
of SBH, using universal bases (bases that bind to
any nucleotide), and related methods. The talk will focus on the
combinatorial and algorithmic issues, with just a few words about the
technical challenge of commercializing these methods.
|
10:00 am |
Break |
10:30 am |
Dimitris Achlioptas,
Microsoft Research
Clustering of solutions in
random constraint satisfaction problems
For a number of random constraint satisfaction problems, such as random
k-SAT and random graph/hypergraph coloring, we now have good estimates
of the largest constraint density for which solutions exist. Yet, known
polynomial-time algorithms for these problems fail to find solutions
even at much lower densities. To understand the origin of this gap we
study how the structure of the space of solutions evolves in such
problems as constraints are added. We prove that much before solutions
disappear, they organize into an exponential number of clusters, each of
which is relatively small and far apart from all other clusters.
Moreover, inside each cluster the vast majority of variables is frozen,
i.e., takes only one value. Our proof also establishes rigorously one of
the two main hypotheses underlying Survey Propagation, a heuristic
introduced by physicists in recent years that appears to perform
extraordinarily well on random constraint satisfaction problems.
|
11:05 am |
Andrei Broder,
Yahoo! Research
Sampling Search Engine Results
We consider the problem of efficiently sampling Web search engine query
results. In turn, using a small random sample instead of the full set of
results leads to efficient approximate algorithms for several
applications, such as:
- Determining the set of categories in a given taxonomy spanned by the
search results;
- Finding the range of metadata values associated with the result set in
order to enable "multi-faceted search;"
- Estimating the size of the result set;
- Data mining associations to the query terms.
We present and analyze an efficient algorithm for obtaining uniform
random samples applicable to any search engine based on posting lists and
document-at-a-time evaluation. (To our knowledge, all Web and large-scale
search engines, e.g. AllTheWeb, AltaVista, Inktomi, Google, OmniFind,
Yahoo, belong to this class.)
This is joint work with Aris Anagnostopoulos (Brown University) and David Carmel (IBM
Haifa Research Lab).
|
11:40 am |
Richard M. Karp,
University of California at Berkeley
Geometric Optics, Linear Programming
and Congestion in Sensornets
We consider the problem of routing in a geographically distributed network of processors to minimize the maximum congestion at any node, or to optimize the trade-off between average path delay and maximum congestion. Instead of assuming a discrete model with nodes at known positions, we assume that the density of nodes is so large that we can adopt a continuous model, in which each communication path is a continuous curve in a region of the plane and congestion at a point corresponds to the density of paths in a neighborhood of the point.
Using an argument based on linear programming, we show that the problem is isomorphic to a problem in geometric optics, in which we interpret the communication paths as the minimum-time paths followed by light rays in a medium where the speed of light varies continuously. Our problem is then to specify the speed of light as a function of position so that the resulting minimum-time paths minimize maximum congestion.
Once this function has been specified the computation of minimum-time paths is a standard problem in the calculus of variations, but the problem of specifying the function is novel, and we give an approach based on the primal-dual algorithm of linear programming.
This is joint work with Christos Papadimitriou, Lucian Popa, and Afshin Rostami.
|
12:15 pm |
Lunch |
1:30 pm |
FOCS 2005 Tutorials
Separate registration is required for FOCS at http://www.cs.cmu.edu/~FOCS05/
|
4:00 pm |
Martin Dyer,
University of Leeds
Generating random colourings
A recent collaboration of mine with Alan Frieze concerned the problem of
generating colourings of degree-bounded graphs almost uniformly at random.
The objective of the work was to mitigate the pessimism of coupling (and
similar) analyses for this problem. The resulting paper suggested an
approach which subsequently proved quite successful, although the natural
conjecture is not yet fully resolved. I will review the "history" of this
problem as well as some more recent ideas.
|
5:00 pm |
Nick Wormald,
University of Waterloo
On the chromatic number of random regular graphs
In 1992, Frieze and Luczak determined the behaviour of the
chromatic number of random d-regular graphs asymptotically, for d
going to infinity slowly and the number of vertices going to infinity
quickly. At the other end of the spectrum, and using different
methods, we now know that almost all 4-regular graphs are 3-chromatic
and almost all 6-regular graphs are 4-chromatic. The proofs of these and
similar results use systems of equations of right-hand derivatives.
This is based on recent work with L. Shi.
|
5:30 pm - 6:00 pm |
Jeong Han Kim,
Microsoft Research
Poisson Cloning Model and Giant Component of Random Graphs
Since P. Erdős and A. Rényi introduced random
graphs in 1959-1968, the theory of random graphs has played a
central role in probabilistic combinatorics. The emergence of the
giant component is one of a few subjects with the most colorful
history in the area. After introducing a new (essentially equivalent)
model for the random graph, called the Poisson cloning model, we improve
and/or reprove various results regarding the giant component. For
instance, we prove a tight Chernoff-type bound for the size of the
giant component of G(n,p) with p = (1+c)/n and n^{-1/3} \ll c < 1.
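For illustration only: a small simulation of the giant component of G(n, p) with p = lam/n, compared against the classical survival probability x solving x = 1 - e^{-lam*x} (which the Poisson cloning analysis sharpens). The parameters n = 2000 and lam = 1.5 are hypothetical demo values, not ones from the talk.

```python
import math
import random
from collections import deque

def survival(lam, iters=200):
    """Fixed-point iteration for x = 1 - exp(-lam*x): the asymptotic
    giant-component fraction when lam > 1."""
    x = 1.0
    for _ in range(iters):
        x = 1.0 - math.exp(-lam * x)
    return x

def largest_component_fraction(n, lam, rng):
    """Sample G(n, p) with p = lam/n and return |largest component| / n."""
    p = lam / n
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen = [False] * n
    best = 0
    for s in range(n):               # BFS over each undiscovered component
        if seen[s]:
            continue
        seen[s] = True
        size, q = 1, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    q.append(w)
        best = max(best, size)
    return best / n

rng = random.Random(3)
frac = largest_component_fraction(2000, 1.5, rng)
print(frac, survival(1.5))  # both near 0.583
```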
|
 |
Back to FriezeFest 2005 main page
|