Talks of the 2008 Edition, June 30th - July 18th
30-06-2008 11:15 - Bipedal Bugs, Galloping Ghosts and Gripping Geckos: Neuromechanical Systems Biology
Prof. Robert J. Full, University of California, Berkeley 

Abstract 
Author's webpage http://polypedal.berkeley.edu/twiki/bin/view/PolyPEDAL/ProfessorsOffice 



30-06-2008 14:15 - Graph analysis of diversity coding
Prof. Joseph Boutros, Texas A&M University at Qatar 

Abstract - We review coding structures for attaining maximum diversity on wireless channels with a limited number of degrees of freedom. We distinguish two families of such codes: trellis-based codes, like the h-π-diagonally multiplexed turbo codes, and sparse graph-based codes, like root-LDPC constructions. As a link between the two families, we present convolutional Tanner codes for limited-diversity channels, where the local trellis neighborhood mimics an LDPC check node. Finally, we unveil the answer to the question "Is graph irregularity sufficient to attain maximum diversity?". Publications related to this seminar can be found at the author's webpage.
Author's webpage http://www.josephboutros.org/ 



30-06-2008 15:15 - Molecular algorithms: combining efficiency with robustness
Prof. Ashish Goel, Stanford University 

Abstract - DNA self-assembly has emerged as an important technique for molecular computation and nanotechnology. At these scales, self-assembly is governed by simple (and local) probabilistic rules for growth, making it amenable to algorithmic techniques. We will discuss two important challenges in algorithmic self-assembly: robustness and efficiency. This talk will present recent results and also attempt to provide a roadmap of open problems. We will also briefly describe some recent algorithmic progress towards programmable molecular machines.
Author's webpage http://www.stanford.edu/~ashishg/ 



01-07-2008 11:15 - Efficient Publish/Subscribe in Large-Scale Distributed Systems
Dr Gregory Chockler, IBM R&D Labs, Israel 



01-07-2008 14:15 - Information-theoretic and physical limits on the capacity
Prof. Massimo Franceschetti, University of California, San Diego 

Abstract - By distributing uniformly at random on the order of n nodes wishing to establish pairwise independent communications inside a 2D domain of area of the order of n using electromagnetic waves, the per-node information rate must follow an inverse-square-root-of-n law as n tends to infinity. This scaling limit is computed without postulating fading channel or path loss models, but by applying Maxwell's physics of wave propagation directly, in conjunction with Shannon's theory of information. Indeed, the upper bound is due to a limitation in the spatial degrees of freedom of the propagating field, which can be rigorously proved via functional analysis. Broad conclusions are drawn from this result on the value of the spatial resource in wireless networks, and a description of different network configurations which can in principle achieve a much higher (i.e., constant) bit rate will follow. Joint work with P. Minero and M.D. Migliore.
Author's webpage http://fleece.ucsd.edu/~massimo/ 



02-07-2008 10:15 - Agreement, consensus, rendezvous: a review of some open problems
Prof. Fabio Fagnani, Politecnico di Torino, Italy 

Abstract - The simplest instance of the consensus problem can be formulated as follows. Suppose we have a directed graph G with set of nodes V = {1, ..., N} and a measure x_i for every node i ∈ V. The (average) consensus problem consists in computing the average x_A = N^(-1) Σ_i x_i in an iterative way, exchanging information among nodes exclusively along the available edges in G. This problem has appeared in a number of different contexts since the 1980s (decentralized computation, load balancing, clock synchronization) and has recently attracted much attention for possible applications to sensor networks (distributed estimation problems) and to coordinated control of mobile autonomous agents. Several algorithms for average consensus can be found in the literature: they differ in the amount of communication and computation they use, in their scalability with respect to the number of nodes, in their adaptability to time-varying graphs, and, finally, they can be deterministic or random. In spite of the enormous amount of literature on the subject, quite basic open questions remain unanswered. One big issue, on which this talk will focus, is how to measure the performance of such an algorithm. As we will see, this depends on the type of application we have in mind, and it becomes a particularly subtle question for randomized algorithms.
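The iterative averaging described in the abstract can be sketched in a few lines. The graph, step size, and update rule below are illustrative placeholders, not any specific algorithm from the talk:

```python
# A minimal sketch of synchronous average consensus: each node repeatedly
# moves toward the values of its graph neighbors (alpha is a step size).
def consensus_step(x, neighbors, alpha=0.25):
    """One synchronous update of every node's value."""
    return [
        xi + alpha * sum(x[j] - xi for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

# 4 nodes on a ring; each node exchanges values only with its two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [4.0, 0.0, 2.0, 6.0]          # initial measurements, average x_A = 3.0
for _ in range(100):
    x = consensus_step(x, neighbors)
# All values converge to the average 3.0.
```

With a symmetric neighbor relation and a small enough alpha, each update preserves the sum of the values, so the common limit is exactly the average x_A.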
Author's webpage http://calvino.polito.it/~fagnani/ 



02-07-2008 14:15 - SAR Image Regularization
Dr. Loic Denis, CPE Lyon, France 

Abstract - In the first part, I will give a short overview of graph-cut techniques for image processing. These techniques solve combinatorial problems using graph algorithms. Many image processing problems (denoising, reconstruction, segmentation) can be expressed as energy minimization problems. I will present the principle of graph-cut techniques and some popular graph-cut based algorithms.
In the second part, I will describe a denoising technique for the multiplicative (speckle) noise that arises with coherent imaging techniques such as synthetic aperture radar (SAR). Speckle noise follows heavy-tailed distributions, and regularization approaches lead to a minimization problem involving non-convex log-likelihood terms. I will describe an algorithm that uses graph cuts to perform a combinatorial exploration of large trial moves. I will then show some results of joint regularization of amplitude and interferometric phase in urban-area SAR images.
Author's webpage http://www.tsi.enst.fr/~ldenis/index.html 



02-07-2008 15:15 - Introduction to unconstrained numerical optimization
Dr. Catherine Mennessier, CPE Lyon, France 

Abstract - We will first recall the main structure of local optimization algorithms. The "one iteration, two steps" approach, i.e., the selection of a descent direction followed by a line search, will be developed for standard algorithms of the first and second kind. More efficient algorithms that have become famous will then be presented, in particular the BFGS algorithm with a Wolfe line search.
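The "one iteration, two steps" structure can be illustrated with its simplest instance: steepest descent with a backtracking (Armijo) line search. A Wolfe line search and the BFGS direction refine both steps but keep the same skeleton. The test function and constants below are illustrative, not from the talk:

```python
# Sketch of a local optimization loop: (1) pick a descent direction,
# (2) pick a step size along it by a backtracking line search.
def minimize(f, grad, x, max_iter=500):
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < 1e-18:   # gradient (numerically) zero
            break
        d = [-gi for gi in g]                  # step 1: direction (steepest descent)
        t = 1.0                                # step 2: line search
        while f([xi + t * di for xi, di in zip(x, d)]) > \
              f(x) + 1e-4 * t * sum(gi * di for gi, di in zip(g, d)):
            t *= 0.5                           # backtrack until Armijo holds
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)]
x_star = minimize(f, grad, [0.0, 0.0])   # converges to about (1, -2)
```

BFGS would replace step 1 with a direction computed from an updated inverse-Hessian approximation, and a Wolfe line search would add a curvature condition to step 2.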
Author's webpage http://www.univstetienne.fr/tsi/svemouv/cadresperso/mennessier.htm 



03-07-2008 10:15 - Visualizing and Engineering Stochastic Local Search
Prof. Roland Yap, National University of Singapore 

Abstract - Stochastic Local Search (SLS) has proven very effective for solving many combinatorial problems. However, the performance of SLS depends very much on the precise heuristics and parameters, which may not be easy to get right. In practice, SLS algorithms are carefully designed and tuned to get good performance, but this is often done in an ad-hoc fashion. Our thesis is that a man-machine approach using visualization can make this ad-hoc process more systematic. We present an approach using visualization and animation which is able to explain and give insights into how the SLS behaves. It can do this in a generic fashion on many problems tackled with SLS. The man-machine approach allows us to effectively engineer better SLS algorithms, turning ad-hoc tweaks into a more reasoned and scientific improvement process. Some case-study examples will be given to illustrate the power of visualization.
Author's webpage http://www.comp.nus.edu.sg/~ryap/ 



04-07-2008 10:15 - Interval Analysis for Solving Equation Systems and Optimization Problems
Dr Odile Pourtallier, INRIA, France 

Abstract - After presenting interval arithmetic, we show how it can be used to obtain certified solutions of equation systems and optimization problems. Although at first glance the basic method seems simple, it needs refinements carefully adapted to the problems considered in order to be efficient. In many cases, techniques based upon interval analysis do not compete with more classical techniques when an approximate local solution is sought. Nevertheless, they are efficient at finding and certifying all the solutions of a problem in a given search space. We will show some examples solved with the ALIAS library developed at INRIA by the COPRIN team.
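As a flavour of the basic arithmetic, here is a toy interval type; it is only an illustration, not the ALIAS library's actual API, and it ignores directed rounding, which a truly certified implementation must handle:

```python
# A toy interval type: operations return an interval guaranteed to enclose
# every possible result when the operands range over their intervals.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(-1.0, 2.0)
# Enclosure of f(x) = x*x + x over [-1, 2]:
print(x * x + x)   # -> [-3.0, 6.0]
```

The enclosure is guaranteed to contain the true range of f over the box, but it may be pessimistic: the exact range of x² + x on [-1, 2] is [-0.25, 6]. Refinements such as bisecting the box tighten these bounds, which is how interval methods find and certify all solutions in a search space.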
Author's webpage http://wwwsop.inria.fr/miaou/Odile.Pourtallier/cv.html 



04-07-2008 14:15 - How to convince without giving proofs away?
Prof. Ronald Cramer, CWI, Centrum voor Wiskunde en Informatica, Amsterdam, NL 

Abstract - Encryption and digital signatures safeguard the secrecy and integrity of data communication against intruders and hackers, i.e., they provide unilateral security: the "good guys" are protected from the "bad guys". In this lecture we give an introduction to a fundamentally different notion. In principle it is possible to convince a sceptical verifier that a certain theorem is true *without giving away any details of your proof*. In this lecture we will give a playful example of this wonderful concept of *zero-knowledge proofs*. Zero-knowledge proofs, invented in the early 1980s, play an important role in modern cryptology, and they are a special case of *secure computation*. This very active area of cryptologic research deals with multilateral security problems that arise when cooperating parties do not sufficiently trust each other and hence do not want to exchange relevant secret data.
Author's webpage http://homepages.cwi.nl/~cramer/ 



07-07-2008 14:15 - Managing Uncertain Data using Statistical Models
Prof. Amol Deshpande, University of Maryland, USA 

Abstract - Much of the real-world data generated these days is inherently uncertain or probabilistic in nature. For instance, sensor data typically has some notion of quality attached to it: it could be the confidence of existence, trust, accuracy, a probability distribution over possible values, or a mix of these. Similarly, when attempting to integrate heterogeneous data sources (data integration) or to extract structured information from text (information extraction), the results are approximate and uncertain at best. In this talk, I will present an overview of our approach to integrating statistical and probabilistic models into relational database systems so as to make it easy to manage and reason about uncertain data. I will first discuss our work on the MauveDB project, which supports an abstraction called "model-based views" with which users can specify statistical models to be applied to streaming sensor data. The output of the modeling process is presented to the users as a relational table that can be queried using a declarative language. I will then present our ongoing work on query processing over uncertain relational databases, which occur naturally in many settings such as information extraction, or which may be the result of a probabilistic model-based view. We have developed a uniform, closed framework for representing and querying uncertain data based on concepts from probabilistic graphical models; I will present an overview of our framework, the challenges in query evaluation over uncertain data, and the algorithms we have developed for efficient query evaluation over large uncertain databases.
Author's webpage http://www.cs.umd.edu/~amol/ 



08-07-2008 13:15 - LKL: The Linux Kernel Library
Prof. Octavian Purdila, Polytechnic University of Bucharest 

Abstract - The Linux Kernel Library project's goal is to offer an easy way of reusing Linux kernel code in environments as diverse as possible. It allows other projects to use code from the Linux kernel without needing to separate, isolate, and extract it from Linux, while guaranteeing that changes in the mainline version will propagate easily to those projects. It is implemented as a Linux kernel port to a virtual architecture, and we have shown that it is practical by developing complex (and useful) applications running both in user space (Linux, Windows) and in the kernel (Windows), as well as a few interesting hacks: a portable FTP server that can access Linux filesystems, a Windows kernel driver which serves the same purpose, and a new SL*B allocator which allows one to run memory-checking tools like Valgrind on kernel code.
Author's webpage http://ixlabs.cs.pub.ro/projects/linuxkernellibrary/ 



09-07-2008 16:15 - Towards Correctness-Constrained Execution for Processor Designs
Prof. Valeria Bertacco, University of Michigan, USA 

Abstract - Every integrated circuit is released with latent bugs. The damage and risk implied by an escaped bug range from almost imperceptible to potential tragedy; unfortunately, it is impossible to discern where within this range a bug lies before it has been exposed and analyzed. While the past few decades have witnessed significant efforts to improve verification methodology for hardware systems, these efforts have been far outstripped by the massive complexity of modern digital designs, leading to product releases in which an ever smaller fraction of the system's states has been verified. News of escaped bugs in large-market designs and/or safety-critical domains is alarming because of the safety and cost implications (due to replacements, lawsuits, etc.). This talk will describe our solutions to the verification challenge, such that users of future designs can be assured that their devices operate completely free of bugs. We attack the problem both at design time, presenting techniques to boost the fraction of the state space that can be verified before tapeout, and after deployment in the field, discussing novel solutions which can correct escaped bugs after a system has been shipped. Our ultimate vision for this technology is to make hardware as malleable as software.
Author's webpage http://www.eecs.umich.edu/~valeria/ 



10-07-2008 14:15 - Multiresolution Image Denoising
Prof. Thierry Blu, The Chinese University of Hong Kong 

Abstract 
Author's webpage http://www.ee.cuhk.edu.hk/~tblu/ 



10-07-2008 16:15 - A stochastic calculus of binding with applications to the modelling of cellular signalling
Prof. Vincent Danos, University of Edinburgh 

Abstract 
Author's webpage http://www.inf.ed.ac.uk/people/staff/Vincent_Danos.html 



11-07-2008 10:15 - From Pixels to Perception
Prof. Graham D. Finlayson, University of East Anglia, UK 

Abstract - For about the last 10 years, the digital camera market has been obsessed with the number of pixels. New cameras can have 10, 12, 20 or more megapixels per image. Yet the need for so many pixels is questionable: digital cameras now have around twice the number of colour receptors that we ourselves have, and we see the world in perfect clarity.
Moreover, from a signal processing point of view, too many pixels means that, even in relatively normal lighting conditions, individual pixels often capture just a handful of photons. As a consequence, the resulting raw images measured on a camera CCD are often very noisy. While this noise can be removed by averaging proximal pixels, doing so reduces the effective resolution of the camera.
At the University of East Anglia, and also in a spin-out company called ImSense Ltd, we have been developing technology for making better-looking images. Our approach is not to focus on the number of pixels in the eye or indeed the eye's general physiology. Rather, we have developed a unique mathematical algorithm which delivers images that look like those we ourselves remember seeing: the processed images are more appealing and vivid (in fact, they appear to have higher definition). Our approach is entirely consistent with our own vision system, where the eye records an image but the picture we see is the result of extensive cortical processing. Thus, we believe a major part of the image quality story will be the processing of images, and not just the pixel count of the acquisition device or the resolution of the display.
In this talk I will review some of the major processing approaches that have evolved (including Retinex theory and iCAM) and give my view of why these approaches are limited in their application. These limitations are the starting point of the UEA/ImSense approach. A high-level outline of our algorithm will be presented along with experimental results.
Author's webpage http://www.colorresearch.com/ 



14-07-2008 14:15 - Aardvark: making Byzantine fault-tolerant systems tolerate Byzantine faults
Prof. Lorenzo Alvisi, University of Texas, Austin 



15-07-2008 10:15 - The TCP Unfairness Problem in 802.11 Networks
Prof. Andrzej Duda, Laboratoire d'informatique de Grenoble, France 

Abstract - In a typical deployment of IEEE 802.11 wireless LANs in infrastructure mode, an access point acts as a bridge between the wireless and the wired parts of the network. Under the current IEEE 802.11 DCF (Distributed Coordination Function) access method, which provides equal channel access probability to all devices in a cell, the access point cannot relay all the frames it receives on the downlink. This causes significant unfairness between upload and download connections, long delays, and frame losses. The unfairness problem comes from the interaction of transport-layer protocols with the MAC-layer access method: for correct operation at the transport layer, the access point requires a higher transmission attempt probability than the wireless stations. In this talk, we present a simple and elegant solution to the unfairness problem at the MAC layer. We define the operation of an Asymmetric Access Point that benefits from sufficient transmission capacity with respect to the wireless stations, so that the overall performance improves. We will also present other solutions to the same problem at the network layer.
Author's webpage http://duda.imag.fr/ 



15-07-2008 14:15 - Communicating Delay-Sensitive and Bursty Information over an Outage Channel
Prof. Tara Javidi, University of California, San Diego 

Abstract - In this talk, we consider the classic (cross-layer) queue/channel optimization problem for bursty and delay-sensitive information sources. In particular, we are interested in communications over outage-limited channels (with no feedback) when the number of bits that arrive at the transmitter during any time slot is random but the delivery of bits at the receiver must adhere to a strict delay constraint. In this setting, the errors experienced by the source of information, concatenated with an infinite buffer and a constant-rate outage-limited channel, are caused either by erroneous decoding at the receiver or by violation of the delivery deadline. It is intuitive, then, that there is a trade-off between reliability over the channel and timely delivery of information. After briefly revisiting the difficulty of quantifying this trade-off in its classical Shannon-theoretic context, we take advantage of the recently developed high-SNR analysis of outage-limited channels to get around the difficulty. Hence, the focus of the talk becomes to characterize the error performance of the overall system in the high-SNR regime. We will see that the optimal decay behavior of the asymptotic error probability depends on how fast the burstiness of the source scales down with its mean, which itself scales with SNR. We will focus on a particular scaling under which the optimal exponent of the total error probability reveals a trade-off addressing the following classical question: how much of the allowable delay and channel capacity should be used to gain reliability over the channel versus to accommodate the burstiness of the delay-sensitive source? Time permitting, we will address the extension of this work to a multiuser scenario and the resulting many-flow large deviations analysis of multi-queue systems with interacting service.
Author's webpage http://fleece.ucsd.edu/~tjavidi/ 



16-07-2008 11:15 - Common Randomness, Multiuser Secrecy and Tree Packing
Prof. Prakash Narayan, University of Maryland, USA 

Abstract - Consider a situation in which multiple terminals observe separate but correlated signals. In a multiterminal data compression problem, à la the classic work of Slepian and Wolf, a subset of these terminals seek to acquire the signals observed by all the terminals by means of efficiently compressed inter-terminal communication.
This problem of generating common randomness does not involve any secrecy constraints. On the other hand, in a secret key generation problem, the same subset of terminals seek to devise "secret" common randomness, or a secret key, through public communication that can be observed by an eavesdropper, in such a way that the key is concealed from the eavesdropper. Such a secret key can be used for subsequent encryption. We explain the innate connections between these two problems.
Next, for a special "pairwise independent network" model, in which every pair of terminals observes correlated signals that are independent of the signals observed by all other pairs of terminals, we show a natural connection between secrecy generation and a (separate) combinatorial problem of maximal packing of Steiner trees in an associated multigraph. In particular, the maximum number of Steiner tree packings is always a lower bound for the secrecy capacity, and the bound is tight when all the terminals seek to share a secret key.
This talk is based on joint work with Imre Csiszar, Sirin Nitinawarat, Chunxuan Ye, Alexander Barg and Alex Reznik.
Author's webpage http://www.ece.umd.edu/~prakash/ 



16-07-2008 14:15 - The Broadcast Approach in Communications Systems: Overview, Applications and Perspectives
Prof. Shlomo Shamai, Technion - Israel Institute of Technology 

Abstract - Traditional coded communication schemes for block fading channels, with channel state information (CSI) available at the receiving end only, are based on fixed-rate single-level coding. The achievable throughput is known as the outage capacity, the average of which is often optimized. We will focus on a more general approach based on multi-layered coded communications, termed the broadcast approach. This strategy makes it possible to adapt the reliably decoded rate to the actual channel state without any feedback link to the transmitter. This is obtained by using a multi-layer code in which each layer is associated with a channel state. Emphasis will be placed on block fading channel models, in which the state is associated with the fading realization. Maximization of the expected throughput is achieved by optimally allocating power per layer. When the number of layers is unlimited, a continuous multi-layer upper bound can be formulated. We obtain this fundamental upper bound for channels with one degree of freedom, where the transmitter has imperfect CSI or no CSI at all. For other multiple-input multiple-output (MIMO) channels, only achievable rates are available, due to the non-degraded nature of the MIMO channel. These rates are contrasted with the ergodic capacity upper bound. We extend the view of multi-layered channel coding to address a simple single-server queueing model, and obtain tight bounds on the expected delay. Another setting considered is source-channel coding. Specifically, we address the transmission of a Gaussian source subject to the mean squared error distortion measure. The source is assumed to be encoded in a successive refinement manner and then transmitted over the channel using the broadcast strategy. We characterize analytically the optimal power allocation when the fading state is a continuum.
Numerous other communication settings incorporating the broadcast strategy are considered, among them hybrid ARQ protocols and simple procedures for cooperative communications. An outlook of related open problems, where an analytic formulation of a broadcast approach is still elusive or incomplete, will be described briefly, followed by a general perspective on layered communications in multiterminal communication systems.
Author's webpage http://www.ee.technion.ac.il/people/shamai/shamai_hp.html 



16-07-2008 15:15 - The Matched Filter Done Right
Prof. Amos Lapidoth, ETHZ 

Abstract - One of the key results of digital communications can be paraphrased very roughly as follows: "In guessing which of two deterministic signals is being observed in white Gaussian noise, the inner products between the observed waveform and each of the signals form a sufficient statistic. Consequently, it is optimal to base one's decision on these two inner products." It is surprising that this basic result is never formulated as a theorem in any of the textbooks on the subject. This may be because of the difficulties in defining white Gaussian noise, in defining sufficient statistics for waveform observations, and in relating sufficiency to optimal detection. In this talk I shall describe a number of approaches to formulating the above statement as a theorem and point out some of their shortcomings. I will finally describe my proposed approach, formulate the theorem, and prove it from first principles. The proposed approach does not rely on the Itô calculus, on Brownian motion, or on generalized stochastic processes. It does not introduce non-physical infinite-power noise processes. Moreover, it is suitable for rigorously treating colored noise.
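In the discrete, equal-energy case, the decision rule in question is easy to state concretely: correlate the observation with each candidate signal and pick the larger inner product. The signals and the "noisy" observation below are made-up numbers for illustration only:

```python
# Correlation detector for two equal-energy signals observed in additive
# noise: compare the two inner products and pick the larger one.
def inner(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def detect(y, s0, s1):
    """Return 0 or 1, the index of the signal that better matches y."""
    return 0 if inner(y, s0) >= inner(y, s1) else 1

s0 = [1.0, 1.0, 1.0, 1.0]      # |s0|^2 = 4
s1 = [1.0, -1.0, 1.0, -1.0]    # |s1|^2 = 4 (equal energy)
y = [1.2, 0.9, 1.1, 0.8]       # s0 plus a small placeholder perturbation
print(detect(y, s0, s1))       # -> 0
```

The talk's point is that making this intuitive rule a rigorous theorem for continuous-time waveforms in white Gaussian noise is where the difficulties lie, not in the finite-dimensional computation itself.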
Author's webpage http://eestaff.ethz.ch/~lapidoth/ 



17-07-2008 11:15 - Statistical estimation in high-dimensional settings: Practical and information-theoretic limits
Prof. Martin Wainwright, University of California, Berkeley 



17-07-2008 14:00 - Path Splicing
Prof. Nick Feamster, Georgia Tech, College of Computing, USA 

Abstract - We present path splicing, a new routing primitive that allows network paths to be constructed by combining multiple routing trees ("slices") to each destination over a single network topology. Path splicing allows traffic to switch trees at any hop en route to the destination. End systems can change the path on which traffic is forwarded by changing a small number of additional bits in the packet header. We evaluate path splicing for intra-domain routing using slices generated from perturbed link weights and find that splicing achieves reliability that approaches the best possible using a small number of slices, for only a small increase in latency and no adverse effects on traffic in the network. In the case of inter-domain routing, where splicing derives multiple trees from edges in alternate backup routes, path splicing achieves near-optimal reliability and can provide significant benefits even when only a fraction of ASes deploy it. We also describe several other applications of path splicing, as well as various possible deployment paths. Joint work with Murtaza Motiwala, Megan Elmore, and Santosh Vempala.
Author's webpage http://www.cc.gatech.edu/~feamster/ 



18-07-2008 10:15 - Wedge: Splitting Complex, Monolithic Applications into Reduced-Privilege Compartments
Dr. Brad Karp, University College London 

Abstract - Software vulnerabilities and bugs persist, and so exploits continue to cause significant damage, particularly by divulging users' sensitive data to miscreants. Yet the vast majority of networked applications remain monolithically structured, in stark contravention of the ideal of least-privilege partitioning. This state of affairs continues because today's operating systems offer isolation primitives that are cumbersome.
In this talk, I'll describe Wedge, a system well suited to splitting complex, legacy, monolithic applications into fine-grained, least-privilege compartments on today's operating systems. Wedge consists of two synergistic parts: OS primitives that create compartments with *default-deny* semantics, and thus force the programmer to make compartments' privileges explicit; and Crowbar, a pair of run-time analysis tools that assist the programmer in determining which code needs which privileges for which memory objects.
I will use the SSL-enabled Apache web server as a running example throughout this talk: how its original monolithic structure risks revealing confidential data to attackers; the difficulties of dividing it into least-privilege compartments; how Wedge eases this difficult process; and finally, how Wedge is powerful enough to prevent a subtle man-in-the-middle attack that succeeds against a more coarsely privileged Apache web server.
(Joint work with Andrea Bittau, Petr Marchenko, and Mark Handley, all of UCL.)
Author's webpage http://www.cs.ucl.ac.uk/staff/B.Karp/ 



18-07-2008 14:15 - The Data Ring: Community Content Sharing
Prof. Neoklis Alkis Polyzotis, University of California, Santa Cruz 

Abstract - Information ubiquity has created a large crowd of users (most notably scientists) who could employ DBMS technology to process and share their data more effectively. Still, this user base prefers to keep its data in files that can be easily managed by applications such as spreadsheets, rather than deal with the complexity and rigidity of modern database systems. Motivated by this observation, we propose a framework that enables non-experts to build content-sharing communities in a true database fashion: declaratively. The proposed infrastructure, called the data ring, enables users to manage and share their data with minimal effort: the user points to the data that should be shared, and the data ring becomes responsible for automatically indexing the data (to make it accessible), replicating it (for availability), and reorganizing its physical storage (for better query performance). The talk outlines the salient features of the proposal and describes in more detail our recent work on two related topics: query processing in DHT networks, and online index selection in a database management system.
Author's webpage http://www.cs.ucsc.edu/~alkis/ 

