Talks of the 2005 Edition, June 27th - July 15th
27-06-2005 10:15 - Experiments with inHand, a Consumer-Focused Application Platform, and the Implementation of Consistent Image-based Measurement and Classification of Skin Color.
Dr Nina Bhatti, HP Labs 

Abstract - The inHand system is an end-to-end architecture providing personalized ubiquitous experiences. We imagine a world where consumers routinely carry communication-enabled devices with one or more built-in sensors based on technologies such as RFID (Radio Frequency IDentification), image capture, bar codes, etc. In our vision, all objects are content-rich, interactive, and linked to personalized information. "Clicking" on a recipe in a magazine could automatically add the needed ingredients to a shopping list, or play a video clip of a famous chef demonstrating the preparation. Clicking on a poster for an upcoming film would show previews of the film and invite ticket reservations. The inHand platform supports content customized to language preferences, previous history, current events, regional preferences, political views, etc. to create a personalized experience. In this talk we concentrate on technologies needed for image-based applications, for example, analyzing a person's appearance to make precise recommendations for products appropriate to that individual. Our goal is to allow the use of an arbitrary user's device. Therefore, our application must allow for large variations in operator skill, imaging device, device color processing, and, very importantly, the illuminant. While the general problem is known to be insoluble, it can be solved for a specific subset, such as skin color classification. The system makes recommendations based on the best matching class for the individual. In this talk, we will focus on methods for fast color correction, with emphasis on skin color, and on fully automated skin sampling from images that removes blemishes and shadows. We will show an evaluation of these methods against ground truth obtained through a study of 148 women. Nina Bhatti's home page http://www.hpl.hp.com/personal/Nina_Bhatti/



27-06-2005 14:15 - Algorithms for switches
Dr. Devavrat Shah, MIT

Abstract - High-speed data networks, such as the Internet, present the algorithm designer with highly constrained problems: the algorithms need to work at a very high speed while providing good performance and accessing limited computing resources. Consequently, only the simplest algorithms are implementable. But such algorithms may perform rather poorly. This tension between implementability and goodness of performance is acutely experienced by the designer of switch scheduling algorithms. In the first part of my talk I will show how randomization may be used to simplify the implementation of switch schedulers. Specifically, I will exhibit a simple randomized scheduling algorithm and prove that it delivers 100% throughput while providing low delay. The second part of my talk concerns a new approach for analyzing the packet delay induced by an algorithm. I shall explain why traditional approaches based, for example, on queueing and large deviation theories are inadequate, and advance a new approach based on heavy traffic theory. This approach helps explain some intriguing observations other researchers have made, based on simulations, about the delay of scheduling algorithms. It also leads to the characterization of a delay-optimal scheduling algorithm. Finally, I will briefly discuss a new distributed maximum weight matching algorithm based on ideas from AI and statistical physics. Devavrat Shah's home page http://web.mit.edu/devavrat/www/
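The randomized-scheduler idea above can be illustrated with a toy sketch (our own illustration of the Tassiulas-style "randomize and remember" trick, not the speaker's actual algorithm): in each time slot an input-queued switch draws one random matching and keeps it only if it outweighs the schedule used in the previous slot.

```python
import random

def next_schedule(weights, prev):
    """One slot of a toy randomized scheduler with memory: draw a random
    input-output matching and keep whichever of it and the previous
    schedule has the larger total queue weight (hypothetical sketch)."""
    n = len(weights)
    candidate = random.sample(range(n), n)  # random permutation: input i -> output candidate[i]
    weight = lambda m: sum(weights[i][m[i]] for i in range(n))
    return candidate if weight(candidate) > weight(prev) else prev
```

Over many slots the retained matching tracks the maximum-weight matching while doing only O(n) work per slot, which is the intuition behind throughput-optimal randomized scheduling.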



27-06-2005 17:15 - Engineering Stereoscopic Imaging Systems
Dr Cary Kornfeld, ETHZ 

Abstract - Spatial perception is an essential survival skill. Humans employ a diversity of methods for deriving spatial relationships from visual information: lighting cues, linear perspective, relative size, aerial perspective, occlusion, height relationships, etc. Building a system for stereoscopic imaging requires a background in a variety of topics, ranging from computer systems and performance evaluation to an understanding of the human vision system. A course on Stereoscopic Imaging has recently been developed at ETH Zurich that integrates topics from a variety of areas under the common thread of spatial imaging, providing a broad-based, comprehensive study of human vision (anatomy, neurology, perception, image interpretation and construction), digital imaging (representations, encoding, compression, color spaces, solid-state capture devices, display devices, color science, analog television), film craft, and the history of stereoscopy. Students are required to complete 5 major projects enabling them to explore the lecture material with hands-on depth. They create simple stereoscopic images and video clips using simple and then increasingly more complex optics, camera and computer control systems. They devise a scientific method for establishing the relationship between camera separation and lens characteristics required to capture an arbitrary spatial volume. They are then required to build, using software and hardware, stereoscopic image capture and display systems. These are then used to create half a dozen ten-minute stereoscopic films. By the time they've completed the course they have mastered an impressive range of skills and have learned, via first-hand experience, about the subtleties of creating stereoscopic images. There is little doubt about the importance of spatial vision in the evolution and survival of the human species. 
But there remains continued debate as to the need for stereoscopic imaging systems for computer, engineering and communication tasks. It is a cruel irony that the term 3D has become a synonym for a cheap gimmick, a carnival trick, or a magician's illusion.



28-06-2005 10:15 - Algorithmic Motion Planning: Challenges and Approaches
Andrew M. Ladd, Rice University, Houston

Abstract - Algorithmic motion planning research seeks to provide robust and efficient generation of motion plans under various constraints. Applications for these algorithms are diverse: autonomous and industrial robotics, animation, virtual environments and structural bioinformatics. This talk will relate the current state of the art and detail ongoing research seeking to apply motion planning to systems with increased physical realism by addressing difficulties that arise in handling dynamics and high-dimensional geometry.



29-06-2005 10:15 - Zero-error information theory: a round trip from Shannon to Sperner
Prof. Janos Körner, University of Rome, La Sapienza

Abstract - Shannon's classical 1956 paper "The zero-error capacity of a noisy channel" has led to important discoveries in combinatorics and computer science such as the theory of perfect graphs and Lovász's Theta function. We show how Shannon's original problem can be generalized from its original formulation in terms of the capacity of graphs to the capacity of families of (directed) graphs and even hypergraphs, and illustrate the power of the resulting theory by several examples in which well-known open problems in asymptotic combinatorics are solved using information-theoretic methods. We will briefly mention recent and yet unpublished joint results with C. Malvenuto on the capacity of infinite graphs and its relation to counting "very different" permutations. Janos Körner's home page http://w3.uniroma1.it/dipinfo/visuadoc.asp?iddoc1=19
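For reference, the quantities mentioned above have standard definitions (added here for the reader; they are not spelled out in the abstract): the zero-error capacity of a confusability graph G and Lovász's upper bound on it are

```latex
% alpha = independence number, \boxtimes = strong graph product
C_0(G) \;=\; \lim_{n \to \infty} \frac{1}{n} \log \alpha\!\left(G^{\boxtimes n}\right)
\;\le\; \log \vartheta(G) \qquad \text{(Lov\'asz theta function)}
```

For the pentagon $C_5$, Lovász's bound is tight and gives $C_0(C_5) = \tfrac{1}{2}\log 5$, settling the question Shannon left open in 1956.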



29-06-2005 17:15 - Code Acquisition Strategies for UWB Impulse Radio
Dr. Gian Mario Maggio, ST Microelectronics, MICS EPFL 

Abstract - UWB (ultra-wideband) is an emerging wireless technology that finds application both in high data-rate communication systems, such as WPANs (Wireless Personal Area Networks), and in low data-rate communications, as in wireless sensor networks. In the context of UWB impulse radio, one of the crucial challenges remains the acquisition of synchronization in terms of time, robustness and number of operations, especially in harsh propagation environments, as in the presence of dense multipath and interfering users. This talk will present a study of different code acquisition strategies aimed at reducing the acquisition time, in the case of high-throughput systems, or the acquisition complexity, and thus the corresponding power consumption, for low-power low data-rate applications, respectively. An in-depth analysis, supported by simulations in the presence of multipath, will be presented and the results discussed.



30-06-2005 10:15 - Opportunistic Orthogonal Writing on Dirty Paper
Prof. Pramod Viswanath, University of Illinois, Urbana Champaign 

Abstract - A simple scheme that achieves the capacity and reliability function of Costa's wideband dirty paper channel is proposed. This scheme allows itself to be interpreted as an opportunistic version of pulse position modulation (PPM). This interpretation suggests a natural generalization of the scheme, which we show to achieve the capacity per unit cost of abstract Gel'fand-Pinsker channels with a zero-cost input letter. Pramod Viswanath's home page http://www.ifp.uiuc.edu/~pramodv/
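As background for the "capacity per unit cost" notion (a standard formula due to Verdú for ordinary memoryless channels, not the Gel'fand-Pinsker extension of the talk): when an input letter 0 of zero cost is available, the capacity per unit cost has the single-letter form

```latex
% Capacity per unit cost with a free input letter 0 (Verd\'u, 1990):
\mathcal{C} \;=\; \sup_{x \neq 0}\; \frac{D\!\left(P_{Y|X=x} \,\middle\|\, P_{Y|X=0}\right)}{c(x)}
```

The talk establishes a result of this flavor for channels with side information known to the encoder.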



30-06-2005 14:15 - Provably secure routing in wireless ad hoc networks
Prof. Levente Buttyan, Budapest University of Technology and Economics 

Abstract 
Levente Buttyan's home page http://www.hit.bme.hu/~buttyan/ PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Buttyan05sri.pdf 



01-07-2005 10:15 - From statistical physics to information systems
Dr Massimo Franceschetti, UCSD 

Abstract - Statistical physics was originally developed to describe the collective behavior of large ensembles of interacting particles in matter. In this talk I will argue that the same mathematical tools can be exploited to study the behavior of complex information networks. This approach sheds new light on global properties of systems, including connectivity, achievable information rates, and robustness to failures. Key applications in the context of wireless networks will be presented. Massimo Franceschetti's home page http://fleece.ucsd.edu/~massimo/



04-07-2005 10:15 - Data Mining in Large-Scale Distributed Systems
Prof. Assaf Schuster, Technion - Israel Institute of Technology

Abstract - Recent advances in Internet and communication technologies have given rise to many new types of networks that excel in their scalability. Such systems include peer-to-peer networks, grid systems, router networks, ad-hoc mobile networks and wireless sensor networks. Many of these large-scale distributed infrastructures manufacture or collect data, and thus can be viewed as huge, highly distributed databases. When attempting to mine the data, its sheer volume, issues regarding its privacy, or even power considerations mandate the development of distributed algorithms. But because data mining processes tend to involve complex patterns that are iterative and synchronized, they cannot be scaled to the required level. The mining task is further complicated by the dynamicity of both the system and the data, as is inevitable in large-scale distributed systems. In this talk we present a general methodology for computing and mining data in large-scale distributed networks. Central to our approach are algorithms that output ad hoc results. These results are accurate because the changes in the global database are usually stationary. Another key idea is the development of super-scalable distributed algorithms for rather simple primitives. The high scalability is achieved by means of local computing, a method which also implies anytime computing, incrementality, and robustness. The scalable primitives then become building blocks for higher-level and more complicated data mining goals through correct and efficient composition, which also preserves locality. The talk will focus on peer-to-peer networks, where the recent trends of e-economy, dropping storage prices, and increased connectivity allow peers to collect and share huge volumes of valuable data, previously available only to corporations. We show how our methodology can be used to devise efficient algorithms that can be employed in peer-to-peer networks to meet challenges such as association rule mining and facility location. The talk summarizes several recent works co-authored with Ran Wolff, Denis Krivitsky, Amir Bar-Or, Mirit Shalem, Tsachi Scharfman, Bobi Gilburd, and Liran Liss. Assaf Schuster's home page http://www.cs.technion.ac.il/~assaf/ PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Schuster.ppt



04-07-2005 11:15 - Representing Speech: Models, Frames, and Perception
Prof. Bastiaan Kleijn, Royal Institute of Technology, KTH, Stockholm 

Abstract - A variety of speech representations are used in different speech processing contexts. This talk will provide a discussion of the motivation for the basic structure of the most common representations. Three illustrations will be provided as examples of how the commonly used representations can be improved. The first illustration discusses a theoretical shortcoming of the ubiquitous CELP speech-coding algorithm and a method to eliminate this problem. The second illustration shows that the commonly used sinusoidal representation can be generalized to a frame-based representation that facilitates perfect reconstruction and an accurate decomposition into physically meaningful components. The third illustration shows that the gap between the heuristic criteria used in speech (and audio) processing and the sophisticated auditory models of the psychoacoustic literature can be eliminated by low-distortion approximations. PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/kleijnEPFL2005.pdf



04-07-2005 14:15 - Benefits of exposing calls and returns
Prof. Rajeev Alur, University of Pennsylvania 

Abstract - Regular languages have robust theoretical foundations leading to numerous applications including model checking. Context-free languages and pushdown automata have been indispensable in program analysis due to their ability to model control flow in procedural languages, but the corresponding theory is fragile, limiting its use. In the recently proposed definition of visibly pushdown languages, the set of input symbols is partitioned into calls, returns, and local symbols, and the type of the input symbol determines when the pushdown automaton can push, pop, or swap. Exposing the matching structure of calls and returns is natural in associating a language with a sequential block-structured program or a document format with nested tags. The resulting class of VPLs has many appealing theoretical properties concerning closure, robustness, decidability, determinization, and minimization. After reviewing the theory of VPLs, we show how it allows enhancing the expressiveness of specification languages used in software model checking. The temporal logic of calls and returns (CaRet) integrates Pnueli-style temporal modalities with Hoare-style reasoning by pre/post conditions in a single algorithmic framework. CaRet extends classical linear temporal logic with local modalities that allow a path to jump from a call to the matching return, as well as caller modalities that jump to the most recent pending call. These operators can be used to specify a variety of non-regular properties such as partial and total correctness of program blocks with respect to pre- and post-conditions, and security properties that involve inspection of the call stack. The set of models of a CaRet formula can be interpreted as a visibly pushdown omega-language, and a generalization of the classical tableau construction allows model checking CaRet formulas against a pushdown model. We conclude by discussing ongoing research on a fixpoint calculus that can specify local and global properties of program flows, and some open problems. Rajeev Alur's home page http://www.cis.upenn.edu/~alur/



04-07-2005 15:15 - Embedded Computing for High-Performance Video
Prof. Wayne Wolf, Princeton University

Abstract 
Wayne Wolf's home page http://www.princeton.edu/~wolf/ 



05-07-2005 10:15 - Applications of coding theory in computational complexity
Prof. Luca Trevisan, University of California, Berkeley

Abstract - In this survey talk I will review our current (unfortunately, fairly poor) understanding of the average-case complexity of problems in NP, focusing on results that relate to the following questions: Is it possible to base cryptography on the P =/= NP assumption, or related worst-case complexity assumptions? Is it possible to prove the existence of hard-on-average problems in NP based on the P =/= NP assumption or related ones? Are different formalizations of the question "are there hard-on-average problems in NP?" equivalent? What was that one-page paper by Levin about? Luca Trevisan's home page http://www.cs.berkeley.edu/~luca/ PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/trevisan.pd



05-07-2005 11:15 - A Novel Approach for Querying Priced Information
Prof. Eduardo Laber, PUC-Rio

Abstract - In this talk, we focus on competitive function evaluation in the context of computing with priced information. A function f on the set of variables V is given together with a cost c_x for each variable x of V. The cost c_x has to be paid to read the value of x. The problem is to design algorithms that query the values of the variables sequentially in order to compute the function while trying to minimize the total cost incurred. Competitive analysis is employed to evaluate the performance of the algorithms. We describe a novel approach for devising efficient algorithms in this setting. We apply our approach to several classes of functions which have been studied in the literature of computing with priced information, e.g., monotone boolean functions, game trees, and selection of the minimum. In most of the cases considered, our approach leads to optimal algorithms. This research is motivated by problems that arise in networked economies and databases. Eduardo Laber's home page http://www-di.inf.puc-rio.br/~laber/
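As a toy illustration of the priced-information setting (our own sketch, not one of the talk's algorithms): to evaluate an OR of priced boolean variables, a simple strategy probes variables cheapest-first and stops at the first true value, paying only for what it actually reads.

```python
def eval_or(costs, values):
    """Evaluate OR(values) when reading values[i] costs costs[i]:
    probe variables in increasing cost order, stop at the first 1
    (toy cheapest-first strategy)."""
    spent = 0
    for i in sorted(range(len(costs)), key=costs.__getitem__):
        spent += costs[i]          # pay to read variable i
        if values[i]:
            return True, spent     # a single true variable certifies the OR
    return False, spent            # all variables read: the OR is false
```

Competitive analysis compares the cost spent by such a strategy with the cost of the cheapest certificate in hindsight.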



05-07-2005 14:15 - Sublinear Indexing for Large Scale Object Recognition
Dr Jiri Matas, CVUT Praha 

Abstract - Realistic approaches to large scale object recognition, i.e. for detection and localisation of hundreds or more objects, must support sublinear time indexing. A method capable of recognising one of N objects in log(N) time will be presented. The "visual memory" is organised as a binary decision tree that is built to minimise the average time to decision. Leaves of the tree represent a few local patches, and each non-terminal node is associated with a weak classifier. In the recognition phase, a single invariant measurement on a query patch decides in which subtree a corresponding patch is sought. The method possesses all the strengths of local affine region methods: robustness to background clutter, occlusion, and large changes of viewpoint. Experimentally we show that it supports near real-time recognition of hundreds of objects with state-of-the-art recognition rates. After the test image is processed (in about a second on a current PC), the recognition via indexing into the visual memory requires milliseconds. Time permitting, a live demonstration of the method will be shown during the talk. Jiri Matas's home page http://cmp.felk.cvut.cz/~matas/
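The log(N)-time lookup can be sketched as a plain binary decision tree (a schematic of the data structure only, with hypothetical weak tests, not the actual implementation):

```python
class Node:
    """Internal node: a weak test routes the query left or right.
    Leaf: test is None and label holds the recognized object id."""
    def __init__(self, test=None, left=None, right=None, label=None):
        self.test, self.left, self.right, self.label = test, left, right, label

def recognize(root, patch):
    """Descend the tree, applying one cheap test per level:
    O(depth) = O(log N) measurements instead of N comparisons."""
    node = root
    while node.label is None:
        node = node.left if node.test(patch) else node.right
    return node.label
```

Building the tree to minimize the *average* depth over the stored patches is what the abstract refers to as minimizing the average time to decision.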



06-07-2005 10:15 - Some properties of k-wise independent distributions
Prof. Louay Bazzi, American University of Beirut 

Abstract - We study the derandomization capabilities of probability measures on the Hamming cube having the k-wise independence property, the small-bias property, and the almost k-wise independence property. First, using linear-programming duality, we characterize in the Fourier domain the class of boolean functions that can be fooled by probability measures having those properties. Among other results, we establish a sharp upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight. The setting is naturally extendible to almost k-wise independent probability measures. We give applications related to the distribution of quadratic residues and the weight distribution of linear codes. From a concrete viewpoint, we prove that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas. We also characterize the location of classical linear-codes-based constructions of k-wise independent probability measures in the convex polytope of all such measures, and its subpolytope consisting of those measures whose Fourier transform is nonnegative. American University of Beirut website http://www.aub.edu.lb/
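The two pseudorandomness properties mentioned above have standard definitions (added here for convenience): a measure $\mu$ on $\{0,1\}^n$ is $k$-wise independent when every $k$ coordinates are jointly uniform, and $\epsilon$-biased when every nontrivial parity is nearly unbiased.

```latex
% k-wise independence: for every index set S, |S| <= k, and every a in {0,1}^S,
\Pr_{x \sim \mu}\left[\, x_S = a \,\right] \;=\; 2^{-|S|}.
% small bias: for every nonzero alpha in {0,1}^n,
\left|\, \mathbb{E}_{x \sim \mu}\!\left[ (-1)^{\langle \alpha,\, x \rangle} \right] \right| \;\le\; \epsilon.
```

Equivalently, in the Fourier domain, $k$-wise independence says all nonzero Fourier coefficients of $\mu$ of degree at most $k$ vanish, which is the viewpoint the characterization in the abstract builds on.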



06-07-2005 11:15 - Large scale circuit placement - challenges and progress
Prof. Jason Cong, UCLA 

Abstract - Placement is one of the most important steps in the RTL-to-GDSII synthesis process, as it directly defines the interconnects, which have become the bottleneck in circuit and system performance in deep submicron technologies. The placement problem has been studied extensively in the past 30 years. However, recent studies showed that existing placement solutions were surprisingly far from optimal. The optimality and scalability studies of existing placement tools in the past two years showed that the results of leading placement tools from both industry and academia were 50% to 150% away from the optimal solutions in terms of total wirelength. If such a gap can be closed, it will be equivalent to several technology generations of advancement. The first part of my talk summarizes the techniques and results of our placement optimality studies. The second part of the talk presents the ongoing research at UCLA on highly scalable placement algorithms using multilevel methods and nonlinear optimization. I shall discuss our work on the generalized force-directed method using the nonlinear optimization formulation and the way we implement it in a multilevel framework. The latest release of our placement package, mPL5, demonstrated the best optimality and scalability results compared to other existing public-domain placement packages. Joint work with Tony Chan, Joe Shinnerl, Kenton Sze, and Min Xie. Jason Cong's home page http://ballade.cs.ucla.edu/~cong/



06-07-2005 13:15 - The concept of scheduled access in ad hoc wireless networks
Prof. Antony Ephremides, University of Maryland 

Abstract - One of the basic methods for accessing a shared channel is based on scheduled, controlled access so that destructive interference can be avoided. In ad hoc wireless networks a minimum form of scheduling is absolutely necessary since wireless nodes cannot transmit and receive simultaneously on the same channel. We examine the scheduling of transmissions from several perspectives. One view concerns the matching of transmitter nodes to desired receiver nodes through the associated power control problem that is necessary when a reasonable physical layer model is used. Another view concerns a practical algorithm for cross-layer scheduling in conjunction with routing. Yet another view examines scheduling as part of the use of network coding. And, finally, there is a broader, more general view in which scheduling refers to assigning different power levels to transmitting nodes at different time slots, levels that need not be zero in any slot. In all these views, there are various performance implications (including energy consumption). We will review the scheduling concept from all these perspectives, present selected recent results in most of them, and pose some potentially interesting research questions. Antony Ephremides' home page http://www.ee.umd.edu/faculty/tony.html PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Ephremides.ppt



07-07-2005 10:15 - Advanced Networks on chip: architectures and design flow
Prof. Luca Benini, Università di Bologna

Abstract 
Luca Benini's home page http://www-micrel.deis.unibo.it/~benini/



07-07-2005 11:15 - Algebraic Theory of Signal Processing
Prof. Markus Püschel, Carnegie Mellon University

Abstract - The talk gives an overview of a new, algebraic approach to the foundation of linear signal processing. The algebraic approach goes beyond linear algebra (the theory of vector spaces), which is not sufficient to capture the fundamental structure underlying digital signal processing (DSP). At the core of algebraic signal processing is the concept of a linear signal model defined as a triple (A, M, phi), where familiar concepts like the filter space and the signal space are cast as an algebra A and a module M, respectively, and phi generalizes the concept of the z-transform to bijective linear mappings from a vector space into the module M. The algebraic point of view shows that the same algebraic structure induces many diverse theories of signal processing, such as infinite and finite discrete time and space, and multidimensional DSP. As soon as a signal model is selected, all main DSP ingredients follow, including the notions of filtering, spectrum, and Fourier transform, which take different forms for different signal models. The algebraic theory identifies the shift operator q as a key concept and uniquely clarifies its role: the shift operator q is the generator of the algebra of filters A. Once the shift operation is defined, a well-defined procedure that we introduce leads to the definition of the associated signal model. Different choices of shift lead to infinite and finite time models with associated infinite and finite z-transforms, and to infinite and finite space models with associated infinite and finite C-transforms (that we introduce). In particular, we show that the 16 discrete cosine and sine transforms are Fourier transforms for the finite space model. Other definitions of the shift lead to new signal models and to new transforms as associated Fourier transforms. Finally, the algebraic theory provides the means to discover, concisely derive, explain, and classify fast transform algorithms. Markus Püschel's home page http://www.ece.cmu.edu/~pueschel/
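A concrete instance of the triple (A, M, phi) may help (this is the standard finite discrete-time example from the algebraic signal processing literature, not a new construction): with a cyclic shift, filters and signals both live in a polynomial algebra modulo $x^n - 1$, and the DFT is the associated Fourier transform.

```latex
% Finite discrete-time model with cyclic boundary condition:
\mathcal{A} \;=\; \mathcal{M} \;=\; \mathbb{C}[x]/(x^n - 1), \qquad q = x,
\qquad
\Phi: (s_0, \dots, s_{n-1}) \;\mapsto\; \sum_{k=0}^{n-1} s_k\, x^k .
% The Fourier transform of this model diagonalizes the shift q; it is the DFT:
\mathcal{F} \;=\; \mathrm{DFT}_n \;=\; \left[\, e^{-2\pi i\, k\ell/n} \,\right]_{0 \le k, \ell < n}.
```

Replacing the cyclic boundary condition with a symmetric one yields the finite space models whose Fourier transforms are the discrete cosine and sine transforms mentioned in the abstract.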



07-07-2005 13:15 - A Nonlinear Dynamics Approach to EMI Reduction and High Throughput True Random Number Generation
Prof. Gianluca Setti, University of Ferrara, Italy 

Abstract - In this lecture we will briefly introduce a statistical framework to tackle the complex (chaotic) behavior that characterizes even very simple, discrete-time dynamical systems (i.e. maps). More specifically, we will have a look at how such maps can be considered as stochastic process generators with tunable statistical features. We will then apply such an approach to: 1) EMI (Electro-Magnetic Interference) reduction; 2) High Throughput True Random Number Generation (TRNG). More details follow for each subtopic. 
1) EMI: We will show that the use of chaos-based frequency-modulated timing signals allows us to reduce the power spectrum peak of EMI in power converters or digital system boards by approximately 9 dB with respect to other known and patented methodologies for spread spectrum clock generation, which are employed in some products from Intel, IBM and Cypress. 
2) High Throughput TRNG: We will here consider a methodology, based on chaotic maps, for implementing TRNGs which, in terms of ease of implementation, has features similar to widely employed pseudo-RNGs. More specifically, they can be practically realized out of pipeline analog-to-digital converter (ADC) parts, so that one can easily reuse design expertise and even analog Intellectual-Property blocks to quickly embed true random sources in SoCs and specialized apparatuses. A chaos-based TRNG based on this approach has been designed in AMS 0.35um CMOS technology. Measurement results showed that the designed chaos-based TRNG operates at a throughput of approximately 20 Mbit/s, i.e. faster than current state-of-the-art TRNGs, and behaves in a completely satisfactory way when validated against two standard randomness test suites proposed by NIST (FIPS 140-2 and SP800-22). Gianluca Setti's home page http://www-micro.deis.unibo.it/cgi-bin/user?setti PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/G_Setti_ProceedingsPartI.pdf PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/G_Setti.pdf
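The bit-generation principle behind a map-based TRNG can be simulated in a few lines (a toy, deterministic simulation of the Bernoulli shift map; in the real device the entropy comes from analog noise in the ADC-like hardware, which this sketch does not model):

```python
def shift_map_bits(x0, n):
    """Iterate the Bernoulli shift map x -> 2x mod 1 starting from x0,
    emitting one bit per step (the most significant bit of the state).
    Toy illustration only: floating point makes this eventually periodic."""
    bits, x = [], x0
    for _ in range(n):
        bits.append(1 if x >= 0.5 else 0)
        x = (2.0 * x) % 1.0
    return bits
```

For the ideal map with a uniformly random seed, the emitted bits are i.i.d. fair coin flips, which is why ADC-style hardware implementing such a map can serve as a true random source.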



08-07-2005 10:15 - Deterministic clustering with data nets
Prof. Leonard Schulman, Caltech 

Abstract 
Leonard J. Schulman's home page http://www.cs.caltech.edu/~schulman/ 



08-07-2005 11:15 - Programming the Interactive Web
Prof. Shriram Krishnamurthi, Brown University 

Abstract 
Shriram Krishnamurthi's home page http://www.cs.brown.edu/people/faculty/sk.html PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/SKrishnamurthi.ppt 



08-07-2005 14:15 - Robust Computing
Prof. Trevor Mudge, University of Michigan 

Abstract - At a recent International Electron Devices Meeting, Intel chairman Andrew Grove reiterated his position that Moore's law (stating that circuit density doubles roughly every two years) will not slow down for at least a decade. By that time, integrated circuits will have feature sizes smaller than 30 nanometers, allowing for integration of billions of devices on a single die and enabling unforeseen computational capabilities. Aggressively scaled feature sizes result in increased process variability, substantial uncertainty in device performance, and poor reliability. In that connection Grove mentioned that at 30 nm design will enter an era of "probabilistic computing," with the behavior of logic gates no longer being deterministic. Thus, new approaches to robust design will be crucial to the successful continuation of the process scaling which has fueled the modern semiconductor industry. In our talk we will present an approach to robust computing based on timing-fault tolerant latches, or Razor latches, as we have termed them. They are designed to detect errors in buses, bit-sliced logic, arithmetic logic, and random logic that are manifest as delay errors. As we will show, these are rarely simple unidirectional errors, which precludes using simple error-detecting codes. Timing-fault tolerant latches have better coverage in this context and work for all types of logic. Furthermore, they allow us to run with small amounts of error. Being able to compute in the presence of errors allows design margins to be eliminated. These margins are required to compensate for process and temperature variation as well as power supply noise. Their removal allows significant power reduction and simplifies design closure: typical-case design is now possible and corner cases can be ignored. Trevor Mudge's home page http://www.eecs.umich.edu/~tnm/



11-07-2005 10:15 - A photon counting imager used as a noiseless kHz frame-rate detector for an adaptive optics system for large ground-based telescopes.
Dr Bettina Mikulec & Dr Michael Campbell, CERN 

Abstract 
Bettina Mikulec's contact http://consult.cern.ch/xwho/people/20593 



11-07-2005 11:15 - Sparse Approximation, Denoising, and Large Random Dictionaries
Prof. Vivek Goyal, MIT 

Abstract - A spate of recent results has renewed interest in sparse signal representations: linear combinations of small numbers of elements chosen from an overcomplete dictionary of vectors. While most recent work has focused on identifying cases in which finding optimal sparse approximations is computationally tractable, there are few results that quantify the effect of underlying sparsity on achievable estimation error. This talk gives new results on the ability to estimate a sparsely representable signal from a noisy observation, including both the probability of recovering the sparsity pattern and the mean-squared estimation error. One main result uses rate-distortion theory to bound the mean-squared estimation error as a simple function of the problem dimension, sparsity, and SNR. The other main results are bounds and approximations that apply when the dictionary is large and isotropically random. Joint work with Alyson Fletcher, Sundeep Rangan, and Kannan Ramchandran. Vivek Goyal's home page http://www.rle.mit.edu/stir/people.htm
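Finding the optimal sparse approximation over an overcomplete dictionary is intractable in general; a common greedy heuristic is matching pursuit. The toy sketch below (our own illustration, not the estimators analyzed in the talk) repeatedly picks the dictionary atom most correlated with the current residual:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(atoms, y, k):
    """Greedy k-term approximation of y over a list of unit-norm atoms:
    each step subtracts the projection onto the best-correlated atom."""
    residual, coeffs = list(y), {}
    for _ in range(k):
        j = max(range(len(atoms)), key=lambda j: abs(dot(atoms[j], residual)))
        c = dot(atoms[j], residual)          # coefficient of the chosen atom
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual
```

When the dictionary is an orthonormal basis this recovers the k largest-magnitude coefficients exactly; for overcomplete dictionaries it is only a heuristic, which is where guarantees of the kind discussed in the talk become relevant.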



11-07-2005 14:15 - Read-Once Functions and Cographs Revisited
Prof. Martin Charles Golumbic, University of Haifa 

Abstract - In this talk, we present the first polynomial-time algorithm for recognizing and factoring read-once functions. The algorithm is based on algorithms for cograph recognition and a new efficient method for checking normality. Its correctness rests on a classical characterization theorem of Gurvich, which states that a positive Boolean function is read-once if and only if it is normal and its co-occurrence graph is $P_4$-free. We also investigate the problem of factoring certain non-read-once functions. In particular, we show that if the co-occurrence graph of a positive Boolean function $f$ is a tree, then the function is read-twice. We then extend this further, proving that if $f$ is normal and its corresponding graph is a partial $k$-tree, then $f$ is a read-$2^k$ function and a read-$2^k$ formula for $f$ can be obtained in polynomial time. (Joint research with Aviad Mintz and Udi Rotics) Martin Charles Golumbic's home page http://www.cs.biu.ac.il/~golumbic/ 
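The $P_4$-freeness condition in Gurvich's characterization is easy to check by brute force on small examples. The sketch below (helper names are invented, and it tests only the graph condition, not normality) builds the co-occurrence graph of a positive DNF and searches for an induced $P_4$:

```python
from itertools import combinations

def cooccurrence_graph(dnf):
    """Co-occurrence graph of a positive DNF: one edge between each
    pair of variables appearing together in some term.
    `dnf` is a list of terms, each a set of variable names."""
    edges = set()
    for term in dnf:
        edges |= {frozenset(p) for p in combinations(sorted(term), 2)}
    vertices = set().union(*dnf)
    return vertices, edges

def has_induced_p4(vertices, edges):
    """Brute-force search for an induced path on 4 vertices."""
    for quad in combinations(sorted(vertices), 4):
        induced = [e for e in edges if e <= set(quad)]
        if len(induced) != 3:
            continue
        deg = sorted(sum(1 for e in induced if v in e) for v in quad)
        if deg == [1, 1, 2, 2]:  # 3 edges + this degree sequence => P4
            return True
    return False

# f = ab + ac == a(b + c) is read-once; its graph is a star, P4-free.
print(has_induced_p4(*cooccurrence_graph([{"a", "b"}, {"a", "c"}])))
# g = ab + bc + cd has the path a-b-c-d as its co-occurrence graph,
# an induced P4, so by Gurvich's theorem g cannot be read-once.
print(has_induced_p4(*cooccurrence_graph([{"a", "b"}, {"b", "c"},
                                          {"c", "d"}])))
```

The brute-force check is exponential only in the constant 4, so it is already polynomial; the talk's contribution is doing the whole recognition (including the normality test) in polynomial time.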



11-07-2005 15:15 - Towards a New Browser and a Cross-media Search Engine for Heterogeneous Digital Contents
Prof. Katsumi Tanaka, Kyoto University 

Abstract - In this talk, I will mainly describe our two research projects. The first project is concerned with the research and development of new types of content browsers at NICT (National Institute of Information and Communications Technology), which aims at exploring technologies for transforming and integrating Web and TV broadcasting contents, and at developing new content browsers for the integrated information. The second topic is the research and development of cross-media search engines for distributed and heterogeneous digital contents, such as a cross-media metasearch engine for digital images. Katsumi Tanaka's home page http://www.dl.kuis.kyoto-u.ac.jp/~tanaka/ 



12-07-2005 10:15 - Information-Theoretic Bounds on the Parity-Check Density of LDPC Codes: Previous and Recent Results
Prof. Igal Sason, Technion - Israel Institute of Technology 

Abstract - Low-density parity-check (LDPC) codes are efficiently encoded and decoded due to the sparseness of their parity-check matrices. Motivated by their remarkable performance and feasible complexity under iterative message-passing decoding, Urbanke and I derived lower bounds on the density of parity-check matrices of binary linear codes whose transmission takes place over a memoryless binary-input output-symmetric channel [IEEE Trans. on IT, July 2003]. The bounds are expressed in terms of the gap between the rate of these codes, for which reliable communication is achievable, and the channel capacity. In my talk, I will present these previously reported bounds, and also present some recent results which improve the known lower bounds. The tightness of these bounds will be considered in light of the thresholds they provide for LDPC ensembles under ML decoding. The recent results are based on joint work with Mr. Gil Wiechman. Igal Sason's home page http://www.ee.technion.ac.il/people/sason/ PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/I_Sason.pdf 



12-07-2005 11:15 - The Adaptive Properties of Small- and Large-scale Structure of Self-organized Networks in Ants
Prof. Guy Theraulaz, CNRS, Toulouse 

Abstract - Many collective activities performed by social insects result from self-organized processes. Nonlinear interactions among individuals can give rise to highly structured collective behaviors and complex spatio-temporal patterns despite a strong random component in individual behaviors. In ants, a great number of these self-organized structures, such as foraging trails or nests, are networks or can be described as networks. Until quite recently, these networks did not attract much attention and were merely seen as random arrangements. However, some recent experimental results have revealed that network geometry can be a valuable source of information for the insects that travel along these paths. Indeed, the geometry of trail networks is a source of directional information that ants can use in an adaptive way to reorient themselves if they walk in the wrong direction. Moreover, an interesting consequence of the ants' ability to make use of the cues given by bifurcation angles is that the efficiency of path selection toward a food source strongly depends on the geometry of the trail network. The large-scale structure of these networks also possesses interesting functional properties such as robustness. For instance, ant networks of galleries combine high levels of robustness and efficiency while the ants minimize their total length. CNRS web page http://cognition.ups-tlse.fr/ PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Guy_Theraulaz_SRI2005_talk.pdf 



12-07-2005 13:15 - Network coding for cost, reliability and ease of management.
Prof. Muriel Médard, MIT 

Abstract - Network coding has recently emerged as an effective method to go beyond traditional route-based networks. While network coding allows certain traffic matrices that are infeasible with traditional routing to be achieved, it is possible that its main benefits for networking lie in the way that network coding allows for efficient, distributed operation. We overview some recent results in the area of distributed network coding and discuss network coding for cost minimization and erasure handling. Distributed randomized network coding simultaneously yields a reduction in cost, say energy use, and network-capacity-achieving performance in erasure settings. We discuss some possible implications for wireless networks and present some open problems in the area. Muriel Médard's home page http://www.mit.edu/~medard/medard.html 
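The textbook illustration of the gain over routing is the butterfly network, where XOR-ing the two streams on the bottleneck link lets both sinks recover both source bits in a single network use. This is the standard classroom sketch, not any of the distributed protocols discussed in the talk:

```python
def butterfly(b1, b2):
    """Two sources send bits b1 and b2 to two sinks over the butterfly
    network. Each edge carries one bit per use; with plain routing the
    single bottleneck edge can forward only one of the two bits, but
    coding on that edge serves both sinks at once."""
    coded = b1 ^ b2           # bottleneck edge carries the XOR
    # Side edges deliver b1 directly to sink 1 and b2 directly to sink 2.
    sink1 = (b1, b1 ^ coded)  # sink 1 recovers b2 as b1 XOR (b1 XOR b2)
    sink2 = (b2 ^ coded, b2)  # sink 2 recovers b1 the same way
    return sink1, sink2

for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly(b1, b2) == ((b1, b2), (b1, b2))
print("both sinks decode both bits for every input")
```

The same XOR trick generalizes to random linear combinations over larger fields, which is the basis of the distributed randomized coding mentioned above.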



12-07-2005 14:00 - Processing along the way: forwarding vs. coding
Prof. Daniela Tuninetti, University of Illinois 

Abstract - In this talk we address the issue of the separation of channel coding and ``smart routing'', also referred to as network coding, in multihop networks. In the seminal work by Ahlswede et al. it was shown that coding over independent information streams allows the available network resources to be shared better and increases the overall throughput in multicast scenarios, even in the case where the links in the network are error-free. This model implicitly assumes that relay nodes can operate without delay or complexity constraints. In this work we constrain the processing capability of the relay nodes. We consider a source that transmits to a receiver by routing the information packets over a communication network represented by a directed graph. We examine the rate benefits that finite-complexity processing at the intermediate nodes may offer. We show that the processing capabilities of the intermediate nodes affect not only the end-to-end achievable rate, but also the optimal routing strategy. We show that there exist configurations where the optimal rate is achieved only when coding across independent information streams (channel coding and routing cannot be separated); that the optimal processing is a function of the particular set of channel parameters and not only of the network topology; that a small processing capability suffices to achieve a large fraction of the ultimate network performance; and that there exists a connection between linear codes and routing for a special class of graphs. (Joint work with Christina Fragouli at EPFL) Daniela Tuninetti's contact http://www.ece.uic.edu/Faculty.html PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/D_Tuninetti.pdf 



13-07-2005 10:15 - Old Tricks for New Problems
Prof. Mohamed Oussama Damen, ECE Waterloo, Canada 

Abstract - Reliable communication over wireless channels presents very challenging problems due to the constant demand for higher data rates with better quality of service on the one hand, and the signal attenuation, multipath fading and co-channel interference suffered by the transmitted signals on the other. The latter phenomena are usually combated by spreading the information bits over different degrees of freedom of the channel (e.g., time, frequency or space). In this talk, we discuss using old techniques to solve problems arising in communications over multiple-input multiple-output (MIMO) channels. In the first part, we show that LAttice Space-Time (LAST) codes achieve the optimal diversity-multiplexing tradeoff under MMSE lattice decoding. In order to increase the degrees of freedom of the channel, we use ARQ in MIMO channels and characterize the diversity-multiplexing-delay tradeoff in the ARQ-MIMO channel, where incremental redundancy (IR) LAST codes are proved to achieve the optimal tradeoff under MMSE lattice decoding. An integral component of the optimal IR-LAST coding scheme is a list decoder, based on MMSE lattice decoding, for joint error detection and correction. In the second part, we further elaborate on the decoder structure and discuss a unified framework for MMSE lattice decoding through tree search algorithms. Among the algorithms discussed, a variation on the Fano decoder, namely the MMSE-Fano (lattice) decoder with lattice reduction and column ordering, emerges as a powerful tool for solving the joint detection and decoding problem. The excellent performance-vs-complexity tradeoff achieved by the proposed MMSE-Fano decoder is established via simulation results and analytical arguments in several MIMO and ISI scenarios. The main contribution of this work (which summarizes several papers coauthored with H. El Gamal and G. Caire) is to show that the traditional techniques of lattice coding, ARQ, power control, MMSE-DFE filtering and sequential lattice decoding allow for solving some of the fundamental problems in MIMO communications. Another important contribution is the use of lattice coding and sequential lattice decoding to formulate a unified framework for the different encoding and decoding techniques in MIMO channels. Mohamed Oussama Damen's home page http://www.ece.uwaterloo.ca/~modamen/ 
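At its core, "lattice decoding" means finding the lattice point closest to a noisy observation. A minimal 2-D sketch with exhaustive search conveys the problem (the generator matrix and noise values below are made up; the MMSE-Fano decoder in the talk solves the same problem far more efficiently via tree search):

```python
import itertools

def closest_lattice_point(G, y, radius=5):
    """Exhaustive closest-point search: find the integer vector z
    minimizing ||y - G z|| for a 2x2 generator matrix G (list of rows),
    searching z coordinates in [-radius, radius]."""
    def mul(G, z):
        return [G[0][0] * z[0] + G[0][1] * z[1],
                G[1][0] * z[0] + G[1][1] * z[1]]
    best = None
    for z in itertools.product(range(-radius, radius + 1), repeat=2):
        x = mul(G, z)
        d = (y[0] - x[0]) ** 2 + (y[1] - x[1]) ** 2
        if best is None or d < best[0]:
            best = (d, z, x)
    return best[1], best[2]

# Noisy observation of the lattice point with coordinates z = (2, -1):
# G @ (2, -1) = (1.4, -0.8), observed with a small perturbation.
G = [[1.0, 0.6], [0.0, 0.8]]
y = [1.45, -0.75]
z_hat, x_hat = closest_lattice_point(G, y)
print(z_hat)  # (2, -1)
```

Exhaustive search is exponential in the dimension, which is exactly why sequential decoders such as the Fano algorithm, plus lattice reduction and column ordering, matter in practice.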



13-07-2005 11:15 - Potential of quasi-infinite spaces: artistic and architectural processes in self-expanding worlds
Prof. Nicolas Reeves, University of Quebec, Montreal 

Abstract - Research-creation differs from scientific research in its objectives and aims, which brings it closer to the artistic realm, but it may be based on the same scientific concepts and on the same tools and technologies. The city of Montreal (Canada) is currently the locus of a burst of emerging high-tech creation practices, supported both by the universities and by the different levels of government, where artists, engineers and scientists meet around the development of massively transdisciplinary creation projects, the output of which is distributed between art installations and performances, scientific publications, seminars and transfers to other realms. I will present a general portrait of this evolution, then introduce two of its major actors (the Hexagram Institute and the Society for Arts and Technology), and finally exemplify it through two interrelated projects from our own lab, both of which are situated at the crossroads of arts, sciences and technology: « Potential of quasi-infinite spaces », where we try to maximize the potential fertility of a given digital space, and « The Mascarillons Sphere », a swarm-intelligence robotics project involving flying cubic automata, developed in partnership with researchers from EPFL, UWE and U. Paul Sabatier. PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Nicolas_Reeves_SRI2005_talk.pdf 



13-07-2005 14:15 - Communication under Regulatory Power Constraints
Prof. Michael Gastpar, University of California, Berkeley 

Abstract - Wireless ad-hoc and sensor networks allow for cooperative communication strategies (for example, variants of beamforming). For such communication modes, regulatory bodies are likely to impose power limitations on the system as a whole, rather than on individual components. In this talk, we discuss some of the consequences of this change in perspective, in terms of the resulting system capacities (relay networks, ad-hoc wireless networks), in terms of coding strategies, and in terms of overall system performance (in sensor network examples). Michael Gastpar's contact http://www.eecs.berkeley.edu/~gastpar 



13-07-2005 15:15 - Password-Authenticated Key Exchange Using Hidden Smooth Subgroups
Dr Zulfikar Ramzan, DOCOMO USA Labs 

Abstract - Suppose Alice and Bob wish to securely communicate with each other over an otherwise insecure network. They share knowledge of a (small) secret password, which is their only means of authenticating each other over the network. In and of itself, the password might be "human memorable", so it does not have sufficient entropy to be used as a cryptographic key. In this talk, we focus on the following question: how can Alice and Bob leverage their (low-entropy) password to agree on a secure (high-entropy) cryptographic key, thereby establishing a secure channel between them? This particular problem, known as Password-Authenticated Key Exchange (PAKE), is fundamental for typical user authentication (e.g., PAKE schemes could be used in FTP, Telnet, etc.). An eavesdropper can try to record conversations, and go offline to try every possible password choice until it finds one that seems to fit the conversation. This is known as an offline dictionary attack. If the space of possible passwords is small (as is often the case in practice), then an attacker can rapidly perform this exhaustive search. Even worse, an adversary may even try to impersonate Alice (or Bob), and attempt an offline attack on the resulting conversation. Our goal, therefore, is to design a protocol that is secure against all such attacks. Numerous PAKE protocols have been proposed. Some are provably secure and can be trusted, whereas many others have been broken. There are two general approaches that all the practical and provably secure PAKE protocols in the literature employ. These two techniques are general, and it appeared as though the overall design space for practical and provably secure PAKE schemes had been exhausted. full abstract http://talk/index.php Zulfikar Ramzan's home page http://theory.lcs.mit.edu/~zulfikar/homepage.html PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/050713ZullyPAK.ppt 
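The offline dictionary attack described above is easy to demonstrate: once any transcript value is a deterministic function of the password, an eavesdropper can replay the derivation for every dictionary word. The sketch below uses a deliberately naive challenge-response (invented names; this is precisely what real PAKE protocols are designed to prevent, not how any of them work):

```python
import hashlib

def naive_response(password, nonce):
    """A NAIVE challenge-response: proof = SHA-256(nonce || password).
    Deliberately insecure; the proof leaks an offline test for the
    password."""
    return hashlib.sha256(nonce + password.encode()).hexdigest()

def offline_dictionary_attack(nonce, observed_proof, dictionary):
    """Eavesdropper replays the derivation for every candidate
    password until one matches the recorded transcript."""
    for guess in dictionary:
        if naive_response(guess, nonce) == observed_proof:
            return guess
    return None

# Alice authenticates with a weak password; Eve records the exchange.
nonce = b"server-nonce-001"
transcript = naive_response("sunshine", nonce)
dictionary = ["password", "123456", "letmein", "sunshine", "qwerty"]
print(offline_dictionary_attack(nonce, transcript, dictionary))
```

Because the search runs offline, rate-limiting at the server does not help; a secure PAKE must make every transcript consistent with every candidate password, so that a recorded conversation gives the attacker no such test.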



14-07-2005 10:15 - Algorithms exploiting the chain structure of proteins
Dr Itay Lotan 

Abstract - Proteins are large molecules (from a few hundred to several thousand atoms) that affect almost all properties of living organisms. Gene expression, catalysis, storage and transport, transmission of signals, defense against intruders and many other vital functions are all performed by proteins. Since their functions are largely dependent on their spatial structure, the study of the 3D shapes of proteins is central to the field of structural biology. Various computational methods have been developed to enhance our understanding of proteins, from molecular simulation techniques, to structure determination tools, to comparison and classification methods. Their ubiquitous use in research today, along with the rapidly growing number of proteins of known sequence and structure, creates a need for efficient algorithms that can handle the complexity of proteins. In this talk I will present two new algorithms that address fundamental problems in computational structural biology. The common thread of these algorithms is their use of concepts and techniques from robotics and computational geometry to exploit the long-chain kinematics of proteins (a protein is a sequence of amino acids that form a long flexible backbone with short protruding side chains). The first algorithm is an automatic method for completing partial models of protein structures resolved using X-ray crystallography. It was developed in collaboration with the Joint Center for Structural Genomics at SSRL. full abstract on the Summer Research database http://talk.epfl.ch/talkdetail.php?talkId=271 Itay Lotan's home page http://www-cs-students.stanford.edu/~itayl/ PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Itay%20Lotan%20sri05.pdf 



14-07-2005 11:15 - From Swarm Intelligence to Swarm Engineering: out of the lab and into the real world
Prof. Alan Winfield, University of the West of England, Bristol 

Abstract - Swarm Intelligence is the study of how insect or animal societies collectively perform apparently miraculous feats of construction or coordination with, apparently, no centralised or hierarchical control. From an engineering perspective, the Swarm Intelligence paradigm is compellingly attractive. It offers the possibility that we may be able to engineer future large-scale distributed systems in a radically new way. Instead of very complex centralised or hierarchical command and control structures, which are proving increasingly difficult to design and validate, we could build completely decentralised systems with relatively simple agents. If such Swarm Engineered systems make use of emergence to generate the desired overall system behaviours then, potentially, we could achieve hitherto impossible levels of scalability and robustness. In the context of Swarm Robotics (= embodied Swarm Intelligence), this talk will discuss the gap between the research lab and the real world, and what needs to be done to bridge that gap. The talk will debate such questions as "how can we design emergence in a methodologically rigorous way?"; "how can we formally validate or prove swarm engineered systems?"; and "what is the realistic potential for real-world swarms?". Swarm engineering is a radical new paradigm for building complex systems. The talk will conclude by suggesting that "perhaps we should - radically - consider changing the way we approach the validation of future complex systems". Alan Winfield's home page http://www.ias.uwe.ac.uk/~awinfie/ PowerPoint : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/A_Winfield.ppt PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/Alan_Winfield_SRI2005_talk.pdf 



14-07-2005 14:15 - Multiantenna Capacity: Progress and Misconceptions
Prof. Sergio Verdu, Princeton University 

Abstract 
Sergio Verdu's home page http://www.princeton.edu/~verdu/ 



14-07-2005 16:15 - The long story from the _Research_ to the _Implementation_ of a new technology in the telecommunication world
Jacques Robadey, Swisscom Innovation 

Abstract - The course will show the different stages of Research and Development at universities, research institutes and in industry needed to launch new telecommunication products. Some specific examples of research at Swisscom Innovation (with and without external collaboration) that led to new products will illustrate the topic. Among the examples, broadband access and Unlimited (now implemented) will be treated. The perspective of long-term future technologies such as "sensor networks" will also be discussed. The different stages in the development of new technologies will clearly illustrate the collaboration between universities, equipment suppliers and operators. They will also show the importance of customer need, both for the decision to support new ideas and for the final success of a new technology. The different R&D stages include: basic research; applied research; research analysis; standards participation and analysis; opportunity evaluation; role definition in a multi-player environment; demonstrator; business model; pre-business case; business case (with iterations); preliminary go/no-go decision; pilot trials; full business case; final go/no-go decision; implementation; and modifications and upgrades. 



15-07-2005 10:15 - Decentralized Detection of a Nomadic Transmitter via Helping Agents
Prof. Shlomo Shamai, Technion - Israel Institute of Technology 

Abstract 
Shlomo Shamai's home page http://www.ee.technion.ac.il/people/shamai/shamai_hp.html PDF : http://ic.epfl.ch/webdav/site/ic/shared/SummerRI/2005/S_Shamai.pdf 



15-07-2005 11:15 - Algorithms for distances between distributions
Dr Suresh Venkatasubramanian, AT&T Labs Research 

Abstract - In many data analysis settings, whether they be in data cleaning, network analysis, machine learning, computer vision, or even neuroscience, the basic primitive is a distribution, or a histogram. The analysis task involves estimating distances between such primitives, exactly or approximately. Distributions can be viewed as points in a d-dimensional space, and "standard" distances (like those derived from l_p norms) can be employed in tasks like classification and clustering. It turns out, though, that other distances (especially measures like relative entropy that are derived from information-theoretic considerations) are more meaningful in many application settings, both formally and in practice. This presents an interesting algorithmic challenge: are there good algorithms for estimating such distances, preferably in sublinear space/time? Can these be used effectively in various application domains, and at scale? In this talk, I will describe current research directions in the "algorithmics" of distributions, and sketch out recent work that I have been doing on both the applied and theoretical ends of this area. Suresh Venkatasubramanian's home page http://www.research.att.com/~suresh/ 
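As a concrete instance of the information-theoretic distances mentioned above, here is relative entropy (KL divergence) next to an l_2 distance on two small histograms; a minimal plain-Python sketch:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits, for histograms given as
    lists of probabilities; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi)
               for pi, qi in zip(p, q) if pi > 0)

def l2_distance(p, q):
    """Euclidean distance between the two histograms."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

p = [0.9, 0.05, 0.05]        # a peaked histogram
q = [1 / 3, 1 / 3, 1 / 3]    # the uniform histogram
# Unlike the l2 metric, KL divergence is asymmetric:
# D(p||q) and D(q||p) differ, and neither obeys the triangle inequality.
print(kl_divergence(p, q), kl_divergence(q, p), l2_distance(p, q))
```

The asymmetry and the blow-up when q assigns tiny mass where p does not are what make these measures behave so differently from l_p norms, and why estimating them (especially in sublinear space, from streamed data) needs its own algorithmics.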



15-07-2005 15:15 - Mixed-Mode ESD Protection Circuitry Design for RF ICs: Prediction and Characterization.
Prof. Albert Wang, Illinois Institute of Technology, Chicago 

Abstract 
Albert Wang's home page http://www.ece.iit.edu/~awang/ 

