Talks of the 2006 Edition, July 3rd to 21st
03.07.2006 11:15 - Sequences with good correlations and their applications
Prof. Tor Helleseth, University of Bergen, Norway 

Abstract: Binary sequences have many practical applications in modern communication systems, including radar, data synchronization, pseudorandom number generation, the global positioning system (GPS), the construction of stream ciphers in cryptography, and multiple-access communication. Sequences with desirable properties can be generated using a variety of methods from algebra and discrete mathematics, involving properties of Galois fields and Galois rings. In code-division multiple-access (CDMA), users are allocated different sequences. These systems need large families of sequences with good correlation properties in order to ease synchronization and to minimize the interference among users. The main purpose of this talk is to give an overview and description of some important families of sequences with applications in modern communication systems. We start by describing m-sequences, which are the main building blocks in many constructions of good sequence families. Thereafter, we describe some traditionally important families of sequences with good correlation properties, including Gold sequences, Kasami sequences, and sequences constructed from Galois rings, as well as sequences used in third-generation mobile communication standards. Professor Helleseth's homepage http://www.ii.uib.no/~torh/
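As background, an m-sequence and its characteristic two-valued autocorrelation can be generated in a few lines. A minimal sketch; the primitive polynomial x^4 + x + 1 is our example choice, not necessarily one from the talk:

```python
# Generate the length-15 m-sequence from the primitive polynomial
# x^4 + x + 1 over GF(2) and inspect its two-valued autocorrelation.

def m_sequence():
    # recurrence s[t+4] = s[t+1] + s[t]  (mod 2), from x^4 + x + 1
    state = [1, 0, 0, 0]          # any nonzero initial fill works
    out = []
    for _ in range(15):
        out.append(state[0])
        state = state[1:] + [state[0] ^ state[1]]
    return out

def autocorrelation(seq, shift):
    n = len(seq)
    return sum((-1) ** (seq[i] ^ seq[(i + shift) % n]) for i in range(n))

seq = m_sequence()
# peak at zero shift, -1 at every other shift: the ideal two-valued
# autocorrelation that eases synchronization
print(autocorrelation(seq, 0))                          # 15
print([autocorrelation(seq, t) for t in range(1, 15)])  # all -1
```

The off-peak value is always exactly -1 because the XOR of an m-sequence with a shift of itself is again a shift of the same sequence (the shift-and-add property), which is nearly balanced.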



03.07.2006 14:15 - The game of twenty questions with an operator: signal design for active sensing
Prof. Robert Calderbank, Princeton University, USA 



04.07.2006 10:15 - Combined multipath routing and congestion control: a robust internet architecture
Prof. Don Towsley, University of Massachusetts, USA 

Abstract: Network management is complicated by uncertain traffic patterns and workloads. Flexible routing schemes mitigate some of the problems by making the exact location of capacity less important: if there is available capacity, the routing scheme will find it. In this talk we propose a combined multipath routing and congestion control architecture that gives performance improvements to the end user and simplifies network dimensioning. We advocate multihoming and stepping-stone routers to provide path diversity, and a congestion controller and path selection algorithm that automatically balances traffic across the lowest-cost paths. Scalability of the architecture results from implementing the algorithms at end systems. We illustrate the performance benefits of this architecture. Portions of this work are joint with P. Key and L. Massoulie, Microsoft Research, and with H. Han, C. Hollot, UMass Amherst, and S. Shakkotai, R. Srikant, UIUC. Prof. Towsley's homepage http://wwwnet.cs.umass.edu/personnel/towsley.html



04.07.2006 11:15 - Robot learning
Prof. Stefan Schaal, University of Southern California, USA 

Abstract: Robotics is on the verge of a new era, in which robots will be used ubiquitously in many domains of human daily life, including entertainment robotics, assistive robotics, personal robotics, and much more. A crucial component of this new wave of robotics will be the autonomous acquisition and execution of a large repertoire of motor skills, skills that will often be similar to those observed in humans. Such an envisioned autonomy requires that robots can learn. While the idea of robot learning has been pursued for many years, and some success stories can be found, particularly in the realm of state estimation, learning robot control still remains a rather hard and underexplored topic. This talk investigates what has been achieved in robot learning, which learning techniques exist, and which components are still missing. We will cover many different areas of robotics and related fields, and how they contribute to robot learning, including reinforcement learning with policy gradients, model-based learning using Bayesian machine learning, imitation learning, learning movement primitives with nonlinear dynamic systems, control-theoretic aspects, and ideas from neuroscience. Videos of experiments with anthropomorphic and humanoid robots will illustrate the different concepts and their experimental realization. In the end, we will provide a critical outlook on the future developments that are needed to achieve truly autonomous skill learning in robotics. Prof. Schaal's homepage http://www.usc.edu/programs/neuroscience/faculty/profile.php?fid=66



04.07.2006 15:15 - The Kubelka-Munk color prediction model: limitations and new approaches
Prof. Li Yang, Karlstad University, Sweden 

Abstract: The Kubelka-Munk (K-M) theory is a widely applied theoretical model describing light propagation in a turbid medium. Its analytical expressions make the model a very popular and useful tool for determining the optical properties of a material and for predicting the optical performance of a media system consisting of different material components. However, the model has fundamental shortcomings that cause it to fail when applied to media systems comprising materials with strong scattering and strong absorption. The speaker has examined these shortcomings from a theoretical perspective and proposed a revised theory. The revised model is a general approach that applies to various light-absorbing and light-scattering media systems, such as ink, paper, print, dyed sheets, textiles, etc. Additionally, it preserves the simplicity of the original theory. This talk will present a thorough analysis of the shortcomings of the original model, followed by a detailed description of the fundamentals of the theoretical revision, which enables the revised theory to be applied to any turbid media system. The theory is further illustrated by applications to different media systems consisting of materials with both weak and strong light absorption. Prof. Yang's homepage http://www.chemistry.kau.se/li_yang/
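For reference, the central K-M relation for an opaque (infinitely thick) layer connects the absorption-to-scattering ratio K/S to the diffuse reflectance R. A quick numerical check of the forward and inverse forms (this is the standard K-M material the talk starts from, not the speaker's revision):

```python
import math

# Kubelka-Munk relation for an opaque layer:
#     K/S = (1 - R)^2 / (2 * R)
# and its inverse, obtained by solving the quadratic in R:
#     R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)

def km_ratio(r):
    return (1 - r) ** 2 / (2 * r)

def reflectance(ks):
    return 1 + ks - math.sqrt(ks ** 2 + 2 * ks)

# round trip: the two expressions are consistent
r = reflectance(0.5)
print(r, km_ratio(r))   # ~0.382, then 0.5 back
```

Strong absorption drives R toward 0 and K/S toward infinity, the regime where (per the abstract) the original model's assumptions break down.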



05.07.2006 10:15 - Optimal key-trees for tree-based private authentication
Prof. Levente Buttyan, Budapest University of Technology and Economics, Hungary 

Abstract: Key-tree based private authentication has been proposed by Molnar and Wagner as a neat way to efficiently solve the problem of privacy-preserving authentication based on symmetric-key cryptography. However, in the key-tree based approach, the level of privacy provided by the system to its members may decrease considerably if some members are compromised. In this paper, we analyze this problem, and show that careful design of the tree can help to minimize this loss of privacy. First, we introduce a benchmark metric for measuring the resistance of the system to a single compromised member. This metric is based on the well-known concept of anonymity sets. Then, we show how the parameters of the key-tree should be chosen in order to maximize the system's resistance to single-member compromise under some constraints on the authentication delay. In the general case, when any member can be compromised, we give a lower bound on the level of privacy provided by the system. We also present some simulation results that show that this lower bound is quite sharp. The results of this paper can be directly used by system designers to construct optimal key-trees in practice; indeed, we consider this the main contribution of our work. Prof. Buttyan's homepage http://www.hit.bme.hu/~buttyan/
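The privacy/delay trade-off behind the optimization can be seen in a toy computation. The anonymity-set partition after a single compromise follows directly from the tree structure; the summary metric below (expected anonymity-set size of a random uncompromised member) is our illustrative stand-in, and the paper's actual benchmark may be normalized differently:

```python
# When one member of a key-tree with branching vector B is compromised,
# the attacker learns the keys on its path; each remaining member falls
# into an anonymity set determined by the level at which its own path
# diverges from the compromised one.

def anonymity_sets(branching):
    n = 1
    for b in branching:
        n *= b
    sets, under = [], n       # 'under' = members below the deepest shared key
    for b in branching:
        sets.append(under - under // b)   # members diverging at this level
        under //= b
    return [s for s in sets if s > 0]     # sizes sum to n - 1

def expected_anonymity(branching):
    # illustrative metric (our assumption): expected anonymity-set size
    # of a uniformly chosen uncompromised member
    sets = anonymity_sets(branching)
    return sum(s * s for s in sets) / sum(sets)

# same 8 members, two tree shapes:
print(anonymity_sets([2, 2, 2]), expected_anonymity([2, 2, 2]))  # [4, 2, 1] 3.0
print(anonymity_sets([8]), expected_anonymity([8]))              # [7] 7.0
```

The flat tree preserves more privacy, but its authentication delay (the sum of branching factors: 8 vs. 6) is worse, which is exactly the constrained optimization the talk addresses.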



05.07.2006 14:15 - Program analysis with predicates in the background
Prof. Rupak Majumdar, University of California, Los Angeles, USA 

Abstract: Dataflow analyses and type systems sacrifice path-sensitivity for efficiency and lead to false positives when used for verification. Predicate-refinement-based model checking methods are path-sensitive but must perform many expensive iterations to find all the relevant facts about a program, not all of which are naturally expressed and analyzed using predicates. We show how to join these complementary techniques to obtain efficient and precise versions of any lattice-based dataflow analysis using predicated lattices. A predicated lattice partitions the program state according to a set of predicates and tracks a lattice element for each partition. The resulting dataflow analysis is more precise than the eager dataflow analysis without the predicates. In addition, by automatically inferring predicates to rule out imprecision, we get a dataflow analysis that can adaptively refine its precision. We give experimental evidence that this combined analysis is both more precise than the usual dataflow analysis, in that it is sensitive enough to prove various properties, and much faster than pure counterexample-guided refinement, as many relevant facts are eagerly computed, thus reducing the number of iterations. This results in an order-of-magnitude improvement in running times. Prof. Majumdar's homepage http://www.cs.ucla.edu/~rupak/
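The core idea is easy to see on a toy lattice. A minimal sketch (our own illustration, not the paper's implementation) of constant propagation across `if (p) x = 1; else x = 2;`:

```python
# Plain dataflow joins facts from both branches into one lattice element;
# a predicated lattice keeps one element per predicate partition.

TOP = object()   # "unknown" element of the constant-propagation lattice

def join(a, b):
    return a if a == b else TOP

# After 'if (p) x = 1; else x = 2;':
plain = join(1, 2)                 # TOP: constancy of x is lost
predicated = {True: 1, False: 2}   # partitioned on predicate p

# Later, inside another 'if (p) ...', the predicated fact still proves x == 1,
# while the plain analysis reports a false positive ("x may not be 1"):
print(plain is TOP, predicated[True])   # True 1
```

Inferring which predicates (here, `p`) to add to the partition is the adaptive-refinement step the abstract describes.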



06.07.2006 10:15 - A lattice theory to solve games of imperfect information
Prof. Jean-François Raskin, Université Libre de Bruxelles, Belgium

Abstract: In this talk, I will present a fixed-point theory to solve games of imperfect information. The fixed-point theory is defined on the lattice of antichains of sets of states. Contrary to the classical solution proposed by Reif, the new solution does not involve determinization. As a consequence, it is readily applicable to classes of systems that do not admit determinization. Notable examples of such systems are timed and hybrid automata. As an application, I will show that the discrete control problem for games of imperfect information defined by rectangular automata is decidable. This result extends a result by Henzinger and Kopke. Finally, I will also show that the ideas underlying the new algorithms can be applied to solve classical problems of automata on finite words. Prof. Raskin's homepage http://www.ulb.ac.be/di/ssd/jfr/
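The data structure at the heart of the approach is the antichain of subset-maximal sets, which compactly represents a downward-closed family of knowledge sets. A minimal sketch of the maintenance operation (illustrative only, not the full fixed-point algorithm):

```python
# Keep only the ⊆-maximal sets: a new set is discarded if an existing
# element already covers it, and inserting it evicts everything it covers.

def insert(antichain, s):
    s = frozenset(s)
    if any(s <= t for t in antichain):
        return antichain                      # s already covered: no change
    return [t for t in antichain if not t <= s] + [s]

ac = []
for cell in [{1, 2}, {1}, {2, 3}, {1, 2, 3}]:
    ac = insert(ac, cell)
print(ac)   # [frozenset({1, 2, 3})]  -- one maximal element remains
```

Because whole downward-closed families collapse to a few maximal elements, the fixed-point iteration never needs the exponential determinized state space.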



06.07.2006 14:15 - Ordered and disordered source coding
Prof. Vivek Goyal, Massachusetts Institute of Technology, USA 

Abstract: When a source produces i.i.d. letters, the values of the letters (a multiset of the alphabet) and the order of the letters (a permutation) are independent. This talk investigates this separation and presents results for coding the values without regard to order, and for coding the order itself. There are situations in which the order information is irrelevant, for example compressing a database of records that are in an arbitrary order. Also, in many estimation and inference problems, the destination will compute a permutation-invariant function of the source letters. With this motivation, we consider source coding under permutation-invariant fidelity criteria. We show that source coding becomes trivial (requires zero rate) under sequence-length asymptotics. For sequences of finite-length blocks, we find rate-distortion and high-rate quantization results. In the ordinary source coding problem (for ordered vectors), permutation source codes are an example of coding only the order information. We present a new way to code using only order. Furthermore, separating order and value suggests other computationally simple vector quantizers. Joint work with Lav Varshney. Prof. Goyal's homepage http://www.rle.mit.edu/stir/people_goyal.htm
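The separation H(sequence) = H(multiset) + H(order | multiset) can be verified exhaustively for a tiny source. A minimal sketch with uniform i.i.d. letters (alphabet size and block length are our example parameters); note the number of multisets grows only polynomially in n, which is why coding without regard to order needs vanishing rate asymptotically:

```python
import math
from collections import Counter
from itertools import product

def entropy_split(q, n):
    # all q^n equiprobable sequences of n i.i.d. uniform letters
    seqs = list(product(range(q), repeat=n))
    p = 1 / len(seqs)
    multisets = Counter(tuple(sorted(s)) for s in seqs)
    # H(multiset): each multiset has probability c * p
    h_multiset = -sum(c * p * math.log2(c * p) for c in multisets.values())
    # H(order | multiset): given the multiset, its c orderings are equiprobable
    h_order = sum(c * p * math.log2(c) for c in multisets.values())
    return h_multiset, h_order

hm, ho = entropy_split(2, 4)
print(hm, ho, hm + ho)   # the two parts sum to n * log2(q) = 4.0 exactly
```

The two terms are precisely the "disordered" (values) and "ordered" (permutation) halves of the rate that the talk treats separately.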



10.07.2006 14:15 - Walk this way without waving: the Feynman path integral approach to photonics and quantum imaging
Dr Neil Gunther, Performance Dynamics Company, CA, USA 

Abstract: The photon is a principal carrier (qubit) of information in modern communications. Being an entirely quantum phenomenon, light needs to be represented quantum mechanically to ensure the correct design of VLSI-scale devices for communications, cryptography, and imaging. Unfortunately, it is more common to represent light in terms of classical waves and, as we demonstrate with case studies, this can lead to design errors. The Feynman Quantum Path Integral (QPI) is a lesser-known but powerful tool that eschews an explicit wave representation of light. In the course of three lectures I will: (1) introduce the historical motivation and basics of the QPI; (2) apply the QPI to standard optical devices and extend it to quantum devices; (3) review current photonic research from the QPI standpoint. The content of these lectures is largely tutorial in style and primarily directed at engineers with no background in either quantum mechanics or optics. Dr Gunther's homepage http://www.perfdynamics.com/Bio/njg.html



13.07.2006 11:15 - Algebraic signatures in distributed storage
Prof. Thomas Schwarz, Santa Clara University, USA 

Abstract: As remote storage becomes more popular, it becomes more important to be able to check on the existence and the validity of items stored remotely. We propose to use algebraic signatures for this purpose. Algebraic signatures are short hashes, calculated from long data objects, that have algebraic properties. Examples of such properties are the capability to calculate a new signature from the old one and the signature of an update, or the possibility of calculating the signature of parity objects (generated with an erasure-correcting code) from the signatures of the data objects from which the parity objects were calculated. We can use algebraic signatures to check for equality of data objects, to check for consistency between parity and client data in a failure-resilient system, or to prove the validity of remote data without having a copy of the data ourselves. In all cases, the remote objects are locally reduced to the few bytes of the signature, which are then sent to the comparer. Thus, the network is not taxed. Prof. Schwarz's homepage http://www.cse.scu.edu/~tschwarz/
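The parity property described above follows from linearity over a Galois field. A minimal sketch of one such signature (the field GF(2^8) and the generator alpha = 0x03 are our illustrative assumptions, not necessarily the talk's parameters):

```python
# Algebraic signature sig(D) = sum_i d_i * alpha^i over GF(2^8).
# Because field addition is XOR and multiplication distributes over it,
# the signature of an XOR-parity block equals the XOR of the data-block
# signatures -- parity consistency can be checked from signatures alone.

def gf_mul(a, b, poly=0x11B):      # multiplication in GF(2^8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def signature(block, alpha=0x03):
    sig, power = 0, 1
    for byte in block:
        sig ^= gf_mul(byte, power)  # accumulate d_i * alpha^i
        power = gf_mul(power, alpha)
    return sig

a = bytes([10, 200, 33, 7])
b = bytes([99, 5, 120, 250])
parity = bytes(x ^ y for x, y in zip(a, b))        # RAID-style XOR parity
print(signature(parity) == signature(a) ^ signature(b))   # True
```

The same linearity gives the update property: changing byte i by delta changes the signature by gf_mul(delta, alpha^i), so a new signature is computable from the old one plus the update alone.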



13.07.2006 15:15 - How to measure network behavior, and how much information do measurements capture?
Dr Jean Bolot, Sprint Labs, California, USA 

Abstract: Much of the progress in networking research relies on the availability of measurement data. Data is collected either using active probing techniques (sending probes into a network and inferring network behavior based on the observed delay, jitter, or loss process of the probe stream) or passive monitoring techniques (deploying monitoring stations that capture packet, flow, or routing information). In all cases, we would like the measurement data to be accurate and information-rich. This raises two important questions: 1. In the case of active probing, what is the best way to send probes into a network so that they yield the best-quality estimates? Conventional wisdom says probes should be Poisson because of the PASTA theorem. We will show that conventional wisdom is wrong. 2. In the case of passive monitoring, how much information (entropy) is captured by different types of monitoring stations (packet vs. flow vs. byte counts, ...), and what is the value of deploying additional monitoring stations in a network (i.e., how much joint information is there between different stations)? Finally, I will briefly describe some of the research carried out at Sprint Labs. Sprint Labs homepage http://www.sprintlabs.com/default.html



14.07.2006 11:15 - Dynamically Self-Organizing Sensors as Virtual In-Network Aggregators and Query Processors in Mobile Ad Hoc Sensor Databases
Prof. Aris Ouksel, University of Illinois at Chicago, USA

Abstract: Efficient in-network processing of sophisticated query types, such as range and aggregate queries, remains a challenge in distributed, data-intensive, and self-organizing sensor networks. We propose a novel data management infrastructure, built upon multidimensional indexing techniques, to support fast aggregate and non-aggregate query processing relying on an overlay structure, called the AGGINDEX tree, for both static and mobile environments. The AGGINDEX organizes the sensors as virtual processors continuously and efficiently computing both precise and approximate aggregations. This capability tolerates imprecise localization of the mobile sensors, and thus provides efficient query processing while avoiding the excessive energy consumption of traditional localization techniques. We will first discuss our adaptive scheme, which synergistically integrates localization and data space partitioning, and minimizes the unavoidably high cost due to "over-localization" or "over-partitioning" when the two aspects are considered separately. Our extensive experiments show a significant performance gain achieved by our query processing approach, in both latency and message cost, when compared with gossip-based aggregation as well as a class of aggregation techniques on spanning trees exemplified by TAG and COUGAR. Prof. Ouksel's homepage http://tigger.uic.edu/~aris/



14.07.2006 14:15 - Cryptographic hash functions: basics, block ciphers and beyond
Prof. Thomas Shrimpton, Portland State University, USA 

Abstract: We will take a closer look at one of cryptography's least flashy, yet most widely used objects: the hash function. The cryptographic community has been rocked over the last two years by a series of attacks on MD5 (broken), SHA-0 (broken), SHA-1 (teetering) and others. Now seems like a good time to revisit exactly what these objects are, what they are supposed to do for us, and how we go about constructing them. We will do this, giving special attention to block-cipher-based hash functions, their security models and performance. We'll also take a brief look at current trends in hash function design. Prof. Shrimpton's homepage http://web.cecs.pdx.edu/~teshrim/
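A standard way to turn a block cipher into a compression function is the Davies-Meyer construction, H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}. A sketch of the iteration, using a deliberately toy (and insecure) 64-bit keyed permutation in place of a real cipher:

```python
MASK = (1 << 64) - 1

def toy_cipher(key, block, rounds=8):
    # toy ARX-style keyed permutation standing in for a real block cipher;
    # each step (modular add, rotate, xor) is invertible, so for every key
    # this is a permutation of 64-bit blocks -- but it is NOT secure
    x = block
    for i in range(rounds):
        x = (x + ((key >> (8 * (i % 8))) & 0xFF) + i) & MASK
        x = ((x << 7) | (x >> 57)) & MASK
        x ^= key & MASK
    return x

def davies_meyer(blocks, iv=0x0123456789ABCDEF):
    # Davies-Meyer chaining: H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}
    h = iv
    for m in blocks:
        h = toy_cipher(m, h) ^ h
    return h

print(hex(davies_meyer([1, 2, 3])))
```

The feed-forward XOR is what makes the step non-invertible even though the cipher itself is a permutation; security of the whole hash then rests on the cipher behaving like an ideal cipher, one of the models the talk examines.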



17.07.2006 11:15 - Thinking outside the box: related-key attacks on block ciphers
Dr Raphael Phan, Swinburne University of Technology, Sarawak, Malaysia 

Abstract: Block ciphers are methods that provide confidentiality, in addition to being used as primitives in other cryptosystems, namely in hash functions, stream ciphers, authentication modes, protocols and commercially available security devices. The most common approach taken by researchers to gauge the security of block ciphers is to analyze how resistant they are to attacks. Of these attacks, related-key ones have been the least accepted: often regarded as infeasible, they generate mixed responses from the community when reported on ciphers. The purpose of this talk is to revisit this view. Dr Phan's homepage http://www.swinburne.edu.my/rphan/



17.07.2006 14:15 - Control mechanisms for media streaming over wireless links
Prof. Athina Markopoulou, University of California, Irvine, USA 

Abstract: Media streaming applications over wireless links face several challenges due to both the nature of the wireless channel and the stringent delivery requirements of media traffic. In this talk, I will present a framework for control mechanisms that can handle the variations of wireless links. In particular, we consider video streaming over a single wireless link with time-varying interference and bandwidth. First, we jointly optimize the power at the transmitter and the playout speed at the receiver, so as to minimize the power consumption and maximize the media playout quality. Then, we extend this work to include packet scheduling control (to minimize distortion) and content-aware playout (to take into account the motion intensity of video scenes). We formulate all these problems within a dynamic programming framework, design practical heuristics, and demonstrate significant performance gains over benchmark systems. Prof. Markopoulou's homepage http://newport.eecs.uci.edu/~athina/



17.07.2006 15:15 - Compositional Methodologies in System-Level Design
Prof. Rajesh Gupta, University of California, San Diego, USA 

Abstract: High-level modeling of components and the ability to compose these into system-level models are key to advancing system design methods. Recently, high-level language models, such as C and C++, have been used to build component models, at least for chip design purposes. These models deploy a number of smarts and assists (for modeling concurrency, reactivity, exceptions, synchronous/asynchronous behaviors, data types, etc.) to enable the system designer to build fairly detailed and (at least cycle-accurate) models of component behaviors. What happens when these models are composed (often structurally, as a chip or system designer would) into complete system models? Are these compositions even possible, and what semantic guarantees do the composed models carry? In this talk, I will examine the difficulties associated with building a system design methodology based on composition of pre-designed components, the technical challenges, and our ongoing work in this area, including the use of metadata for modeling system-level concerns. Prof. Gupta's homepage http://www.cse.ucsd.edu/~gupta/



18.07.2006 11:15 - Delay, feedback, and the price of ignorance
Prof. Anant Sahai, University of California, Berkeley, USA 

Abstract: In 1959, Shannon made a profound comment: "[The duality between source and channel coding] can be pursued further and is related to a duality between past and future and the notions of control and knowledge. Thus we may have knowledge of the past and cannot control it; we may control the future but have no knowledge of it." This comment cannot be understood in the traditional block-code setting and, as a result, has remained entirely mysterious. To understand it, we must step back and consider end-to-end delay, since delay is what fundamentally allows the exploitation of the laws of large numbers to give reliability. In channel coding, we show that while feedback often does not improve fixed-blocklength reliability functions, it can significantly improve the reliability with respect to fixed delay! (Contrary to a "theorem" by Pinsker claiming otherwise.) A new bound, which we call the "focusing bound," allows us to calculate the limit of what is possible when the encoder is not ignorant of the channel's past behavior. In source coding, the price of ignorance is demonstrated by considering what happens when receiver side information is withheld from the transmitter. Block codes perform equally poorly, but non-block encoders can use side information to dramatically improve the fixed-delay error exponent. Furthermore, a closer look at the dominant error events for all these cases gives Shannon's otherwise cryptic comment a precise interpretation. These results suggest that the traditional information-theoretic recommendation of aggregating messages into big blocks is flawed as architectural guidance. When encoders are not ignorant, queueing and variable-length ideas should be employed to do appropriate flow control, even when facing hard end-to-end latency constraints. In a fundamental sense, queues have a large-deviation performance that is superior to blocks. [Parts of this talk are joint work with my former student Tunc Simsek and current student Cheng Chang.] Prof. Sahai's homepage http://www.eecs.berkeley.edu/~sahai/



18.07.2006 14:15 - Capacity-achieving codes for the BEC with bounded complexity
Prof. Igal Sason, Technion, Israel Institute of Technology 

Abstract: Error-correcting codes which employ iterative decoding algorithms are now considered state of the art in communications. In fact, there are a large number of code families which achieve a small gap to capacity with feasible decoding complexity. Examples are low-density parity-check (LDPC) codes, irregular repeat-accumulate (IRA) codes, and Raptor codes. For each of these code families, one can construct code sequences which provably achieve capacity on the binary erasure channel (BEC). In each case, however, the decoding complexity becomes unbounded as the gap to capacity vanishes. This talk will focus on recently constructed code families whose complexity remains bounded as the gap to capacity vanishes. Assuming only basic knowledge of LDPC codes, three closely related ensembles will be described: IRA codes, accumulate-repeat-accumulate (ARA) codes, and accumulate-LDPC (ALDPC) codes. Using the duality between these ensembles and a simplified approach to density evolution, we will construct a variety of codes which achieve capacity with bounded complexity. Parts of this work are joint with Henry Pfister and Ruediger Urbanke. Prof. Sason's homepage http://www.ee.technion.ac.il/people/sason/
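On the BEC, density evolution reduces to a one-dimensional recursion, which makes the notion of a decoding threshold concrete. A sketch for the classical (3,6)-regular LDPC ensemble (the talk's IRA/ARA/ALDPC ensembles have more elaborate recursions):

```python
# BEC density evolution for a (dv, dc)-regular LDPC ensemble:
#     x_{t+1} = eps * (1 - (1 - x_t)^(dc-1))^(dv-1)
# x_t = erasure probability of an edge message, eps = channel erasure rate.
# The BP threshold is the largest eps for which x_t -> 0, and equals
#     inf over x in (0, 1] of  x / (1 - (1-x)^(dc-1))^(dv-1).

def bp_threshold(dv, dc, steps=100000):
    best = 1.0
    for i in range(1, steps + 1):
        x = i / steps
        f = (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
        best = min(best, x / f)
    return best

def evolve(eps, dv, dc, iters=2000):
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

print(round(bp_threshold(3, 6), 4))   # ~0.4294 for the (3,6) ensemble
print(evolve(0.30, 3, 6))             # below threshold: x -> 0, decoding succeeds
print(evolve(0.45, 3, 6))             # above threshold: stuck at a nonzero fixed point
```

The gap between this threshold and capacity (1 - R = 0.5 for rate 1/2) shrinks only as node degrees, and hence decoding complexity, grow; bounded-complexity capacity-achieving families are exactly what makes the talk's constructions notable.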



18.07.2006 15:15 - Robust System Design
Prof. Subhasish Mitra, Stanford University, USA 

Abstract: Robust systems are expected to operate correctly in the presence of hardware failures, software malfunctions, malicious attacks and human errors. Robust system design is becoming increasingly difficult in advanced technologies. The current design paradigm, which assumes that no gate or interconnect will ever operate incorrectly within the lifetime of a product, must change to cope with such failures. Hence, most future systems, not just high-end mainframes, will require built-in mechanisms for error detection, self-test, diagnostics, attack prevention, self-recovery and repair. Classical fault-tolerance techniques have limited applicability because they are very expensive, and often inadequate. This talk will provide an overview of some of our recent work on robust system design. Prof. Mitra's homepage http://www.stanford.edu/~subh/



18.07.2006 17:15 - What physicists say about the difficulty of random K-satisfiability problems, and the puzzling effectiveness of simple heuristics
Prof. Erik Aurell, Royal Institute of Technology, Sweden 

Abstract: A P2P system under churn has many properties in common with so-called open systems in statistical physics: a varying number of entities; concurrent actions taken by these entities independently; and properties of the system as a whole emerging from the interaction of the entities. I will describe an attempt to characterize the structured overlay Chord under churn as the dynamic equilibrium of a probabilistic model in this spirit. One main result is an accurate (albeit approximate) estimate of the fraction of dead pointers in Chord, as a function of churn and pointer length (finger length in the language of Chord). This leads to, e.g., an estimate of the typical number of hops and timeouts incurred during key lookup in Chord under churn. Prof. Aurell's homepage http://www.theophys.kth.se/biophys/ERIK/kth_homepage.html



19.07.2006 14:15 - Doubly generalized LDPC codes for the binary erasure channel and the AWGN channel
Prof. Marc Fossorier, University of Hawaii 

Abstract: In this talk, the design of doubly generalized low-density parity-check (DG-LDPC) codes, with generic linear block codes at both bit and check nodes (instead of the traditional repetition and single parity-check codes), is considered for the binary erasure channel (BEC) and the additive white Gaussian noise (AWGN) channel. Both analysis and simulations show that this approach can provide more flexibility in constructing codes with good thresholds. Prof. Fossorier's homepage http://wwwee.eng.hawaii.edu/EEPage/Faculty/Bios/marc.html



19.07.2006 15:15 - Probing spike-generating dynamics and synaptic integration with conductance injection: synchronization at gamma rhythms in the neocortex
Dr Hugh Robinson, University of Cambridge, UK 

Abstract: In this talk, I will describe the technique of conductance injection (also known as dynamic clamp), in which nonlinear dynamical models of ionic channels and of synaptic input, executing sufficiently quickly in real time, interact with actual biological neurons. Conductance injection has been widely used to mimic the precise dynamical effect of real synaptic input to neurons, but in a controlled way which allows the reliability of synaptic integration to be resolved. It can also be used to "knock in" or "knock out" particular types of active voltage-dependent conductances in the electrical dynamics of neurons, to study their role in spike generation. I will then describe some recent results using this technique on the mechanism of gamma oscillations in the neocortex. Gamma oscillations correspond to synchronized firing of cortical cells at frequencies between 30 and 80 Hz, which are thought to be critically important in a variety of cortical functions, including sensory processing, motor control and feature binding. Dr Hugh Robinson's homepage http://www.pdn.cam.ac.uk/staff/robinson/
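At its core, the dynamic clamp closes a fast feedback loop around the relation I = g(t) * (V_m - E_syn). A schematic sketch of that loop's arithmetic (parameter values and the exponential conductance model are illustrative, not from the talk):

```python
import math

# Dynamic clamp in a nutshell: each update cycle, read the membrane
# potential V_m and inject the current a modeled conductance would carry,
#     I = g_syn(t) * (V_m - E_syn).
# Units used here for illustration: nS, mV, pA.

def injected_current(v_m, g_syn, e_syn):
    return g_syn * (v_m - e_syn)

def conductance_trace(g_max=5.0, tau=3.0, dt=0.1, t_end=20.0):
    # exponentially decaying synaptic conductance, g(t) = g_max * exp(-t/tau)
    return [g_max * math.exp(-i * dt / tau) for i in range(int(t_end / dt))]

# at the reversal potential the injected current vanishes; this voltage
# dependence is what distinguishes conductance injection from plain
# current injection
print(injected_current(v_m=-70.0, g_syn=5.0, e_syn=-70.0))   # 0.0
```

Because the injected current depends on the cell's own instantaneous voltage, the loop must run much faster than the membrane dynamics, which is the "executing sufficiently quickly in real time" requirement in the abstract.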



20.07.2006 10:15 - Information dynamics: a fresh look at information, its properties and implications
Prof. Ashok Agrawala, University of Maryland, USA 

Abstract: The term "information" has been with us for a long time and has been used in a variety of contexts with different meanings associated with it. Information Dynamics is an attempt to examine the fundamental nature of everyday information and its properties, starting from the recognition that information is different from its representation. Clearly, all information processing systems built to date use and manipulate representations only, retaining only those relationships that the designer of the system explicitly deployed. Information Dynamics is a paradigm for studying the basic properties of information, including representation, relationships, models, context, actions, triggers, etc. In this talk we present this paradigm and show its applicability to the design of the systems of tomorrow. Prof. Agrawala's homepage http://www.cs.umd.edu/~agrawala/



20.07.2006 11:15 - Decentralized processing: an information-theoretic perspective
Prof. Shlomo Shamai, Technion - Israel Institute of Technology

Abstract: We discuss the scenario where remote nomadic users (or a single user) communicate to a destination via a set of intermediate agents. The agents are ignorant of the codebook used, due to the nomadic regime, and are connected to the destination via reliable links of finite capacity. We focus here on independent Gaussian channels to all agents, who are equipped with a single antenna, while the transmitter or transmitters may possess multiple antennas. First we review the results associated with a single transmit antenna, invoking decentralized quantization, which yields the ultimate achievable rate in the nomadic regime with Gaussian signalling. For a multi-antenna transmitter with independent Rayleigh fading, upper and lower bounds on the achievable rate assuming Gaussian signalling are developed. It is demonstrated that the full multiplexing gain of the system can potentially be maintained, even when the transmitter is denied knowledge of the channel state information (the corresponding fading coefficients). We also examine the asymptotic setting with the number of agents and transmit antennas (or users) taken to infinity, yet maintaining a fixed ratio. Here we demonstrate the inferiority of simple compression when compared to a Wyner-Ziv based approach. Finally, we confine attention to the basic single-antenna scheme with two agents and consider the impact of a finite-capacity feedback link from the final destination to the agents, allowing for a single round of conferencing. Network coding is optimal here in the sense of facilitating the full exploitation of the conferencing phase on the feedback link (from the destination to the agents). The impact of this conferencing protocol on the ultimate performance is quantified, and implications of layered coding in this scenario are also considered. Prof. Shamai's homepage http://www.ee.technion.ac.il/people/shamai/shamai_hp.html



20.07.2006 14:15 - Capacity of multiuser MIMO channels and dirty paper coding
Prof. Sae-Young Chung, Korea Advanced Institute of Science & Technology

Abstract: We show some achievable rate regions of vector Gaussian broadcast channels with partial channel state information at the transmitter (CSIT) and perfect CSI at the receivers. We propose a scheme that uses both dirty paper coding (DPC) at the transmitter and successive interference cancellation (SIC) at the receivers simultaneously, and show gains of this scheme with respect to pure DPC or pure SIC under our channel assumptions. We then show a simple practical implementation of DPC using short block codes, such as repetition codes and the (8,4) extended Hamming code, for shaping. Although the shaping gains of such codes are not very high, they may provide a better complexity-performance tradeoff for simple practical implementations of DPC. Furthermore, we show that accurate theoretical analysis is possible for such codes due to their simplicity, which can provide some intuition on how to design good shaping codes for DPC. KAIST homepage http://www.kaist.edu/



21.07.2006 10:15 - Avalanche: network coding for large-scale content distribution
Dr Pablo Rodriguez, Microsoft Labs, Cambridge, UK 

Abstract: Until recently, content distribution solutions consisted of placing dedicated equipment at certain places inside or at the edge of the Internet. In recent years, however, a new paradigm for content distribution has emerged, based on a fully distributed architecture where commodity PCs form a cooperative network and share their resources (storage, CPU, bandwidth). In this talk, we will study a P2P system for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit informative blocks. This is particularly important in large unstructured overlay networks, where the nodes need to make decisions based on local information only. We will demonstrate the benefits of network coding under different realistic settings, present the results of several live trials, and discuss the implementation overheads and security-related problems that need to be overcome to make such a solution work. Dr Rodriguez's homepage http://research.microsoft.com/~pablo/
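The mechanism can be sketched in a few lines of random linear coding. For readability this illustration works over the prime field GF(257) (deployed systems such as Avalanche use GF(2^8)) and assumes the collected packets are linearly independent, which holds with high probability for random coefficients:

```python
import random

P = 257  # small prime field for illustration; practice uses GF(2^8)

def encode(blocks, rng):
    # a coded packet = random coefficients + the corresponding combination
    coeffs = [rng.randrange(P) for _ in blocks]
    payload = [sum(c * blk[j] for c, blk in zip(coeffs, blocks)) % P
               for j in range(len(blocks[0]))]
    return coeffs, payload

def decode(packets, k):
    # Gaussian elimination over GF(P) on the augmented matrix [coeffs | data]
    rows = [list(c) + list(d) for c, d in packets]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)          # Fermat inverse
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]

rng = random.Random(1)
blocks = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
packets = [encode(blocks, rng) for _ in range(5)]   # 2 redundant packets
print(decode(packets, 3) == blocks)
```

Any k independent packets suffice, regardless of which ones arrive, which is why intermediate nodes can recombine what they hold using only local information.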



21.07.2006 14:15 - Reversible and irreversible information networks
Prof. Soren Riis, Queen Mary, University of London, UK 

Abstract: In this talk, I construct a network with the curious property that a set of k users (in fact 38) can in general send messages to their k friends without delay or congestion; however, their k friends cannot reply without creating congestion or delays. This construction is the starting point for a number of related results. Using ideas akin to the Feynman integral from quantum electrodynamics, I show that information networks are reversible as far as linear solutions are concerned! The talk is to a large extent non-technical and addresses a general (mathematically minded) audience. Prof. Riis' homepage http://www.dcs.qmul.ac.uk/~smriis/

