To see a video of the talks by our keynote speakers, please click on the link below their photo.

07/06/2011 @ 10:15, room CO1: Machines Reasoning about Machines. Prof. J Strother Moore, University of Texas at Austin, Keynote Speaker
Computer hardware and software can be modeled precisely in mathematical logic. If expressed appropriately, these models can be executable. An "appropriate" logic is an axiomatically formalized functional programming language. This allows models to be used as simulation engines or rapid prototypes. But because they are formal, they can be manipulated by symbolic means: theorems can be proved about them, directly, with mechanical theorem provers. But how practical is this vision of machines reasoning about machines? In this highly personal talk, I will describe the 40-year history of the "Boyer-Moore Project" and discuss progress toward making formal verification practical. Among other examples, I will describe important theorems about commercial microprocessor designs, including parts of processors by AMD, Motorola, IBM, Rockwell Collins, and others. Some of these microprocessor models execute at 90% of the speed of C models and have had important functional properties verified. In addition, I will describe a model of the Java Virtual Machine, including class loading and bytecode verification, and the proofs of theorems about JVM methods. Prof. J Strother Moore's homepage

08/06/2011 @ 10:15, room BC 01: Distinguishing paths. Prof. David Monniaux, CNRS, Vérimag
Usual techniques in abstract interpretation apply "join" operations when control flows in from several nodes. Unfortunately, these joins introduce imprecision, which can make it impossible to prove the desired properties. Modern SMT-solving techniques allow enumerating paths "on demand". We shall see how such path techniques may be combined with conventional polyhedral analysis, quantifier elimination, or policy iteration, in order to obtain more precise invariants. This is joint work with Laure Gonnord (Université Lille I / LIFL) and Thomas Gawlitza (then CNRS / VERIMAG, now at INRIA). Prof. Monniaux's homepage
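
A toy illustration of the imprecision described above (a generic interval-domain sketch, not Prof. Monniaux's actual method):

```python
# A minimal sketch: interval abstract interpretation loses precision at
# join points, while enumerating paths separately keeps it.

def join(a, b):
    """Interval join: smallest interval containing both."""
    return (min(a[0], b[0]), max(a[1], b[1]))

# Two branches: one sets x = 0, the other sets x = 10.
branch1 = (0, 0)
branch2 = (10, 10)

# Conventional analysis joins at the merge point:
merged = join(branch1, branch2)          # (0, 10)
# The join now admits x = 5, which no concrete execution produces,
# so a property like "x != 5" can no longer be proved.

# Path-sensitive analysis keeps one abstract value per path:
per_path = [branch1, branch2]
provable = all(not (lo <= 5 <= hi) for lo, hi in per_path)

print(merged, provable)  # (0, 10) True
```

Path enumeration recovers the disjunctive information the join discards, which is what the SMT-based path techniques above exploit on demand.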

08/06/2011 @ 16:15, room BC 01: The End of Anonymity, The Beginning of Privacy. Prof. Vitaly Shmatikov, University of Texas at Austin
The Internet economy relies on the collection and aggregation of personal data on an ever-increasing scale. Information about our tastes, purchases, searches, browsing history, social relationships, health history, genetics, and so forth is shared with advertisers, marketers, and researchers, who use it to predict individual behavior and provide personalized product offerings, recommendations, and even clinical treatments. I will survey privacy issues caused by massive aggregation of personal information. After demonstrating that the existing methods for "anonymizing" data fail to provide meaningful privacy protection, I will describe new approaches to privacy-preserving computation. This includes Airavat, a new system for large-scale data analysis which integrates mandatory access control and differential privacy. Prof. Shmatikov's homepage
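
As background for the differential-privacy side of the talk, here is a minimal sketch of the standard Laplace mechanism; the function names and the data below are invented for illustration, and Airavat's machinery is far more elaborate:

```python
# Toy epsilon-differentially-private counting query via Laplace noise.
import math
import random

def laplace_noise(scale, rng):
    """Sample from Laplace(0, scale) by inverse CDF."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 47, 38, 61, 19, 44]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
print(noisy)  # true count is 5, plus Laplace(0, 2) noise
```

The point is that the released value is useful in aggregate while any single individual's presence changes the output distribution only by a factor of e^epsilon.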

14/06/2011 @ 10:15, room BC 01: Spectrum sensing under uncertain channel modelling. Prof. Ezio Biglieri, Universitat Pompeu Fabra, Barcelona
We examine spectrum sensing in a situation of uncertain channel modeling. In particular, we assume that, besides additive noise, the observed signal contains an interference term whose probability distribution is unknown, and only its range and maximum power are known. We discuss the evaluation of the detector performance and its design in this situation. Although this talk specifically deals with the design of spectrum sensors, its scope is wider, as the applicability of its results extends to a general class of problems that may arise in the design of receivers whenever there is uncertainty about how to model the environment in which one is expected to operate. The theory expounded here allows one to determine the performance of a receiver, by combining the available (objective) probabilistic information with (subjective) information describing the designer's attitude. Prof. Biglieri's homepage

14/06/2011 @ 11:15, room BC 01: Models for Optimization of Industrial Research Management. Dr Debasis Mitra, Bell Labs, Lucent Technologies
We describe work on modeling industrial laboratories and the use of the models for optimization and control of management processes. The industrial laboratory is viewed as analogous to a knowledge factory, where research is done at the first stage and is followed by development stages, which terminate with handoff to the market or other organizations. Uncertainty is pervasive in this framework: there is randomness in the processing time at each stage, and stochastic processes govern the evolution of project values through the stages. Management is twofold: project management and investment management. Option value is the methodology for project management. It relies on infrastructure support that enables information on project values to be collected after each stage. Such information allows calculation of the option value, which is the expected terminal value of a project taking future decisions into account. This allows mid-course decisions on whether a project should continue or be terminated; a project is terminated when its option value drops to zero. Investment optimization concerns the allocation of given budgets to the resources in the various stages. Yet another important management mechanism considered here is controlled access to resources and servers in the development stages: projects with higher option values are given priority in accessing resources. For the special case of two stages, research and development, we consider a combined model of investment optimization in the presence of optimal project and resource management. For this case, results from an asymptotic analysis provide insights into optimal management decisions. Finally, time permitting, we will describe preliminary results on an endogenous growth model on a slow time scale. Throughout the talk we give numerical examples to illustrate the concepts. Prof. Mitra's homepage
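
A toy sketch of the option-value rule described above (all names and numbers are invented; the talk's stochastic models are much richer):

```python
# Illustrative option-value computation for a single remaining
# development stage, with the terminate-at-zero rule from the talk.

def option_value(terminal_values, stage_cost):
    """Expected terminal value of a project, accounting for the future
    decision to abandon: if continuing is worth less than it costs,
    the option is worth zero and the project is terminated."""
    expected = sum(terminal_values) / len(terminal_values)
    return max(0.0, expected - stage_cost)

# Equally likely market outcomes after development, and the cost of
# the remaining development stage (invented numbers):
ov_good = option_value([0, 10, 50], stage_cost=15)   # 20 - 15 = 5.0
ov_bad  = option_value([0, 5, 10], stage_cost=15)    # 5 - 15 < 0 -> 0.0

# Management rule from the talk: terminate when option value hits zero.
decisions = {"project_a": ov_good > 0, "project_b": ov_bad > 0}
print(ov_good, ov_bad, decisions)
```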

14/06/2011 @ 14:15, room BC 01: Near-Real-Time Context & Activity Extraction via Mobile Sensing: Advances & Challenges. Prof. Archan Misra, Singapore Management University

14/06/2011 @ 15:15, room BC 01: Trust Management: reputation and computational trust propagation. Prof. Adam Wierzbicki, Polish-Japanese Institute of Information Technology

14/06/2011 @ 17:15, room BC 01: The Socioeconomics of Pervasive Systems. Prof. David Hales, Open University

15/06/2011 @ 09:15, room BC 01: Submodular Function Optimization in Sensor and Social Networks. Prof. Andreas Krause, ETH Zürich

15/06/2011 @ 10:15, room BC 01: Detecting Earthquakes and other Rare Events from Community-based Sensors. Prof. Andreas Krause, ETH Zürich

15/06/2011 @ 11:15, room BC 01: Mobile sensing applied to psychology research: challenges and opportunities. Prof. Cecilia Mascolo, University of Cambridge
The talk will describe how sensing has extended to mobile computing through the use of mobile phones to monitor everyday activities of users. I will describe how the various sensors on a phone can be used and combined to monitor activity, and survey the projects that have started to use this approach. In particular, I will describe our EmotionSense system, which senses user behaviour and mood to support social psychology research. I will further illustrate how this could be integrated into mobile social network applications. Prof. Mascolo's homepage

15/06/2011 @ 14:15, room BC 01: Fundamental Rate-Reliability-Complexity Limits in Outage-Limited MIMO Communications. Prof. Petros Elia, EURECOM
The work establishes fundamental limits with respect to rate, reliability, and computational complexity for encoding and decoding in a general setting of outage-limited MIMO communications. In the high-SNR regime, the limits are optimized over all encoders, all decoders, and all complexity-regulating policies. The work then proceeds to explicitly identify encoder-decoder designs and policies that meet this optimal tradeoff. In practice, the limits aim to meaningfully quantify different pertinent measures, such as the optimal rate-reliability capabilities per unit complexity and power, the optimal diversity gains per complexity cost, or the optimal number of numerical operations (i.e., flops) per bit. Finally, the tradeoff's simple nature renders it useful for insightful comparison of the rate-reliability-complexity capabilities of different encoders and decoders. In the same setting, we identify the first lattice decoding solution that achieves both a vanishing gap to the error performance of the exact solution of lattice decoding and a computational complexity that is subexponential in the rate and in the problem dimensionality. This complexity behavior is rigorously demonstrated here for the first time; it is also proven that for most common codes, the complexity cost of asymptotically optimal regularized lattice (sphere) decoding is exponential in the rate, and in many cases it in fact matches the complexity cost of ML (bounded) sphere decoding. In light of the fact that, prior to this work, a vanishing gap was generally attributed only to full lattice searches of exponential complexity, whereas subexponential complexity was generally attributed to early-terminated (linear) solutions having a performance gap that can be up to exponential in dimension and/or rate, the work constitutes the first proof that subexponential complexity need not come at the cost of exponential reductions in lattice decoding error performance.
Finally, the performance and complexity guarantees in this work hold for the most general MIMO setting, for all reasonable fading statistics, all channel dimensions, and all lattice codes. Prof. Elia's homepage

15/06/2011 @ 15:15, room BC 01: Uncertain Schema Matching: the Power of not Knowing. Prof. Avigdor Gal, Technion - Israel Institute of Technology
Schema matching is the task of providing correspondences between concepts describing the meaning of data in various heterogeneous, distributed data sources. It is one of the basic operations required by the process of data and schema integration, and thus has a great effect on its outcomes, whether these involve targeted content delivery, view integration, database integration, query rewriting over heterogeneous sources, duplicate data elimination, or automatic streamlining of workflow activities that involve heterogeneous data sources. Although schema matching research has been ongoing for over 25 years, only recently has the realization emerged that schema matchers are inherently uncertain. Since 2003, work on uncertainty in schema matching has picked up, along with research on uncertainty in other areas of data management. This lecture presents the benefits of modelling schema matching as an uncertain process and shows a single unified framework for it. We also briefly cover two common methods that have been proposed to deal with uncertainty in schema matching, namely ensembles and top-K matchings. The talk is based on a recent manuscript, part of the "Synthesis Lectures on Data Management" series by Morgan & Claypool. Prof. Gal's homepage
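
A brute-force sketch of the top-K idea mentioned above (illustrative only: the similarity matrix is invented, and real matchers do not enumerate permutations):

```python
# Rank one-to-one attribute matchings between two 3-attribute schemas
# by total similarity, and keep the K best. Returning several
# candidates instead of one "best" matching is one way to expose the
# matcher's inherent uncertainty to downstream processing.
from itertools import permutations

sim = [[0.9, 0.1, 0.0],
       [0.2, 0.8, 0.3],
       [0.0, 0.4, 0.7]]

def top_k_matchings(sim, k):
    n = len(sim)
    scored = [(sum(sim[i][p[i]] for i in range(n)), p)
              for p in permutations(range(n))]
    scored.sort(key=lambda t: -t[0])
    return scored[:k]

best_two = top_k_matchings(sim, 2)
print(best_two)  # identity matching first, then (0, 2, 1)
```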

15/06/2011 @ 16:15, room BC 01: Managing Uncertain Databases. Prof. Reynold Cheng, The University of Hong Kong
Data uncertainty is inherent in emerging applications such as pervasive computing, sensor networks, biological databases, and data integration. In this seminar, we will study some interesting issues that we encountered during the development of ORION, a system prototype built to handle the storage and retrieval of large amounts of uncertain information. We will cover topics such as uncertain data modeling, probabilistic query classification, query quality, and uncertain data indexing. Prof. Cheng's homepage

20/06/2011 @ 10:15, room CE 3: Applied Logic. Prof. Tony Hoare, Microsoft Research, Keynote Speaker
Logic is the study of the rules of valid reasoning. Its direction of development has always been inspired by its intended areas of application. When first codified by Aristotle, it was applied to reasoning in philosophy, which then included biology and other branches of science. In classical times it was applied by Euclid to land measurement (geometry); in medieval Europe it was applied by William of Occam and others to theology. Leibniz hoped that logic could transform the art of mathematical proof into calculation, as in the infinitesimal calculus. Logic was developed further in modern times to secure the foundations of mathematics, protecting against inconsistency of its axioms and checking for errors and omissions in individual mathematical proofs. Most recently, logic has been applied in the design of the hardware of binary digital computers and in the development of the software programs which make them usable. The consequent reduction in the cost of errors in both hardware and software is estimated at billions of dollars per year. The aim of this lecture is to show that Computer Science belongs to a great intellectual and academic tradition that stretches back into antiquity. And the future is equally exciting: computers are now getting ever more proficient at logical and mathematical reasoning, and their capabilities are being widely exploited in the humanities, in biology, and in other branches of science.

20/06/2011 @ 14:15, room BC 01: Relational Analysis of Nondeterministic Integer Programs. Dr Radu Iosif, Vérimag, CNRS and University of Grenoble
Automatic inference of function summaries is the key to compositional verification of numerical programs. In the first part of this talk I will present an algorithm for verifying hierarchical non-recursive programs, based on an efficient computation of transitive closures for several classes of program loops: difference bounds, octagonal, and finite-monoid affine relations.
On the theoretical side, this provides a common solution to the acceleration problem for all three classes of relations. In practice, according to our experiments, the new method performs up to four orders of magnitude better than previous ones for the first two classes.
In the second part of the talk, I explain how transitive closure computation combines with other methods (quantifier elimination, abstraction, etc.) in order to provide a framework for the analysis of such systems. These ideas led us to the implementation of the FLATA tool.
Joint work with Marius Bozga and Filip Konecny.
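
A minimal sketch of the acceleration idea behind the talk: for simple enough loop relations, the k-fold composition has a computable closed form. The relation and domain below are invented toys; FLATA handles the far richer classes named above:

```python
# Accelerating the loop body x' = x + 2: instead of iterating the
# relation, use the closed form x' = x + 2k for its k-th power, and
# check it against explicit composition over a small finite domain.

def compose(r1, r2):
    """Compose two relations given as sets of (x, x') pairs."""
    return {(x, z) for (x, y) in r1 for (y2, z) in r2 if y == y2}

domain = range(0, 20)
step = {(x, x + 2) for x in domain if x + 2 in domain}

def accelerated(k):
    """Closed form of the k-fold composition: x' = x + 2k."""
    return {(x, x + 2 * k) for x in domain if x + 2 * k in domain}

r = step
ok = True
for k in range(2, 5):
    r = compose(r, step)
    ok = ok and (r == accelerated(k))
print(ok)  # True: closed form matches explicit composition
```

In the actual technique the closed form is a formula parameterized by k, so the whole transitive closure is obtained without bounding k.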

20/06/2011 @ 15:15, room BC 01: Specification-Centered Robustness. Dr Barbara Jobstmann, Vérimag, CNRS and University of Grenoble
In addition to being correct, a system should be robust, that is, it should react reasonably even to unexpected inputs. In this talk, I will summarize two formal notions of robustness that we have introduced previously for reactive systems. The first notion assigns costs to failures based on a user-provided notion of an incorrect transition in a specification. Here, we define a system to be robust if a finite number of incorrect inputs does not lead to an infinite number of incorrect outputs. We also give a more refined notion of robustness that aims to minimize the ratio of output failures to input failures. The second notion is aimed at liveness. In contrast to the first, it has no concept of recovery from an error; instead, it considers the ratio of the number of liveness constraints that the system violates to the number of liveness constraints that the environment violates. Dr Jobstmann's homepage

20/06/2011 @ 16:15, room BC 01: Uniform Reduction to SAT and SMT. Prof. Predrag Janicic, University of Belgrade
SAT is the problem of satisfiability in propositional logic, and satisfiability modulo theories (SMT) is the problem of satisfiability with respect to a background theory (e.g., linear arithmetic, the theory of arrays) expressed in classical first-order logic with equality. Over the last decade, SAT and SMT solvers have made tremendous advances, both theoretical and practical. They have a wide range of applications, primarily in software verification, planning, scheduling, cryptanalysis, electronic design automation, etc. Typically, problems are reduced to SAT/SMT by problem-specific, ad-hoc tools. In this talk, a general framework for reducing problems to SAT/SMT, URSA Major, will be presented. The system can also be considered a general modelling system for specifying and solving a wide class of problems (e.g., NP problems). The specification language is C-like, but based on a novel imperative-declarative programming paradigm. A problem is specified by a test (expressed in imperative form) that checks whether given values indeed make a solution to the problem. From a problem specification, a corresponding propositional or first-order formula (in a suitable theory) is generated and passed to a SAT/SMT solver. If the formula is satisfiable, its model is transformed back into values of the unknowns. The system will be illustrated by examples from several domains. Prof. Janicic's homepage
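
A minimal sketch of the "specification by test" paradigm described above (not URSA Major itself, which compiles the test into a SAT/SMT formula; here the search is naive enumeration, and the factoring problem is an invented example):

```python
# The problem is specified as an imperative check of a candidate
# solution; a solver then searches for values that pass the check.

def is_solution(x, y):
    """Imperative specification: do the given values factor 391
    into two non-trivial factors?"""
    return 1 < x <= y and x * y == 391

def solve():
    """Stand-in for the SAT/SMT search: plain enumeration."""
    for x in range(2, 392):
        for y in range(x, 392):
            if is_solution(x, y):
                return (x, y)
    return None

print(solve())  # (17, 23)
```

The point of the paradigm is that the user only writes the checker; turning it into an efficient symbolic search is the system's job.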

21/06/2011 @ 10:15, room BC 01: Boolean Satisfiability: From Theoretical Hardness to Practical Success. Prof. Sharad Malik, Princeton University
Boolean Satisfiability (SAT) is the problem of checking whether a propositional logic formula can ever evaluate to true. This problem has long enjoyed a special status in computer science. On the theoretical side, it was the first problem to be classified as NP-complete. On the practical side, SAT manifests itself in several important application domains, such as the design and verification of hardware and software systems, as well as applications in artificial intelligence. Thus, there is strong motivation to develop practically useful SAT solvers. However, NP-completeness is cause for pessimism, since it seems unlikely that we will be able to scale the solutions to large practical instances. While attempts to develop practically useful SAT solvers have persisted for almost half a century, for the longest time it was a largely academic exercise with little hope of seeing practical use. Fortunately, several relatively recent research developments have enabled us to tackle instances with even millions of variables and constraints, enabling SAT solvers to be effectively deployed in practical applications. The problem is even (erroneously?) considered to be significantly tamed. This talk will explore this transition from theoretical hardness to practical success, including a description of the key ideas used in modern SAT solvers and future research directions. Prof. Malik's homepage
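
For background, a minimal DPLL-style solver conveys the core search loop that modern solvers build on (they add clause learning, watched literals, and restarts); this sketch is illustrative, not Prof. Malik's material:

```python
# Tiny DPLL: unit propagation plus splitting on an unassigned variable.

def dpll(clauses, assignment=None):
    """clauses: list of lists of non-zero ints (negative = negated).
    Returns a satisfying assignment dict {var: bool}, or None."""
    if assignment is None:
        assignment = {}
    clauses = simplify(clauses, assignment)
    if clauses is None:              # some clause became empty
        return None
    if not clauses:                  # all clauses satisfied
        return assignment
    # Unit propagation: a one-literal clause forces its assignment.
    for c in clauses:
        if len(c) == 1:
            lit = c[0]
            return dpll(clauses, {**assignment, abs(lit): lit > 0})
    # Split on the first unassigned variable.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

def simplify(clauses, assignment):
    """Drop satisfied clauses and false literals; None on conflict."""
    out = []
    for c in clauses:
        kept, satisfied = [], False
        for lit in c:
            var, want = abs(lit), lit > 0
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return None              # conflict: empty clause
        out.append(kept)
    return out

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and x1
model = dpll([[1, 2], [-1, 3], [-2, -3], [1]])
print(model)  # {1: True, 3: True, 2: False}
```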

21/06/2011 @ 11:15, room BC 01: In praise of algebra. Prof. Tony Hoare, Microsoft Research
George Boole was a pioneer of the algebraic presentation of logic. I will suggest algebra as a simple way of presenting and reasoning about the semantics of broad families of programming languages. It neatly characterises the similarities and differences between members of the family, and the choices that are made in design of new languages. Denotational, operational, and deductive semantic presentations can be readily derived from algebraic axioms by algebraic reasoning. When a programming language is extended (for example, by introduction of nondeterminism and concurrency), all the earlier and simpler derivations and theorems remain valid. Prof. Hoare's homepage
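
A small computational illustration of the algebraic view (a standard relational-semantics sketch, not Prof. Hoare's exact calculus):

```python
# Model programs as relations on a finite state space and check
# familiar algebraic laws by direct computation.

STATES = range(4)

def seq(p, q):
    """Sequential composition of programs-as-relations."""
    return {(s, u) for (s, t) in p for (t2, u) in q if t == t2}

skip = {(s, s) for s in STATES}            # the unit of composition
incr = {(s, (s + 1) % 4) for s in STATES}
choice = incr | skip                        # nondeterministic choice

# Laws: composition is associative, skip is its unit, and composition
# distributes over nondeterministic choice.
assoc = seq(seq(incr, choice), incr) == seq(incr, seq(choice, incr))
unit = seq(incr, skip) == incr == seq(skip, incr)
distrib = seq(choice, incr) == seq(incr, incr) | seq(skip, incr)
print(assoc, unit, distrib)  # True True True
```

Extending the language here means adding an operator (like `choice`) and its laws; the earlier laws remain valid, mirroring the point of the talk.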

21/06/2011 @ 14:15, room BC 01: Scalable Software Model Checking for Finding Bugs in the Large. Dr Aarti Gupta, NEC Laboratories America
Software model checking has become popular due to the success of predicate abstraction and refinement techniques in finding API usage bugs in programs. In this talk, I will describe an alternate approach we have taken for finding memory safety bugs (such as null pointer dereferences, array buffer overflows, string errors, and memory leaks) and violations of API usage and user-specified assertions in C programs.
The highlights of our approach are:
- accurate memory modeling, including an analysis-friendly representation;
- powerful static analyses based on abstract interpretation, to simplify models and find relatively easy proofs of correctness;
- efficient symbolic model checking, to verify the unresolved properties and generate concrete error traces.
To improve scalability on large software projects, we utilize a DC2 (Depth Cutoff with Design Constraints) framework. DC2 creates manageable partitions based on scope bounding, and uses automated specification inference techniques to derive environment constraints for partitions and stubs for missing code. Furthermore, it can automatically suggest refinements of the environment constraints based on counterexamples generated by a model checker, if a user deems a reported witness to be a false positive. We have implemented these ideas in the F-Soft verification platform, which has been used successfully to find bugs in many large open-source as well as industrial software projects. Dr Gupta's homepage

21/06/2011 @ 15:15, room BC 01: End-to-end arguments in embedded control systems. Prof. Rupak Majumdar, Max Planck Institute for Software Systems
There is a semantic gap between the mathematics of control theory and the implementation of controllers on top of real-time operating systems. We show two examples, stability analysis and controller-scheduler co-design, of verification algorithms that perform an end-to-end analysis of control system implementations, translating the mathematical analysis of the control system into properties of the implementation. We also outline a theory of robustness for discrete systems, inspired by similar theories in robust control. Prof. Majumdar's homepage

21/06/2011 @ 16:15, room BC 01: IMUnit: Improved Multithreaded Unit Testing. Prof. Darko Marinov, University of Illinois at Urbana-Champaign
Multithreaded code is increasingly important but remains extremely hard to develop and test. Most recent research on testing multithreaded code focuses solely on finding bugs in one given version of the code. While there are many promising results, the tools are fairly slow because they explore a large number of thread schedules and do not exploit the fact that code evolves through several versions during development and maintenance. Our overarching goal is to improve (regression) testing of multithreaded code. This talk focuses on a novel approach to specifying and executing schedules for multithreaded tests. Traditionally, developers enforce a particular schedule with time delays, e.g., using Thread.sleep in Java. Unfortunately, this sleep-based approach can produce false positives or negatives, and can result in unnecessarily long testing time. We introduce a new language that allows specifying schedules as constraints on the events during test execution. We provide a tool that automatically controls the code to execute the specified schedule and a tool that helps developers migrate their legacy, sleep-based tests into event-based tests in our language. The latter tool uses new techniques for inferring events and schedules from the executions of sleep-based tests. We describe our experience in migrating over 200 tests. The inference techniques have high precision and recall, over 75%, and our approach reduces testing time compared to sleep-based tests, on average by 3.4x. This is joint work with Milos Gligoric, Vilas Jagannath, Dongyun Jin, Qingzhou Luo, and Grigore Rosu. Prof. Marinov's homepage
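
The sleep-versus-event contrast can be sketched in a few lines (IMUnit itself is a Java tool with its own schedule language; this Python toy only illustrates replacing a timing assumption with an explicit event):

```python
# Event-based ordering between two test threads: the consumer blocks
# on an explicit event instead of sleeping and hoping the producer
# has already run.
import threading

log = []
produced = threading.Event()

def producer():
    log.append("produced")
    produced.set()                 # announce the event

def consumer():
    # Event-based: block until the producer's event. A sleep-based
    # test would use time.sleep(0.1) here, which is both slow and
    # flaky: the producer might still not have run.
    produced.wait()
    log.append("consumed")

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start()
t2.start()
t1.join()
t2.join()
print(log)  # always ["produced", "consumed"], with no delay
```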

22/06/2011 @ 10:15, room BC 01: Coping with Multithread Schedules for Detecting Concurrency Bugs. Dr Aarti Gupta, NEC Laboratories America
Verification of multithreaded programs has to grapple with the core problem of handling an explosive number of thread schedules. In this talk, I will describe three main approaches we have proposed to cope with this problem, spanning a range of static and dynamic analyses.
- The first is a static analysis framework for semantic reduction of thread interleavings, where we iteratively refine a transaction graph that captures the interleavings, by deriving sound invariants using abstract interpretation. For data race detection, this framework serves to reduce the number of false warnings captured by an initial lockset analysis, and enables the use of model checking on the remaining warnings to generate concrete error traces.
- The second approach combines dynamic testing with predictive analysis. Given a test execution of the multithreaded program, we derive a symbolic predictive model to explore all (or some) alternative thread schedules over the same events, and use an SMT solver to find violations.
- The third approach is based on systematic testing, where we propose a coverage-based metric to choose which interleavings to explore during stateless model checking. Although unsound in principle, the metric works well in practice to catch many bugs, while significantly improving performance in comparison to other methods such as dynamic partial order reduction and preemptive context bounding. Dr Gupta's homepage

22/06/2011 @ 13:15, room BC 01: Iterated Regret Minimization: A New Solution Concept. Prof. Joseph Y. Halpern, Cornell University
For some well-known games, such as the Traveler's Dilemma or the Centipede Game, traditional game-theoretic solution concepts, most notably Nash equilibrium, predict outcomes that are not consistent with empirical observations. We introduce a new solution concept, iterated regret minimization, which exhibits the same qualitative behavior as that observed in experiments in many games of interest, including the Traveler's Dilemma, the Centipede Game, Nash bargaining, and Bertrand competition. As the name suggests, iterated regret minimization involves the iterated deletion of strategies that do not minimize regret. We provide an epistemic characterization of iterated regret minimization and show how it works in many settings. Prof. Halpern's homepage
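
The solution concept can be made concrete on the Traveler's Dilemma (claims 2..100, bonus/penalty 2); this sketch follows the definition of iterated deletion given above, not the full epistemic framework:

```python
# Iterated regret minimization on the Traveler's Dilemma.

def payoff(a, b, bonus=2):
    """The lower claim is paid to both, with a bonus to the lower
    claimant and a penalty to the higher."""
    if a < b:
        return a + bonus
    if a > b:
        return b - bonus
    return a

def regret_minimizers(strategies):
    """Strategies whose worst-case regret (shortfall from the best
    response to each opponent play) is minimal."""
    def max_regret(a):
        return max(max(payoff(x, b) for x in strategies) - payoff(a, b)
                   for b in strategies)
    regrets = {a: max_regret(a) for a in strategies}
    best = min(regrets.values())
    return [a for a in strategies if regrets[a] == best]

# Iterate deletion of non-regret-minimizing strategies to a fixed point.
s = list(range(2, 101))
while True:
    nxt = regret_minimizers(s)
    if nxt == s:
        break
    s = nxt
print(s)  # [97]
```

One round keeps claims 96 through 100; a second round singles out 97, close to the claims people actually make, whereas the unique Nash equilibrium is 2.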

22/06/2011 @ 14:15, room BC 01: BotGrep: Detecting botnets via structured graph analysis. Prof. Shishir Nagaraja, Indraprastha Institute of Information Technology, Delhi (IIITD)
In this talk I shall first highlight the impact of surveillance botnet attacks and discuss the high-level design of such botnets, using a real-world attack as a case study. We will discuss the reasons behind their spectacular successes despite their centralized design, a fundamental weakness that limits their scalability and robustness. Botnets designed primarily as vehicles for economic crime reached these limits a while back. As a consequence, they moved to more decentralized designs based on structured overlay topologies. While this has allowed them to scale to vast numbers, it can also be used as a point of detection. In the second part of the talk, I shall present O(n log n) techniques to localize botnet members based on the unique communication patterns arising from the overlay topologies used for command and control. Experimental results on synthetic topologies embedded within Internet traffic traces from an ISP's backbone network indicate that our techniques (i) can localize the majority of bots with a low false positive rate, and (ii) are resilient to the partial visibility arising from partial deployment of monitoring systems, and to measurement inaccuracies arising from partial visibility and the dynamics of background traffic. Prof. Nagaraja's homepage

22/06/2011 @ 15:15, room BC 01: Generalized Sampling and Infinite Dimensional Compressed Sensing. Prof. Anders Christian Hansen, University of Cambridge
I will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases. Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via compressed sensing techniques; however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle, I'll introduce the concept of Infinite Dimensional Compressed Sensing. Prof. Hansen's homepage

22/06/2011 @ 16:15, room BC 01: Generalized Approximate Message Passing Algorithms with Applications to Inter-Cellular Interference Coordination. Prof. Sundeep Rangan, Polytechnic Institute of New York University
Message passing algorithms on graphical models have been highly successful in inference and optimization problems in a wide range of fields. This talk considers graphical models with linear mixing, which model large systems where the interactions between components are through aggregates of small, linearizable perturbations. For such systems, I present a new class of estimation and optimization algorithms called generalized approximate message passing (GAMP), based on Gaussian approximations of loopy belief propagation. GAMP is computationally simple and significantly more general than earlier AMP methods. Also, for the case of large, random graphs, a state evolution analysis provides sharp, easily characterizable asymptotics with verifiable conditions for optimality. The potential of the GAMP algorithm is far-reaching, and in this talk I will discuss applications to inter-cellular interference coordination (ICIC) in next-generation cellular wireless networks. Dynamic interference coordination has emerged as a central challenge as networks introduce femtocells and unplanned, heterogeneous deployments. I will show that the GAMP method provides a tractable approach to distributed interference coordination that is computationally simple, has low communication overhead, and can incorporate complex physical-layer mechanisms including beamforming, subband frequency allocation, and dynamic orthogonalization. Joint work with Ritesh Madan, Qualcomm. Prof. Rangan's homepage

23/06/2011 @ 09:15, room BC 01: Vanishing Signals: Trading Agent Kills Market Information (Evidence from a Natural Experiment in Online Social Lending). Prof. Jens Grossklags, Pennsylvania State University
Advances in information technology enable the creation of new markets as social coordination mechanisms in numerous ways, including those where humans and computer algorithms interact. Because humans and computers differ in their capabilities to emit and process market signals, there is a need to understand what determines the aggregate level of signaling. We frame the arising signaling regime as alternative equilibrium outcomes in a coordination game. More specifically, we tackle the general research question from the perspective of new electronic credit markets. On online social lending platforms, borrowers' offers typically contain detailed personal information next to hard facts such as credit ratings. We investigate whether a change of the market mechanisms in the form of the introduction of an automated trading agent shifts the dynamics of information revelation from a high-effort signaling norm to a low-effort information equilibrium. After developing the theoretical background for our analysis, we scrutinize our hypothesis in the form of a natural experiment at Smava.de and find strong support for our proposition. Prof. Grossklags' homepage

23/06/2011 @ 10:15, room BC 01: The latest and greatest in network coding. Prof. Muriel Médard, MIT

23/06/2011 @ 11:15, room BC 01: From the dullest fields of authentication to the greener pastures. Dr Pedro Peris-Lopez, Delft University of Technology
In recent times, many RFID authentication protocols, generally short-lived, have been proposed with only moderate success. The vast majority of RFID authentication protocols assume proximity between readers and tags, due to the limited range of the radio channel. However, in real scenarios an intruder can be located between the prover (tag) and the verifier (reader) and trick the latter into thinking that the former is in close proximity. This attack is generally known as a relay attack and can come in different shapes, including distance fraud, mafia fraud, and terrorist fraud. Distance-bounding protocols represent a promising countermeasure to hinder relay attacks, and some interesting proposals have appeared lately. Another very exciting problem, introduced by Juels in 2004, is how to show that two tags have been scanned simultaneously. This could potentially have numerous real-life applications. He called this kind of evidence a yoking-proof, and it is expected to be verifiable offline. Later, some authors suggested generalizing the proof to a larger number of tags. This is another very promising and still quite unexplored research field. Prof. Peris-Lopez's homepage

23/06/2011 @ 14:15, room BC 01
Green Mobile Clouds and the Use of Network Coding
Prof. Frank Fitzek, University of Aalborg, Denmark

23/06/2011 @ 15:15, room BC 01
Mechanism Design
Dr. Milan Vojnovic, Microsoft Research Lab
A mechanism is a rule that determines the allocation of a resource and the payments across individual users. The theory of mechanism design looks into how to design mechanisms that have good properties with respect to:
* Truthfulness
* Revenue
* Social welfare
What makes the area extremely important and of much research interest are its numerous applications, including sponsored search and the allocation of network and computational resources. In this tutorial, I will focus on the revenue and social welfare properties of various standard mechanisms and give an update on the state-of-the-art research in the area. Dr Vojnovic's homepage
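The truthfulness property is easiest to see in the simplest standard mechanism, the single-item second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, so no bidder can gain by misreporting their value. A minimal sketch (the function names and the example numbers are mine, not from the talk):

```python
def second_price_auction(bids):
    """Sealed-bid single-item auction: highest bidder wins,
    pays the second-highest bid. bids: dict bidder -> bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    payment = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, payment

def utility(true_value, my_bid, other_bids):
    """Quasi-linear utility of bidder 'me': value minus payment if I win, else 0."""
    winner, payment = second_price_auction(dict(other_bids, me=my_bid))
    return true_value - payment if winner == "me" else 0.0

# Truthful bidding (bid = true value = 10) is never beaten by any deviation:
others = {"a": 7.0, "b": 4.0}
truthful = utility(10.0, 10.0, others)           # wins, pays 7, utility 3
assert all(utility(10.0, b, others) <= truthful
           for b in [0.0, 5.0, 8.0, 12.0, 100.0])
```

Note that the winner's payment depends only on the other bids, which is exactly why shading one's own bid cannot help.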

23/06/2011 @ 16:15, room BC 01
Zero-error codes for the noisy-typewriter channel
Prof. Fernando Perez-Cruz, University Carlos III in Madrid
In this talk we propose non-trivial codes that achieve a non-zero zero-error rate for several odd-letter noisy-typewriter channels. Some of these codes (specifically, those defined for channels whose number of letters is of the form 2^n+1) achieve the best-known lower bound on the zero-error capacity. We build the codes using linear codes over rings, as we do not require the multiplicative inverse to build them. This is joint work with Francisco Ruiz-Rodriguez. Prof. Perez-Cruz's homepage
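As a concrete illustration of a zero-error code, consider Shannon's classic construction for the 5-letter typewriter channel (this is the textbook example, not the ring-based construction of the talk): each input letter is received either unchanged or cyclically shifted by one, so two inputs are confusable iff they are equal or adjacent mod 5. The length-2 code {(i, 2i mod 5)} packs five codewords such that no two are confusable in both coordinates, achieving a zero-error rate of log2(5)/2 bits per channel use:

```python
from itertools import combinations
import math

Q = 5  # number of letters of the odd-letter typewriter channel

def confusable(a, b, q=Q):
    """Two inputs can produce the same output iff equal or cyclically adjacent."""
    return (a - b) % q in (0, 1, q - 1)

# Shannon's length-2 code for the 5-letter channel: codeword (i, 2i mod 5).
code = [(i, (2 * i) % Q) for i in range(Q)]

# Zero error: no two distinct codewords are confusable in *every* coordinate,
# so the receiver can always tell them apart.
for u, v in combinations(code, 2):
    assert not all(confusable(a, b) for a, b in zip(u, v))

rate = math.log2(len(code)) / 2  # log2(5)/2 ≈ 1.16 bits per use
```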

24/06/2011 @ 10:15, room BC 01
Refinement of Two Fundamental Tools in Information Theory
Prof. Raymond Yeung, The Chinese University of Hong Kong
In Shannon's original paper and in information theory textbooks, the entropy of a discrete random variable is assumed or shown to be a continuous function. However, we found that all of Shannon's information measures, including entropy and mutual information, are discontinuous in the general case where the random variables take values in possibly countably infinite alphabets. This fundamental property explains why strong typicality and Fano's inequality can only be applied to finite alphabets. Since strong typicality and Fano's inequality have wide applications in information theory, it is important to extend them in full generality. In this talk, details about the discontinuity of Shannon's information measures will be given. We will show how these results lead to a new definition of typicality and an inequality tighter than Fano's inequality. Applications in network coding and information-theoretic security will be discussed. Prof. Yeung's homepage
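For reference, the classical inequality being refined here: for an estimate \hat{X} of X formed from Y, with error probability P_e = Pr{\hat{X} ≠ X} and binary entropy function h, Fano's inequality states

```latex
H(X \mid Y) \;\le\; h(P_e) + P_e \log\bigl(\lvert \mathcal{X} \rvert - 1\bigr).
```

The term |𝒳| − 1 makes clear why the bound is vacuous when the alphabet 𝒳 is countably infinite, which motivates the tighter variant discussed in the talk.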

24/06/2011 @ 11:15, room BC 01
Tree-structured algorithms applied to LDPC decoding
Prof. Juan José Murillo-Fuentes, Universidad de Sevilla
A great deal of effort has been devoted to the design of LDPC codes that achieve capacity under BP decoding. In this work we focus instead on the decoder design. We borrow from the tree-structured expectation propagation (TEP) algorithm to propose novel decoders that improve on the BP performance. We first develop the TEP as a general decoder. Then, we study in depth the impact of this decoding on the BEC. We prove that it can achieve a better threshold with complexity linear in the length of the code, and that for short codes it exhibits good performance. The codes can also be designed to improve the threshold. To achieve capacity, we go further and develop decoders of non-linear complexity that attain the MAP solution. We end this talk by presenting some current lines of research. Prof. Murillo-Fuentes' homepage
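On the BEC, BP decoding reduces to the well-known "peeling" procedure: repeatedly find a parity check with exactly one erased variable and solve for it. A minimal sketch of this baseline follows (the TEP decoder of the talk goes further, also processing checks with two erasures; the toy check matrix and erasure pattern below are made up for illustration):

```python
def peel_decode(checks, received):
    """BP ('peeling') decoding over the binary erasure channel.
    checks: parity checks, each a list of variable indices whose bits sum to 0 mod 2.
    received: list of 0, 1, or None (erasure). Returns the decoded list;
    positions the decoder cannot resolve (a stopping set) stay None."""
    bits = list(received)
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [v for v in check if bits[v] is None]
            if len(erased) == 1:              # exactly one unknown: solve the parity
                v = erased[0]
                bits[v] = sum(bits[u] for u in check if u != v) % 2
                progress = True
    return bits

# Toy code with three even-parity checks; erase positions 2 and 4.
checks = [[0, 1, 2], [1, 3, 4], [2, 4, 5]]
decoded = peel_decode(checks, [1, 0, None, 1, None, 0])
assert decoded == [1, 0, 1, 1, 1, 0]
```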

24/06/2011 @ 14:15, room BC 01
Image Denoising: The SURE-LET Methodology
Prof. Thierry Blu, The Chinese University of Hong Kong
The goal of this presentation is to promote a new approach for dealing with noisy data, typically images or videos here. Image denoising consists of approximating the noiseless image by performing some, usually nonlinear, processing of the noisy image. Most standard techniques involve assumptions on the result of this processing (sparsity, low high-frequency content, etc.); i.e., on the denoised image. Instead, the SURE-LET methodology consists of approximating the processing itself (seen as a function) by some linear combination of elementary nonlinear processings (LET: Linear Expansion of Thresholds), and optimizing the coefficients of this combination by minimizing a statistically unbiased estimate of the Mean Square Error (SURE: Stein's Unbiased Risk Estimate, in the case of additive Gaussian noise). I will introduce the technique and outline its advantages (fast, noise-robust, flexible, image-adaptive). A comprehensive set of results will be shown and compared with the state of the art. Prof. Blu's homepage
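The SURE half of the methodology can be demonstrated in a few lines for the simplest thresholding function, the scalar soft threshold: Stein's formula estimates the MSE of the estimator from the noisy data alone, so the threshold (or, in SURE-LET proper, the LET coefficients) can be optimized without ever seeing the clean signal. The signal model and numbers below are made up, and this is only the scalar soft-thresholding case, not the full SURE-LET algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100_000, 1.0

# Made-up synthetic data: a sparse signal plus additive Gaussian noise.
x = np.where(rng.random(n) < 0.1, 5.0, 0.0)
y = x + sigma * rng.standard_normal(n)

def soft(y, t):
    """Soft-thresholding: shrink towards zero by t, clip to zero below t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft(y, t, sigma):
    """Stein's unbiased estimate of the per-sample MSE of soft(y, t),
    computed from the noisy data y only (Gaussian noise, known sigma)."""
    m = y.size
    return (m * sigma**2
            - 2 * sigma**2 * np.count_nonzero(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2)) / m

t = 1.5
true_mse = np.mean((soft(y, t) - x) ** 2)   # needs the clean signal x
est_mse = sure_soft(y, t, sigma)            # needs only the noisy data y
assert abs(true_mse - est_mse) < 0.05       # SURE tracks the true MSE
```

In SURE-LET the denoiser is a linear combination of several such thresholding functions, so minimizing SURE over the combination weights reduces to solving a small linear system.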

24/06/2011 @ 15:15, room BC 01
Diffuse imaging: Replacing lenses and mirrors with omni-temporal cameras
Prof. Vivek Goyal, MIT

24/06/2011 @ 16:15, room BC 01
SCION: Scalability, Control, and Isolation On Next-Generation Networks
Prof. Adrian Perrig, Carnegie Mellon University
We present the first Internet architecture designed to provide route control, failure isolation, and explicit trust information for end-to-end communications. SCION separates ASes into groups of independent routing sub-planes, called trust domains, which then interconnect to form complete routes. Trust domains provide natural isolation of routing failures and human misconfiguration, give endpoints strong control over both inbound and outbound traffic, provide meaningful and enforceable trust, and enable scalable routing updates with high path freshness. As a result, our architecture provides strong resilience and security properties as an intrinsic consequence of good design principles, avoiding piecemeal add-on protocols as security patches. Meanwhile, SCION only assumes that a few top-tier ISPs in the trust domain are trusted to provide reliable end-to-end communications, thus achieving a small Trusted Computing Base. Both our security analysis and our evaluation results show that SCION naturally prevents numerous attacks and provides a high level of resilience, scalability, control, and isolation. Prof. Perrig's homepage