Talks of the 2009 Edition, June 15th - July 3rd

15/06/2009
14:15
Constructing tight fusion frames

Prof. Matthew Fickus, Air Force Institute of Technology, Ohio, USA
Abstract A tight fusion frame is a finite set of orthogonal projection matrices that sum to a scalar multiple of the identity matrix. Tight fusion frames are an emerging topic of frame theory, with applications to encoding and distributed processing. We completely characterize the existence of tight fusion frames in the special case in which all of the projection matrices are of equal rank. Our methods are entirely constructive. Though several basic examples may be constructed using tensor products and orthogonal complements, the general construction makes use of two new ideas: the first is a new method for building unit norm tight frames that resembles the popular game Tetris; the second is to properly modulate these Spectral Tetris frames to produce Gabor fusion frames.
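
A small numerical check of the definition (not the Spectral Tetris construction itself, which is the subject of the talk): the six coordinate-plane projections of R^4 all have rank 2 and sum to 3 times the identity, so they form an equal-rank tight fusion frame. The dimensions are chosen arbitrarily for illustration.

```python
import numpy as np
from itertools import combinations

def coord_plane_projection(dim, axes):
    """Orthogonal projection onto the span of the given coordinate axes."""
    P = np.zeros((dim, dim))
    for a in axes:
        P[a, a] = 1.0
    return P

dim = 4
# All six coordinate planes of R^4: projections of equal rank 2.
projections = [coord_plane_projection(dim, axes)
               for axes in combinations(range(dim), 2)]

S = sum(projections)
# Each axis lies in exactly 3 of the 6 planes, so the projections sum to 3*I:
# a tight fusion frame with frame constant 3.
assert np.allclose(S, 3 * np.eye(dim))
print("frame constant:", S[0, 0])
```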

Prof. Fickus' homepage http://www.afit.edu/en/enc/faclook.cfm?fname=7F10266D03221A%26lname=741831721E34

15/06/2009
15:15
Network-Level Spam and Scam Defenses

Prof. Nick Feamster, Georgia Tech, College of Computing, USA
Abstract This talk introduces a new class of methods called "behavioral blacklisting", which identify spammers based on their network-level behavior. Rather than attempting to blacklist individual spam messages based on what the message contains, behavioral blacklisting classifies a message based on how the message itself was sent (spatial and temporal traffic patterns of the email traffic itself). Behavioral blacklisting tracks the sending behavior of an email sender from a wide variety of vantage points and establishes "fingerprints" that are indicative of spamming behavior. Behavioral blacklisting can apply not only to email traffic, but also to the network-level behavior of hosting infrastructure for scam or phishing attacks. First, I will present a brief overview of our study of the network-level behavior of spammers. Second, I will describe two behavioral blacklisting algorithms that are based on insights from our study of the network-level behavior of spammers. Third, I will describe SpamSpotter, a real-time reputation system that integrates these algorithms. Finally, I will describe our ongoing work applying similar behavioral detection techniques to detecting both online scam hosting infrastructure and phishing attacks.

Prof. Feamster's homepage http://www.cc.gatech.edu/~feamster/

16/06/2009
09:15
Geospatial Visual Analytics

Dr Gennady Andrienko and Dr Natalia Andrienko, Fraunhofer Institute IAIS - Intelligent Analysis and Information Systems
Abstract The course introduces Visual Analytics – a new research discipline defined as the science of analytical reasoning facilitated by interactive visual interfaces. Visual analytics combines automated analysis techniques with interactive visualizations so as to extend the perceptual and cognitive abilities of humans, enabling them to extract useful information and derive knowledge from large and complex data and to solve complex problems. In particular, data and problems involving geospatial components and time are inherently complex and therefore call for visual analytics approaches. We present a system of exploratory data analysis tools and methods, including interactive maps, statistical graphics and multiple coordinated displays, data transformations applicable to spatial and temporal data, and appropriate computational techniques such as cluster analysis and aggregation. We consider analytical questions related to spatial time series and to spatially distributed events, describe general types of patterns that can be found in such data, and suggest appropriate visual analytics methods. We discuss in more detail the problems of analyzing data about the movement of various discrete objects in geographical space. We consider three types of movement data: data describing movements of a single entity during a long time period, data about simultaneous movements of multiple unrelated entities, and data about simultaneous movements of multiple related entities. The pertinent analysis tasks differ significantly for these types of data. For each type of data, we briefly introduce the visual analytics techniques and tools we have developed in recent years.

Homepage of Dr Natalia Andrienko and Dr Gennady Andrienko http://geoanalytics.net/and/

16/06/2009
14:15
Coordination, synchronization, and consensus

Prof. Rodolphe Sepulchre, University of Liège
Abstract This talk will survey old and recent advances in three closely related problems. Coordination is the distributed control of a collective task. Synchronization is the phase coordination of a collective rhythm. Consensus is the ultimate agreement reached through local but spreading exchange of information. We show some benefits of a unifying geometric approach to the three problems by considering the generalization of the classical consensus problem to state spaces that are not vector spaces but manifolds. We highlight the numerous applications of consensus on manifolds and discuss some open problems in this rich area of dynamics and control.
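
As a minimal illustration of consensus on a manifold rather than on a vector space, the sketch below runs a standard phase-consensus (Kuramoto-type) update on the circle over a ring communication graph; the graph, gain, and initial phases are invented for the example and are not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Undirected ring: each agent exchanges information with its two neighbours.
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

theta = rng.uniform(-np.pi / 2, np.pi / 2, size=n)   # phases on the circle S^1
gain, steps = 0.2, 500

for _ in range(steps):
    update = np.array([
        sum(np.sin(theta[j] - theta[i]) for j in neighbours[i])
        for i in range(n)
    ])
    theta = theta + gain * update      # local, gradient-like consensus step on S^1

# Order parameter |mean(exp(i*theta))| equals 1.0 at full synchronization.
r = np.abs(np.mean(np.exp(1j * theta)))
print("synchronization order parameter:", round(r, 4))
```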

Prof. Sepulchre's homepage http://www.montefiore.ulg.ac.be/~sepulch/

16/06/2009
15:15
Floating Point System Level Designs

Mr Martin Langhammer, Altera Corporation
Abstract Current FPGA devices are optimized for fixed point applications; the DSP precision, logic to DSP ratios, and especially the routing density are all architected for fixed point datapath requirements. Although small floating point datapaths can readily be implemented, large systems with several hundred operators are impossible, or result in severe performance degradation. A simple analysis shows that the routing density needs to be doubled in order to support IEEE754 floating point arithmetic, but the increased device area would not be commercially feasible. In addition, many algebraic and transcendental functions have been traditionally implemented using CORDIC or other bit-iterative methods, which despite mapping well to FPGA logic structures, cause severe routing stress. Recently, some new methods of floating point datapath construction, such as fused floating point datapath synthesis, have been proposed, which result in large logic, routing, and latency reductions. Combined with newer function architectures, which offer linear cost and latency tradeoffs for traditionally non-linear complexity, the construction of dense floating point systems with a very flexible operator mix is now possible. Design examples will be shown, involving hundreds of floating point operators, with pushbutton timing closure to the same performance levels as fixed point designs. In particular, matrix operation implementations such as matrix multiplication and Cholesky decomposition will be described, along with the algorithmic changes required to optimally use the wide, but deeply pipelined, vector operations that result from these new methods.
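
Cholesky decomposition is named as one of the matrix-operation design examples; purely as background, here is a plain software version of the factorization in Python, with no attempt to model the wide, deeply pipelined vector datapath discussed in the talk.

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L @ L.T, for symmetric positive-definite A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: remove the contribution of the columns already computed.
        L[j, j] = np.sqrt(A[j, j] - np.dot(L[j, :j], L[j, :j]))
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

M = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
L = cholesky(M)
assert np.allclose(L @ L.T, M)
```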


17/06/2009
10:15
Interference Management for Next-Generation Wireless Networks

Dr Sundeep Rangan, Qualcomm Technologies
Abstract Managing interference between users is the central challenge of multiuser wireless systems. While next-generation cellular standards offer enormous potential in improved capacity and services, several features in these systems will make interference management particularly challenging. Future cellular deployments are likely to involve mixtures of large-area macro cells along with unplanned, small-area pico and femto cells, relays, and other network elements. Such heterogeneous and unplanned networks can create adverse interference conditions that require intelligent interference management. In addition, as networks are designed to support low-latency, bursty traffic, fast adaptation will be increasingly needed in any viable interference management strategy. In this talk, I will review some of the methods being considered in commercial cellular systems with an emphasis on their implications for scheduling algorithms and dynamic resource allocation. From a theoretical perspective, mitigating interference via scheduling can be seen as a complex, distributed optimization problem. I will present some possible strategies to address this problem. I will also discuss the implications of interference cancellation. Increased computational power is enabling advanced receiver algorithms at the mobile, such as successive interference cancellation (SIC). I will argue that the possibility of SIC-enabled mobile devices suggests that we think of resource allocation in fundamentally new ways. In particular, I will present a new strategy based on "rate shaping" as opposed to dynamic orthogonalization. I will also argue that interference cancellation offers new scalable methods of sharing spectrum between macro cellular systems and short-range local communication.
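
A back-of-the-envelope illustration of why successive interference cancellation changes the resource-allocation picture: the sketch compares the Shannon rates of two uplink users when each is decoded treating the other as noise versus when the receiver decodes and subtracts the stronger user first. The power and noise values are arbitrary and only illustrate the formula log2(1 + SINR).

```python
import math

def rate(signal, interference, noise):
    """Shannon rate in bits per channel use: log2(1 + SINR)."""
    return math.log2(1 + signal / (interference + noise))

p1, p2, n0 = 10.0, 2.0, 1.0            # received powers and noise (arbitrary units)

# Both users decoded while treating the other as noise.
plain = (rate(p1, p2, n0), rate(p2, p1, n0))

# SIC: decode user 1 against user 2's interference, subtract it,
# then decode user 2 interference-free.
sic = (rate(p1, p2, n0), rate(p2, 0.0, n0))

print("treat-as-noise rates:", [round(r, 2) for r in plain])
print("SIC rates:           ", [round(r, 2) for r in sic])
```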


17/06/2009
11:15
Applications of Matroid Theory to Index and Network Coding

Dr Salim El Rouayheb
Abstract A major result in network coding theory is determining the capacity of noise- and interference-free multicast networks. However, a full understanding of the flow of information in networks with general demands faces numerous obstacles and remains an elusive goal. In this talk, we shed some light on this problem by investigating underlying connections to source coding and matroid theory. We start by showing that the general network coding problem can be reduced to a special source coding problem with side information, known as the index coding problem. This reduction implies that it is sufficient to focus on a family of networks with a special topology, thus simplifying the problem description and linking it to problems in graph theory and zero-error information theory. Then, we describe a construction that ties the linear representation properties of a given matroid to the existence of optimal index codes. This new construction strengthens previous related results in the literature and establishes a deeper connection between network coding and the rich field of matroid theory. Joint work with Alex Sprintson and Costas N. Georghiades.
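
The reduction to index coding rests on the observation that side information lets a single broadcast serve several receivers. The standard two-receiver toy example (not taken from the talk): each receiver wants the packet the other already holds, so one XOR broadcast replaces two transmissions.

```python
# Receiver A wants packet x2 and already holds x1 as side information;
# receiver B wants x1 and already holds x2. One coded broadcast suffices.
x1, x2 = 0b10110100, 0b01101001        # the two source packets (as bytes)

broadcast = x1 ^ x2                    # the single index-coded transmission

decoded_at_A = broadcast ^ x1          # A recovers x2 from its side information
decoded_at_B = broadcast ^ x2          # B recovers x1

assert decoded_at_A == x2 and decoded_at_B == x1
```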


17/06/2009
14:15
On Resolution, Sparse Signal Recovery, and Random Access Communication

Prof. Vivek Goyal, MIT
Abstract Resolution of a data acquisition system is usually thought to be determined completely by sensing properties, such as density and accuracy of measurements. This talk advocates the view that resolution is, in addition, dependent on signal modeling and the complexity of computations allowed. This view is developed concretely for the acquisition of sparse signals, where the asymptotic relationship between resolution and the number of measurements is studied for algorithms with various complexities. The sparse signal recovery problem corresponds perfectly to a model of random multiple access communication in which the task of the receiver is to determine only the subset of the users that transmitted. This connection gives insight on the performance of single- and multi-user detection algorithms. Specifically, all known practical multiuser detection techniques become interference limited at high SNR. However, power control is natural in the multiple access setting, and we are able to prove that a simple detection algorithm based on orthogonal matching pursuit is not interference limited under optimal power control. The talk is based on joint work with Alyson Fletcher and Sundeep Rangan.
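
Since orthogonal matching pursuit is the detection algorithm analyzed in the talk, here is a minimal OMP loop for reference; the dictionary size, sparsity level, and noise-free measurements below are illustrative choices, not the setting of the work.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: select k columns of A that best explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Refit on the chosen support and update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)     # random measurement matrix
x_true = np.zeros(100)
x_true[[3, 57, 91]] = [1.5, -2.0, 0.7]               # 3-sparse signal
y = A @ x_true                                       # noise-free measurements

x_hat = omp(A, y, k=3)
print("recovered support:", np.flatnonzero(x_hat))
```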

Prof. Goyal's homepage http://www.rle.mit.edu/stir/people.htm

17/06/2009
15:15
Simulating few by many: k-concurrency via k-set consensus

Prof. Eli Gafni, University of California, Los Angeles
Abstract Perfection is the enemy of the good. Most backoff schemes wait until a single contending candidate survives. But it is "the last mile" which is usually the most costly: backoff systems which wait on some fixed k > 1 perform better than those that require perfection, i.e. k = 1. This paper asks what tasks are read-write solvable "k-concurrently", i.e. tasks where progress can be guaranteed when contention goes below k+1. That is, what good is backoff to k > 1? While backoff to k = 1 is omnipotent, what set of tasks can be solved with k > 1? We show that the set of these tasks is exactly the set of tasks that are solvable wait-free with the availability of k-set consensus. It is now for the system designer to decide whether good is good enough.

Prof. Gafni's homepage http://www.cs.ucla.edu/~eli/eli.html

18/06/2009
09:15
Geospatial Visual Analytics

Dr Gennady Andrienko and Dr Natalia Andrienko, Fraunhofer Institute IAIS - Intelligent Analysis and Information Systems
Abstract The course introduces Visual Analytics – a new research discipline defined as the science of analytical reasoning facilitated by interactive visual interfaces. Visual analytics combines automated analysis techniques with interactive visualizations so as to extend the perceptual and cognitive abilities of humans, enabling them to extract useful information and derive knowledge from large and complex data and to solve complex problems. In particular, data and problems involving geospatial components and time are inherently complex and therefore call for visual analytics approaches. We present a system of exploratory data analysis tools and methods, including interactive maps, statistical graphics and multiple coordinated displays, data transformations applicable to spatial and temporal data, and appropriate computational techniques such as cluster analysis and aggregation. We consider analytical questions related to spatial time series and to spatially distributed events, describe general types of patterns that can be found in such data, and suggest appropriate visual analytics methods. We discuss in more detail the problems of analyzing data about the movement of various discrete objects in geographical space. We consider three types of movement data: data describing movements of a single entity during a long time period, data about simultaneous movements of multiple unrelated entities, and data about simultaneous movements of multiple related entities. The pertinent analysis tasks differ significantly for these types of data. For each type of data, we briefly introduce the visual analytics techniques and tools we have developed in recent years.

Homepage of Dr Natalia Andrienko and Dr Gennady Andrienko http://geoanalytics.net/and/

18/06/2009
14:15
Discussion Session : Rediscovered Favorites

Organized by: K. Argyraki, S. Diggavi and C. Fragouli
Abstract In this session we have multiple invited speakers, each giving a 10 minute talk to present a favorite paper (or work), that is not her/his own work, and that has not received the deserved attention. There is no restriction in area, in publication date, or in reasons for selecting the paper. Criteria for example can include interaction between areas, interesting problem formulation, technical strength, etc. Interaction with the audience is encouraged. A preliminary list of speakers includes: Sid Jaggi, Gerhard Kramer, Athina Markopoulou, Muriel Medard, Emina Soljanin, Alex Sprintson, Raymond Yeung, and Yunnan Wu.


19/06/2009
10:15
Sorting in Space

Prof. Hanan Samet, University of Maryland, USA
Abstract The representation of spatial data is an important issue in computer graphics, computer vision, geographic information systems, and robotics. A wide variety of representations is currently in use. Recently, there has been much interest in hierarchical data structures such as quadtrees, octrees, R-trees, etc. The key advantage of these representations is that they provide a way to index into space. In fact, they are little more than multidimensional sorts. They are compact and, depending on the nature of the spatial data, they save space as well as time and also facilitate operations such as search. In this talk we give a brief overview of hierarchical spatial data structures and related research results. In addition we demonstrate the SAND Browser (found at http://www.cs.umd.edu/~brabec/sandjava) and the VASCO JAVA applet (found at http://www.cs.umd.edu/~hjs/quadtree/index.html), which illustrate these methods.
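
As a small companion to the talk's point that hierarchical structures are "little more than multidimensional sorts", the sketch below implements a bare-bones point-region quadtree over the unit square: every insertion sorts a point into one of four quadrants, recursively. The bucket capacity and the sample points are arbitrary.

```python
class QuadTree:
    """Point-region quadtree over the unit square; each split yields 4 quadrants."""
    def __init__(self, x0=0.0, y0=0.0, size=1.0, capacity=1):
        self.x0, self.y0, self.size, self.capacity = x0, y0, size, capacity
        self.points, self.children = [], None

    def insert(self, p):
        if self.children is not None:
            return self._child_for(p).insert(p)
        self.points.append(p)
        if len(self.points) > self.capacity:          # overflow: split and push down
            half = self.size / 2
            self.children = [QuadTree(self.x0 + dx * half, self.y0 + dy * half,
                                      half, self.capacity)
                             for dy in (0, 1) for dx in (0, 1)]
            pts, self.points = self.points, []
            for q in pts:
                self._child_for(q).insert(q)

    def _child_for(self, p):
        half = self.size / 2
        dx = 1 if p[0] >= self.x0 + half else 0
        dy = 1 if p[1] >= self.y0 + half else 0
        return self.children[2 * dy + dx]

tree = QuadTree()
for point in [(0.1, 0.2), (0.8, 0.3), (0.6, 0.9), (0.15, 0.25)]:
    tree.insert(point)
```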

Prof. Samet's homepage http://www.cs.umd.edu/~hjs/
SAND Browser http://www.cs.umd.edu/~brabec/sandjava
VASCO JAVA applet http://www.cs.umd.edu/~hjs/quadtree/index.html

19/06/2009
11:15
Wireless Random Medium Access Control: Reverse and Forward

Prof. Jianwei Huang, The Chinese University of Hong Kong
Abstract The widely used random MAC protocols, although quite successful in practice, are mostly designed based on engineering heuristics instead of a rigorous mathematical framework. This talk consists of two studies that aim at discovering the mathematical foundations behind random MAC protocols and designing better algorithms. The first part involves reverse engineering of random MAC. Starting from a given back-off based MAC protocol, reverse-engineering discovers the underlying mathematical problems implicitly being solved by the network dynamics of that protocol. It leads to new insights on why such an existing protocol “works” and when it will not, and thus indirectly leads to systematic forward-engineering. We will show that the average behavior of the random MAC can be reverse-engineered as a non-cooperative game. This result complements the existing success in reverse-engineering layer 4 (TCP) and layer 3 (BGP) protocols. The second part involves forward-engineering a better random MAC that is more efficient, converges faster, and is more robust to network dynamics.

Prof. Huang's homepage http://personal.ie.cuhk.edu.hk/~jwhuang/

19/06/2009
14:15
Adaptive Networks

Prof. Ali H. Sayed, University of California, Los Angeles
Abstract Distributed networks linking sensors and actuators will form the backbone of future data communication and control networks. Applications will range from sensor networks to precision agriculture, environment monitoring, disaster relief management, smart spaces, target localization, as well as medical applications. In all these cases, the distribution of the nodes in the field yields spatial diversity, which should be exploited alongside the temporal dimension in order to enhance the robustness of the processing tasks and improve the probability of signal and event detection. Distributed processing techniques allow for the efficient extraction of temporal and spatial information from data collected at such distributed nodes by relying on local cooperation and data processing. This talk describes recent developments in distributed processing over adaptive networks. The presentation covers adaptive filtering algorithms that allow neighboring nodes to communicate with each other. At each node, estimates exchanged with neighboring nodes are fused and promptly fed into the local adaptation rules. In this way, an adaptive network is obtained where the structure as a whole is able to respond in real time to the temporal and spatial variations in the statistical profile of the data. Different adaptation or learning rules at the nodes, allied with different cooperation protocols, give rise to adaptive networks of various complexities and learning potential. The ideas are illustrated by considering algorithms of the least-mean-squares type, although more general adaptation rules are also possible, including least-squares rules and Kalman-type rules. Both incremental and diffusion collaboration strategies are considered, and comparisons with consensus mechanisms are provided.
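
A toy adaptive network of the kind described above: each node takes one least-mean-squares step on its own streaming data and then fuses (averages) estimates with its neighbours. The ring topology, uniform combination weights, and data model are invented for the illustration; they are not the specific rules from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, steps, mu = 5, 3, 2000, 0.01
w_true = np.array([1.0, -0.5, 2.0])                 # common parameter to estimate

# Ring topology: every node combines with itself and its two neighbours.
neighbours = {k: [k, (k - 1) % n_nodes, (k + 1) % n_nodes] for k in range(n_nodes)}
w = np.zeros((n_nodes, dim))                        # local estimates

for _ in range(steps):
    # Adapt: one LMS step per node on its own noisy measurement.
    psi = np.empty_like(w)
    for k in range(n_nodes):
        u = rng.standard_normal(dim)                # regressor observed at node k
        d = u @ w_true + 0.1 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combine: fuse the neighbourhood's intermediate estimates (uniform weights).
    for k in range(n_nodes):
        w[k] = psi[neighbours[k]].mean(axis=0)

print("all node estimates close to w_true:", np.allclose(w, w_true, atol=0.1))
```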

Prof. Sayed's homepage http://asl.ee.ucla.edu/index.php?option=com_content%26task=view%26sectionid=4%26id=36

19/06/2009
15:15
Reliability of Wave Pipelining for NoC and FPGA

Prof. Guy Lemieux, University of British Columbia, Canada
Abstract Networks-on-Chip and FPGAs both require a programmable interconnect structure where high-bandwidth, low-latency communication is key to performance. Wave pipelining meets this requirement -- by sending data in waves through a logic network, and carefully timing the release of new data, maximum bandwidth and minimum latency can be achieved when everything is ideal. However, the real world is never ideal. Noise sources and unpredictable timing variation lead to serious performance degradation of wave-pipelined circuits. Accounting for noise, we will show that traditional synchronous circuits with registers offer higher bandwidth and are more reliable than wave pipelining. However, reliability and bandwidth can be improved by compensating for the main source of non-ideality, accumulated timing jitter, through the use of surfing and distributed asynchronous FIFOs. To estimate the reliability of these circuits, we have devised new techniques based upon statistical timing.

Prof. Lemieux's homepage http://www.ece.ubc.ca/~lemieux/

22/06/2009
11:15
Signals, Truth and Design

Prof. Judith Donath, MIT Media Laboratory, MA, USA
Abstract Much of what we want to know about other people is not directly perceivable. Instead, we rely on signals, which are perceivable features or actions that indicate the presence of hidden qualities. Yet not all signals are reliable. What keeps signals honest? And why are some signals more reliable than others? Signaling theory provides a framework for understanding these dynamics. Among other things, it shows how the cost of many seemingly extravagant displays is not wasteful expenditure, but useful for ensuring the reliability of the display as a signal. In this talk I will show how signaling theory can be used for the design and analysis of social technologies. It is especially well suited for this domain, for in mediated interactions there are few qualities that can be directly observed: everything is signal.

Prof. Donath's homepage http://www.media.mit.edu/people/bio_judith.html

22/06/2009
14:15
Learning to exploit diversity in wireless systems

Dr Alexandre Proutière, Microsoft Lab, Cambridge, UK
Abstract Opportunistic scheduling is a key mechanism for improving the performance of wireless systems. However, this mechanism requires that transmitters are aware of channel conditions (or CSI, Channel State Information) to the various possible receivers. CSI is not automatically available at the transmitters; rather, it has to be acquired. Acquiring CSI consumes resources, and only the remaining resources can be used for actual data transmissions. We explore the resulting trade-off between acquiring CSI and exploiting channel diversity to the various receivers. Specifically, we consider a system consisting of a transmitter and a fixed number of receivers. We first analyze the case where the transmitter cannot adapt its transmission power, and then extend the analysis to allow for power control. We provide an optimal probing / transmission strategy in both saturated and buffered scenarios. In the former case, the transmitter always has packets to send; in the latter case, an infinite buffer is associated with each receiver, and packets arrive in this buffer according to some stochastic process with fixed intensity. The proposed strategies may be used in a wide class of wireless systems with limited information, such as broadcast systems without a priori knowledge of the CSIs. They can also be used to solve dynamic spectrum access problems such as those arising in cognitive radio systems, where secondary users can access large parts of the spectrum, but have to discover which portions of the spectrum offer more favorable radio conditions or less interference from primary users.
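
A crude Monte-Carlo sketch of the trade-off described above: probing more receivers improves multi-user diversity but eats into the time left for data, so throughput peaks at an intermediate number of probes. The Rayleigh-fading rates and the per-probe overhead are arbitrary modelling choices, not the paper's model or its optimal strategy.

```python
import numpy as np

rng = np.random.default_rng(2)
n_receivers, probe_cost, trials = 10, 0.06, 20000   # probe_cost: slot fraction per probe

def expected_throughput(n_probed):
    # Rate to each probed receiver: log2(1 + SNR), with exponential (Rayleigh) SNR.
    snr = rng.exponential(scale=5.0, size=(trials, n_probed))
    best_rate = np.log2(1 + snr).max(axis=1)        # serve the best probed receiver
    data_fraction = max(0.0, 1 - probe_cost * n_probed)
    return data_fraction * best_rate.mean()

for n in range(1, n_receivers + 1):
    print(n, "probes ->", round(expected_throughput(n), 3), "bits/s/Hz")
```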

Dr Proutière's homepage http://research.microsoft.com/en-us/people/aproutie/

23/06/2009
14:15
Cryptography and Secure Channels

Prof. Kenneth G. Paterson, Royal Holloway, University of London
Abstract I will talk about how cryptography gets used in protocols like IPsec, SSL/TLS and SSH to build secure channels. I will discuss what "provable security" can and cannot tell us about the security of these protocols.

Prof. Paterson's homepage http://www.isg.rhul.ac.uk/~kp/

24/06/2009
11:15
Towards a General-Purpose Sensing System

Prof. Ramesh Govindan, University of Southern California
Abstract In this talk, I will discuss the Tenet architecture for sensor networks. Tenet is motivated by the observation that future large-scale sensor network deployments will be tiered, consisting of motes in the lower tier and masters, relatively unconstrained 32-bit platform nodes, in the upper tier. Masters provide increased network capacity. Tenet constrains multi-node fusion to the master-tier while allowing motes to process locally-generated sensor data. This simplifies application development and allows mote-tier software to be reused. Applications running on masters task motes by composing task descriptions from a novel tasklet library. The Tenet system contains novel subsystems for congestion control and energy management, which I will describe in some detail. Finally, I will present our experiences from several deployments of the Tenet software.

Prof. Govindan's homepage http://cs.usc.edu/~ramesh/

24/06/2009
14:15
Multi-touch versus single touch: which is best for supporting collaborative learning?

Prof. Yvonne Rogers, The Open University, UK
Abstract Recently, there has been much interest in how tabletops can support collaborative learning. Smart Table has now reached the classroom. Several claims have been made about the benefits of providing multi-touch horizontal interfaces, including encouraging more equitable participation than vertical surfaces and reducing social awkwardness compared to PCs. The shared interactive surface - which allows more than one person to interact with digital content simultaneously - is assumed to provide new opportunities for groups of children to learn together. As part of our research on the ShareIT project, we have been conducting studies investigating these claims. In one of our studies, we compared groups of children working around a multi-touch surface - where all children could interact with the digital content - versus a multi-touch tabletop which was constrained so that only one child could interact with it at a given time. The task involved planning and designing a classroom layout. Our findings showed that children talked more about their designs in the multi-touch condition than they did in the single-touch condition. However, in the single-touch condition, talk about turn taking was more frequent and appeared to replace discussion about design. Hence, it seems that while multi-touch can facilitate collaboration, forcing children to take turns rather than allowing for a free-for-all can result in more awareness of the other children's interactions. In my talk, I will discuss reasons for this. I will finish by describing our design framework that considers the effects of constraining a collaborative learning activity either by encouraging, enabling or enforcing, in terms of the physical space, the technology, the software and the learning task.

Prof. Rogers' homepage http://mcs.open.ac.uk/yr258/

24/06/2009
15:15
From Sequential Decoding to Polar Coding

Prof. Erdal Arikan, Bilkent University
Abstract Polar codes are a class of error-correction codes that achieve the capacity of binary-input memoryless communication channels. Variants of these codes have also been shown to be optimal in a number of other source and channel coding scenarios. Polar coding can be seen as a natural generalization of Reed-Muller coding. However, the ideas leading to polar codes actually come from sequential decoding and the associated channel parameter "cutoff rate." This talk relates polar coding to efforts to boost the cutoff rate of a channel towards its capacity, most notably to the works by Pinsker and Massey.
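
For the binary erasure channel the polarization step has a closed form: one application of the 2x2 transform turns a channel with erasure probability e into a degraded channel with 2e - e^2 and an upgraded one with e^2. The short recursion below shows the synthesized bit-channels splitting toward perfect and useless, the phenomenon polar codes exploit; it is a textbook illustration, not code from the talk.

```python
def polarize(eps, levels):
    """Erasure probabilities of the 2**levels synthesized bit-channels of BEC(eps)."""
    channels = [eps]
    for _ in range(levels):
        channels = [z for e in channels for z in (2 * e - e * e, e * e)]
    return channels

bit_channels = polarize(0.5, levels=10)             # 1024 synthesized bit-channels
good = sum(z < 1e-3 for z in bit_channels)
bad = sum(z > 1 - 1e-3 for z in bit_channels)
print(f"{good} nearly perfect and {bad} nearly useless of {len(bit_channels)}")
# The fraction of nearly perfect channels approaches the capacity 1 - eps = 0.5.
```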

Prof. Arikan's homepage http://www.ee.bilkent.edu.tr/~arikan/

25/06/2009
10:15
Scalable Query Processing in Probabilistic Databases with SPROUT

Prof. Dan Olteanu, Oxford University, UK
Abstract In this talk I will address the problem of query evaluation on probabilistic databases and present the SPROUT query engine, which is under development at Oxford. SPROUT is publicly available as an extension of the PostgreSQL 8.3.3 query engine. It is specifically tailored to tractable conjunctive queries with inequalities and to queries that are not tractable in general but become tractable on probabilistic databases restricted by functional dependencies. The major components of SPROUT are an aggregation operator for exact confidence computation, which can be naturally integrated into existing relational query plans, and optimizations that allow the aggregation operator, or parts thereof, to be pushed past joins. The operator is based on a fundamental connection between tractable queries and linear-size Ordered Binary Decision Diagrams (OBDDs) representing the uncertainty in the answers to such queries. I will then discuss the secondary-storage algorithm for the aggregation operator. This algorithm can compute the probability of OBDDs for tractable queries without materializing them, with main memory requirements that depend only on the query size, and in a few scans over the data. Experiments with GBs of TPC-H data show orders of magnitude improvements of SPROUT over state-of-the-art exact and approximate techniques.
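
The flavour of the aggregation operator can be shown on the simplest case: when an answer tuple has several derivations over independent input tuples, its confidence is one minus the product of the complements. The toy below assumes full tuple independence, which holds only in restricted settings; it does not reproduce SPROUT's OBDD-based algorithm.

```python
from itertools import groupby
from operator import itemgetter

# A tiny result table: (answer key, probability that this derivation holds).
derivations = [
    ("alice", 0.4), ("alice", 0.5),     # two independent derivations of "alice"
    ("bob",   0.9),
]

def confidence(rows):
    """P(at least one independent derivation holds) = 1 - prod(1 - p_i)."""
    result = {}
    for key, group in groupby(sorted(rows, key=itemgetter(0)), key=itemgetter(0)):
        none_true = 1.0
        for _, p in group:
            none_true *= (1.0 - p)
        result[key] = 1.0 - none_true
    return result

print(confidence(derivations))          # {'alice': 0.7, 'bob': 0.9}
```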

Prof. Olteanu's homepage http://www.comlab.ox.ac.uk/people/dan.olteanu/

25/06/2009
11:15
Jamming-resistant Key Establishment using Uncoordinated Frequency Hopping

Prof. Mario Cagalj, University of Split, Croatia
Abstract We consider the following problem: how can two devices that do not share any secrets establish a shared secret key over a wireless radio channel in the presence of a communication jammer? An inherent challenge in solving this problem is that known anti-jamming techniques (e.g., frequency hopping or direct-sequence spread spectrum), which should support device communication during the key establishment, require that the devices share a secret spreading key (or code) prior to the start of their communication. This requirement creates a circular dependency between anti-jamming spread-spectrum communication and key establishment, which has so far not been addressed. In this talk, we present an Uncoordinated Frequency Hopping (UFH) scheme that breaks this dependency and enables key establishment in the presence of a communication jammer. We provide a detailed analysis of our UFH scheme and show its feasibility, both in terms of execution time and resource requirements.
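
A back-of-the-envelope Monte-Carlo sketch of the uncoordinated-hopping idea: sender and receiver hop independently and uniformly over c channels, a jammer blocks t of them in each slot, and a message fragment gets through whenever the two parties coincide on an unjammed channel. The channel counts are arbitrary, and the model ignores the protocol's fragmentation and authentication details.

```python
import random

random.seed(0)
channels, jammed_per_slot, trials = 50, 10, 100_000

success = 0
for _ in range(trials):
    tx = random.randrange(channels)                 # sender hops uniformly at random
    rx = random.randrange(channels)                 # receiver hops independently
    jammed = set(random.sample(range(channels), jammed_per_slot))
    if tx == rx and tx not in jammed:
        success += 1

p_slot = success / trials                           # about (1/c) * (1 - t/c)
print("per-slot success probability ~", round(p_slot, 4))
print("expected slots per fragment   ~", round(1 / p_slot, 1))
```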

Prof. Cagalj's homepage http://www.fesb.hr/~mcagalj/

26/06/2009
10:15
Learning Networks of People and Places from Location Data

Prof. Tony Jebara, Columbia University, New York
Abstract Networks and graphs have become essential for understanding the online world, with applications ranging from the Web to Facebook. I will discuss building such networks in the offline real world by using location and GPS data. By gathering long-term, high-frequency location data from millions of mobile devices, it becomes possible to track movement trends in real time in cities, learn networks of real places and learn real social networks of people. We build graphs from this data using generalized matching algorithms and also apply novel visualization, clustering and classification tools to them. For example, we can visualize the network of places in a city, showing the similarity between different locations and how active they are right now. Another graph is the network of users, showing how similar person X is to person Y by comparing their movement histories and how often they colocated. Embedding and clustering these graphs reveals interesting trends in behavior and organizes people into tribes that are more detailed than traditional demographics. With learning algorithms applied to these human activity graphs, it becomes possible to make predictions for advertising, marketing and collaborative recommendation from real offline behavior.
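
A minimal sketch of turning location histories into a people graph: represent each user by visit counts over a set of places, score pairs by cosine similarity of their histories, and keep the strongest edges. The users, places, and the simple thresholding step (standing in for the generalized matching algorithms used in the actual work) are invented for illustration.

```python
import numpy as np
from itertools import combinations

# Rows: users, columns: places; entries: how often each user visited each place.
visits = np.array([
    [12, 0, 3, 0],    # user 0
    [10, 1, 4, 0],    # user 1 (moves like user 0)
    [ 0, 8, 0, 7],    # user 2
    [ 1, 9, 0, 6],    # user 3 (moves like user 2)
], dtype=float)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

edges = [(i, j, cosine(visits[i], visits[j]))
         for i, j in combinations(range(len(visits)), 2)]

# Keep only strong similarities as graph edges (threshold chosen arbitrarily).
graph = [(i, j, round(s, 2)) for i, j, s in edges if s > 0.8]
print(graph)                                   # [(0, 1, 0.99), (2, 3, 0.99)]
```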

Prof Jebara's homepage http://www1.cs.columbia.edu/~jebara/

26/06/2009
11:15
Depersonalization of Location Traces

Prof. Marco Gruteser, Rutgers University, NJ, USA
Abstract Motivated by a probe-vehicle based automotive traffic monitoring system, this talk addresses the problem of guaranteeing anonymity in a dataset of location traces while maintaining high data accuracy. An analysis of a set of GPS traces from 239 vehicles shows that known privacy algorithms cannot meet application accuracy requirements or fail to provide privacy guarantees for drivers in low-density areas. To overcome these challenges, I will present a novel time-to-confusion criterion to characterize privacy in a location dataset and propose a centralized density-aware path cloaking algorithm that hides location samples in a dataset to provide a time-to-confusion guarantee for all vehicles. This approach effectively guarantees worst case tracking bounds, while achieving significant data accuracy improvements. I will then discuss a distributed scheme building on virtual trip lines, which does not need to rely on a trustworthy privacy server with access to all traces.

Prof. Gruteser's homepage http://www.winlab.rutgers.edu/~gruteser/

26/06/2009
14:15
Faster Symmetry Discovery using Sparsity of Symmetries

Prof. Karem Sakallah, University of Michigan
Abstract Many computational tools have recently begun to benefit from the use of the symmetry inherent in the tasks they solve, and use general-purpose graph symmetry tools to uncover this symmetry. However, existing tools suffer quadratic runtime in the number of symmetries explicitly returned and are of limited use on very large, sparse, symmetric graphs. This paper introduces a new symmetry-discovery algorithm which exploits the sparsity present not only in the input but also in the output, i.e., the symmetries themselves. By avoiding quadratic runtime on large graphs, it improves state-of-the-art runtimes from several days to less than a second.

Prof. Sakallah's homepage http://www.eecs.umich.edu/~karem/

29/06/2009
11:15
Energy Management in Data Centers

Prof. Ricardo Bianchini, Rutgers University, Piscataway, USA
Abstract In 2006, the data centers in the United States consumed 61.4 billion kWh, an amount of energy that is equivalent to that consumed by the entire transportation manufacturing industry. Worse, under current efficiency trends, this gigantic amount of energy will have nearly doubled by 2011, for an overall electricity cost of $7.4 billion per year. In the first part of this talk, I will discuss these trends, identify the major energy consumers in data centers, and overview the approaches that have been proposed to manage their energy consumption. In the second part, I will introduce a new management approach that caps energy consumption over a period of time, while minimizing costs and satisfying all service-level agreements. The approach relies on workload prediction, mathematical optimization, and statistical performance data. The research described in the talk is being done in collaboration with Ozlem Bilgir, Kien Le, Margaret Martonosi, and Thu Nguyen.
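
A toy version of capping energy over a period: track what has been consumed so far and throttle drawn power whenever an even split of the remaining budget would otherwise be exceeded. The hourly demand trace and power model are invented, and the real approach described in the talk additionally uses workload prediction, mathematical optimization, and statistical performance data to respect service-level agreements.

```python
# Hour-by-hour predicted demand (fraction of peak) and a cap on total energy (kWh).
demand = [0.3, 0.4, 0.9, 1.0, 0.8, 0.6, 0.9, 0.5]
peak_power_kw = 100.0
energy_cap_kwh = 500.0          # unconstrained demand would need 540 kWh

used = 0.0
schedule = []
for hour, frac in enumerate(demand):
    remaining_hours = len(demand) - hour
    # Never draw more than an even split of the remaining budget allows.
    budget_power = (energy_cap_kwh - used) / remaining_hours
    power = min(frac * peak_power_kw, budget_power)
    used += power               # 1-hour epochs, so kW and kWh coincide here
    schedule.append(round(power, 1))

print("hourly power (kW):", schedule)
print("total energy (kWh):", round(used, 1), "<= cap", energy_cap_kwh)
```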

Prof. Bianchini's homepage http://www.cs.rutgers.edu/~ricardob/

29/06/2009
14:15
Models of light diffusion within prints

Dr Jean-Francis Bloch, Institut National Polytechnique de Grenoble
Abstract Paper is a well-known material, used every day, if only for printing or reading a newspaper. It is also found in many other applications, such as filter, blotter, bank note, cigarette or tracing papers. Classically, the expected end-use properties depend on the application considered; they usually amount to a compromise between the required mechanical properties and the desired optical properties. Colour, gloss, opacity and brightness (and/or whiteness) are the main optical properties taken into account. These optical properties depend on the structure of the fibrous network constituting the paper. It is therefore essential to be able to describe the 3D structure of paper in order to understand and analyse its optical properties. Furthermore, this 3D structure and its interaction with a fluid, such as ink, are of major interest for many applications. Consequently, we will first present the 2D characterisation of paper, in order to explain the relation between topography and gloss. The analysis of ink penetration will then be illustrated using different experimental techniques. As fluid flows involve the 3D structure of paper, we will also present experimental results on the 3D structure obtained at the European Synchrotron Radiation Facility (ESRF, Grenoble) using X-ray microtomography, with the aim of characterising this structure. Some examples of paper structures will be presented, including the description of both the fibrous structure and the fillers that modify the paper's optical properties. ...


30/06/2009
11:15
Boolean function decomposition and optimization

Prof. Kartik Mohanram, Rice University, Texas, USA
Abstract This talk introduces two new approaches for the decomposition and optimization of arbitrary multi-level logic circuits. The first approach, based upon a generalization of the parallel prefix problem, is used for the synthesis of logic circuits with "lookahead" properties due to the inherent parallelism among sub-circuits. The second approach is based upon the use of OR-XOR primitives for Boolean function representation. We present several examples and preliminary work that addresses the decomposition and optimization of logic circuits using OR-XOR expressions.
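
As background for the first approach, the sketch below contrasts the serial (ripple) prefix computation with a Kogge-Stone-style logarithmic-depth schedule for an associative operator; this parallelism among sub-circuits is the source of the "lookahead" property mentioned above. The operator and inputs are arbitrary, and the sketch is not the decomposition procedure from the talk.

```python
def prefix_serial(xs, op):
    """Ripple computation: depth n-1, each output waits for the previous one."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(op(out[-1], x))
    return out

def prefix_kogge_stone(xs, op):
    """Log-depth prefix: at step d, element i combines with element i - d (d doubling)."""
    out = list(xs)
    d = 1
    while d < len(out):
        out = [op(out[i - d], out[i]) if i >= d else out[i] for i in range(len(out))]
        d *= 2
    return out

xs = [3, 1, 4, 1, 5, 9, 2, 6]
op = lambda a, b: a + b     # any associative operator (e.g. OR, AND, carry merge)
assert prefix_serial(xs, op) == prefix_kogge_stone(xs, op)
print(prefix_kogge_stone(xs, op))
```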

Prof. Mohanram's homepage http://www.ece.rice.edu/~kmram/

30/06/2009
14:15
Generalizations of the minmax theorem

Prof. Christos H. Papadimitriou, University of California at Berkeley