The property of global injectivity is often essential for the identifiability of parameters in mathematical models, and for showing uniqueness of equilibria for high-dimensional nonlinear dynamical systems. On the other hand, there are no comprehensive computational tools for checking the global injectivity of general nonlinear functions, especially if they depend on several variables and contain several unknown parameters. In particular, mathematical models of biochemical reaction networks give rise to dynamical systems that are usually high dimensional, nonlinear, and have many unknown parameters. Nevertheless, we show that it is often possible to use reaction network properties to conclude global injectivity of the associated vector field. Moreover, some of these criteria for global injectivity hold for very wide classes of nonlinear functions, even if these functions are not related to any biochemical network. We also explain how these criteria are related to other problems, such as the Jacobian conjecture in algebraic geometry and the Bézier self-intersection problem in computer graphics. Prof. Craciun's homepage
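As a hedged aside (a paraphrase of published results in this line of work, not the talk's exact statement): for general maps, a nowhere-vanishing Jacobian determinant does not imply global injectivity -- that gap is precisely the subtlety behind the Jacobian conjecture -- but for families of mass-action vector fields a determinant condition does characterize injectivity:

```latex
% Paraphrased injectivity criterion (Craciun--Feinberg flavor); the
% talk's precise hypotheses may differ.  For a family of mass-action
% vector fields f(x; k) with positive rate constants k:
\[
  \det\!\big(D_x f(x;k)\big) \neq 0
  \;\;\forall\, x \in \mathbb{R}^n_{>0},\ k \in \mathbb{R}^m_{>0}
  \quad\Longleftrightarrow\quad
  f(\,\cdot\,;k)\ \text{is injective on } \mathbb{R}^n_{>0}
  \text{ for every } k .
\]
```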
In the last decade, fractals have moved from an area of interest restricted to theoreticians and artistically oriented members of academic communities to many spectacular applications. This lecture presents an overview of the basic geometric concepts of fractal geometry and space-filling curves. Furthermore, a number of examples of very advanced applications in microelectronic circuits will be presented. These applications show how the properties of microelectronic circuits can be enhanced through the construction of new geometric structures, e.g., in biosensors, fractal electrodes, fractal antennas, solar cells, and super-capacitors. The properties of structures exhibiting unusual geometries indicate their applicability to ever-growing miniaturization (nano-structures) and frequencies of operation (terahertz range). Apart from inspiring non-conventional geometric constructions, fractal theories also provide useful tools for analyzing micro-system operation in the extreme conditions of the nano and terahertz ranges, where standard macroscopic physical laws and circuit-theoretic approaches (such as, e.g., Ohm's law) no longer apply. Prof. Ogorzalek's homepage
I will describe recent methods that allow the efficient extraction of information from very large networks. I will first show how a simple generalization of PageRank can be used to compute similarities between graph vertices. I will then describe an algorithm for detecting communities in very large networks and will show the surprising results obtained with this algorithm on a country-wide mobile phone database of more than one billion phone calls. Prof. Blondel's homepage
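As a hedged illustration of the flavor of such vertex-similarity computations (a mutual-reinforcement iteration from this line of work; the talk's PageRank-based construction may differ), consider the sketch below, with invented toy graphs:

```python
import numpy as np

def similarity_matrix(A, B, iters=50):
    """Iterative vertex-similarity scores between two graphs with
    adjacency matrices A and B: two vertices are similar if their
    neighbors are similar.  S[i, j] scores vertex i of graph B
    against vertex j of graph A.  (A sketch only; not necessarily
    the speaker's exact generalization of PageRank.)"""
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):
        S = B @ S @ A.T + B.T @ S @ A   # mutual-reinforcement update
        S /= np.linalg.norm(S)          # normalize to prevent blow-up
    return S

# Toy example: a directed 3-vertex path compared with itself.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
print(similarity_matrix(A, A))
```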
Presence, broadly defined as an event publish-notification infrastructure for converged applications, has emerged as a key mechanism for collecting and disseminating context attributes for next-generation services in both enterprise and provider domains. Current presence-based solutions and products lack the ability to (a) support flexible user-defined queries over dynamic presence data and (b) derive composite presence from multiple provider domains. Accordingly, current uses of context are limited to individual domains/organizations and do not provide a programmable mechanism for the rapid creation of context-aware services.
This work presents a large-scale presence virtualization and federation platform that (a) federates across heterogeneous presence domains, thereby enabling applications to exploit cross-domain contextual data, and (b) provides a programmatic interface for aggregating presence data from various sources and for composing base presence information into abstract, functionally richer entities for applications. An underlying design consideration is to leverage the capabilities of protocols that are being widely deployed today.
This is a joint research project between IBM T.J. Watson Research Center and IBM Research India. The platform has undergone customer trials.
Time permitting, in the latter part of the talk I plan to cover our work in the related area of real-time social networking, and specifically talk about R-U-In? (pronounced "Are you in?") - an activity-oriented social networking prototype we have developed. Dr. Chakraborty's homepage
Interruptions are ubiquitous in everyday life. This talk will first review research on how often they occur, what effects they have, and what types exist. Then, solutions for managing interruptions in human-computer interaction will be discussed before presenting the author's own work on how people manage interruptions of collaborative tasks through conversation. The talk will end with a discussion of the challenges of modelling and managing the constraints of parallel collaborative activities. Prof. Bangerter's homepage
Abstraction is at the center of much work in Computer Science. It encompasses finding the right interface for a system as well as finding an effective design for a system implementation. Furthermore, abstraction is the basis for program construction, allowing programs to be built in a modular fashion. This talk will discuss how the abstraction mechanisms we use today came to be, how they are supported in programming languages, and some possible areas for future research. Prof. Liskov's homepage
For over five years, we have been investigating the reliability of the software and hardware systems that store data. In almost every case, we have found serious design flaws, implementation problems, and software bugs, many of which can lead to system crashes, data loss, or silent data corruption.
In this talk, I will present an overview of our research, highlighting the aforementioned problems. I will discuss why such problems seem to occur repeatedly and thus are fundamental to the current way in which we build software. I will then describe some techniques we have developed to build more reliable storage systems in the future.
A sub-theme of my talk will focus on an issue of broader interest to younger researchers: where our ideas came from. Thus, in presenting the work, I will not only discuss the core technical material but also the thinking, reasoning, chance, and dumb luck that led to some of our ideas. Prof. Arpaci-Dusseau's homepage
The Cloud abstracts away infrastructural complexity for the benefit of tenants. But to tenants' detriment, it can also abstract away vital security information. In this talk, I discuss several protocols for remote testing of cloud storage integrity and robustness. By executing these protocols, without detailed infrastructural knowledge of or trust in cloud providers, clients or auditors can: (1) verify the integrity of full files without downloading them; (2) distribute files across cloud providers and ensure strong robustness with periodic, inexpensive checks (in a cloud analog to RAID); and (3) test whether files are resilient to drive crashes. Joint work with Kevin Bowers, Marten van Dijk, Alina Oprea, and Ron Rivest. Dr. Juels' homepage
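The protocols in the talk are far more refined, but a minimal, hypothetical sketch conveys the flavor of point (1): the client keeps keyed MACs for a random sample of blocks, then later spot-checks those blocks without ever downloading the whole file (all names and parameters below are illustrative):

```python
import hashlib, hmac, os, random

BLOCK = 4096  # illustrative block size

def tag_blocks(path, key, sample=16):
    """Client-side setup: MAC a random sample of blocks of the file
    before handing it to the cloud (a toy stand-in for
    proof-of-retrievability preprocessing)."""
    nblocks = max(1, (os.path.getsize(path) + BLOCK - 1) // BLOCK)
    tags = {}
    with open(path, "rb") as f:
        for i in random.sample(range(nblocks), min(sample, nblocks)):
            f.seek(i * BLOCK)
            tags[i] = hmac.new(key, f.read(BLOCK), hashlib.sha256).digest()
    return tags

def audit(path, key, tags):
    """Later spot-check: re-fetch one sampled block from the
    (untrusted) store and verify its MAC, detecting corruption
    without a full download."""
    i = random.choice(list(tags))
    with open(path, "rb") as f:
        f.seek(i * BLOCK)
        block = f.read(BLOCK)
    return hmac.compare_digest(
        tags[i], hmac.new(key, block, hashlib.sha256).digest())
```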
As use of location-based services (LBSs) becomes increasingly prevalent, mobile users are more and more enticed to reveal their location, which attackers may exploit to infer the points of interest (POIs) the users visit and thus compromise their privacy. To protect a user's location privacy, we have been developing a new approach based on unobservability, preventing attackers from associating any particular POI with the user's location. Specifically, we designed, implemented, and evaluated a privacy-protection system, called the Location Information ScrAmbler (LISA), that adjusts the location noise level in order to remove or significantly weaken the distinguishability of the POIs the user may visit. By protecting location privacy locally on each mobile user's device, LISA eliminates the reliance on trusted third-party servers required by most previous approaches, avoiding the vulnerability of a single point of failure and facilitating the deployment of LBSs. Moreover, since energy efficiency is the most critical requirement for battery-powered mobile devices, LISA explores the trade-off between location noise/privacy and energy consumption to achieve both privacy protection and energy efficiency. This is joint work with two of my graduate students, Jerry Chen and Xin Hu. Prof. Shin's homepage
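LISA's actual noise-adaptation algorithm is the subject of the work itself; purely as a hedged toy of the underlying idea, the sketch below perturbs a reported position with radial noise whose scale would be chosen large enough that several candidate POIs become indistinguishable (all constants are illustrative):

```python
import math, random

def scramble(lat, lon, noise_m):
    """Report a perturbed location: exponential radius with mean
    `noise_m` meters, uniform direction.  A toy stand-in for LISA's
    adaptive noise; in the real system the noise scale would be
    tuned so no single POI can be tied to the reported point."""
    r = random.expovariate(1.0 / noise_m)       # noisy radius (meters)
    theta = random.uniform(0.0, 2.0 * math.pi)  # uniform direction
    dlat = (r * math.cos(theta)) / 111_111.0    # meters -> degrees
    dlon = (r * math.sin(theta)) / (111_111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```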
The distinction between lazy and eager (or strict) evaluation has been studied in programming languages since Algol 60's call by name, as a way to avoid unnecessary work and to deal gracefully with infinite structures such as streams. It is deeply integrated in some languages, notably Haskell, and can be simulated in many languages by wrapping a lazy expression in a lambda.
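For instance (a minimal Python sketch, not from the talk), a memoizing thunk reproduces Haskell-style call-by-need in an eager language:

```python
def lazy(expr):
    """Wrap a zero-argument lambda so it is evaluated at most once,
    and only when demanded -- call-by-need via a memoized thunk."""
    memo = []
    def force():
        if not memo:
            memo.append(expr())   # evaluated only on first demand
        return memo[0]
    return force

expensive = lazy(lambda: sum(range(10**7)))  # nothing computed yet
print(expensive())  # computed here, on first use
print(expensive())  # cached; no recomputation
```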
Less well studied is the role of laziness, and its opposite, speculation, in computer systems, both hardware and software. A wide range of techniques can be understood as applications of these two ideas.
Laziness is the idea behind:
* Redo logging for maintaining persistent state and replicated state machines: the log represents the current state, but it is evaluated only after a failure or to bring a replica online
* Copy-on-write schemes
* Clipping regions and expose events in graphics and window systems
* Infinity and not-a-number (NaN) results of floating-point operations
* Futures (in programming) and out-of-order execution (in CPUs), which launch a computation but are lazy about consuming the result
* Formatting operators in text editors, which apply properties such as italic to large regions of text by attaching a sequence of functions that compute the properties; the functions are not evaluated until the text needs to be displayed
Speculation is the idea behind:
* Optimistic concurrency control in databases, and more recently in transactional memory (a minimal sketch follows this list)
* Prefetching in memory and file systems
* Branch prediction, and speculative execution in general, in modern CPUs
* Exponential backoff schemes for scheduling a resource, most notably in LANs such as WiFi or classical Ethernet
* All forms of caching, which speculate that it's worth filling up some memory with data in the hope that it will be used again
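Here is that minimal optimistic-concurrency sketch (illustrative Python, not from the talk): read a version, compute speculatively outside any lock, then validate and retry on conflict -- the same bet transactional memory makes.

```python
import threading

class VersionedCell:
    """A toy optimistically-updated cell: speculate that no
    conflicting write occurs, validate at commit time, retry on
    conflict."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self._lock = threading.Lock()   # held only to validate+commit

    def update(self, fn):
        while True:
            v, val = self.version, self.value   # optimistic read
            new = fn(val)                       # speculative compute
            with self._lock:
                if self.version == v:           # nobody interfered
                    self.value, self.version = new, v + 1
                    return new
            # conflict: another writer committed first -- retry

cell = VersionedCell(0)
cell.update(lambda x: x + 1)
```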
In both cases it is usual to insist that the laziness or speculation is strictly a matter of scheduling that doesn't affect the result of a computation but only improves the performance. Sometimes, however, the spec is weakened, for example in eventual consistency. I will discuss many of these examples in detail and examine what they have in common, how they differ, and what factors govern the effectiveness of laziness and speculation in computer systems. Dr. Lampson's homepage
Multivariate cryptography has been proposed as an alternative to classical public-key schemes such as ElGamal and RSA. One of the most appealing features of these schemes is that the hard problem on which they are based remains difficult even for quantum computers. However, differential cryptanalysis has been used with success against some of these schemes. In this talk, we will discuss some of these attacks and new algorithms for solving the key recovery problem. Prof. Fouque's homepage
Recent advances in geometry processing have resulted in a wide range of acquisition and modeling tools. The easy accessibility of such tools has led to large online repositories of 3D polygonal models of shapes, digitally acquired from physical objects or modeled from scratch. However, such polygonal representations do not make explicit the characteristics and invariants of the underlying shapes. In this talk, I will describe shape analysis techniques, especially for detecting object symmetry, i.e., invariance under the action of certain transformations. Such self-similarity is often related to form, function, utility, and aesthetics. As we enter the age of easily accessible 3D geometry, analyzing the global properties and invariants of shapes becomes important. I will describe a simple and powerful algorithm for detecting partial and approximate symmetries in 3D shapes, and its generalization to detect regularity among the extracted symmetry transformations. In the latter part of the talk, I will describe how such high-level shape invariants, extracted in the analysis phase, can be readily used in a range of applications including shape completion, smart geometry editing, motion visualization, and shape abstraction, which are otherwise challenging to perform. Prof. Mitra's homepage
File and storage systems contain design flaws, implementation problems, and software bugs that can lead to system crashes, data loss, and silent data corruption. In this talk, I describe two specific contributions of our group at Wisconsin in designing and building more reliable file and storage systems.
First, I describe SQCK, a new file system checker. File system checkers are necessary in order to fix problems that may occur in file system images. However, existing checkers (such as e2fsck) are overly complex and fail in significant ways. The key contribution of SQCK is that it is based on a declarative query language, which is a natural match for the cross-checking that must be performed across the many structures of a file system image. Thus, SQCK is able to perform more useful checks and repairs than existing checkers with surprisingly elegant and compact queries.
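SQCK's real schema and queries are in the paper; the following sqlite sketch (with invented table and column names) merely illustrates the style of declarative cross-checking:

```python
import sqlite3

# Hypothetical, simplified file-system cross-check in the SQCK
# style: load metadata into tables, then state the invariant as a
# query.  Schema and data are invented for illustration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE inodes  (ino INTEGER PRIMARY KEY, link_count INTEGER);
CREATE TABLE dirents (dir INTEGER, name TEXT, ino INTEGER);
INSERT INTO inodes  VALUES (2, 1), (3, 2);
INSERT INTO dirents VALUES (1, 'a', 2), (1, 'b', 3);
""")

# Invariant: an inode's recorded link count must equal the number
# of directory entries referencing it.
bad = db.execute("""
SELECT i.ino, i.link_count, COUNT(d.ino) AS refs
FROM inodes i LEFT JOIN dirents d ON d.ino = i.ino
GROUP BY i.ino HAVING refs <> i.link_count
""").fetchall()
print(bad)   # [(3, 2, 1)]: inode 3 claims 2 links but has only 1
```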
Second, I describe the I/O Shepherd. The main contribution of the I/O shepherd is to make the reliability policy of a file system a first-class concern. With the I/O shepherd, the reliability policy of the file system (e.g., retry, parity, mirrors, checksums, and/or sanity checks) can be cleanly specified and encapsulated. We again show that with the right abstraction, even complex policies can be specified in relatively few lines of code.
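The shepherd's actual policy language differs, but as a hedged sketch of what "reliability policy as a compact, first-class specification" can look like, small composable handlers capture the idea (everything below is invented for illustration):

```python
def retry(times):
    """Policy combinator: retry a failing block read a few times."""
    def policy(read, block):
        for _ in range(times):
            data = read(block)
            if data is not None:
                return data
        return None
    return policy

def then_mirror(primary, mirror_read):
    """Fall back to a mirror copy if the primary policy gives up."""
    def policy(read, block):
        return primary(read, block) or mirror_read(block)
    return policy

# A whole reliability policy in one line: three retries, then mirror.
read_policy = then_mirror(retry(3), mirror_read=lambda b: b"mirror copy")
```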
Database systems rely heavily on indexes in order to achieve good performance. Selecting the appropriate indexes is a difficult optimization problem, and modern database systems are equipped with automated methods that recommend indexes based on some type of workload analysis. Unfortunately, current methods either require advance knowledge of the database workload, or force the administrator to relinquish control over which indexes are created. This talk will summarize our recent work on semi-automatic index tuning, a novel index recommendation technique that addresses the shortcomings of previous methods. Semi-automatic tuning leverages techniques from online optimization, which allows us to prove strong bounds on the quality of its recommendations. The experimental results show that semi-automatic tuning outperforms previous methods by a large margin, offering index recommendations that achieve close-to-optimal savings in workload evaluation time. Prof. Polyzotis' homepage
2010 heralds the 30th anniversary of the OECD data privacy guidelines and the US Fair Information Practices (FIP), and the 15th anniversary of the EC data protection Directive 95/46/EC underlying today's privacy practices. These fundamental documents pre-date the web and the internet. Location-based services, social networking, and online shopping are introducing new privacy concerns about the potential improper use of consumer data. Surveys have shown that every group of internet users has privacy concerns. Consumer advocacy and regulatory stakeholders are anticipating the advent of privacy-enhancing technology (PET) innovations that will help consumers retake control of their personal data. The talk will present some of the leading hot topics and industry trends prioritising these PET developments.
Petabytes of data about human movements, transactions, and communication patterns are continuously being generated by everyday technologies such as mobile phones and credit cards. This unprecedented volume of information facilitates a novel set of research questions applicable to a wide range of development issues. In collaboration with the mobile phone, internet, and credit card industries, my colleagues and I are aggregating and analyzing behavioral data from over 250 million people from North and South America, Europe, Asia and Africa. I will discuss a selection of projects arising from these collaborations that involve inferring behavioral dynamics on a broad spectrum of scales; from risky behavior in a group of MIT freshman to population-level behavioral signatures, including cholera outbreaks in Rwanda and wealth in the UK. Access to the movement patterns of the majority of mobile phones in East Africa also facilitates realistic models of disease transmission as well as slum formations. This vast volume of data requires new analytical tools - we are developing a range of large-scale network analysis and machine learning algorithms that we hope will provide deeper insight into human behavior. However, ultimately our goal is to determine how we can use these insights to actively improve the lives of the billions of people who generate this data and the societies in which they live. Prof. Eagle's homepage
We present a group-theoretic approach to producing fast algorithms for matrix multiplication. In this framework, one devises algorithms by constructing non-abelian groups with certain properties. The algorithms themselves are natural and make use of the discrete Fourier transform over these groups.
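The "certain properties" have a precise form; paraphrasing the published Cohn-Umans framework (hedged; stated from the papers rather than the talk):

```latex
% Triple product property (paraphrased).  For a subset X of a finite
% group G, write Q(X) = \{ x x'^{-1} : x, x' \in X \}.  Subsets
% S, T, U \subseteq G with |S| = n, |T| = m, |U| = p realize
% (n x m)-by-(m x p) matrix multiplication if
\[
  s\,t\,u = 1,\quad s \in Q(S),\ t \in Q(T),\ u \in Q(U)
  \;\Longrightarrow\; s = t = u = 1 .
\]
% Matrix multiplication then embeds into the group algebra
% \mathbb{C}[G], where the discrete Fourier transform over G reduces
% it to smaller products governed by the character degrees of G.
```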
We construct several families of groups that achieve matrix multiplication exponent significantly less than 3 (but not better than the current best bound, 2.376...). This leads to two appealing conjectures, one combinatorial and the other algebraic. Either one would imply that the exponent of matrix multiplication is 2.
This is joint work with Henry Cohn, Bobby Kleinberg, and Balazs Szegedy. Prof. Umans' homepage
Recent years have witnessed an explosive demand for cellular wireless data -- a trend that is not abating. Cisco, for example, estimates that wireless data may grow by as much as 38-fold by 2013. Meeting this demand in a cost-effective manner presents a major challenge for cellular operators and wireless engineers.
One promising new technology for deploying radically cheaper networks is femtocells, which are small (typically < 1 W) base stations installed on the customer's premises. With their smaller form factor, self-configuration capabilities, and ability to leverage the customer's backhaul, femtocells can be deployed at a fraction of the cost of traditional macrocellular networks.
This talk will discuss a number of the technical challenges concerning femtocells, with a focus on problems of interference coordination. Devices in mixed femto / macro deployments often experience strong interference conditions not well-handled by traditional cellular power control. We offer alternative interference coordination methods based on subband scheduling -- a new technique available in 4G cellular systems such as LTE.
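As a hedged toy of subband-based coordination (invented for illustration; LTE's actual inter-cell interference coordination is far richer), one can view the problem as graph coloring: keep strongly interfering cells on different subbands.

```python
def assign_subbands(interferers, nbands):
    """Greedy toy subband assignment: give each femtocell a subband
    not used by its strong interferers, most-constrained cells
    first.  `interferers` maps a cell to the set of cells it
    strongly interferes with."""
    band = {}
    for cell in sorted(interferers, key=lambda c: -len(interferers[c])):
        used = {band[n] for n in interferers[cell] if n in band}
        free = [b for b in range(nbands) if b not in used]
        band[cell] = free[0] if free else min(
            range(nbands),
            key=lambda b: sum(band.get(n) == b for n in interferers[cell]))
    return band

print(assign_subbands({"f1": {"f2"}, "f2": {"f1", "f3"}, "f3": {"f2"}}, 2))
# {'f2': 0, 'f1': 1, 'f3': 1} -- neighbors land on different subbands
```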
We will also see that the mathematics of interference coordination can be seen as a type of distributed optimization with linear mixing. Time permitting, I will discuss connections to graphical models and belief propagation and applications to other problems including neural mapping and compressed sensing. Prof. Rangan's homepage
I will talk about the use of multiresolution techniques and frames in the area of biomedical imaging. Experiments on diverse data sets have shown that multiresolution techniques, and frames in particular, hold great promise for the classification of biomedical images. This has prompted us to pursue two lines of work: trying to explain in a mathematically rigorous way why frames perform so well in practice, and designing new families of frames tailored to such applications. Prof. Kovacevic's homepage
Parallel channels appear in a wide range of communication system applications, as well as in optics and electromagnetic propagation. Two main issues arise when dealing with parallel channels: first, how to identify the number of available parallel channels, and second, how to exploit the existence of such channels in an optimal way. The first question involves finding the so-called degrees of freedom of the channel. This number corresponds, for example, to the number of sub-carriers in an OFDM system, the rank of the channel matrix in a MIMO system, the number of independent time slots in a fast-fading scenario, and the number of independent routes between source and destination in a communication network. After identifying the number of parallel channels, the second question has traditionally been addressed by the waterfilling algorithm, which optimizes the transmission rate over the different sub-channels for a given total transmission power. However, the solution is not trivial when other practical factors, such as delay, come into consideration. Another important question arises when diversity and multiplexing trade-offs are taken into account. Extending such ideas to multiuser scenarios has also been addressed through schemes such as interference alignment. In this talk, after revisiting the main concepts behind parallel channels, we will discuss new ideas related to this topic and, finally, present some open problems in this field. Prof. Khalaj's homepage
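For concreteness (a standard textbook construction, not material specific to the talk), classical waterfilling over channels with noise levels N_i and power budget P finds a water level mu with sum_i max(mu - N_i, 0) = P; bisection suffices:

```python
def waterfilling(noise, power, tol=1e-9):
    """Classical waterfilling: find the water level mu such that
    sum(max(mu - n, 0) for n in noise) == power, then return the
    per-channel power allocation."""
    lo, hi = min(noise), max(noise) + power
    while hi - lo > tol:
        mu = (lo + hi) / 2
        if sum(max(mu - n, 0.0) for n in noise) > power:
            hi = mu          # water level too high
        else:
            lo = mu
    return [max(lo - n, 0.0) for n in noise]

print(waterfilling([1.0, 2.0, 4.0], power=3.0))   # ~[2.0, 1.0, 0.0]
# cleaner channels get more power; the noisiest channel gets none
```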
This talk investigates the behavior and the control of discrete dynamic systems made of distributed objects evolving in a common environment. We build an information hierarchy, from blind to clairvoyant information schemes on one hand, and from distributed local information to central, total information on the other. This hierarchy has a counterpart for the optimization procedures, from static to adaptive policies on one side, and from distributed selfish policies to socially optimal policies on the other.
We show that when the number of objects becomes large, the optimal cost of the system converges to the optimal cost of a mean field limit system that is deterministic. Convergence also holds for optimal policies. This implies that the value of information disappears when the number of objects goes to infinity so that the hierarchy constructed above collapses.
This framework is illustrated by a brokering problem in grid computing. We provide several comparisons in the blind versus clairvoyant cases as well as bounds on the price of anarchy (resulting from a lack of central information) and show how this vanishes when the size of the system grows.
Several simulations with growing numbers of processors will also be reported. They compare the performance of the optimal policy of the limit system, applied in the finite case, with that of classical policies, measuring its asymptotic gain. Prof. Gaujal's homepage
The field of anonymous communications has received considerable interest in the past 10 years, with systems such as Mixminion and Tor fielded and used. At the same time traffic analysis of anonymity systems, the science of extracting information about who is talking to whom, has made significant progress. In this talk we present our latest result on how to cast most current traffic analysis attacks in the framework of Bayesian inference, and how we apply tools from probabilistic modelling and sampling to evaluate anonymity. We will be reformulating long term de-anonymization attacks, classic mix systems as well as pass-the-parcel Crowds in terms of this paradigm, and look at the road ahead when it comes to analysing anonymity. Dr. Danezis' homepage
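As a hedged toy of the Bayesian framing (invented for illustration; the talk's probabilistic models and samplers are far richer), even a crude long-term disclosure attack is just accumulated likelihood: whenever the target sends through a mix round, every recipient of that round is equally likely to be the target's contact.

```python
from collections import Counter

def contact_posterior(rounds, target):
    """Toy long-term de-anonymization: each round is a pair
    (senders, receivers) observed around a threshold mix.  Uniform
    likelihood over receivers of rounds the target participated in,
    normalized into a posterior over candidate contacts."""
    scores = Counter()
    for senders, receivers in rounds:
        if target in senders and receivers:
            for r in receivers:
                scores[r] += 1.0 / len(receivers)
    total = sum(scores.values())
    return {r: s / total for r, s in scores.items()}

rounds = [({"alice", "carol"}, {"bob", "dave"}),
          ({"alice", "erin"},  {"bob", "frank"}),
          ({"carol", "erin"},  {"dave", "frank"})]
print(contact_posterior(rounds, "alice"))   # "bob" dominates
```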
Technology projections indicate the possibility of 50 billion transistors on a chip in a few years, with the processor landscape being dominated by multi-core designs. Developing correct and reliable software that takes advantage of the multiple cores to deliver high performance, while at the same time ensuring performance isolation, remains a growing challenge.
In this talk, I will begin by describing our changes to existing coherence mechanisms in order to control data sharing --- in particular, to monitor memory, isolate data modifications, support fine-grain protection, and improve the efficiency of fine-grain sharing.
I will discuss the utility of the mechanisms for a range of applications and show how monitoring and isolation can be combined to support one parallel programming construct --- transactional memory.
The prevalence of multi-core processors also implies more ubiquitous resource sharing at several levels. Controlled resource sharing is essential in order to provide performance isolation, dictating the need for resource-aware policies within the operating system. As time permits, I will also describe our efforts in combining resource utilization prediction, resource control mechanisms, and resource-aware policies in order to effect performance isolation at the operating system level and improve multi-core resource utilization. Prof. Dwarkadas' homepage
In the last few years, we have been witnessing an ever-growing need for continuous observation and monitoring applications. This need is driven by recent technological advances that have made streaming applications possible, and by the fact that analysts in various domains have realized the value that such applications can provide. In this presentation, we describe a general framework for enabling complex applications over data streams. This framework is based on efficiently computing an approximation of the multi-dimensional distributions of streaming data, and is amenable to distributed operation, thus making better use of the available resources. We demonstrate the use of the proposed framework on the diverse problems of deviation detection, and detection and tracking of homogeneous regions. Finally, we briefly discuss future research directions in this general area. Prof. Palpanas' homepage
The goal of deriving good upper bounds for the capacities of real networks under general demands has, for decades, remained a seemingly unattainable objective. This talk explores a new road map for building such computational tools and for assessing the accuracy of the resulting bounds. Prof. Effros' homepage
High-speed data-oriented downlink access has been included in current 3G wireless standards in various similar forms (e.g., 1xEV-DO for CDMA2000 and HSDPA for 3GPP-WCDMA). These schemes make use of opportunistic scheduling, which takes advantage of instantaneous channel state information in order to allocate users to the time-frequency slots with the most favorable channel conditions, subject to some "fairness" criterion. For delay-tolerant data traffic, opportunistic scheduling provides a significant throughput improvement with respect to conventional CDMA or TDMA/FDMA with fixed round-robin channel assignment. In the pursuit of even higher data rates, the next generation of wireless systems will make widespread use of multiuser multiple-antenna (MU-MIMO) techniques. In this talk, we review the principles of MU-MIMO downlink schemes (the MIMO Gaussian broadcast channel); we discuss the challenges of providing reliable channel state information at the transmitter(s) and of coping with unpredictable inter-cell interference due to independent, uncoordinated operation in neighboring cells; and we provide a comprehensive and general framework for solving the opportunistic downlink scheduling problem that takes these non-ideal conditions into account. The proposed approach builds on a general stochastic network optimization framework based on Lyapunov queue stability and Lyapunov drift. Effects such as imperfect channel state information and unknown instantaneous interference power levels from adjacent cells are included, and computationally efficient near-optimal implementations are discussed. Finally, we will present an analysis method that combines results from large random matrices and convex optimization, and is able to characterize semi-analytically the system performance under opportunistic scheduling and arbitrary fairness criteria. Prof. Caire's homepage
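As a hedged, drastically simplified sketch of the Lyapunov-drift approach (a generic max-weight scheduler, nowhere near the talk's full MU-MIMO framework), each slot serves the user with the largest backlog-times-rate product, which minimizes a bound on the Lyapunov drift and stabilizes the queues whenever stabilization is feasible:

```python
import random

def max_weight(queues, rates):
    """One slot of max-weight scheduling: pick the user maximizing
    queue backlog x instantaneous rate (single user per slot)."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

queues = [5.0, 5.0, 5.0]
for _ in range(1000):                                  # simulate slots
    rates = [random.uniform(0.0, 2.0) for _ in queues] # fading channels
    u = max_weight(queues, rates)
    queues[u] = max(0.0, queues[u] - rates[u])         # serve winner
    for i in range(len(queues)):
        queues[i] += random.uniform(0.0, 0.4)          # new arrivals
print(queues)   # backlogs stay bounded under this policy
```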