I will discuss new methods and studies that aim to improve eyes-free data entry for blind mobile device users. Currently, mobile devices are generally accessible to blind people, but text entry is almost prohibitively slow. Studies show that blind people enter text on an iPhone at a rate of just 4 words per minute. I will present Perkinput, a chording text entry method where users touch the screen with one to three fingers at a time in patterns based on Braille. Instead of soft keys, Perkinput uses concepts from signal detection theory to determine the user’s input. Based on Perkinput, I developed PassChords, a touchscreen authentication method that has no audio feedback. Unlike current eyes-free input methods, PassChords doesn’t echo a user’s input, so it won’t broadcast the user’s password for others to hear. Finally, I will discuss another modality for eyes-free input: speech. I conducted a survey and a study to characterize the patterns and challenges of using speech input to compose paragraphs on mobile devices. I will conclude by presenting current work on eyes-free methods for correcting speech recognition errors.
Shiri Azenkot is a PhD Candidate in Computer Science at the University of Washington. Her research is in human-computer interaction and accessibility, focusing on eyes-free input on mobile devices using gestures and speech. Shiri received two Best Paper awards from ACM's ASSETS conference and has presented her work at other top HCI conferences (CHI and UIST). She received a National Science Foundation Graduate Research Fellowship and an AT&T Labs Graduate Fellowship. Shiri holds a BA in computer science from Pomona College and an MS in computer science from the University of Washington. You can find out more about her at http://shiriazenkot.com.
Computer systems have faced significant power challenges at many points in their history, but over the past 20 years, these challenges have shifted from being addressed mainly at the device and circuit level to their current position as first-order constraints for architects and software developers. With power concerns creeping up the implementation layers, and application-level changes altering the nature of the computation being performed, the natural approaches and opportunities for power mitigation require constant innovation. My talk will discuss work both by my own group and by the field overall to address power challenges while meeting performance targets on platforms ranging from smartphones to datacenters.
Margaret Martonosi is the Hugh Trumbull Adams '35 Professor of Computer Science at Princeton University, where she has been on the faculty since 1994. Martonosi's research focuses on computer architecture and mobile computing, particularly power-efficient systems. Past projects include the Wattch power modeling tool and the ZebraNet mobile sensor network, which was deployed for wildlife tracking in Kenya. Martonosi is a Fellow of both IEEE and ACM. Her major awards include Princeton University's 2010 Graduate Mentoring Award, the Anita Borg Institute's 2013 Technical Leadership Award, and NCWIT's 2013 Undergraduate Research Mentoring Award.
As cloud computing becomes increasingly popular, organizations face greater security threats. Public clouds have become a central point of attack, and a successful compromise can cause billions of dollars in damage. Physical attacks on data center machines are particularly concerning because an attacker can gain full control of the machines and circumvent software protections.
We present an efficient processor architecture that allows us to build a more secure cloud that is resistant against physical attacks. We are able to achieve full security against malicious adversaries by only trusting and securing the CPU of a machine. We can leverage commodity components such as DRAM, hard drives, and network interfaces without requiring that they be secured against physical attacks. We achieve this by designing a novel Oblivious RAM algorithm ideal for hardware and building a memory controller that hides access patterns to DRAM and storage. The memory controller is integrated into the CPU and makes data dependent computation indistinguishable to an adversary.
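To make the access-pattern-hiding idea concrete, here is a toy "linear-scan" Oblivious RAM in Python. This is an illustrative sketch only, not the authors' hardware-oriented ORAM design (which is far more efficient); the class and method names are invented, and a one-time pad stands in for real encryption. Because every block is read and rewritten with fresh randomness on every access, an observer of the memory bus learns nothing about which logical address was touched.

```python
# Toy linear-scan ORAM: hide which block is accessed by reading and
# re-encrypting EVERY block on each access. Real designs (e.g. tree-based
# ORAMs) achieve the same guarantee with polylogarithmic overhead.
import os

class LinearScanORAM:
    def __init__(self, num_blocks, block_size=16):
        self.block_size = block_size
        # Each block is stored re-encrypted with a fresh one-time pad
        # (a stand-in for real encryption in this sketch).
        self.store = [self._encrypt(b"\x00" * block_size)
                      for _ in range(num_blocks)]

    def _encrypt(self, plaintext):
        pad = os.urandom(len(plaintext))
        return (pad, bytes(a ^ b for a, b in zip(pad, plaintext)))

    def _decrypt(self, ciphertext):
        pad, body = ciphertext
        return bytes(a ^ b for a, b in zip(pad, body))

    def access(self, index, new_value=None):
        """Read (and optionally update) block `index`, touching all blocks."""
        result = None
        for i in range(len(self.store)):
            plain = self._decrypt(self.store[i])    # every block is read...
            if i == index:
                result = plain
                if new_value is not None:
                    plain = new_value
            self.store[i] = self._encrypt(plain)    # ...and rewritten
        return result
```

The obvious cost is that each access takes time linear in the number of blocks, which is exactly the overhead that efficient ORAM algorithms are designed to avoid.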
Emil Stefanov is a 5th-year graduate student at UC Berkeley working with Professor Dawn Song. His research interests include systems security, privacy, and applied cryptography, focusing on secure cloud computing and privacy-preserving storage outsourcing. Some of his recent research topics include oblivious RAM, secure processor architecture, searchable encryption, integrity-verified file systems, dynamic proofs of retrievability, and private set intersection. Before joining UC Berkeley, Emil received his B.S. in Computer Science from Purdue University in 2009. He is expected to defend his Ph.D. this summer.
Emil was awarded an NSF graduate fellowship in 2009 and an NDSEG graduate fellowship in 2011. He is a coauthor of 15 conference papers and 5 journal papers, and his honors include a best paper award, the AT&T Best Applied Security Paper Award in 2012, and an AT&T Best Applied Security Paper finalist award in 2013. Besides his academic experience, Emil has also worked as a summer intern at NVIDIA, Microsoft, RSA Labs, and Motorola.
By 2020, there will be billions of devices connecting to the Internet. These devices will be ubiquitous and will generate large amounts of sensing and monitoring data, enabling a multitude of applications to improve human life. The key enabler of this vision is the underlying wireless communication technology. However, current wireless networks are notoriously interference-limited. With the number of devices growing into the billions, current solutions will be unable to support the amount of data that needs to be communicated.
In the first part of my talk, I will discuss a form of interference that is known in advance: the self-interference that arises when a node transmits while receiving on the same frequency. Many research groups, including ours, have shown ways to reduce this self-interference to allow full-duplex communication. I will show that this self-interference cancellation can also enable a new node capability: flexibility in allocating RF resources. Existing multiple-antenna techniques are inflexible; they use all of their antennas for either transmission or reception, as in multiple-input multiple-output (MIMO) and interference alignment techniques. I will first motivate the need to make wireless nodes flexible. If a wireless node can allocate some of its antennas for transmission and the remaining ones for reception, it can improve its efficiency. The exact allocation changes based on link quality, network topology, and traffic demand. We call this design FlexRadio. I will show that FlexRadio can outperform any existing multiple-antenna technology: MIMO, full duplex, multi-user MIMO (MU-MIMO), and interference alignment. I will also present a way to design FlexRadio and show preliminary results from our prototype. We observe a 2x throughput gain over existing techniques.
In the second part of my talk, I will motivate the need to exploit heterogeneity in node capabilities in wireless networks. Existing interference mitigating techniques such as interference alignment assume all nodes to be equi-capable and stationary. I will present RobinHood, which enables powerful and backbone-assisted access points to help out less powerful and mobile devices. RobinHood can achieve 6x throughput gain over perfect time division multiple access (TDMA) and 24x gain over WiFi. In general, RobinHood provides a linearly increasing network throughput as the number of mobile and low power wireless devices increases.
I will conclude my talk with a note on designing innovative physical layer mechanisms and building a network stack that is aware of such mechanisms. A future network stack should also be aware of node capabilities such as the number of antennas, flexibility, computation power, backbone connectivity and electrical power availability so that it can exploit them to jointly address wireless interference.
Kannan Srinivasan is an Assistant Professor in the Department of Computer Science and Engineering at the Ohio State University (OSU). He graduated with a Ph.D. from Stanford University in 2010 and was a postdoctoral researcher at the University of Texas at Austin for a year before joining OSU. He has won multiple awards: an Excellent Performance Award from OSU-CSE, an NSF CAREER Award in 2013, Best Paper Runner-Up at IPSN 2013, Best Paper Award at MobiCom 2010, Best Paper Runner-Up at MobiCom 2013, Best Demo Award at MobiCom 2010, a fellowship from Stanford ECE, and a Presidential Award from Oklahoma State University. His work on wireless in-band full duplex broke the century-old belief that a wireless radio cannot send and receive on the same frequency simultaneously. This work received significant media attention and led both the theory and systems communities to revisit first principles. It is being commercialized by a Stanford start-up company.
Computational neuroanatomy utilizes various non-invasive imaging modalities, such as magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI), to quantify the spatiotemporal dynamics of anatomical structures. Many modeling frameworks in computational neuroanatomy assume diffeomorphism and topological invariance between structures, and hence are not applicable to anatomical structures with changing topology. Persistent homology is a recently popular branch of computational topology that can handle changing topology. Persistent homology computes topologically invariant features, such as Betti numbers and Euler characteristics, of a space at different spatial resolutions. To construct persistent homology, it is necessary to build a filtration, a nested sequence of increasing subsets, over the different resolutions. The features that persist longer over the filtration are considered the topological signal of the space. In this talk, we will go over the basic concepts of persistent homology, such as Rips complexes, barcodes, persistence diagrams, and persistence landscapes. These tools are then applied to practical topological problems in brain imaging: Morse filtration for cortical surface data in autism MRI, large-scale graphical LASSO and compressed sensing without optimization in DTI, hole detection in PET networks in Alzheimer's disease, epileptic seizure detection in EEG, and multi-filtrations over multiple imaging modalities.
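To make the filtration idea concrete, here is a minimal Python sketch of 0-dimensional persistence for a Vietoris-Rips filtration: as the distance threshold grows, connected components merge, and each merge closes a barcode interval. This is illustrative only (the function name is invented); production pipelines use libraries such as GUDHI or Ripser, which also compute higher-dimensional features.

```python
# Sketch: 0-dimensional persistence of a Vietoris-Rips filtration via a
# Kruskal-style union-find. Every point is born at scale 0; a connected
# component dies at the edge length that merges it into another component.
import math
from itertools import combinations

def rips_h0_persistence(points):
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Pairwise distances, sorted: these are the edge filtration values.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    bars = []  # (birth, death) intervals for H0
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))      # one component dies at scale d
    bars.append((0.0, math.inf))       # one component persists forever
    return sorted(bars, key=lambda b: b[1])
```

Long bars correspond to the "persistent" features the abstract describes as topological signal; short bars are typically treated as noise.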
Specialization and accelerators are an effective way to address the slowdown of Dennard scaling. For a family of accelerators like DySER, NPU, CE, and SSE acceleration that rely on a high-performance processor to interface with memory using a decoupled access/execute paradigm, the power/energy benefits of acceleration are curtailed by the host processor’s power consumption. We observe that the host processor essentially performs three primitive tasks: i) computation that generates recurring address patterns and branches; ii) managing and triggering recurring events, such as the arrival of a value from the cache or from the accelerator; and iii) actions that move information from one place to another. These tasks are recurring and occur concurrently. The host processor's overarching role is to orchestrate memory access dataflow, and a conventional out-of-order (OOO) processor is power-inefficient and over-provisioned for this role.
This observation motivates building a memory access dataflow engine around an efficient dataflow microarchitecture. We propose a new architecture and execution model called memory access dataflow (MAD) that is built on these primitive tasks, exposes them in a low-level MAD ISA, and implements them in an accompanying efficient microarchitecture.
New computing platforms have greatly increased the demand for programmers, but learning to program remains a big challenge. Program synthesis has the potential to revolutionize programming by making it more accessible. My work has focused on two goals: making programming more intuitive through new interfaces, and using automated feedback to help students learn programming. In this talk, I will present my work on three systems that work toward these goals. The FlashFill system helps end-users perform repetitive data transformations over strings, numbers, and tables using input-output examples. FlashFill shipped as part of Excel 2013 and was cited as one of its top features in many press reviews. The Storyboard Programming system helps students write data-structure manipulations using textbook-like visual examples, bridging the gap between high-level insights and low-level code. Finally, the Autograder system provides automated feedback to students on introductory programming assignments and has been run successfully on tens of thousands of programming exercises from edX. I will describe how advances in constraint solving, machine learning, and formal verification enabled the new forms of interaction these systems require.
Rishabh Singh is a PhD candidate in the Computer Science and Artificial Intelligence Laboratory at MIT. His research interests are broadly in formal methods and programming languages. His PhD work focuses on developing program synthesis techniques for making programming accessible to end-users and students. He is a Microsoft Research PhD fellow and winner of MIT’s William A. Martin Outstanding Master's Thesis Award. He obtained his BTech in Computer Science and Engineering from IIT Kharagpur in 2008, where he was awarded the Institute Silver Medal and the Bigyan Sinha Memorial Award. He was also selected as a Prime Minister’s National Guest at the Republic Day Parade in New Delhi in 2005.
Software obfuscation aims to make the code of a computer program "unintelligible" while preserving its functionality. This problem was first posed by Diffie and Hellman in 1976, and until recently most cryptographers believed that realizing obfuscation was impossible.
My research provides the first secure solution to this problem. Consequently, several other long-standing open problems have been resolved. In this talk, I will describe these new developments and their implications.
Sanjam Garg is a Josef Raviv Memorial Postdoctoral Fellow at IBM Research T.J. Watson. His research interests are in cryptography and security, and more broadly in theoretical computer science. He obtained his Ph.D. from the University of California, Los Angeles (UCLA) in 2013 and his undergraduate degree from the Indian Institute of Technology, Delhi in 2008. Sanjam's Ph.D. thesis provides the first candidate constructions of multilinear maps that have found extensive applications in cryptography, most notably to software obfuscation. He has published several papers in top cryptography and security conferences and is the recipient of various honors such as the Outstanding Graduating Ph.D. Student award at UCLA and the best paper award at EUROCRYPT 2013.
Datacenters consume an enormous amount of electricity, which translates into high operational cost and high carbon emissions, since most of this electricity is produced using fossil fuels. Interest has been growing in building "green" datacenters that are partially or completely powered by renewable ("green") sources of energy such as solar or wind. Green datacenters have the potential to reduce both the electricity costs and the carbon footprint. However, solar and wind energy production is variable, making it challenging to use in datacenters. In this talk, I will first explore self-generation with solar and/or wind as an approach to greening datacenters. I will then describe Parasol, a prototype green datacenter that we have built as a research platform. Parasol comprises a small container, a set of solar panels, a battery bank, and a grid-tie. Finally, I will describe our work on matching a datacenter's computational load to the green energy supply. I will present real experiments run on Parasol to show that intelligent workload and energy source management can significantly reduce grid electricity consumption (thereby lowering the carbon footprint) and cost.
Bio: Thu is an Associate Professor in the Department of Computer Science at Rutgers. He is also currently serving as the department’s Associate Chair. He received his PhD from the University of Washington, Seattle, his MS from MIT, and his BS from the University of California, Berkeley. His current main research topics include energy efficiency and integration of renewable energy into datacenters, and storing, indexing, and searching of personal information (e.g., Facebook, Twitter, device filesystems, etc.). Part of the work in this talk was published as “Parasol and GreenSwitch: Managing Datacenters Powered by Renewable Energy” in ASPLOS 2013, which has been chosen as an IEEE Micro Top Picks from the Computer Architecture Conferences 2014. More information about Thu can be found at: http://www.cs.rutgers.edu/~tdnguyen.
We live in a software-driven world. Software helps us communicate and collaborate; create art and music; and make discoveries in biological, physical, and social sciences. Yet the growing demand for new software, to solve new kinds of problems, remains largely unmet. Because programming is still hard, developer productivity is limited, and so is end-users' ability to program on their own.
Emina Torlak is a researcher at U.C. Berkeley, working at the intersection of software engineering, formal methods, and programming languages. Her focus is on developing tools that help people build better software more easily. She received her B.Sc. (2003), M.Eng. (2004) and Ph.D. (2009) from MIT, where she developed Kodkod, an efficient SAT-based solver for relational logic. Kodkod has since been used in over 70 tools for verification, debugging, and synthesis of code and specifications. Emina has also worked on a wide range of domain-specific formal methods. She won an ACM SIGSOFT distinguished paper award for her work at LogicBlox, where she built a system for synthesizing massive data sets, used in testing of decision support applications. As a member of IBM Research, she led the development of a tool for bounded verification of memory models, enabling the first fully automatic analysis of the Java Memory Model. These experiences inspired her current research on solver-aided languages, which aims to reduce the effort of applying formal methods to new problem domains.
Abstract: The problem of estimating the number of distinct entries in a data stream is a fundamental question in the study of streaming algorithms, with applications to detecting denial-of-service attacks, query planning, and data integration. We present an algorithm for this problem that achieves asymptotically optimal space and update time.
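The optimal algorithm itself is beyond a short sketch, but the classical idea it refines is easy to state: hash each element to a uniform value in [0, 1) and keep the k smallest hash values; if the k-th smallest is t, roughly k/t distinct elements have been seen. Below is a hedged Python sketch of this k-minimum-values estimator (the function name is invented, and this is not the talk's algorithm):

```python
# k-minimum-values (KMV) distinct-count estimator: track the k smallest
# hash values seen; the unbiased estimate is (k - 1) / (k-th smallest).
import hashlib
import heapq

def kmv_estimate(stream, k=64):
    heap, kept = [], set()  # max-heap via negation; hash values currently kept
    for item in stream:
        digest = hashlib.sha256(str(item).encode()).digest()
        x = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
        if x in kept:
            continue                       # duplicate of a value we keep
        if len(heap) < k:
            heapq.heappush(heap, -x)
            kept.add(x)
        elif x < -heap[0]:                 # smaller than current k-th smallest
            evicted = -heapq.heappushpop(heap, -x)
            kept.discard(evicted)
            kept.add(x)
    if len(heap) < k:                      # fewer than k distinct items: exact
        return len(heap)
    return int((k - 1) / -heap[0])
```

The relative error of this scheme is about 1/sqrt(k), and each update costs O(log k) time and O(k) space; the talk's result improves both the space and the update time to their asymptotic optimum.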
Bio: After completing undergraduate degrees at M.I.T. in mathematics with computer science and in physics, Daniel Kane attended graduate school at Harvard University with the support of NSF and NDSEG fellowships, obtaining his Ph.D. in mathematics under the mentorship of Barry Mazur in 2011. He is currently a postdoctoral fellow at Stanford University supported by an NSF fellowship. Daniel has broad research interests in mathematics and theoretical computer science and has published dozens of papers on a wide variety of topics, including streaming algorithms, analytic number theory, combinatorics, Johnson-Lindenstrauss dimensionality reduction, spherical designs, derandomization, statistics of random set partitions, and the analysis of Boolean functions. Daniel is a four-time Putnam Fellow who won the 2007 AMS/MAA/SIAM Morgan Prize for his undergraduate research. He also won best student paper awards at FOCS 2005 and CCC 2010 and best paper awards at PODS 2010 and CCC 2013.
GPGPUs are gaining traction as a vehicle for power-efficient, high-performance computing. Nevertheless, their von-Neumann-based design leaves them subject to the model’s key inefficiencies: the processor must fetch and decode each dynamic instruction instance, and all intermediate values of the computation must be transferred back and forth between the functional units and the register file.
In contrast to the von-Neumann model, dataflow machines represent programs as graphs that can be pre-loaded and executed multiple times. Furthermore, they facilitate direct communication of intermediate values between computational units. Therefore, dataflow architectures reduce both instruction and data memory accesses, as well as minimize register-file accesses.
In this talk, I will describe the single-graph multiple-flows (SGMF) dataflow execution model and architecture that target efficient execution of emerging massively parallel programming models. SGMF maps a computation graph onto a tagged-token dataflow engine composed of a fabric of interconnected functional units and simultaneously streams multiple instances of the computation (tasks) through the fabric. Dynamic dataflow enables tasks to execute out-of-order, thus maximizing the utilization of the computational grid. I will describe the design challenges of the SGMF architecture and describe our initial evaluation, which shows that SGMF outperforms von-Neumann-based GPGPUs both in raw performance and energy consumption.
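As a toy contrast with von Neumann execution (and emphatically not a model of the SGMF microarchitecture itself), the following Python sketch executes a program as a graph of operator nodes: a node fires as soon as all of its input tokens have arrived, and results flow directly to consumer nodes rather than through a shared register file. The node wiring and names here are invented for illustration.

```python
# Minimal dataflow-style evaluator for the graph (a + b) * (a - b).
# Nodes fire when all input tokens arrive; outputs go straight to consumers.
import operator

class Node:
    def __init__(self, op, arity):
        self.op, self.arity = op, arity
        self.inputs = {}       # port -> token value
        self.consumers = []    # list of (node, port) receiving our output

    def receive(self, port, value, ready):
        self.inputs[port] = value
        if len(self.inputs) == self.arity:
            ready.append(self)  # all tokens present: node is ready to fire

def run(a, b):
    add = Node(operator.add, 2)
    sub = Node(operator.sub, 2)
    mul = Node(operator.mul, 2)
    add.consumers = [(mul, 0)]
    sub.consumers = [(mul, 1)]

    ready, result = [], {}
    add.receive(0, a, ready); add.receive(1, b, ready)
    sub.receive(0, a, ready); sub.receive(1, b, ready)
    while ready:
        node = ready.pop()
        out = node.op(node.inputs[0], node.inputs[1])  # all nodes binary here
        if not node.consumers:
            result["out"] = out          # sink node: final answer
        for consumer, port in node.consumers:
            consumer.receive(port, out, ready)
    return result["out"]
```

Note that the graph is built once and could be streamed with many independent input pairs, which loosely mirrors how SGMF streams multiple task instances through a pre-loaded computation graph.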
This is joint work with my graduate student, Mr. Dani Voitshechov.
Yoav Etsion is an Assistant Professor in the Electrical Engineering and Computer Science departments at Technion - Israel Institute of Technology, where he is a founding member of the Technion Computer Engineering Research Center. He received his MSc and PhD from the Hebrew University of Jerusalem. In the past, he was a Senior Researcher at the Barcelona Supercomputing Center (BSC-CNS), where he held a Juan de la Cierva Fellowship from the Ministry of Science and Innovation of Spain. His research interests include computer architecture, HW/SW interoperability, operating systems, and parallel programming models.
Although we have successfully created smaller, faster, and cheaper computer devices, several adoption barriers remain to realize the dream of Ubiquitous Computing (Ubicomp). By lowering these barriers, we can seamlessly embed human-computer interfaces into our home and work environments. My work focuses on developing highly integrated hardware/software sensing systems for Ubicomp applications using my expertise in embedded systems, low-energy hardware design, and sensing, in addition to integrating communications, signal processing, and machine learning. In this talk, I will present my research on ultra-low-power indirect sensing approaches for both on- and off-body applications. First, I will discuss how the conductive properties of the human body can be leveraged to enable novel human-computer interactions. Next, I will discuss my work on using the existing infrastructure in buildings to reduce the number of sensors required and to reduce the power consumption for many Ubicomp applications. Finally, I will discuss my current work in on-body, non-invasive health sensing systems. By continually working on application-driven interdisciplinary research, we can lower the adoption barriers and enable many new high-impact application domains.
Gabe Cohn is a Ph.D. candidate in Electrical Engineering in the Ubiquitous Computing (Ubicomp) Lab at the University of Washington, advised by Shwetak Patel. His research focuses on (1) designing and implementing ultra-low-power embedded sensing systems, (2) leveraging physical phenomena to enable new sensing modalities for human-computer interaction, and (3) developing sensor systems targeted at realizing immediate change in high-impact application domains. He was awarded the Microsoft Research Ph.D. Fellowship in 2012, the National Science Foundation Graduate Research Fellowship in 2010, and 6 Best Paper awards and nominations. He is the co-founder of SNUPI Technologies, a sensor and services company focused on home safety, security, and loss prevention. He received his B.S. with honors in Electrical Engineering from the California Institute of Technology in 2009, where he specialized in embedded systems, computer architectures, and digital VLSI.
Programmable accelerators such as graphics processing units (GPUs) can potentially enable reductions in the cost of computation along with increases in computing efficiency. However, outside of graphics, GPUs are primarily employed for high performance computing. This talk will describe my group's recent research on exploring hardware changes to broaden the range of applications that benefit from GPU-like accelerators. Approaches discussed include introducing transactional memory and coherence into GPUs along with using cache miss feedback to make better hardware thread scheduling decisions. The common theme is finding low cost hardware changes enabling a larger fraction of an application to easily run on the accelerator.
Tor Aamodt is an Associate Professor in the ECE Department at the University of British Columbia. His research focuses on general purpose GPU architectures. Three of his papers have been selected as "Top Picks" by IEEE Micro Magazine and one was recently invited as a "Research Highlight" in Communications of the ACM. He received his BASc, MASc and PhD from the University of Toronto. Before UBC he worked briefly at NVIDIA on the memory system of the first GPU supporting CUDA and recently he enjoyed a sabbatical year at Stanford.
The growth of computer science enrollments, especially in introductory courses, raises the possibility of increasing levels of plagiarism. Fortunately, good software plagiarism detectors have been available since the mid-1990s, and these help detect suspicious assignment submissions. However, many popular plagiarism detectors were designed when computer memories were two to three orders of magnitude smaller than today's systems, so they necessarily made a number of assumptions to bound their running times and keep their memory consumption low.
In this talk, we describe a new plagiarism detector designed with a systems mindset, aiming to exploit greater resource availability in order to improve detection performance and to give instructors more information for determining which suspected matches are true positives. By some metrics, this new detector performs hundreds to thousands of times more calculations than previous approaches. It is being used at several universities; at one, its use resulted in nearly twice as many plagiarism cases being discovered, prompting an ongoing discussion of policy and pedagogical issues.
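The new detector itself is not specified in this abstract, but the fingerprinting idea that detectors in this family build on, winnowing (Schleimer et al., the scheme behind classic tools like MOSS), fits in a few lines. The sketch below is illustrative with invented function names, not the system described in the talk: hash every k-gram of a (normalized) submission, keep the minimum hash in each sliding window as a fingerprint, and flag submissions that share many fingerprints.

```python
# Winnowing fingerprint selection: hash all k-grams, then keep the
# rightmost minimal hash in each window of consecutive k-gram hashes.
def winnow(text, k=5, window=4):
    grams = [text[i:i + k] for i in range(len(text) - k + 1)]
    hashes = [hash(g) & 0xFFFFFFFF for g in grams]
    fingerprints = set()
    for i in range(len(hashes) - window + 1):
        w = hashes[i:i + window]
        j = max(p for p, h in enumerate(w) if h == min(w))
        fingerprints.add((i + j, w[j]))    # (position, hash) de-duplicates
    return {h for _, h in fingerprints}

def similarity(a, b, k=5, window=4):
    """Jaccard similarity of two documents' fingerprint sets."""
    fa, fb = winnow(a, k, window), winnow(b, k, window)
    return len(fa & fb) / max(1, len(fa | fb))
```

Real detectors normalize submissions first (stripping whitespace, comments, and identifier names) so that superficial edits do not defeat the match; the memory-saving assumptions the talk mentions typically show up in how many fingerprints per document such systems are willing to retain.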
Bio: Vivek Pai is an associate professor at Princeton. He has worked in numerous areas of server design and performance, from the depths of optimizing TCP checksum performance and eliminating buffer copying, all the way up to designing scalable content delivery infrastructures. In the middle, he has worked on improving OS performance for server applications, designing software architectures for high-performance servers, and developing intelligent server clustering algorithms. He co-founded iMimic Networking, where he helped architect and develop the fastest Web proxy server in the world. iMimic was acquired by Ironport Systems, which was subsequently acquired by Cisco. He also co-founded CoBlitz LLC, developing licensed content delivery networks, which was later acquired by Verivue, and subsequently by Akamai. As a community service, he has also developed a new plagiarism detector for programming assignments.
Modern software is intricate, highly interconnected, diverse, ubiquitous, and continuously evolving. To manage this complexity and enable construction of high-quality software systems, my research aims to equip developers with scalable automated techniques for formally reasoning about correctness of their software.
In this talk, I describe generic and efficient push-button techniques for verifying safety of software systems, that is, proving that every execution of a program works as expected (does not cause run-time crashes and is functionally correct). To tackle this undecidable problem, I present an approach that examines a small number of program executions (out of possibly infinitely many), and uses novel logical reasoning methodologies to devise hypotheses explaining why the entire program might be safe. I then discuss the implementation of these techniques in the award-winning tool, UFO, and its application to verification of a large array of programs including Linux and Windows device drivers and software in cardiac pacemakers.
Aws Albarghouthi is a PhD candidate in the Department of Computer Science at the University of Toronto, where he is a recipient of the prestigious Alexander Graham Bell Canada Graduate Scholarship. Aws’s overarching research goal is ensuring correctness, reliability, and security of software systems. Specifically, he has contributed automated formal techniques for proving software correctness, discovering bugs, and synthesizing correct software. His automated verification tool, UFO, won the largest number of gold medals at the 2013 International Software Verification Competition.
Department of Computer Sciences
University of Wisconsin–Madison
1210 W. Dayton St
Madison, WI 53706-1685