H.F. DeLuca Forum, Wisconsin Institutes for Discovery, 330 N. Orchard St.
It is often feared that the growing frequency of hardware errors will be a major obstacle to the deployment of exascale systems. The Department of Energy held several workshops to study this issue, including a week-long workshop organized by the Institute for Computing Sciences that brought together leading researchers in circuits, architecture, operating systems and applications. The workshop produced a report that outlines possible scenarios for handling resilience at exascale and the research required to make progress in this area. Snir will discuss this report, the questions it raises and the research directions it identifies.
Marc Snir is Director of the Mathematics and Computer Science Division at the Argonne National Laboratory and Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He currently pursues research in parallel computing.
He was head of the Computer Science Department from 2001 to 2007. Until 2001 he was a senior manager at the IBM T. J. Watson Research Center where he led the Scalable Parallel Systems research group that was responsible for major contributions to the IBM SP scalable parallel system and to the IBM Blue Gene system.
Marc Snir received a Ph.D. in Mathematics from the Hebrew University of Jerusalem in 1979, worked at NYU on the NYU Ultracomputer project in 1980-1982, and was at the Hebrew University of Jerusalem in 1982-1986, before joining IBM. Marc Snir was a major contributor to the design of the Message Passing Interface. He has published numerous papers and given many presentations on computational complexity, parallel algorithms, parallel architectures, interconnection networks, parallel languages and libraries, and parallel programming environments.
Marc is an Argonne Distinguished Fellow, AAAS Fellow, ACM Fellow and IEEE Fellow. He has an Erdős number of 2 and is a mathematical descendant of Jacques Salomon Hadamard.
Hosted by the Center for High Throughput Computing
Modern datacenters that host large-scale Internet services are extremely expensive to construct and operate. Improving software performance and server utilization is key to improving efficiency and reducing the enormous cost of these datacenters. In this talk, I present novel compilation techniques and runtime systems that significantly improve performance, quality of service (QoS) and machine utilization in datacenters by effectively mitigating memory resource contention on modern multicore servers.
Specifically, this talk presents: 1) a comprehensive characterization of the impact of memory resource sharing on industry-strength, large-scale datacenter workloads, and the design of runtime systems that intelligently map application threads to cores to promote positive resource sharing and mitigate resource contention, improving application performance; 2) the design of novel compilation techniques and runtime systems that statically and dynamically manipulate applications’ contentious nature to enable the co-location of applications with varying QoS requirements and, as a result, greatly improve server utilization in datacenters.
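To make the thread-to-core mapping idea concrete, here is a minimal sketch, in Python, of contention-aware placement on a Linux server: threads with a high (hypothetical) memory-intensity score are paired with cache-friendly threads on cores assumed to share a last-level cache, and pinned with os.sched_setaffinity. The profiles, topology, and greedy heuristic are illustrative stand-ins, not the runtime systems presented in the talk.

# Illustrative sketch of contention-aware thread-to-core mapping on Linux.
# The per-thread scores and the core topology below are hypothetical.
import os

# Hypothetical per-thread profile: higher score = more memory-intensive.
thread_profiles = {
    "web_frontend": 0.2,
    "log_compressor": 0.9,
    "ad_scorer": 0.4,
    "index_builder": 0.8,
}

def plan_placement(profiles, core_pairs):
    """Greedy pairing: place the most memory-intensive remaining thread and
    the least memory-intensive remaining thread on two cores that share a
    last-level cache, so no shared cache receives two heavy threads."""
    ranked = sorted(profiles, key=profiles.get)      # lightest ... heaviest
    pairs = list(core_pairs)
    plan = {}
    while ranked and pairs:
        core_a, core_b = pairs.pop(0)
        plan[ranked.pop(0)] = core_a                 # lightest remaining thread
        if ranked:
            plan[ranked.pop()] = core_b              # heaviest remaining thread
    return plan

def pin_current_thread(core):
    # os.sched_setaffinity restricts the calling thread to the given CPU set
    # (Linux-specific).
    os.sched_setaffinity(0, {core})

if __name__ == "__main__":
    # Hypothetical topology: cores (0, 1) and (2, 3) each share a last-level cache.
    placement = plan_placement(thread_profiles, core_pairs=[(0, 1), (2, 3)])
    print(placement)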
H.F. DeLuca Forum, Wisconsin Institutes for Discovery, 330 N. Orchard St.
What happens when scientists and researchers are no longer limited by fixed-size compute and data capacity? Better, faster science. From small research labs to start-ups to Fortune 500s, the combination of HTCondor and Utility HPC is making impossible science possible in Life Sciences. This session explores how organizations run large-scale high performance computing (HPC) workloads using CycleCloud and HTCondor, with lessons learned from several real-world examples in Genomics, Molecular Modeling, Simulation, Proteomics and more.
Jason Stowe, CEO at Cycle Computing
Jason Stowe is a seasoned entrepreneur, and the founder and CEO of Cycle Computing, the leader in Utility HPC and Utility Supercomputing Software. Cycle has delivered proven, secure and flexible high performance computing (HPC) and data solutions since 2005. Cycle Computing products help clients maximize internal infrastructure and add computing power as research demands grow, as with the 10,000-core cluster for Genentech, the 30,000+ core cluster for a Top 5 Pharma and the 50,000+ core cluster for Schrodinger that were covered in the NY Times, Wall Street Journal, Wired, BusinessWeek, Bio-IT World and Forbes. Starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs, and government and academic institutions including JP Morgan Chase, The Hartford Insurance Group, Johnson & Johnson, Purdue University, Pfizer and Lockheed Martin. Jason attended Carnegie Mellon and Cornell Universities, and has volunteered and guest lectured for the Entrepreneurship program at Cornell's Johnson Business School.
"Sensemaking for Mobile Health," explores the possibilities of using data
from mobile electronic devices to monitor and manage health care.
Speaker Bio:
Deborah Estrin is currently on leave from her position as a Professor of Computer Science with a joint appointment in Electrical Engineering at UCLA, where she held the Jon Postel Chair in Computer Networks, and was Founding Director of the NSF-funded Center for Embedded Networked Sensing (CENS, 2001-2012). She has accepted a faculty position with the Computer Science Department at the new Cornell Tech campus in New York City, http://tech.cornell.edu. Estrin received her Ph.D. (1985) in Computer Science from the Massachusetts Institute of Technology, and her B.S. (1980) from U.C. Berkeley.
Estrin’s early research (conducted while on the Computer Science Department faculty at USC and the USC Information Sciences Institute) focused on the design of network and routing protocols for very large, global networks, including multicast routing protocols, self-configuring protocol mechanisms for scalability and robustness, and tools and methods for designing and studying large-scale networks. In the late 1990s, Professor Estrin began her work on embedded networked sensing systems, with an emphasis on environmental monitoring applications. Her most recent work focuses on participatory sensing systems, leveraging the location, activity, image, and user-contributed data streams increasingly available from mobile phones. Ongoing projects include Participatory Sensing for civic engagement and STEM education (http://mobilizingcs.org), and self-monitoring applications in support of health and wellness (http://openmhealth.org).
CS 638 [Software Engineering] students work together in large semester project teams. Join us any time from 2:30pm until 3:45pm to see live demos of:
Intramural Baseball Manager, proposed by David Kuehn, developed by Brian Anderson, Matt Beaty, David Kuehn, Colin Laska, and Kristie Stalberger
NexTrack, proposed by Ryan Riebling, developed by Aaron Bregger, William Justmann, Michael Landau, Ryan Riebling, Mikhail Skobov, and James Stefanich
Space Battle Game, proposed by Colin McKay, developed by Joseph Francke, Andrew Hermus, Pierce Johnson, Colin McKay, and Sam Olver
Tablet-Top RPG, proposed by Aaron Bartholomew, developed by Aaron Bartholomew, Jacob Laska, James Merrill, and Ahmad Faiz Abdull Rashid
Tuter: The Ultimate Tutor Finder, proposed by Sher Minn Chong, developed by Sher Minn Chong, Trever Johnson, Faiz Lurman, Josh Serbus, and Adam Thorson
UW–Madison Campus Tour Guide, proposed by Anousone Bounket, developed by Anousone Bounket, Peter Erickson, Emily Gerner, Vinodh Muthiah, and Ryan Shenk
These students have all worked very hard and accomplished a great deal in a short time. Please drop by at your convenience to see and celebrate their accomplishments!
Abstract:
The availability of high-speed flash solid-state devices (SSDs) has introduced a new tier into the memory hierarchy. SSDs have dramatically different properties than disks, yet are exposed in many systems as generic block devices. In this talk, I will present two systems that update the interface to these devices to better match their capabilities as a new memory tier.
First, I will talk about a system called FlashTier that uses flash SSDs as a cache in front of slower disks. In this work, we investigate the numerous differences between the interface offered by an SSD (a persistent block store) and the service it provides (caching data). I will present how we address these differences through new block-addressing and space-management techniques.
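To make the interface mismatch concrete, the following toy Python sketch shows what a cache built on a raw block device must manage on its own: a sparse map from disk block addresses to SSD blocks, plus a policy for reclaiming SSD space when it fills. The class and its LRU policy are purely illustrative; they are not FlashTier's block-addressing or space-management design.

# Toy illustration of why a raw block interface is a poor fit for caching:
# the cache must keep its own sparse map from disk block numbers to SSD
# blocks and decide what to evict when the SSD fills. NOT FlashTier's design.
from collections import OrderedDict

class ToySSDCache:
    def __init__(self, ssd_blocks):
        self.map = OrderedDict()            # disk block number -> SSD block number
        self.free = list(range(ssd_blocks)) # unused SSD blocks

    def lookup(self, disk_block):
        """Return the SSD block caching disk_block, or None on a miss."""
        ssd_block = self.map.get(disk_block)
        if ssd_block is not None:
            self.map.move_to_end(disk_block)   # track recency for eviction
        return ssd_block

    def insert(self, disk_block):
        """Cache disk_block, evicting the least recently used entry if full."""
        if disk_block in self.map:
            self.map.move_to_end(disk_block)
            return self.map[disk_block]
        if not self.free:
            _, reclaimed = self.map.popitem(last=False)   # evict LRU mapping
            self.free.append(reclaimed)
        ssd_block = self.free.pop()
        self.map[disk_block] = ssd_block
        return ssd_block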
Next, I will describe our work on extending main memory by virtualizing it with inexpensive flash storage. We find that there are several paging mechanisms in the core virtual memory subsystem of the Linux kernel, which have been optimized for the characteristics of disks. I will describe a new flash-virtual memory system called FlashVM that de-diskifies these mechanisms for improved performance and reliability with flash SSDs.
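As a loose, user-level analogy for the "optimized for disks" point (FlashVM itself changes kernel mechanisms, not application code), an application mapping a file that lives on flash can hint that disk-style read-ahead buys nothing. The sketch assumes Linux and Python 3.8+, and the function name is made up.

# User-level analogy only: the kernel's read-ahead amortizes disk seeks, but
# random reads on flash are cheap, so an application can hint that aggressive
# read-ahead is unnecessary. FlashVM works inside the kernel, not here.
import mmap

def map_without_readahead(path):
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
        # Advise the kernel that access will be random, curbing read-ahead
        # designed around disk seek costs. Requires Linux and Python 3.8+.
        mm.madvise(mmap.MADV_RANDOM)
        return mm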
Ph.D. Defense Committee Members:
Michael M. Swift (chair), UW-Madison
Andrea C. Arpaci-Dusseau, UW-Madison
Remzi H. Arpaci-Dusseau, UW-Madison
Mark D. Hill, UW-Madison
Arif Merchant, Google
This talk covers several graphics projects including cloth simulation, perceptually based tone mapping, and digital image/video forensics. These projects will be discussed in the context of their common underlying theme: simulation based on human perception and measurement. I will show how measured data and models of the human visual system can be used for more realistic image reproduction, how simulation can be used to take measurements of the real world and build more realistic cloth models, and how models of the world can be used to detect forgeries that otherwise fool human perception.
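For readers unfamiliar with tone mapping, the short Python sketch below applies a generic global operator: scale each luminance by the image's log-average, then compress with L/(1+L) (the classic Reinhard global curve). It is a textbook illustration of what tone mapping operates on, not the perceptually based method discussed in this talk.

# Minimal global tone-mapping sketch: compress high-dynamic-range luminance
# into [0, 1) with the classic L/(1+L) curve after scaling by the image's
# log-average luminance (Reinhard et al.'s global operator). Illustrative only.
import math

def tone_map(luminances, key=0.18, eps=1e-6):
    log_avg = math.exp(sum(math.log(eps + L) for L in luminances) / len(luminances))
    scaled = [key * L / log_avg for L in luminances]   # map scene to "middle grey"
    return [L / (1.0 + L) for L in scaled]             # compress into displayable range

# Example: a few HDR luminance samples spanning several orders of magnitude.
print(tone_map([0.01, 0.5, 3.0, 250.0]))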
Bio:
James F. O'Brien is a Professor of Computer Science at the University of California, Berkeley. His primary area of interest is Computer Animation, with an emphasis on generating realistic motion using physically based simulation and motion capture techniques. He has authored numerous papers on these topics. In addition to his research pursuits, Prof. O'Brien has worked with several game companies on integrating advanced simulation physics into game engines, and his methods for destruction modeling have been used in more than 15 feature films. He received his doctorate from the Georgia Institute of Technology in 2000, the same year he joined the Faculty at U.C. Berkeley. Professor O'Brien is a Sloan Fellow and ACM Distinguished Scientist, Technology Review selected him as one of their TR-100, and he has been awarded research grants from the Okawa and Hellman Foundations. He is currently serving as ACM SIGGRAPH Director at Large.
We describe an approach for synthesizing data representations for concurrent programs. Our compiler takes as input a program written using concurrent relations, and synthesizes a representation of the relations as sets of cooperating data structures, as well as the placement and acquisition of locks to synchronize concurrent access to those data structures. The resulting code is correct by construction: individual relational operations are implemented correctly, and the aggregate set of operations is serializable and deadlock free. The relational specification also permits a high-level optimizer to choose the best performing of many possible legal data representations and locking strategies, which we demonstrate with an experiment autotuning a graph benchmark.
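To ground what "sets of cooperating data structures" plus synthesized locking can look like, here is a hand-written Python sketch of one legal representation of a binary relation between processes and groups: two hash indexes kept mutually consistent under a single coarse lock. In the work described above, the compiler chooses the representation and the locking automatically; everything in this sketch, from the names to the coarse lock, is illustrative only.

# Hand-written illustration of a possible target of such a synthesis: a
# relation (process, group) exposed only through relational operations,
# represented as two cooperating hash indexes kept consistent under one lock.
import threading
from collections import defaultdict

class ProcessGroupRelation:
    def __init__(self):
        self._lock = threading.Lock()
        self._group_of = {}                  # process -> group    (functional index)
        self._members = defaultdict(set)     # group   -> processes (inverse index)

    def insert(self, process, group):
        with self._lock:                     # both indexes updated atomically
            old = self._group_of.get(process)
            if old is not None:
                self._members[old].discard(process)
            self._group_of[process] = group
            self._members[group].add(process)

    def group_of(self, process):
        with self._lock:
            return self._group_of.get(process)

    def members(self, group):
        with self._lock:
            return set(self._members[group])  # copy, so callers see a snapshot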
This is joint work with Alex Aiken and Peter Hawkins (Stanford), Kathleen Fisher (DARPA), and Martin Rinard (MIT). The work is part of Peter Hawkins's Ph.D. thesis (http://theory.stanford.edu/~hawkinsp/). Please also see the article about the work that appeared in the December 2012 issue of CACM.