Documentation

UW Connect

CS736 Project Poster Session

Room: 
2310 CS

The CS736 (Advanced Operating Systems) class will present posters of their final projects.  Come hear about this great collection of projects:

  1. Improved APIs and Programming Models for Deterministic Multithreading, Aarti Basant, James Paton, Seth Pollen

    Abstract: Dthreads is a deterministic multithreading library that conforms to the pthreads API and programming model. However, the pthreads API was originally designed to handle nondeterministic threading, and using it to control a deterministic library masks the actual semantics and introduces needless complexity. We propose a new, simplified API for deterministic multithreading, consisting of just two calls that lock and unlock a globally shared mutex. We show that this new API, when combined with the semantics of Dthreads, is sufficiently powerful for general use and can serve as the basis for more complex synchronization operations, such as condition variables and barriers.

    We then evaluate the new API by incorporating its use into the PARSEC benchmark suite. We present benchmark data to support the conclusion that our API changes do not negatively impact performance. In addition, we present a case study of a PARSEC program that is broken by linking to the Dthreads library. We show that a Dthreads-aware port of the program can fix the problem, albeit at a cost to performance. We conclude by outlining avenues for future work.
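    The two-call API itself is not published with this abstract, so the following is only an illustrative sketch: a globally shared mutex behind hypothetical dt_lock/dt_unlock calls (here backed by an ordinary Python threading.Lock rather than Dthreads' deterministic runtime), plus a barrier built from nothing but those two calls and shared state.

```python
import threading

# The entire "API": two calls on one globally shared mutex.
_global_mutex = threading.Lock()

def dt_lock():
    _global_mutex.acquire()

def dt_unlock():
    _global_mutex.release()

class Barrier:
    """A reusable barrier built only from dt_lock/dt_unlock plus shared state."""

    def __init__(self, n):
        self.n = n            # number of threads that must arrive
        self.count = 0        # arrivals in the current round
        self.generation = 0   # bumped by the last thread of each round

    def wait(self):
        dt_lock()
        gen = self.generation
        self.count += 1
        if self.count == self.n:       # last arriver releases everyone
            self.count = 0
            self.generation += 1
        dt_unlock()
        # Spin until the last arriving thread advances the generation.
        while True:
            dt_lock()
            released = self.generation != gen
            dt_unlock()
            if released:
                return
```

    Under a deterministic runtime the spin loop would be replaced by the library's scheduling, but the construction shows that the two calls suffice to express a barrier.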
     
  2. Race-Guided Random Testing for Multithreaded Applications, Dongdong Deng, Xiaoyang Gao

    Abstract: Exposing concurrency bugs in multithreaded applications is difficult because of the large interleaving space. Moreover, buggy interleavings may occur with extremely low probability, making repeated execution of an application an inefficient way to trigger concurrency bugs. With our race detector, we can focus on the instruction pairs directly related to data races, reducing the interleaving space significantly. To expose concurrency bugs, we suspend the execution of an application by inserting delays of random length at randomly selected instructions. We analyzed the causes of concurrency bugs and buggy interleavings in four multithreaded applications, and our experimental results show that we can discover data races and expose concurrency bugs effectively in these applications.
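    The delay-injection idea can be sketched as follows; the real system works on binary instructions flagged by the authors' race detector, whereas this Python fragment uses a hypothetical racy_access probe wrapped around a source-level access.

```python
import random
import time

def racy_access(action, inject_prob=0.5, max_delay_ms=1.0):
    """Hypothetical probe placed at an instruction flagged by a race
    detector: with some probability, suspend the thread for a random-length
    delay before performing the access, perturbing the interleaving."""
    if random.random() < inject_prob:
        time.sleep(random.uniform(0.0, max_delay_ms / 1000.0))
    return action()

# Example target: a classic check-then-act race on a shared counter. The
# probe widens the window between the racy read and the racy write, making
# the lost-update interleaving far more likely to occur during testing.
counter = {"value": 0}

def unsafe_increment():
    observed = racy_access(lambda: counter["value"])   # racy read
    counter["value"] = observed + 1                    # racy write
```

    Because delays go only where the detector found racing instruction pairs, far fewer perturbed runs are needed than with fully random schedule fuzzing.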
     
  3. The Pedigree of Linux, Zev Weiss, Matthew Wolfe

    Abstract: We have performed a source-level study of the variations between different versions of the Linux kernel as packaged with six modern Linux distributions.  Our goal was to provide a summary of the primary differences that have emerged between operating systems all nominally referred to as "Linux".

    We performed a detailed examination of the source code modifications made in each of the respective kernels, six in total. We report our findings by presenting a broad sampling of new features added to the Linux kernel by each distribution, features that are often recognizably relevant to that distribution's intended use. Ultimately, we find significant differences in the feature sets offered by the various versions of the kernel, but we see no evidence of divergence to an extent that would break backward compatibility or cause compatibility problems for portable programs (so long as they were not written specifically to use a particular new feature offered by a certain distribution).
     
  4. Tree-Based Density Clustering using Graphics Processors, Evan Samanas, Benjamin Welton

    Abstract: Tree-Based Overlay Networks (TBONs) have proven to be a powerful and scalable communication model for building tools for massively parallel and distributed systems, but the model is less proven for building applications. As a case study, we present software that uses the TBON model to efficiently execute a popular clustering algorithm, usually run on a single node, while utilizing Graphics Processing Units (GPUs) in a TBON. Our non-GPU implementation provides up to a 23x speedup at 32 nodes over the algorithm running on a single node, while a non-TBON solution achieves a 20x speedup before its performance declines. Using a single GPU to run the same clustering algorithm also yields a 7x-23x speedup. We then give an example use of clustering at large scale by attempting to track the movement and outbreak of influenza in the United States.
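    As an illustration of the communication model (not the authors' code), here is a minimal sketch of a TBON-style reduction: per-node results are merged level by level up a tree of hypothetical fanout, so no single node ever combines more than a handful of partial results at once.

```python
from functools import reduce

def tbon_reduce(leaf_results, merge, fanout=2):
    """Combine per-node results up a tree-based overlay network: at each
    level, groups of `fanout` children are merged at their parent, so the
    root sees only a few partial results instead of every leaf's output."""
    level = list(leaf_results)
    while len(level) > 1:
        level = [reduce(merge, level[i:i + fanout])
                 for i in range(0, len(level), fanout)]
    return level[0]

# Hypothetical per-node cluster summaries (cluster label -> point count),
# merged pairwise up the tree into one global summary.
merge_counts = lambda a, b: {k: a.get(k, 0) + b.get(k, 0)
                             for k in set(a) | set(b)}
summary = tbon_reduce([{"A": 2}, {"A": 1, "B": 3}, {"B": 1}], merge_counts)
```

    The same shape applies whether each leaf is a CPU core or a GPU-equipped node; only the per-leaf computation changes.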
     
  5. Implementing a Data Race Detector with the Dyninst API, Xiaozhu Meng, Luke Pillar, Nairan Zhang

    Abstract: Data races are difficult to debug manually, so a tool that can detect data races automatically is quite useful when writing multithreaded programs. In this paper, we describe the implementation of a data race detector using the Dyninst API, an efficient binary-instrumentation tool. Our detector implements a modification of the vector clock algorithm, based on the “happened-before” partial-ordering relationship. We implement this algorithm for the pthread library by instrumenting the pthread synchronization primitives. In our tests, our detector reports data races successfully. We conduct a comprehensive study of the performance and scalability of our detector and give an analysis of its performance bottlenecks. We believe that, after further optimization, our detector has the potential to be used on real programs.
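    The core of a vector clock detector can be sketched in a few lines; this toy model (not the authors' Dyninst-instrumented implementation) treats lock release/acquire pairs as the happened-before edges and flags two writes to the same address when neither is ordered before the other.

```python
class RaceDetector:
    """Toy vector clock race detector: lock release/acquire pairs establish
    happened-before edges; two writes to the same address race unless one
    happened-before the other."""

    def __init__(self, nthreads):
        self.clock = [[0] * nthreads for _ in range(nthreads)]
        self.lock_clock = {}   # mutex -> clock snapshot at its last release
        self.last_write = {}   # addr -> (writer tid, writer's clock copy)

    def acquire(self, tid, mutex):
        # Join the releasing thread's snapshot into the acquiring thread.
        if mutex in self.lock_clock:
            self.clock[tid] = [max(a, b) for a, b in
                               zip(self.clock[tid], self.lock_clock[mutex])]

    def release(self, tid, mutex):
        self.clock[tid][tid] += 1
        self.lock_clock[mutex] = list(self.clock[tid])

    def write(self, tid, addr):
        """Record a write to addr; return a list of detected races."""
        self.clock[tid][tid] += 1
        races = []
        if addr in self.last_write:
            other, oclock = self.last_write[addr]
            # The earlier write happened-before this one iff its owner's
            # component reached the current thread via synchronization.
            if other != tid and oclock[other] > self.clock[tid][other]:
                races.append((other, tid, addr))
        self.last_write[addr] = (tid, list(self.clock[tid]))
        return races
```

    A binary-instrumentation version intercepts pthread_mutex_lock/unlock and memory accesses to generate exactly these acquire/release/write events.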
     
  6. A procfs Interface for Debugging in Linux, Srinivas Govindan, Sathya Gunasekar

    Abstract: Many Unix-based operating systems provide support for debugging through their procfs interface. Linux, however, provides debugging support through the ptrace system call, while exposing processes through its procfs interface only to a limited extent. We extend the current procfs interface in Linux to support all the debugging features provided by ptrace, and we provide additional features that enable a debugger to perform better. Debuggers can read and write specific files in /proc to perform debugging tasks. Our debugging interface can read and write the process address space and registers, and it supports tracing only those system calls that are explicitly of interest to the debugger. We also enable the debugger to choose not to be notified about specific signals delivered to a traced process. Our simple debugging interface performs as well as ptrace for normal debugging and can perform better while tracing system calls. It also provides much faster reads and writes to the address space of a process than ptrace, which allows access only one word at a time.
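    The authors' extended interface is not shown here, but stock Linux already exposes bulk memory access through /proc/&lt;pid&gt;/mem, which illustrates the speed argument against ptrace's one-word-at-a-time PTRACE_PEEKDATA. A minimal, Linux-only sketch, reading the current process's own memory:

```python
import ctypes
import os

def read_mem(pid, addr, length):
    """Bulk-read a process's address space via /proc/<pid>/mem; achieving
    the same with ptrace's PTRACE_PEEKDATA would take one system call per
    machine word."""
    fd = os.open(f"/proc/{pid}/mem", os.O_RDONLY)
    try:
        return os.pread(fd, length, addr)
    finally:
        os.close(fd)

# Demonstrate on our own process: place a known string in a buffer,
# then read it back through procfs by raw address.
buf = ctypes.create_string_buffer(b"hello via procfs")
data = read_mem(os.getpid(), ctypes.addressof(buf), len(buf.value))
```

    Reading another process's /proc/&lt;pid&gt;/mem requires the usual ptrace attach permissions; reading one's own works without attaching.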
     
  7. Debugging support through /proc file system in Linux, Victor Bittorf, Igor Canadi

    Abstract: Using the virtual file system in Linux, we implemented debugger primitives through the /proc file system. Our new interface is intended as a replacement for the existing ptrace system call. We discuss the differences and advantages of our model. We created a proctrace library to semi-transparently emulate ptrace using our interface. While we cannot emulate ptrace perfectly, we had some success in porting strace. We also created our own tracer, itrace, which can run simultaneously with gdb.
     
  8. Reverse Engineering of Data Structures using Dyninst, Eric Lederer, Srividya Doss

    Abstract: Knowledge of a program's data structures, along with their syntactic and semantic definitions, can be highly valuable in a variety of forensic and security applications. In this paper, we attempt to infer rich information about the types of these data structures from only the binary executable of a program. We first instrument the binary with the Dyninst API so that we can monitor data flow across memory locations and registers when the instrumented binary is executed. At runtime, each memory access is tracked by a shadow array, and we construct a graph of dependencies between tracked memory locations. We use the REWARDS [1] dynamic analysis technique to automatically reverse engineer data structures from this information by propagating type information across these dependencies between memory locations. The key to type resolution is the execution of a type-revealing instruction called a “type sink,” typically a system call or standard library function with well-defined parameter and return type information. We have developed a prototype tool, Pumpkin, which outputs a visualization of the data structures present in the binary's memory address space and the layout of data structures within the binary's custom function stack frames.
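    The REWARDS-style propagation step can be sketched abstractly; the graph and type names below are hypothetical, standing in for the dependency graph such a tool builds from shadow-memory tracking.

```python
def propagate_types(flows, sinks):
    """Propagate types revealed at "type sinks" across a data-flow graph.
    `flows` maps each location to the locations its value flowed to or from;
    `sinks` maps type-sink operands (e.g. a system call argument with a
    known type) to their types. Returns the inferred type for every
    location reachable from a sink along data-flow edges."""
    types = dict(sinks)        # start from what the sinks reveal
    worklist = list(sinks)
    while worklist:
        loc = worklist.pop()
        for nbr in flows.get(loc, ()):
            if nbr not in types:           # first type to arrive wins
                types[nbr] = types[loc]
                worklist.append(nbr)
    return types

# Hypothetical example: a register passed to fopen's caller flows through
# two stack slots, so all three inherit the sink-revealed FILE* type.
flows = {"rax": ["var_8"], "var_8": ["rax", "var_16"]}
inferred = propagate_types(flows, {"rax": "FILE*"})
```

    A full implementation must also resolve conflicts when two sinks disagree; this sketch simply keeps the first type to reach each location.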
     

Refreshments will be served.

Event Date:
Monday, December 19, 2011 - 10:00am - 12:00pm (ended)