In this talk, I will describe computational tools I helped develop for
analyzing and manipulating the backbone of macromolecular 3D structures, and
demonstrate that these tools support building better macromolecular
structures than current methods allow.
Noisy and missing data are prevalent in many real-world statistical estimation problems. Popular techniques for handling non-idealities in data, such as imputation and expectation-maximization, are often difficult to analyze theoretically or terminate in local optima of non-convex objectives; these problems are only exacerbated in high-dimensional settings. We present new methods for obtaining high-dimensional regression estimators in the presence of corrupted data, and provide theoretical guarantees for the statistical consistency of our methods.
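This is not the talk's actual estimator, but the flavor of the problem can be sketched. Under a missing-completely-at-random model with a known observation probability (both assumptions made purely for illustration), least squares on a zero-imputed design matrix is inconsistent when features are correlated, while a bias-corrected estimate of the second moments restores consistency:

```python
import numpy as np

# Illustrative sketch only: MCAR missingness with known observation
# probability p_obs -- an assumption for this example, not the talk's setting.
rng = np.random.default_rng(0)
n, d, p_obs = 20000, 10, 0.7

# Correlated design (equicorrelation 0.5) so that naive imputation is biased.
Sigma = 0.5 * np.ones((d, d)) + 0.5 * np.eye(d)
X = rng.standard_normal((n, d)) @ np.linalg.cholesky(Sigma).T
beta = np.ones(d)
y = X @ beta + 0.1 * rng.standard_normal(n)

# Each entry is observed independently with probability p_obs; a missing
# entry becomes 0 (equivalent to mean imputation here, since E[X] = 0).
mask = rng.random((n, d)) < p_obs
Z = np.where(mask, X, 0.0)

# Naive: ordinary least squares on the imputed design -- inconsistent.
beta_naive = np.linalg.lstsq(Z, y, rcond=None)[0]

# Corrected: unbiased estimates of X^T X / n and X^T y / n built from Z.
# Off-diagonal entries of Z^T Z / n shrink by p_obs^2, the diagonal by p_obs.
S = Z.T @ Z / n
S_hat = S / p_obs**2
np.fill_diagonal(S_hat, np.diag(S) / p_obs)
b_hat = Z.T @ y / (n * p_obs)
beta_corrected = np.linalg.solve(S_hat, b_hat)
```

The naive fit converges to a systematically wrong coefficient vector, whereas the moment correction removes the bias at the cost of a noisier surrogate covariance (which, in truly high-dimensional regimes, need not even be positive semidefinite; handling that is part of what makes the theory interesting).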
We live in a software-driven world. Software helps us communicate and collaborate; create art and music; and make discoveries in the biological, physical, and social sciences. Yet the growing demand for new software to solve new kinds of problems remains largely unmet. Programming is still hard, which limits both developer productivity and end-users' ability to program on their own.
GPGPUs are gaining traction as a vehicle for power-efficient, high-performance computing. Nevertheless, their von Neumann-based design leaves them susceptible to the model's key inefficiencies: the processor must fetch and decode each dynamic instruction instance, and all intermediate values of the computation must be transferred back and forth between the functional units and the register file.
Efficient memory sharing between CPU and GPU threads can greatly expand the effective set of GPGPU workloads. For increased programmability, this memory should be uniformly virtualized, which necessitates compatible address-translation support for GPU memory references. However, even a modest GPU might need hundreds of translations per cycle (6 CUs * 64 lanes/CU), with memory access patterns that favor throughput over locality.
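The parenthetical arithmetic can be made concrete. The page size, access width, and coalescing behavior below are illustrative assumptions, not properties of any particular GPU:

```python
# Back-of-the-envelope translation demand for the "modest GPU" above:
# 6 compute units, each issuing a memory instruction across 64 SIMT lanes.
cus, lanes = 6, 64
raw_requests_per_cycle = cus * lanes        # 384 addresses needing translation

# Coalescing lanes that hit the same page cuts demand, but only when
# accesses are contiguous. (Illustrative parameters, not measurements.)
page, access_size = 4096, 4                 # 4 KiB pages, 4-byte accesses
contiguous_pages = max(1, lanes * access_size // page)  # 64 contiguous 4-byte accesses fit in 1 page
strided_pages = lanes                       # worst case: every lane touches a different page

print(raw_requests_per_cycle)               # 384
print(cus * contiguous_pages)               # 6 translations/cycle, best case
print(cus * strided_pages)                  # 384 translations/cycle, worst case
```

The two extremes bracket the design space: a throughput-oriented access pattern sits near the strided end, which is why per-lane translation hardware scaled for the worst case is so expensive.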