MegaTGIF Quiz Bowl

Are you a curious person? Do you like trivia and know random stuff about technology, movies, TV series, games, books, etc.? If so, this is the perfect place for you. Pit your knowledge against the best at MegaTGIF's Quiz Bowl and enjoy a fun evening of food, trivia, and good times. Undergraduates, graduate students, faculty, and staff are all invited.

Computational Phenotyping in Mental Health

Mental health research is at a critical moment. According to the World Health Organization, suicide rates have risen by 60% worldwide over the last 45 years, and depression is the leading cause of disability globally. Yet experiments have so far provided largely incomplete explanations of mental disorders, and this slow progress has led many health organizations to change their approach to mental health research.

PlinyCompute: Connecting Programming, Computation and Storage for Analytics at Speed

Abstract: Users want Big Data analytics systems that provide interactive-speed ad-hoc query processing and short training times for machine learning, but existing systems often fall short of these performance goals. In this talk, I identify two reasons for this. First, such systems are heavily layered, with many separate software systems working together: a distributed file system, an in-memory file system, the JVM, and the computational engine itself.
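
As a rough illustration of the layering cost the abstract describes (and only an analogy, not PlinyCompute's actual design), the toy micro-benchmark below runs the same reduction twice: once on data that must cross a serialize/deserialize boundary between layers, and once directly on the contiguous buffer the engine already holds.

```python
# A toy illustration of the layering cost the abstract describes (not
# PlinyCompute code): the same reduction runs once on data that must
# cross a serialize/deserialize boundary between layers, and once
# directly on the contiguous buffer the engine already holds.
import json
import time
import numpy as np

values = np.random.default_rng(0).normal(size=500_000)

# Layered path: one layer serializes, the next re-parses into objects.
blob = json.dumps(values.tolist())
t0 = time.perf_counter()
total_layered = sum(json.loads(blob))
t_layered = time.perf_counter() - t0

# Direct path: compute over the raw in-memory buffer, no hand-off cost.
t0 = time.perf_counter()
total_direct = values.sum()
t_direct = time.perf_counter() - t0

print(f"layered: {t_layered:.4f}s   direct: {t_direct:.4f}s")
```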

Data Poisoning Attacks in Contextual Bandits

We study offline data poisoning attacks on contextual bandits, a class of reinforcement learning problems with important applications in online recommendation and adaptive medical treatment, among others. We provide a general attack framework based on convex optimization and show that, by slightly manipulating rewards in the data, an attacker can force the bandit algorithm to pull a target arm for a target contextual vector, where both the target arm and the target contextual vector are chosen by the attacker. That is, the attacker can hijack the behavior of the contextual bandit.
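
The framework is more general, but in the special case of a learner that fits per-arm ridge regressions the attack reduces to a convex program: the estimates are linear in the rewards, so the attacker can minimize the size of the reward perturbation subject to the target arm winning at the target context. The sketch below shows that special case with cvxpy; the toy data, margin, and regularization parameters are illustrative assumptions, not details from the abstract.

```python
# A minimal sketch of a reward-poisoning attack in the special case of a
# linear contextual bandit whose learner fits per-arm ridge regressions.
# The toy data and the margin/regularization parameters are illustrative
# assumptions, not details from the abstract.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n_arms, n_per_arm = 5, 3, 40   # context dimension, arms, rows per arm
lam, margin = 1.0, 0.1            # ridge penalty, attack margin

# Historical (offline) data: contexts X[a] and rewards y[a] for each arm.
X = [rng.normal(size=(n_per_arm, d)) for _ in range(n_arms)]
y = [rng.normal(size=n_per_arm) for _ in range(n_arms)]

x_star = rng.normal(size=d)   # target contextual vector, attacker-chosen
target_arm = 0                # target arm, attacker-chosen

# The ridge estimate theta_a = (X_a^T X_a + lam*I)^{-1} X_a^T y_a is
# linear in the rewards, so the attack below is a convex program.
M = [np.linalg.inv(Xa.T @ Xa + lam * np.eye(d)) @ Xa.T for Xa in X]

delta = [cp.Variable(n_per_arm) for _ in range(n_arms)]    # perturbations
theta = [M[a] @ (y[a] + delta[a]) for a in range(n_arms)]  # poisoned fits

# Force the target arm to look best at x_star while perturbing rewards
# as little as possible (in squared L2 norm).
constraints = [x_star @ theta[target_arm] >= x_star @ theta[a] + margin
               for a in range(n_arms) if a != target_arm]
objective = cp.Minimize(sum(cp.sum_squares(dl) for dl in delta))
cp.Problem(objective, constraints).solve()

poisoned_rewards = [y[a] + delta[a].value for a in range(n_arms)]
# Retrained on poisoned_rewards, the bandit's greedy choice at x_star is
# the attacker's target arm.
```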

Training Set Camouflage

We introduce a form of steganography in the domain of machine learning, which we call training set camouflage. Imagine Alice has a training set for an illicit machine learning classification task. Alice wants Bob (a machine learning system) to learn the task, but sending either the training set or the trained model to Bob could raise suspicion if the communication is monitored. Training set camouflage allows Alice to compute a second training set for a completely different, and seemingly benign, classification task; Bob, training on this camouflaged set, nonetheless obtains a model that performs well on the secret task.
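
A toy sketch of the idea: Alice searches a pool of benign-task examples for a subset on which a standard learner, trained only on that subset, happens to perform well on her secret task. The random-search strategy and logistic-regression learner below are illustrative simplifications, not the paper's actual construction.

```python
# A toy sketch of training set camouflage: search a pool of benign-task
# examples for a subset that, when Bob trains on it, also solves Alice's
# secret binary task. The random search and logistic-regression learner
# are illustrative simplifications, not the paper's actual construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def camouflage(benign_X, benign_y, secret_X, secret_y,
               subset_size=100, n_trials=500):
    """Return the benign subset whose trained model best fits the secret task."""
    best_idx, best_acc = None, -1.0
    for _ in range(n_trials):
        idx = rng.choice(len(benign_X), size=subset_size, replace=False)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(benign_X[idx], benign_y[idx])   # Bob sees only benign data
        acc = clf.score(secret_X, secret_y)     # Alice scores the secret task
        if acc > best_acc:
            best_idx, best_acc = idx, acc
    return best_idx, best_acc

# Toy stand-ins: both tasks are binary problems over the same features.
benign_X = rng.normal(size=(2000, 20)); benign_y = rng.integers(0, 2, 2000)
secret_X = rng.normal(size=(300, 20));  secret_y = rng.integers(0, 2, 300)
idx, acc = camouflage(benign_X, benign_y, secret_X, secret_y)
print(f"best camouflage subset reaches secret-task accuracy {acc:.2f}")
```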
