CS/Psych-770 HCI Class Poster Session
All members of the department are invited to join a poster session featuring projects from the HCI class. Cookies and chocolate will be served. Abstracts for the posters appear below.
Effects of facial expression in video- and avatar-based computer-mediated communication
Erdem Kaya, Tomislav Pejsa, & Magdalena Rychlowska
The goal of this study is to explore collaborative behavior and rapport in different modes of computer-mediated communication (CMC). We tested whether the positive effects of smiles, well described in the literature, are present in video- and 3D-avatar-based CMC. A between-subjects laboratory experiment compared the effects of smiles shown in prerecorded and rendered video sequences of a human and an avatar, respectively, displayed during an iterated prisoner’s dilemma game. Self-reported and behavioral measures included the participant’s collaborative behavior, likability, and rapport. Human game partners were perceived as more likable than avatars. Smiles of the game partner predicted likability and rapport, but not collaboration. Importantly, these effects were significant only for male participants. Other effects of gender were observed: female participants reported higher likability for both types of game partners and smiled for a longer time, while male participants were more collaborative with human than with avatar game partners.
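For readers unfamiliar with the iterated prisoner's dilemma, the game's scoring logic can be sketched in a few lines. The payoff values below (T=5, R=3, P=1, S=0) are the standard illustrative choices from the game-theory literature, not necessarily the values used in this study:

```python
# Payoff matrix: (my_score, partner_score) for each (my_move, partner_move).
# "C" = cooperate, "D" = defect. Values are illustrative defaults.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: punishment P
}

def play_iterated(moves_a, moves_b):
    """Total both players' scores over a sequence of rounds."""
    total_a = total_b = 0
    for a, b in zip(moves_a, moves_b):
        pa, pb = PAYOFFS[(a, b)]
        total_a += pa
        total_b += pb
    return total_a, total_b

scores = play_iterated(["C", "C", "D"], ["C", "D", "D"])
```

Because mutual cooperation beats mutual defection over repeated rounds, a partner's cooperation rate serves as the behavioral measure of collaboration.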
Comparing Administration Methods of Psychological Instruments
Tony McDonald & Lauren Meyer
Most psychological instruments were designed and normed for paper-and-pencil administration, but many researchers are now using computers instead. Studies indicate there may be differences between these modes. How does the mode of administration affect instruments’ psychometric properties? How do participants perceive the experience? Twenty-seven UW-Madison undergraduate and graduate students completed four instruments (Social Responsiveness Scale, Interpersonal Trust Scale, modified NASA Task Load Index, and an experience questionnaire) in a 2 (mode: computer vs. paper) by 2 (location: lab vs. home) between-subjects design. Data from the computer conditions (n = 15) were less reliable than data from the paper conditions (n = 12), IPTS Cronbach’s α = 0.39 vs. 0.76. Participants took less time in the computer conditions, p = 0.004, and there was an interaction with location, p = 0.01. Participants in the computer conditions correctly judged the computer administration to be faster, p = 0.04. Participants preferred surveys on paper and in the lab, p < 0.001.
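The reliability comparison above uses Cronbach's α, which estimates internal consistency from the ratio of summed per-item variances to the variance of total scores. A minimal sketch of the standard formula (not the authors' analysis code):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item vars) / var(totals)).

    item_scores: one inner list per item, aligned across respondents,
    e.g. [[item1 scores...], [item2 scores...], ...].
    """
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def var(xs):                  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in item_scores)
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_vars / var(totals))
```

When items move together across respondents, total-score variance dominates the summed item variances and α approaches 1; an α of 0.39, as in the computer condition, indicates weak internal consistency.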
Do Only Humans Cheat? Exploring the Role of Cheating in a Video Game
J.J. De Simone, Tessa Verbruggen, & Li-Hsiang Kuo
In sports and board games, an opponent who cheats is typically met with disdain, anger, and disengagement from the other players. However, little work has addressed the role of AI cheating in video games. Participants played either a cheating or a non-cheating version of a modified open-source tower defense game. Results indicate that when an AI competitor cheats, players perceive the opponent as more human. Cheating also increases player aggravation, but does not affect presence or enjoyment of the experience. Game designers can therefore integrate subtle levels of cheating into AI opponents without provoking substantially negative responses from players. The data indicate that minor levels of cheating might also increase player engagement with video games.
Effects of robot's feedback under time pressure
Heemoon Chae, Mai Lee Chang, & Nisha Kiran
During human-robot interaction under task time pressure, a robot’s feedback must be carefully designed, since humans are under more stress and are less tolerant of the robot’s mistakes. Thus far, no research has investigated the effects of a robot’s feedback under time pressure. Our study examines the effects of a robot’s feedback on performance, usefulness, and satisfaction under task time pressure. Participants were asked to find words in a word-search puzzle. The independent variables were feedback (none vs. verbal) and time pressure (none vs. high); feedback included both compliments and hints. The dependent variables were usefulness, satisfaction, workload, and words found per minute. The results show that when the robot gave verbal feedback, participants found it useful, helpful, and alertness-raising.
The Effects of Text and Robotic Agents on Deception Detection
Wesley Miller & Michael Seaholm
When people attempt to conceal the truth from others, they typically exhibit what are known as deception cues, identifiable indications that the person in question is speaking untruthfully. In this study, we were interested in how well an individual can separate truth from falsehood when a small subset of these deception cues is exhibited not only by a human, but also by robotic and text-based agents. Participants interacted with recordings of each agent in a predetermined order, asking questions from a preset list, and rated each response for perceived truthfulness. After interacting with all three agents, participants rated the overall trustworthiness of each agent. The results of our study indicate that participants detected deception more reliably from the human agent and that statements given by the text-based agent were consistently rated as more truthful than the statements of the other agents.
An Adaptive Autonomous System for Second Language Acquisition
Young-Bum Kim, Pallavika Ramaswamy, & Soyoun Kim
Second language acquisition poses a unique challenge in that it combines elements of rote memory with skill integration and creative production. To address these challenges, we harness the power of frequent self-testing by designing a Computer-Aided Language Learning (CALL) system. We explore whether people can learn a second language with our CALL system by evaluating learner responses to a language translation task. Our system can automatically assess responses and adapt the questions to the level of the learner. We compare the efficiency of our adaptive CALL system with that of a one-size-fits-all approach and examine how it affects the learning curves of people with different knowledge bases. To this end, we show how we can (1) estimate the semantic and syntactic difficulty of a sentence, (2) autonomously grade the response, and (3) select problems according to the assessed level of the learner. In an uncontrolled online experiment, we asked participants to translate English sentences into Korean. A third of the participants were given randomly chosen questions and assessed manually; another third were given randomly chosen questions but were assessed with our autonomous assessment system; and a final third were given questions adaptively chosen by our system. With this information, we evaluate how the adaptive autonomous assessment framework affected the participants’ task performance.
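The adaptive loop in step (3) can be sketched abstractly: pick the question whose estimated difficulty best matches the learner's current level, then nudge the level estimate up or down based on the graded response. The 0-to-1 difficulty scale and the fixed step size below are hypothetical simplifications, not the abstract's actual model:

```python
def select_question(questions, learner_level):
    """Pick the question whose difficulty is closest to the learner's level.

    questions: list of (question_text, difficulty) pairs; difficulty and
    learner_level are assumed to share a hypothetical 0-1 scale.
    """
    return min(questions, key=lambda q: abs(q[1] - learner_level))

def update_level(level, correct, step=0.1):
    """Nudge the level estimate after a graded response (simple heuristic)."""
    if correct:
        return min(1.0, level + step)
    return max(0.0, level - step)
```

A session would alternate the two calls: select a question, grade the translation, update the level, and repeat, so that question difficulty tracks the learner's demonstrated ability.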
