Time: 10:30 Monday May 13
Place: KWII room 1110
Sociality is one of the most fundamental aspects of being human. The key to sociality is coordination, that is, the bringing of people "into a common action, movement or condition" . Coordination is, at base, how social creatures get social things done in the world. Being social creatures, we engage in highly coordinative activities in everyday life: two girls play hopscotch together, a group of musicians plays jazz in a jam session, and a father teaches a son how to ride a bicycle. Even mundane actions such as greeting someone, answering a phone call, and asking permission to pose a question by saying "Can I ask you a question?" are complex and intricate. Actors not only need to plan and perform situated actions, but also need to process the responding actions, even unforeseen ones, from the other party in real time and adjust their own subsequent actions. Yet we expertly coordinate with each other in performing highly intricate coordinative actions.
In this work, I look at how people coordinate joint activities at the moment of interaction and aim to unveil a range of coordinative issues, using "methodologies and approaches that fundamentally question the mainstream frameworks that define what counts as knowledge" (p.2, ) in the field of Computer Supported Cooperative Work (CSCW). To investigate computer-mediated interactions among co-located people, I examine the different interactional choices people make in the course of carrying out their joint activities, and the consequences of those choices.
By investigating co-located groups as they played a collaborative, problem-solving game using distributed technologies in experimental settings, I (1) provide critical case reports which question and challenge undiscussed, often taken-for-granted assumptions about face-to-face interactions and coordination, and (2) tie the observations to the creation of higher-level constructs which, in turn, can affect subsequent design choices.
More specifically, I ran two studies to look at how co-located people coordinate and manage their attention, tasks at hand, and joint activities in an experimental setting. I asked triads to work on a Sudoku puzzle collectively as a team. I varied the support for deictic mechanisms in the software as well as the form factor of the mediating technology.
My research findings show that:
1. different tools support different deictic behaviors. Explicit support for pointing is desirable for complex reference tasks, but may not be needed for simpler ones. On the other hand, users without sophisticated explicit support may give up the attempt to engage in complex reference.
2. talk is diagnostic of user satisfaction but lack of talk is not diagnostic of dissatisfaction. Therefore, designers must be careful in their use of talk as a measurement of collaboration.
3. the more people talk about complex relationships in the puzzle, the greater their increase in positive emotion. Either engaging with the problem at hand is rewarding, or having the ability to engage with the problem effectively enough to speak about it is rewarding.
4. amount of talk is related to form factor. People in both computer conditions talked less about the specifics of the game board than people in the paper condition, but only people in the laptop condition experienced a significant decrease in positive emotion.
5. different mediating technologies afford different types of non-response situations. The most common occurrences of non-responses were precipitated by speakers talking to themselves in the computer conditions. Participants did not talk to themselves much in the paper condition.
Differences in technology form factors may influence people's behaviors and emotions differently. These findings represent a portrait of how different technologies provide different interactional possibilities for people.
With my quantitative and qualitative analyses I do not make bold and futile claims such as "using a highlighter tool will make users collaborate more efficiently," or "making people talk more will make the group perform better." I, instead, illustrate the interactional choices people made in the presence of given technological conditions and how their choices eventuated in situ.
I then propose processlessness as an idea for preparing designs that are open to multiple interactional possibilities, and nudgers as an idea for enabling and aiding users to create and design their own situated experiences.
Title: Design and Evaluation of a Web-Based Programming Tool to Improve the Introductory Computer Science Experience
Time: Tuesday 5/7, 10:00am
Place: 1110 KWII
Introductory computer science courses can be notoriously difficult for students, especially those outside of the major. There are many reasons for this, but based on our experience, the programming software itself may play a significant role. To investigate this issue, we have developed Pythy, a web-based programming environment which allows students to write, execute, and test programming assignments from within the familiar interface of a web browser. In this work, we discuss various aspects of Pythy in detail, including the rationale behind its design, the system architecture on which it is built, and the various functions offered by the software. Next, we discuss an evaluation of Pythy's effectiveness conducted during a programming course for non-CS majors offered at Virginia Tech, comparing responses from Pythy users to responses from users of a different software solution in another programming course. The results of these surveys suggest that Pythy was successful in several target areas, including making it easier to get started with programming and providing feedback about program behavior. We then discuss an analysis of access log data from Pythy itself, revealing details about how students used the system. Finally, we conclude with a summary of key contributions and suggest some potential future directions for the system.
CHCI Ph.D. student Bireswar Laha was recently awarded a prestigious IBM Ph.D. Fellowship. Laha, who works with Professor Doug Bowman in the 3D Interaction Group, studies the use of virtual reality and 3D user interfaces in the analysis of volume datasets. Congratulations, Bireswar!
Full story on the College of Engineering website.
Title: Supporting Learning through Spatial Information Presentations in Virtual Environments
Committee: Doug Bowman (Chair), Richard Mayer, Chris North, Francis Quek, Tonya Smith-Jackson
Time and Place: 1:00pm Monday, April 22, in KWII 1110
Though many researchers have suggested that 3D virtual environments (VEs) could provide advantages for conceptual learning, few studies have attempted to evaluate the validity of this claim. A wide variety of educational VEs have been developed, but little empirical evidence exists to help researchers and educators determine the effectiveness of these applications. Additional evidence is needed in order to decide whether VEs should be used to aid conceptual learning. Furthermore, if there is evidence that VEs can support learning, developers and researchers will still need to understand how to design effective educational applications. While many educational VEs share the challenge of providing learners with information within 3D spaces, few researchers have investigated what approaches are used to help learn new information from 3D spatial representations. It is not understood how well learners can take advantage of 3D layouts to help understand information. Additionally, although complex arrangements of information within 3D space can potentially allow for large amounts of information to be presented within a VE, accessing this information can become more difficult due to the increased navigational challenges.
Complicating these issues are details regarding the display types and interaction devices used for educational applications. Compared to desktop displays, more immersive VE systems often provide display features (e.g., stereoscopy, increased field of view) that support improved perception and understanding of spatial information. Additionally, immersive VEs often allow more familiar, natural interaction methods (e.g., physical walking or rotation of the head and body) to control viewing within the virtual space. It is unknown how these features interact with the types of spatial information presentations to affect learning.
The research presented in this dissertation investigates these issues in order to further the knowledge of how to design VEs to support learning. The research includes six studies (five empirical experiments and one case study) designed to investigate how spatial information presentations affect learning effectiveness and learner strategies. This investigation includes consideration for the complexity of spatial information layouts, the features of display systems that could affect the effectiveness of spatial strategies, and the degree of navigational control for accessing information. Based on the results of these studies, we created a set of design guidelines for developing VEs for learning-related activities. By considering factors of virtual information presentation, as well as those based on the display-systems, our guidelines support design decisions for both the software and hardware required for creating effective educational VEs.
The Human-Centered Design (HCD) program starting up this fall is an interdisciplinary program combining expertise and approaches from the arts/humanities and the engineering/technology perspectives. The program will also offer a certificate in HCD.
This fall, two new courses will be offered:
1) ART 5524 TS: Human Centered Design T/TH 1500-1800 taught by Troy Abel
2) STS 6614: Advanced Topics in Technology Studies (Origins of Innovation) taught by Matt Wisnioski
The Interaction Design Foundation offers a useful list of conferences that apply to HCI for the current year with submission dates included. Links below are the interactive and printer-friendly versions.
CSCW and DIS aren't listed yet, but should be added soon. The page is community driven, so check back often.
ARC Visual Computing
The VT Visionarium seeks talented and motivated candidates to join our team innovating in information and scientific visualization with virtual environment technology. As an open lab for the campus community with world-class infrastructure, the Visionarium is a dynamic learning and research environment. We work with faculty research groups and projects from around campus and empower them with visualization solutions. As such, there are many opportunities for interdisciplinary collaboration, applied visualization development, experimental research, and publication, especially in areas touched by high-performance visualization. The ideal candidate will have a background in some 3D graphics tools as well as web and multimedia technologies and usability engineering. We value good communication skills as evidenced by writings and presentations. Typical tasks include: exercising and automating pipelines for data processing and visualization for a domain problem; testing and documenting the scalability of clusters and remote rendering setups; and integrating new input devices and displays.
Interactive 3D Training Platforms
This GRA will design, develop, and evaluate a test bed platform for 3D multimodal training. Specifically, the project seeks to take advantage of new consumer technologies and gesture recognition frameworks in order to support the acquisition and transfer of maintenance and repair skills. Virtual training platforms can provide the advantages of lower cost, greater safety, and increased accessibility; however, task fidelity, tracker accuracy, and the abstractions and formalisms describing the procedure can all impact the system's effectiveness. This GRA will work in a VT project team that includes a set of 3D modelers. This is a US Army project and all content is unclassified.
Web3D Geospatial Services
This GRA will develop and evaluate new methods of geodata processing and delivery, including open international standards (OGC, Web3D) and open source software (e.g. Geoserver). This project will leverage public geospatial data in a POSTGIS database, dynamically delivering 3D portrayals of geospatial data on demand. These interactive 3D views can be delivered to standalone clients (e.g. X3D) and native web clients (e.g. HTML5 / WebGL). This position will expand and evaluate our current prototype. Specifically, we are looking to improve support for data sources such as weather simulation, improve the integration with VT identity management services, and provide functionality for location-based annotations and discussions. This GRA is based in the Visionarium, but will work with faculty and staff from Virtual ICAT and 3DBlacksburg.
The call for participation for HCIC has been released. This year's topic is Big Data, led by Sue Dumais, Gary Olson, and Jaime Teevan.
The 2-3 page abstracts are due very soon -- Feb 22 -- so please think about whether there is anything you'd like to submit. Recall that only faculty members can present papers, though students certainly can be co-authors.
Particularly if you are planning to attend, think also about whether you have (or know of) a student who would benefit from going. Recall that we can send two students at no charge, assuming there is a VT faculty member in attendance.
More info is available at http://www.hcic.org (only accessible from a vt.edu domain).
Congratulations to PhD students Felipe Bacim, Eric Ragan, Siroberto Scerbo and Cheryl Stinson who took first place honors in the 3D User Interface Contest at the IEEE 3D User Interfaces Conference 2012.
The winning entry is described in this YouTube video.
Congratulations, as well, to their advisor, Prof. Doug Bowman. This is the third straight year HCI Center students have won the contest.
Title: Collaborative Navigation in Virtual Search and Rescue
The challenge this year was to build an application to enable collaborative navigation through a complicated 3D environment. Our team built a virtual search-and-rescue scenario in which one user is a rescuer inside a virtual burning building looking for survivors, and the other is a commander monitoring progress on an interactive map of the building. The commander suggests paths for the rescuer to follow in order to ensure coverage of the whole building, while the rescuer places markers in the building to indicate the locations of survivors, blockages, hazards, and new openings in the building.