Accepted Papers, alphabetically by first author
Following the peer review process, the following papers have been accepted for presentation and publication at OzCHI 2011.
Papers are presented here in alphabetical order by first author.
Feeding the Digital Parrot: Capturing Situational Context in an Augmented Memory System
Jehan Alallah & Annika Hinze, University of Waikato
In this paper, we explore the capturing of information for augmented memory systems. We focus on user-selected capturing of trigger data to remember events in people’s lives. We propose the method of marking moments and report the results of a five-stage user study that explored our method. We studied how best to support the capturing activity, which data to capture, and in what way. We observed how each of our 30 participants recorded data for later use while visiting the local zoo.
Smart-Phone Augmented Reality for Public Participation in Urban Planning
Max Allen, Holger Regenbrecht & Mick Abbott, University of Otago
We investigate smart-phone based augmented reality architecture as a tool for aiding public participation in urban planning. A smart-phone prototype system was developed which showed 3D virtual representations of proposed architectural designs visualised on top of existing real-world architecture, with an appropriate interface to accommodate user actions and basic feedback. Members of the public participated in a user study where they used the prototype system as part of a simulated urban planning event. The prototype system demonstrated a new application of augmented reality architecture and an accessible way for members of the public to participate in urban planning projects.
Extending Mobile User Ambient Awareness for Nomadic Text Entry
Ahmed Sabbir Arif, York University; Benedikt Iltisberger, Hochschule Bonn-Rhein-Sieg; Wolfgang Stuerzlinger, York University
Nowadays, we input text not only on stationary devices, but also on handheld devices while walking, driving, or commuting. Text entry on the move, which we term nomadic text entry, is generally slower. This is partially due to the need for users to move their visual focus from the device to their surroundings for navigational purposes and back. To investigate whether better feedback about users’ surroundings on the device can improve performance, we present a number of new and existing feedback systems: textual, visual, textual & visual, and textual & visual via translucent keyboard. Experimental comparisons between these techniques and the conventional one established that increased ambient awareness for mobile users enhances nomadic text entry performance. Results showed that the textual and the textual & visual via translucent keyboard conditions increased text entry speed by 14% and 11%, respectively, and reduced the error rate by 13% compared to the regular technique. The two methods also significantly reduced the number of collisions with obstacles.
A Quantitative Quality Model for Gesture Based User Interfaces
Kayne Barclay, Danny Wei, Christof Lutteroth & Robert Sheehan, University of Auckland
The technological advancement of computers and cameras over the past few years has given us the ability to control objects without touching them. There have already been a number of attempts at producing gesture based applications, but many of them have usability issues. This paper proposes a model that reflects the usability of a gesture based interface, in order to evaluate and improve a gesture-controlled system. The model defines four levels of abstraction, with the higher levels based on the lower ones. The levels of the model allow us to propose quantitative notions for 1) the parameters affecting the quality of individual gestures, 2) the overall quality of a gesture, 3) the quality of particular functionalities, or use cases, in a system, and 4) the overall quality of a system. The model was evaluated using an existing gesture-based interface for a popular media center application.
Theoretical Foundations for User-Controlled Forgetting in Scrutable Long Term User Models
Debjanee Barua, Judy Kay & Bob Kummerfeld, University of Sydney; Cecile Paris, CSIRO ICT Centre
Emerging technologies are making it feasible for people to capture large amounts of personal information that can support important aspects of their lives. When that information is stored in a persistent storage so that it can drive personalisation, it is called a user model. There has been little exploration of systematic approaches to enable people to gain control over such user models using forgetting mechanisms. This paper first presents reasons why such forgetting mechanisms are desirable. Then it analyses core forms of human forgetting as the basis for a theoretical model of forgetting in user models. Our key contribution is to establish theoretical foundations for the design of mechanisms and interfaces for forgetting in stores of personal information, with the goal of user control of these mechanisms, so that we can enable people to achieve a new form of control over their personal information and its use.
Using Mobile Device Screens for Authentication
Andrea Bianchi, KAIST; Ian Oakley, MITI – University of Madeira; Dong Soo Kwon, KAIST
Authentication in public spaces, such as ATM PIN entry, is inherently susceptible to security attacks based on observation in person or via cameras. This paper addresses this problem with a system which allows users to enter a PIN on a standard mobile phone and then transmit it securely for authentication using modulated patterns of light shown on the screen and sensed by a cheap bespoke receiver unit. No pre-pairing is required as physical proximity guarantees security. The paper presents several hardware and software variations, evaluates the technical soundness of the system, and presents two user studies addressing usability and security against observation attacks.
Discovery Table: Exploring the design of tangible and ubiquitous technology for learning in preparatory classrooms
Marie Bodén, Andrew Dekker & Stephen Viller, University of Queensland
In this paper we investigate how technologies can be designed to support learning in preparatory classrooms by augmenting existing learning objects. By following an iterative interaction design process of working with teachers and children within the classroom environment, we can better design technologies through the augmentation (rather than replacement) of existing learning activities. Our case study – the Discovery Table – uses a variety of technologies to allow everyday plastic symbols of letters and numbers to be placed on a technology-augmented table in order to provide visual, audible and tangible feedback to the children. Discovery Table demonstrates a first step in more fundamental work towards successful design for tangible learning.
Interactive Kaleidoscope: Audience Participation Study
Bert Bongers & Alejandra Mery, University of Technology Sydney
This paper presents a physical user interface design approach based on interactive art pieces. A range of interactive video installations have been developed by the first author and presented at festivals and exhibitions, enabling audiences to co-create images of kaleidoscopic patterns and textures through movement and tangible interaction. The focus of this paper is the reporting of a semi-structured study in a museum context, addressing the three research questions of whether people are drawn into the installation, whether the interaction is clear, and to what extent the participants become engaged. The findings reveal that audiences are attracted and that the levels of engagement are satisfactory in the context of children exploring the museum. We propose five stages of interaction reflecting the engagement.
Rewriting History: More Power to Creative People
Carlo Bueno, Sarah Crossland, Christof Lutteroth & Gerald Weber, University of Auckland
Trying out different alternatives is a natural part of creative work, resulting in several versions that are hard to manage. With the tools available today, we often end up having to manually redo changes that worked in one version on other versions. We propose a new approach for supporting creative work: an artifact is described as the history of the operations that created it. We show that by allowing users to change this history, the common use cases of merging, generalizing and specializing can be supported efficiently. This rewriting history approach is based on a formal specification of the operations offered by a tool, leads to a new theory of operations, and enables exciting new ways to share and combine creative work. It is complementary to state-based version control, and offers the user a new understanding of merging. The approach was implemented for a collaborative drawing tool, and evaluated in a user study. The study shows that users understand the approach and would like to use it in their own creative work.
Designing Games to Educate Diabetic Children
Gang Chen, Nilufar Baghaei, Abdolhossein Sarrafzadeh, Chris Manford & Steve Marshall, Unitec Institute of Technology; Gudrun Court, Auckland District Health Board Starship Children’s Hospital
The use of computer games as common vehicles for education, as opposed to pure entertainment, has gained popularity in recent years. The traditional method of diabetes education relies heavily on written materials, and there is only a limited amount of resources targeted at educating diabetic children. In this paper, we present a novel approach for designing computer games aimed at educating children with diabetes. Our game design was applied to an existing open source game (Mario Brothers). The results of a pilot study showed that participants enjoyed playing the game and found it valuable for educating diabetic patients.
A Comparison of Four Methods for Cognitive Load Measurement
Siyuan Chen & Julien Epps, University of New South Wales; Fang Chen, National ICT Australia
Recognizing users’ cognitive load during tasks is among the most important considerations for adaptive automation and interface evaluation. This paper compares four methods of measuring user cognitive load: subjective rating of task difficulty, task completion time, performance accuracy, and eye-activity-based physiological measurement. To be practically useful, a measure should be sensitive to variation in task difficulty and accurately predict user cognitive load. In this study, we examined the sensitivity and accuracy of these measures for five levels of cognitive load. ANOVA tests and Gaussian mixture model classification results show that subjective rating of task difficulty is the most effective measure, while the eye-activity-based measure is as sensitive and accurate as task completion time for classifying two or more cognitive load levels, with the relative advantage of being a real-time measure that does not require a specific action.
Presenting Search Results of Meeting Documents
Caslon Chua & Clinton Woodward, Swinburne University of Technology
Information plays an important role in organisations, allowing management to make sound decisions. Many organisations keep documents in electronic format, and as the number and volume of documents increases, search and retrieval become tedious and difficult. Effective presentation of search results is an important user interface issue for any search tool in this context. Search results can be composed of a number of different elements, including file details, text extracts and thumbnail images.
This study considered the effectiveness of several search result presentation elements in the context of a desktop search tool used to search for relevant meeting minute documents. Participants were presented with search results from two existing desktop search applications and one test application developed by the authors. The participants were then asked to evaluate the quality of different elements of the result presentation. Responses indicated that domain-specific presentation elements are valuable to users, allowing them to effectively determine the relevance of individual search result items. The results also suggest that other domain specific search tools would benefit from customised search result presentation.
Seamless Interaction in Space
Adrian Clark, Andreas Dünser, Mark Billinghurst, Thammathip Piumsomboon & David Altimira, University of Canterbury
As more electronic devices enter the living room, there is a need to explore new ways to provide seamless interaction with them over a range of different distances. In this paper we describe a proximity-based interface that allows users to interact with screen content both within arm’s length and at a distance. To support such a wide interaction range we combine speech and gesture input with a secondary display, and have the interface change dynamically depending on the user proximity. We conducted a user evaluation of our prototype system and found that users were impressed with the away-from-screen interfaces, and believed that changing the interface based on proximity would be useful for larger displays. We present the lessons learned and discuss directions for future research.
Learning a physical skill via a computer: a case study exploring Australian Sign Language
Kirsten Ellis, Monash University; Neil Ray, Deaf Children Australia; Cheryl Howard, Monash University
The aim of this research project was to consider the implications of teaching a physical skill using a computer. The case study used was the development of a resource for teaching Australian Sign Language (Auslan) to hearing people that could be customised to cater for users’ individual learning preferences. Learning Auslan as a visual-spatial language presents several interesting human–computer interaction challenges for the interface designer, as the user is trying to learn a physical skill via the computer. In addition, multiple vocabularies could be targeted to meet the needs of different users by implementing dynamic insertion of resources. The premise of this approach was to empower the deaf community to create and customise their own teaching resources rather than being dependent on a programmer for each new version of the learning material.
Online Assessment: Splitting the Screen to be Seen
Graham Farrell & Vivienne Farrell, Swinburne University of Technology
The limited viewing space provided by online assessment tools makes it difficult for a student to view supportive graphics, scripts or diagrams where the viewing screen is smaller than the complete question. This paper reports on the user-centred design and application of a bifocal information visualisation technique for the display of graphics in an online multiple choice assessment tool. Traditional online tests require students to scroll down the viewing page to study a question’s supporting graphic or diagram and then scroll up again to insert their response. In many instances, this drastically increased the level of tension in a student, who would lose sight of the graphic when answering the question. Consequently, the increased cognitive load often resulted in errors. We provide a presentation technique that displays the question in readable text with a compressed, distorted view of the supportive graphic/diagram. This enables students to focus on the graphic when considering their answer while not losing sight of the question itself. Furthermore, it allows the student to toggle freely between a focus on the graphic and the question. Our evaluation of this bifocal display demonstrates a notable decrease in errors made during transcription of answers, and a decrease in the stress level of students during the test.
Courtroom Evidence Presentation Technology: Overcoming Traditional Barriers
Vivienne Farrell, Graham Farrell, Karola Von Baggo & Kon Mouzakis, Swinburne University of Technology
The Supreme Court of Victoria is a heritage-listed building steeped in tradition and resistant to physical change. The current capability of the courtrooms to exhibit evidence derived from advanced technology is substantially inadequate. This paper discusses the negative environmental impact, the inadequacies of courtroom facilities, and the inconsistencies between the available evidence and the requirements of jurors and courts in relation to evidence presentation. It also discusses the issues, possibilities and limitations involved in implementing an IT solution for the presentation of evidence.
Beyond Interaction: Meta-Design and Cultures of Participation
Gerhard Fischer, University of Colorado, Boulder
Most interesting, important, and pressing problems facing our societies in the 21st century transcend the unaided individual human mind. They require collaborative systems to explore, frame, solve, and assess their solutions. Cultures of participation represent foundations for the next generation of collaborative systems by supporting all stakeholders to participate actively in personally meaningful problems. Meta-design supports cultures of participation by defining and creating social and technical infrastructures in which users can choose to become designers. These developments create new discourses in HCI complementing and transcending current approaches centered on interaction.
The article illustrates these objectives and themes with specific examples and articulates their relevance for the OzCHI conference theme “Design, Culture and Interaction”.
Orientation Passport: Using gamification to engage university students
Zachary Fitz-Walter, Dian Tjondronegoro & Peta Wyeth, Queensland University of Technology
Adding game elements to an application to motivate use and enhance the user experience is a growing trend known as gamification. This study explores the use of game achievements when applied to a mobile application designed to help new students at university. This paper describes the foundations of a design framework used to integrate game elements into Orientation Passport, a personalised orientation event application for smart phones. Orientation Passport utilises game achievements to present orientation information in an engaging way and to encourage use of the application. The system is explained in terms of the design framework, and the findings of a pilot study involving 26 new students are presented. This study contributes the foundations of a design framework for general gamified achievement design. It also suggests that added game elements can be enjoyable, but can encourage undesirable use by some users, and are less enjoyable when not properly enforced by the technology. Consideration is also needed when enforcing stricter game rules, as usability can be affected.
Fixing the City One Photo at a Time: Mobile Logging of Maintenance Requests
Marcus Foth, Ronald Schroeter & Irina Anastasiu, Queensland University of Technology
We have designed a mobile application that takes advantage of the built-in features of smart phones such as camera and GPS that allow users to take geo-tagged photos while on the move. Urban residents can take pictures of broken street furniture and public property requiring repair, attach a brief description, and submit the information as a maintenance request to the local government organisation of their city. This paper discusses the design approach that led to the application, highlights a built-in mechanism to elicit user feedback, and evaluates the progress to date with user feedback and log statistics. It concludes with an outlook highlighting user requested features and our own design aspirations for moving from a reporting tool to a civic engagement tool.
Widgets to support disabled learners: A challenge to participatory inclusive design
Voula Gkatzidou, Elaine Pearson, Steve Green & Franck-Olivier Perrin, Teesside University
This paper describes a combinatorial methodology that responds to the challenge of inclusive design drawing from the fields of participatory design and agile development. We describe the Widgets for Inclusive Distributed Environment (WIDE) study that aims to produce open source widgets that can be plugged into a range of learning environments to support disabled learners and are freely available for use and adaptation by the wider community. The research adopted a mixed methodology by involving disabled learners not just as research subjects but as consultants, designers and partners. We describe the WIDE process in terms of the participants’ involvement. The evaluation findings of the study highlight the importance of a mixed methodology for inclusive e-learning design and contribute to the understanding of HCI approaches in the context of designing participatory studies.
Using Sticky Light Technology for Projected Guidance
Chris Gunn & Matt Adcock, CSIRO ICT Centre
A worker performing a physical task may need to ask for advice and guidance from an expert. This can be a problem if the expert is in some distant location. We describe a system which allows the expert to see the workplace from the worker’s point of view, and to draw annotations directly into that workplace using a pico-projector. Since the system can be worn by the worker, these projected annotations may move with the worker’s movements. We describe two methods of sticking these annotations to their original positions, thereby compensating for the movement of the worker.
Identifying Stakeholder Perspectives in a Large Collaborative Project: An ICT4D Case Study
Susan Hansen, University of Technology, Sydney and Rhodes University; Toni Robertson, University of Technology Sydney; Laurie Wilson, CSIRO ICT Centre; Hannah Thinyane, Rhodes University; Sibukele Gumbo, University of Fort Hare
This paper explores some of the benefits of formally capturing stakeholder perspectives through conducting stakeholder interviews in a large, collaborative project. The case study discussed is an Information and Communication Technologies for Development (ICT4D) venture between two universities, industry, government and communities based in the former homeland of Transkei in rural South Africa. The benefits of conducting stakeholder interviews are discussed through the early analysis of two areas: stakeholder agendas and the success criteria identified by stakeholders. The stakeholder interviews highlight the variety and range of agendas in projects involving multiple organisations, as well as the need for, and respective challenges of, capturing community perspectives in this project. The study also provides support for the need to conduct evaluation, as well as guidance for what the evaluations should include.
Elastic Experiences: Designing Adaptive Interaction for Individuals and Crowds in the Public Space
Luke Hespanhol, Maria Carmela Sogono, Ge Wu, Rob Saunders & Martin Tomitsch, University of Sydney
This paper presents insights into the design process acquired during the implementation and evaluation of an interactive art installation for two very distinct public environments. Issues of scalability, robustness and performance became progressively interwoven with the concern of creating an overall user experience sustaining consistent high engagement levels. Contextual factors such as audience size, dimensions of the interactive space and length of exposure to the artwork had to be handled gracefully in order not to interfere with the interaction flow. Adopting a research by and through design approach, the work uncovered a series of findings that are pervasive to the design of adaptive interactive experiences.
Beyond designing: roles of the designer in complex design projects
Zaana Howard & Gavin Melles, Swinburne University of Technology
Human–computer interaction and interaction design have recognised the need for participatory methods of co-design to contribute to designing human-centred interfaces, systems and services. Design thinking has recently developed as a set of strategies for human-centred co-design in product innovation, management and organisational transformation. Both developments place the designer in a new mediator role, requiring new skills beyond those previously evident. This paper presents preliminary findings from a PhD case study of strategy and innovation consultancy Second Road to discuss these emerging roles of design lead, facilitator, teacher and director in action.
Investigating Interactive Search Behaviour of Medical Students: An Exploratory Survey
Anushia Inthiran, Saadat M. Alhashmi & Pervaiz K. Ahmed, Monash University
In this paper, we investigate medical students’ search behavior on a medical domain. We use two behavioral signals, detailed query analysis (qualitative and quantitative) and task completion time, to understand how medical students perform medical searches based on varying task complexity. We also investigate how task complexity and topic familiarity affect search behavior. We gathered 80 interactive search sessions from an exploratory survey with 20 medical students. We observe information searching behavior using 3 simulated work task scenarios and 1 personal scenario. We present quantitative results from two perspectives: overall and user-perceived task complexity. We also analyze query properties from a qualitative aspect. Our results show that task complexity and topic familiarity affect the search behavior of medical students. In some cases, medical students demonstrate different search traits on a personal task in comparison to the simulated work task scenarios. These findings help us better understand medical search behavior. Medical search engines can use these findings to detect and adapt to medical students’ search behavior to enhance a student’s search experience.
Same System—Different Experiences: Physicians’ and Nurses’ Experiences in Using IT Systems
Rebecka Janols & Bengt Göransson, Uppsala University
In this paper we use a sociotechnical approach and theories about group processes to analyse how two main clinician groups, nurses and physicians, are influenced by their main IT tool, the Electronic Patient Record (EPR), in their clinical practice. The paper is based on interviews with 19 physicians and 17 nurses who work at a Swedish university hospital. The clinicians considered the use of an EPR system necessary, but experienced the need to change their clinical practice to less efficient work routines in order for the EPR system to support them. The main result of the paper is that the EPR system affected nurses and physicians differently. The physicians were more frustrated, experiencing that the EPR system worsened their clinical practice and decreased their status among the other clinical professions. The nurses, on the other hand, experienced that their work became more visible than before and found it easier to assert the importance of their work to the physicians.
Mobile Internet, Internet on mobiles or just Internet you access with variety of devices?
Anne Kaikkonen, Nokia
The role of the Internet on mobile phones has changed in just a few years. Touch screen devices allow an improved browsing experience, and WLAN and high-speed networks enable good connectivity. As people have free-of-charge WLAN or a flat-fee agreement with their cellular provider, they either pay nothing for connectivity or pay the same amount no matter how much they browse. The cost and quality of connectivity were earlier the bottlenecks of the mobile Internet. Improvements in these have enabled a change in the usage patterns of mobiles, which has also influenced how computers are used.
Visualising Web Browsing Data for User Behaviour Analysis
Raymes Khoury & Tim Dawborn, University of Sydney; Weidong Huang, CSIRO ICT Centre
The rapid growth of Internet usage has dramatically changed the way we interact with the outside world. Many people read news, communicate with friends and purchase goods online. These activities are usually done via web browsing. Understanding user web browsing behaviour is important in improving their browsing experience. For example, website usability and the personalization of online services could both benefit from knowledge of user browsing patterns. Much research has been done on understanding user web browsing behaviour. However, the usefulness of visualisations has not been fully explored in this space. In this paper, we introduce a system that offers three different ways of visualising web browsing data. This system provides a common interface for users to interact with the visualisations. We also present an evaluation of the system with end users. We show that by visualising a user's web browsing history, we are able to uncover interesting patterns in the way that individuals use the Web.
The Lived Body in Design: Mapping the Terrain
Lian Loke & Toni Robertson, University of Technology, Sydney
We briefly sketch an overview of emerging design research and practice, which values the lived body as a central theoretical foundation in the design of interactive technologies. Three main areas of research activity are presented: theoretical and philosophical perspectives on bodies and embodiment; concepts of the body; and design approaches and methods for working with the body and bodily literacy.
Playing the Game – Effective Gender Role Analysis Techniques for Computer Games
Derrick Martin & Kirsten Ellis, Monash University
The majority of gender studies of computer games examine game subsets, such as the first twenty minutes of gameplay, and extend their conclusions to the whole game and the game industry in general. The hypothesis that a subset effectively represents the entire game requires testing. This study addresses this problem by comparing the results of two commonly used subset methods to an analysis of a whole game.
The findings show that the two subset analyses fail to identify gender representation inequalities that examining a whole game was able to discover. This result casts doubt on subset analysis methodology in games and indicates that the results of current subset techniques, such as those used by government games rating boards, are flawed. In analyzing the whole game, this study has developed a gender role coding technique for whole games that may be useful in future studies.
A Toolkit for Designing Interactive Musical Agents
Aengus Martin, Craig T. Jin & Oliver Bown, Sydney University
We have developed a prototype software toolkit to enable non-technical users to design artificially intelligent agents to perform electronic music in collaboration with a human musician. In this paper we describe the toolkit and present a preliminary investigation of its use. We then discuss how the investigation has helped identify issues to address in an upcoming user-centred design study, which will take place in Spring 2011.
What we have here is a failure of companionship: communication in goal-oriented team-mate games
Kevin McGee, Tim Merritt & Christopher Ong, National University of Singapore
There is a fairly common assumption about real-time, goal-oriented, multiplayer games: communication is primarily appreciated (and used) for attaining goals more effectively. But an interesting question that does not seem to have been explored in the literature is whether the desire for companionship is a significant factor in people's desire for and use of communication channels in real-time, goal-oriented, cooperative games. A qualitative study was conducted in which 40 participants played variations of a real-time, goal-oriented, cooperative game with either human or artificial (AI) team-mates, using different communication modalities. Participants consistently expressed a strong desire for the ability to communicate with a team-mate, arguing that it made game-play more effective and more enjoyable. The significant finding of this study is that in some cases, the strong desire for (and use of) communication channels in real-time, goal-oriented, cooperative games seems to actually be more of a desire for (and experience of) social companionship.
A jump to the left (and then a step to the right): Reading practices within academic ebooks
Dana McKay, Swinburne University of Technology
Considerable attention has been paid to how readers find, triage, navigate and read periodical material such as journal articles. Until recently, however, applying these questions to books has been impractical or impossible. This paper reports an exploratory log analysis of ebook usage in an academic library. This study investigates raw usage, document triage practices, and in-book navigation.
Designing navigation and wayfinding in 3D virtual learning spaces
Shailey Minocha & Christopher Leslie Hardy, The Open University
As the use of virtual worlds in education continues to grow, it is important that designers and educators consider the interaction design and usability of three-dimensional (3D) virtual learning spaces as being integral to student learning and engagement. In a previous project on the design of learning spaces in virtual worlds, we found that difficulties with navigation and wayfinding are the key usability problems impacting the student experience. Second Life is the most commonly used virtual world for educational purposes. Based upon empirical investigations in Second Life, we have derived heuristics and guidelines for the design of 3D virtual learning spaces to facilitate navigation and wayfinding. Qualitative data arising from heuristic evaluations and user observations enabled a variety of navigational aids to be assessed for their suitability in designs of 3D virtual learning spaces. We have also derived best practice examples for navigational aids such as maps, signs, paths and landmarks.
Bridging the representation and interaction challenges of mobile context-aware computing: designing agile ridesharing
Seyed Hadi Mirisaee, Margot Brereton & Paul Roe, Queensland University of Technology
The increasing capability of mobile devices and social networks to gather contextual and social data has led to increased interest in context-aware computing for mobile applications. This paper explores ways of reconciling two different viewpoints of context, representational and interactional, that have arisen respectively from technical and social science perspectives on context-aware computing. Through a case study in agile ridesharing, the importance of dynamic context control, historical context and broader context is discussed. We build upon earlier work that has sought to address the divide by further explicating the problem in the mobile context and expanding on the design approaches.
Pseudo-Direct Touch: Interaction for Collaboration in Large and High-Resolution Display Environments
Christian Müller-Tomfelde, Kelvin Cheng & Jane Li, CSIRO ICT Centre
In this paper, we present an exploration of an interaction technique designed for large and high-resolution display environments in collaborative work situations. We introduce the Pseudo-Direct Touch technique to enable users to interact with a large display from a distance through a transparent touch frame. The touch points on the frame are projected onto the distant large display, so that users have the impression of touching the large display directly. This approach combines the advantages of an intuitive interface for individuals with an interaction design that supports unobstructed awareness and face-to-face contact for collaboration in display environments. We assessed the design and performance of our technique in a user study, and gauged the effect of parallax on accuracy during absolute selections. Finally, we trialled a prototypical user application and observed fluent interactions by most participants.
Natural Interactions Between Augmented Virtual Objects
Steven Neale, Winyu Chinthammit, Christopher Lueg & Paddy Nixon, University of Tasmania
There are many situations in which physical interaction with real-world objects is not possible – for example, museums contain many objects or artefacts which are too fragile or expensive for the public to handle. Augmented Reality (AR) has the potential to offer an alternative in these situations, but most of our current interactions with virtual objects in AR tend to be indirect. Tangible AR allows for natural movement, but we rarely manipulate or control virtual objects beyond that in the way we do their physical counterparts. To address this problem, we propose that a more natural approach to interacting with tangible AR be introduced. We present a prototype that allows users to physically orientate virtual objects so that they ‘snap’ together in order to complete a ‘3D AR Puzzle’, and show that introducing ‘responsive virtual objects’ for tangible AR is a promising first step towards more natural interactions.
Generic functionality in user interfaces for emergency response
Erik G. Nilsson & Ketil Stølen, SINTEF ICT and University of Oslo
In this paper we use findings from a number of empirical studies involving different emergency response actors to identify shared or overlapping needs for user interface functionality. By analyzing the findings from these studies, we have identified 11 categories of functionality supporting shared needs, including functionality for handling incident information, logging facilities, and functionality for managing human resources and equipment. After presenting our research method, we give an overview of the identified categories of shared functionality. We also describe one of the categories, namely resource management, in more detail, including examples of concrete user interface functionality. We have validated the conclusions of our findings through observations and interviews in a training exercise. The validation supported our prediction that the exercise would not reveal major additional categories of functionality, and it also supplemented the earlier findings regarding which actors need which categories of functionality. We conclude by discussing pros and cons of using generic solutions supporting shared functionality across emergency response actors.
Using Mobile Phones for Promoting Water Conservation
Rahuvaran Pathmanathan, Aalborg University & The University of Melbourne; Jon Pearce, The University of Melbourne; Jesper Kjeldskov, Aalborg University & The University of Melbourne; Wally Smith, The University of Melbourne
We report a design investigation that seeks to help people conserve water in their homes through the use of mobile technology. To persuade people to use water more wisely, one approach is to give them tailored information about their water use and about other people’s usage. Investigating this approach, a mobile application was implemented to explore the role of three different sources of information (weather, expert advice and community information). Based on the evaluation, several themes for designing mobile technology for gardeners were identified. Findings from the study show that gardeners want more tailored messages from the system, and that advice should come from more than one source of information to have a greater opportunity to persuade.
Search or Explore: Do you know what you’re looking for?
Jon Pearce, Shanton Chang, Basil Alzougool, Gregor Kennedy & Mary Ainley, The University of Melbourne; Susan Rodrigues, University of Northumbria
This paper explores the distinctions between searching and exploring when looking for information. We propose that, while traditional search engines work well in supporting search behaviour, they are more limited in assisting those who are looking to explore new information, especially when the exploration task is ill-defined. We ran a pilot study using two systems: one based on a traditional database search engine, and the other a highly innovative, engaging and playful system called iFISH, which we designed specifically to support exploration through the use of user preferences. We looked for evidence to support the concept that exploration requires a different kind of interaction. The initial results report a positive response to our exploration system and indicate the differences in preferences amongst users for systems that match their searching or exploring behaviours.
Seek and Sign: An early experience of the joys and challenges of software design with young Deaf children
Leigh Ellen Potter, Jessica Korte & Sue Nielsen, Griffith University
This paper describes the initial stages of a research project aimed at teaching preliterate Deaf children Australian sign language (Auslan) using a software application deployed on a mobile technology device. We discuss the user centred design techniques to be used in this project, specifically Gestural Think Aloud Protocol and the Problem Identification Picture Cards method. An initial design session exploring the feasibility of the design approach suggests that the approach is suitable and desirable. Our design questions for future development are listed.
Mobile Banking Customization via User-Defined Tags
Rajinesh Ravendran, Ian MacColl & Michael Docherty, Queensland University of Technology
In this paper, we describe on-going work on mobile banking customization, particularly in the Australian context. The use of user-defined tags to facilitate personalized interactions in the mobile context is explored. The aim of this research is to find ways to improve mobile banking interaction. Customization is more significant in the mobile context than online due to factors such as smaller screen sizes and limited software and hardware capabilities, placing an increased emphasis on usability. This paper explains how user-defined tags can aid different types of customization at the interaction level. A preliminary prototype has been developed to demonstrate the mechanics of the proposed approach. Potential implications, design decisions and limitations are discussed with an outline of future work.
Quality Delivery of Mobile Video: In-depth Understanding of User Requirements
Wei Song, Dian Tjondronegoro & Michael Docherty, Queensland University of Technology
The proliferation of powerful mobile devices has accelerated the demand for mobile videos. Previous studies in mobile video have focused on understanding mobile video usage, improvement of video quality, and user interface design for video browsing. However, research focusing on a deep understanding of users’ needs for a pleasing quality delivery of mobile video is lacking. In particular, what quality-delivery mode users prefer and what information relevant to video quality they need requires attention. This paper presents a qualitative interview study with 38 participants to gain insight into three aspects: influencing factors of user-desired video quality, user-preferred quality-delivery modes, and user-required interaction information of mobile video. The results show that user requirements for video quality are related to personal preference, technology background and video viewing experience, and that the preferred quality-delivery and interactive modes are diverse. These complex user requirements call for flexible and personalised quality delivery and interaction of mobile video.
Gestural Navigation in Google Earth
Simon Stannus, Daniel Rolf, Arko Lucieer & Winyu Chinthammit, University of Tasmania
Geographical Information Systems (GISs) are playing an increasingly important role in society. Not only have the capabilities of GIS packages expanded, but their spectrum has been widened by the popularisation of software such as Google Earth, which has added an extra dimension to navigation, while still using the same interaction method. We argue that traditional GIS interfaces limit productivity by not being sufficiently intuitive to new users and by causing extra delay due to unnecessary modality. As a step on the road to solving these problems, we propose an ideal gesture-based system and present the results of a mostly qualitative user experiment on our current prototype for gestural navigation in Google Earth, which backs up our assumptions about the importance of gestural interactions being both bimanual and simultaneous.
The Transmission of Self: Body Language Availability and Gender in Videoconferencing
Cameron Teoh, Holger Regenbrecht & David O’Hare, University of Otago
Videoconferencing technology is increasingly used for work and personal use. While a lot of research has been done on the perceptual qualities of videoconferencing systems, little research has been done on self-transmission or the ways in which individuals manage and control the impressions received by the communication partner.
In an experimental study with 134 participants, we investigated the influence of the availability of body language and both partners’ gender on the ability to transmit oneself in videoconferencing. We found that participant gender and partner gender both had significant effects on perceptions of dominance/persuasion and impression management. We discuss these results in relation to the transmission of self in remote communication and their implications for future design and research.
A Tuplespace Event Model for Mashups
Sheng Tian, Gerald Weber & Christof Lutteroth, University of Auckland
Inter-widget communication is essential for enterprise mashup applications. To implement it, current mashup platforms use the publish/subscribe pattern. However, the way publish/subscribe is used in these platforms requires a lot of manual wiring between widgets. In this paper, we propose a new Unified Widget Event Model (UWEM), which is conceptually an extension of Linda tuplespaces. UWEM separates event publishers and subscribers in space, time, and reference. Using the Keystroke-Level Model (KLM) we show that UWEM requires fewer operations to build typical mashups than conventional mashup platforms. We have implemented UWEM in a popular enterprise mashup framework, and performed an empirical study that compares UWEM with the established approach for creating mashups. The study confirms the KLM predictions, and shows that UWEM is significantly more efficient than the established approach.
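The decoupling this abstract describes can be illustrated with a minimal Linda-style tuplespace. The sketch below is not the paper's UWEM implementation; it is an illustrative example, and all names (`TupleSpace`, `out`, `rd`, the event tuples) are hypothetical, with `out` and `rd` merely echoing classic Linda operation names. It shows how a publisher and a subscriber can interact without referencing each other (reference decoupling) and without running at the same time (time decoupling).

```python
# Illustrative sketch of a Linda-style tuplespace (not the paper's UWEM).
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Publish a tuple; the publisher holds no reference to any reader."""
        self.tuples.append(tup)

    def rd(self, pattern):
        """Non-destructively read the first tuple matching the pattern.
        None fields in the pattern act as wildcards."""
        for tup in self.tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None

space = TupleSpace()
# A widget publishes an event before any subscriber exists (time decoupling).
space.out(("stock-ticker", "selection", "ACME"))
# A later-started widget reads by pattern, not by wiring to the publisher.
event = space.rd((None, "selection", None))
print(event)  # → ('stock-ticker', 'selection', 'ACME')
```

In a publish/subscribe platform the two widgets would have to be wired together explicitly; here the shared space and pattern matching stand in for that wiring, which is the intuition behind the reduced operation counts the abstract reports.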
Supporting Young Children's Communication with Adult Relatives Across Time Zones
René Vutborg, Jesper Kjeldskov & Jeni Paay, Aalborg University; Sonja Pedell & Frank Vetere, The University of Melbourne
Regular contact between children and their adult relatives can be a problem if they live in different time zones. In this situation, finding an agreed time to contact each other can be both confusing and complicated. This paper presents a study of the effect of time zone differences on communication between grandparents and grandchildren living in different time zones. We deployed a system between time zone distributed families to study this effect and analysed its use based on four parameters from event-based theories of time: rigid sequential structures (that some events cannot occur before others), fixed durations (that most events always last the same time), standard temporal locations (that events have a standard time when they occur during the day) and uniform rates of recurrence (that some events always reoccur at a uniform rate). Our findings highlight: the need to consider the parents’ role in facilitating contact and making the technology easy for children to use independently; the advantage of concurrent synchronous and asynchronous interaction forms; and the need to respect people’s private time. These findings can inform the design of technology for supporting young children’s communications with adult relatives across time zones.
Collecting Cross-Cultural User Data with Internationalized Storyboard Survey
Tanja Walsh, Piia Nurkka, Tiina Koponen, Jari Varsaluoma & Sari Kujala, Tampere University of Technology; Sara Belt, Nokia Oyj
Globalization and the search for experiential aspects of technology products and services have increased the demand for cross-cultural user feedback. Remote methods would suit agile global data collection, but few common practices yet exist. Thus, the goal of the present study was to determine the ways in which common visual stimulus material (internationalized storyboards) is perceived similarly and differently by cross-cultural respondents. An internationalized remote online storyboard survey was designed to collect cross-cultural user data from 252 respondents in the USA, Brazil, India, Italy and Finland, around the topic of mobile content sharing concepts. It was found that, for the majority of situations and details, the storyboards supported a similar interpretation by users from different cultural backgrounds, and that the internationalized pictures helped respondents provide rich answers to a long survey because they understood the intended situations well and could easily imagine themselves in the different usage situations.
Expressive Interactions: Tablet Usability for Young Mobile Learners
Peta Wyeth, Mitchell McEwan, Paul Roe & Ian MacColl, Queensland University of Technology
In this paper we examine the usability of tablets for students in middle school in the context of mobile environmental education. Our study focuses on the expressive qualities of three input methods – text, audio and drawing – and the extent to which these methods support on-task behaviour. In our study, 28 small groups of children were given iPads and asked to record ecological observations from around their schoolyard. The effectiveness of the devices and their core utility for expressive, on-task data capture is assessed.
Biases and interaction effects in gestural acquisition of auditory targets using a hand-held device
Lonce Wyse, Suranga Nanayakkara & Norikazu Mitani, National University of Singapore
A user study explored bias and interaction effects in an auditory target tracking task using a hand-held gestural interface device for musical sound. Participants manipulated the physical dimensions of pitch, roll, and yaw of a hand-held device, which were mapped to the sound dimensions of musical pitch, timbre, and event density. Participants were first presented with a sound, which they then had to imitate as closely as possible by positioning the hand-held controller. Accuracy and time-to-target were influenced by specific sounds as well as pairings between controllers and sounds. Some bias effects in gestural dimensions independent of sound mappings were also found.
Awareness to Improve Interaction: Design of Distance Learning Environment
Moonyati Yatid & Masahiro Takatsuka, University of Sydney
This paper investigates the appropriate region in which to place the remote scene in a synchronous distance learning environment so as to support the lecturer’s awareness of students’ activities. We carried out experiments aimed at engineering a new design environment that improves on current distance learning classrooms by finding the relationship between human factors and design parameters. We hypothesized that remote students’ actions are easiest to recognize when the remote scene is located in the lecturer’s visual field, close to where he/she regularly puts his/her attention. We found that performance involving visual ability could be divided into three groups: near fixation (below 51° horizontally and below 80° laterally), far from fixation but within the visual field, and out of the visual field. The main contributions of this paper are the numerical evidence of visual ability and a design framework derived from the experiments. We also provide a design example of a distance learning classroom that matches our findings.
Careless touch: A comparative evaluation of mouse, pen, and touch input in shape tracing task
Stanislaw Zabramski, Uppsala University
This short paper is a work-in-progress report on an experimental, exploratory comparison and evaluation of three input methods (mouse, pen, and touch input) in a line-tracing task. A method to compare the original shape with the user-generated version is presented. Measurements of user efficiency and accuracy showed that participants replicating a particular shape using touch input performed the worst in terms of accuracy but were the fastest in comparison to the remaining input methods. No effect of controlled visual feedback was observed. Additionally, subjective operational biases were observed that, together with issues related to the input method and the expected shape, might strongly affect the results.