The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun (Department of Industrial Management Engineering, Korea University) ;
  • Kim, Seongmin (Department of Industrial Management Engineering, Korea University) ;
  • Choe, Jaeho (Department of Industrial & Management Engineering, Daejin University) ;
  • Jung, Eui Seung (Department of Industrial Management Engineering, Korea University)
  • Received : 2014.04.29
  • Accepted : 2014.05.19
  • Published : 2014.06.30

Abstract

Objective: The objective of this study was to examine the multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modal interaction is considered an efficient alternative for the input and output of information in mobile environments. However, current mobile UI (User Interface) systems have various limitations: they overlook the transitions between different modes and the usability of combinations of multi-modal use. Method: A pre-survey identified five representative smart phone tasks according to their functions. The first experiment involved the use of a single mode for five single tasks; the second experiment involved the use of multiple modes for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the mode type (i.e., touch, pen, or voice), while the independent variable in the second experiment was the task type (i.e., internet search, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the use of the pen and the touch input; however, a specific mode type was preferred depending on the functional characteristics of the task. In the second experiment, the analysis showed that user preference depended on the order and combination of modes. Even with mode transitions, users preferred multi-mode combinations that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modal interaction. Therefore, when designing a multi-modal system, the frequent transitions between different mobile contents in different modes should be properly considered. Application: The results may be used as user-centered design guidelines for mobile multi-modal UI systems.
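As an illustration of the experimental design described above, the short sketch below shows one plausible way to analyze the first experiment's completion-time data (one within-subject factor, mode, with levels touch, pen, and voice). This is a minimal sketch, not the authors' actual procedure: the file name, column names, and the use of a simple one-way ANOVA (rather than a full repeated-measures model) are all assumptions.

    # Minimal analysis sketch for Experiment 1 (hypothetical data layout).
    # Columns assumed: participant, mode (touch/pen/voice), task, time_s.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("experiment1_completion_times.csv")  # hypothetical file

    # Mean task completion time per input mode, collapsed over tasks.
    print(df.groupby("mode")["time_s"].mean())

    # Simple one-way ANOVA across the three modes; a full analysis would
    # also model participant and task as factors.
    groups = [g["time_s"].values for _, g in df.groupby("mode")]
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")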
