
D. Te’eni, J. Carey and P. Zhang, Human-Computer Interaction: Developing Effective Organizational Information Systems, John Wiley & Sons, Hoboken (2007).

B. Shneiderman and C. Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th edition), Pearson/Addison-Wesley, Boston (2004).

J. Nielsen, Usability Engineering, Morgan Kaufmann, San Francisco (1994).

D. Te’eni, “Designs that fit: an overview of fit conceptualization in HCI”, in P. Zhang and D. Galletta (eds), Human-Computer Interaction and Management Information Systems: Foundations, M.E. Sharpe, Armonk (2006).

A. Chapanis, Man-Machine Engineering, Wadsworth, Belmont (1965).

D. Norman, “Cognitive Engineering”, in D. Norman and S. Draper (eds), User Centered System Design: New Perspectives on Human-Computer Interaction, Lawrence Erlbaum, Hillsdale (1986).

R.W. Picard, Affective Computing, MIT Press, Cambridge (1997). doi:10.1037/e526112012-054

J.S. Greenstein, “Pointing devices”, in M.G. Helander, T.K. Landauer and P. Prabhu (eds), Handbook of Human-Computer Interaction, Elsevier Science, Amsterdam (1997).

B.A. Myers, “A brief history of human-computer interaction technology”, ACM interactions, 5(2), pp 44-54 (1998). doi:10.1145/274430.274436

B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction (3rd edition), Addison Wesley Longman, Reading (1998).

A. Murata, “An experimental evaluation of mouse, joystick, joycard, lightpen, trackball and touchscreen for pointing - basic study on human interface design”, Proceedings of the Fourth International Conference on Human-Computer Interaction, pp 123-127 (1991).

L.R. Rabiner and B.H. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs (1993).

C.M. Karat, J. Vergo and D. Nahamoo, “Conversational interface technologies”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).

S. Brewster, “Non-speech auditory output”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).

G. Robles-De-La-Torre, “The importance of the sense of touch in virtual and real environments”, IEEE Multimedia, 13(3), special issue on Haptic User Interfaces for Multimedia Systems, pp 24-30 (2006). doi:10.1109/MMUL.2006.69

V. Hayward, O.R. Astley, M. Cruz-Hernandez, D. Grant and G. Robles-De-La-Torre, “Haptic interfaces and devices”, Sensor Review, 24(1), pp 16-29 (2004). doi:10.1108/02602280410515770

J. Vince, Introduction to Virtual Reality, Springer, London (2004). doi:10.1007/978-0-85729-386-2

H. Iwata, “Haptic interfaces”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).

W. Barfield and T. Caudell, Fundamentals of Wearable Computers and Augmented Reality, Lawrence Erlbaum Associates, Mahwah (2001). doi:10.1201/9780585383590

M.D. Yacoub, Wireless Technology: Protocols, Standards, and Techniques, CRC Press, London (2002).

K. McMenemy and S. Ferguson, A Hitchhiker’s Guide to Virtual Reality, A K Peters, Wellesley (2007). doi:10.1201/b10677

Global Positioning System, “Home page”, http://www.gps.gov/, visited on 10/10/2007.

S.G. Burnay, T.L. Williams and C.H. Jones, Applications of Thermal Imaging, A. Hilger, Bristol (1988).

J.Y. Chai, P. Hong and M.X. Zhou, “A probabilistic approach to reference resolution in multimodal user interfaces”, Proceedings of the 9th International Conference on Intelligent User Interfaces, Funchal, Madeira, Portugal, pp 70-77 (2004). doi:10.1145/964442.964457

E.A. Bretz, “When work is fun and games”, IEEE Spectrum, 39(12), p 50 (2002). doi:10.1109/MSPEC.2002.1088457

ExtremeTech, “Canesta says ‘Virtual Keyboard’ is reality”, http://www.extremetech.com/article2/0,1558,539778,00.asp, visited on 15/10/2007.

G. Riva, F. Vatalaro, F. Davide and M. Alcañiz, Ambient Intelligence: The Evolution of Technology, Communication and Cognition towards the Future of HCI, IOS Press, Fairfax (2005).

M.T. Maybury and W. Wahlster, Readings in Intelligent User Interfaces, Morgan Kaufmann, San Francisco (1998). doi:10.1145/291080.291081

A. Kirlik, Adaptive Perspectives on Human-Technology Interaction, Oxford University Press, Oxford (2006).

S.L. Oviatt, P. Cohen, L. Wu, J. Vergo, L. Duncan, B. Suhm, J. Bers, T. Holzman, T. Winograd, J. Landay, J. Larson and D. Ferro, “Designing the user interface for multimodal speech and pen-based gesture applications: state-of-the-art systems and future research directions”, Human-Computer Interaction, 15, pp 263-322 (2000).

D.M. Gavrila, “The visual analysis of human movement: a survey”, Computer Vision and Image Understanding, 73(1), pp 82-98 (1999).

L.E. Sibert and R.J.K. Jacob, “Evaluation of eye gaze interaction”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp 281-288 (2000). doi:10.1145/332040.332445

Various Authors, “Adaptive, intelligent and emotional user interfaces”, Part II of HCI Intelligent Multimodal Interaction Environments, 12th International Conference, HCI International 2007, Proceedings Part III, Springer, Berlin/Heidelberg (2007).

M.N. Huhns and M.P. Singh (eds), Readings in Agents, Morgan Kaufmann, San Francisco (1998).

C.S. Wasson, System Analysis, Design, and Development: Concepts, Principles, and Practices, John Wiley & Sons, Hoboken (2006).

A. Jaimes and N. Sebe, “Multimodal human computer interaction: a survey”, Computer Vision and Image Understanding, 108(1-2), pp 116-134 (2007).

I. Cohen, N. Sebe, A. Garg, L. Chen and T.S. Huang, “Facial expression recognition from video sequences: temporal and static modeling”, Computer Vision and Image Understanding, 91(1-2), pp 160-187 (2003).

B. Fasel and J. Luettin, “Automatic facial expression analysis: a survey”, Pattern Recognition, 36, pp 259-275 (2003).

M. Pantic and L.J.M. Rothkrantz, “Automatic analysis of facial expressions: the state of the art”, IEEE Transactions on PAMI, 22(12), pp 1424-1445 (2000).

J.K. Aggarwal and Q. Cai, “Human motion analysis: a review”, Computer Vision and Image Understanding, 73(3), pp 428-440 (1999).

S. Kettebekov and R. Sharma, “Understanding gestures in multimodal human computer interaction”, International Journal on Artificial Intelligence Tools, 9(2), pp 205-223 (2000). doi:10.1142/S021821300000015X

Y. Wu and T. Huang, “Vision-based gesture recognition: a review”, in A. Braffort, R. Gherbi, S. Gibet, J. Richardson and D. Teil (eds), Gesture-Based Communication in Human-Computer Interaction, Lecture Notes in Artificial Intelligence, volume 1739, Springer-Verlag, Berlin/Heidelberg (1999).

T. Kirishima, K. Sato and K. Chihara, “Real-time gesture recognition by learning and selective control of visual interest points”, IEEE Transactions on PAMI, 27(3), pp 351-364 (2005). doi:10.1109/TPAMI.2005.61

R. Ruddaraju, A. Haro, K. Nagel, Q. Tran, I. Essa, G. Abowd and E. Mynatt, “Perceptual user interfaces using vision-based eye tracking”, Proceedings of the 5th International Conference on Multimodal Interfaces, Vancouver, pp 227-233 (2003). doi:10.1145/958432.958475

A.T. Duchowski, “A breadth-first survey of eye tracking applications”, Behavior Research Methods, Instruments, and Computers, 34(4), pp 455-470 (2002). doi:10.3758/BF03195475

P. Rubin, E. Vatikiotis-Bateson and C. Benoit (eds), “Special issue on audio-visual speech processing”, Speech Communication, 26, pp 1-2 (1998). doi:10.1016/S0167-6393(98)00046-6

J.P. Campbell Jr., “Speaker recognition: a tutorial”, Proceedings of the IEEE, 85(9), pp 1437-1462 (1997).

P.Y. Oudeyer, “The production and recognition of emotions in speech: features and algorithms”, International Journal of Human-Computer Studies, 59(1-2), pp 157-183 (2003).

L.S. Chen, Joint Processing of Audio-Visual Information for the Recognition of Emotional Expressions in Human-Computer Interaction, PhD thesis, University of Illinois at Urbana-Champaign (2000).

M. Schröder, D. Heylen and I. Poggi, “Perception of non-verbal emotional listener feedback”, Proceedings of Speech Prosody 2006, Dresden, Germany, pp 43-46 (2006).

M.J. Lyons, M. Haehnel and N. Tetsutani, “Designing, playing, and performing with a vision-based mouth interface”, Proceedings of the 2003 Conference on New Interfaces for Musical Expression, Montreal, pp 116-121 (2003).

D. Göger, K. Weiss, C. Burghart and H. Wörn, “Sensitive skin for a humanoid robot”, Human-Centered Robotic Systems (HCRS’06), Munich (2006).

O. Khatib, O. Brock, K.S. Chang, D. Ruspini, L. Sentis and S. Viji, “Human-centered robotics and interactive haptic simulation”, International Journal of Robotics Research, 23(2), pp 167-178 (2004). doi:10.1177/0278364904041325

C. Burghart, O. Schorr, S. Yigit, N. Hata, K. Chinzei, A. Timofeev, R. Kikinis, H. Wörn and U. Rembold, “A multi-agent system architecture for man-machine interaction in computer aided surgery”, Proceedings of the 16th IAR Annual Meeting, Strasbourg, pp 117-123 (2001).

A. Legin, A. Rudnitskaya, B. Seleznev and Yu. Vlasov, “Electronic tongue for quality assessment of ethanol, vodka and eau-de-vie”, Analytica Chimica Acta, 534, pp 129-135 (2005). doi:10.1016/j.aca.2004.11.027

S. Oviatt, “Multimodal interfaces”, in J.A. Jacko and A. Sears (eds), The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, Lawrence Erlbaum Associates, Mahwah (2003).

R.A. Bolt, “Put-that-there: voice and gesture at the graphics interface”, Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques, Seattle, Washington, United States, pp 262-270 (1980).

M. Johnston and S. Bangalore, “MATCHKiosk: a multimodal interactive city guide”, Proceedings of the ACL 2004 Interactive Poster and Demonstration Sessions, Barcelona, Spain, Article No. 33 (2004).

I. McCowan, D. Gatica-Perez, S. Bengio, G. Lathoud, M. Barnard and D. Zhang, “Automatic analysis of multimodal group actions in meetings”, IEEE Transactions on PAMI, 27(3), pp 305-317 (2005). doi:10.1109/TPAMI.2005.49

S. Meyer and A. Rakotonirainy, “A survey of research on context-aware homes”, Australasian Information Security Workshop Conference on ACSW Frontiers, pp 159-168 (2003).

P. Smith, M. Shah and N.D.V. Lobo, “Determining driver visual attention with one camera”, IEEE Transactions on Intelligent Transportation Systems, 4(4), pp 205-218 (2003). doi:10.1109/TITS.2003.821342

K. Salen and E. Zimmerman, Rules of Play: Game Design Fundamentals, MIT Press, Cambridge (2003).

Y. Arafa and A. Mamdani, “Building multi-modal personal sales agents as interfaces to e-commerce applications”, Proceedings of the 6th International Computer Science Conference on Active Media Technology, pp 113-133 (2001). doi:10.1007/3-540-45336-9_16

Y. Kuno, N. Shimada and Y. Shirai, “Look where you’re going: a robotic wheelchair based on the integration of human and environmental observations”, IEEE Robotics & Automation Magazine, 10(1), pp 26-34 (2003).

A. Ronzhin and A. Karpov, “Assistive multimodal system based on speech recognition and head tracking”, Proceedings of the 13th European Signal Processing Conference, Antalya (2005).

M. Pantic, A. Pentland, A. Nijholt and T. Huang, “Human computing and machine understanding of human behavior: a survey”, Proceedings of the 8th International Conference on Multimodal Interfaces, Banff, Alberta, Canada, pp 239-248 (2006).

A. Kapoor, W. Burleson and R.W. Picard, “Automatic prediction of frustration”, International Journal of Human-Computer Studies, 65, pp 724-736 (2007). doi:10.1016/j.ijhcs.2007.02.003

H. Gunes and M. Piccardi, “Bi-modal emotion recognition from expressive face and body gestures”, Journal of Network and Computer Applications, 30, pp 1334-1345 (2007). doi:10.1016/j.jnca.2006.09.007

C. Busso, Z. Deng, S. Yildirim, M. Bulut, C.M. Lee, A. Kazemzadeh, S. Lee, U. Neumann and S. Narayanan, “Analysis of emotion recognition using facial expressions, speech and multimodal information”, Proceedings of the 6th International Conference on Multimodal Interfaces, State College, PA, USA, pp 205-211 (2004). doi:10.1145/1027933.1027968

M. Johnston, P.R. Cohen, D. McGee, S.L. Oviatt, J.A. Pittman and I. Smith, “Unification-based multimodal integration”, Proceedings of the Eighth Conference of the European Chapter of the Association for Computational Linguistics, pp 281-288 (1997). doi:10.3115/979617.979653

D. Perzanowski, A. Schultz, W. Adams, E. Marsh and M. Bugajska, “Building a multimodal human-robot interface”, IEEE Intelligent Systems, 16, pp 16-21 (2001). doi:10.1109/MIS.2001.1183338

H. Holzapfel, K. Nickel and R. Stiefelhagen, “Implementation and evaluation of a constraint-based multimodal fusion system for speech and 3D pointing gestures”, Proceedings of the 6th International Conference on Multimodal Interfaces, pp 175-182 (2004). doi:10.1145/1027933.1027964

Brown University, Biology and Medicine, “Robotic Surgery: Neuro-Surgery”, http://biomed.brown.edu/Courses/BI108/BI108_2005_Groups/04/neurology.html, visited on 15/10/2007.
