References

World Health Organization. (2011). World Report on Disability. Accessed: Sep. 9, 2016. [Online]. Available: http://www.who.int/disabilities/worldreport/2011/en/

Wilkinson, K. M., & Hennig, S. (2007). The state of research and practice in augmentative and alternative communication for children with developmental/intellectual disabilities. Developmental Disabilities Research Reviews, 13(1), 58–69.

DeCoste, D. C., & Glennen, S. (1997). The Handbook of Augmentative and Alternative Communication. San Diego, CA, USA: Singular.

de Sousa Gomide, R., Loja, L. F. B., Lemos, R. P., Flôres, E. L., Melo, F. R., & Teixeira, R. A. G. (2016). A new concept of assistive virtual keyboards based on a systematic review of text entry optimization techniques. Research in Biomedical Engineering, 32(2), 176–198.

Mele, M. L., & Federici, S. (2012). A psychotechnological review on eye-tracking systems: Towards user experience. Disability and Rehabilitation: Assistive Technology, 7(4), 261–281.

Park, S.-W., Yim, Y.-L., Yi, S.-H., Kim, H.-Y., & Jung, S.-M. (2012). Augmentative and alternative communication training using eye blink switch for locked-in syndrome patient. Annals of Rehabilitation Medicine, 36(2), 268–272.

Cipresso, P., et al. (2011). The combined use of brain-computer interface and eye-tracking technology for cognitive assessment in amyotrophic lateral sclerosis. In Proceedings of the IEEE International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) (pp. 320–324).

Schalk, G., Brunner, P., Gerhardt, L. A., Bischof, H., & Wolpaw, J. R. (2008). Brain–computer interfaces (BCIs): Detection instead of classification. Journal of Neuroscience Methods, 167(1), 51–62.

Usakli, A. B., & Gurkan, S. (2010). Design of a novel efficient human-computer interface: An electrooculogram based virtual keyboard. IEEE Transactions on Instrumentation and Measurement, 59(8), 2099–2108.

Fu, Y.-F., & Ho, C.-S. (2009). A fast text-based communication system for handicapped aphasiacs. In Proceedings of the IEEE Conference on Information Intelligence and Security (pp. 583–594).

Orhan, U., Hild, K. E., Erdogmus, D., Roark, B., Oken, B., & Fried-Oken, M. (2013). RSVP keyboard: An EEG based typing interface. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1–11).

Biswas, P., & Samanta, D. (2008). Friend: A communication aid for persons with disabilities. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 16(2), 205–209.

Molina, A. J., Rivera, O., & Gómez, I. (2009). Measuring performance of virtual keyboards based on cyclic scanning. In Proceedings of the IEEE International Conference on Autonomic and Autonomous Systems (pp. 174–178).

Ghosh, S., Sarcar, S., & Samanta, D. (2011). Designing an efficient virtual keyboard for text composition in Bengali. In Proceedings of the ACM International Conference on Human-Computer Interaction in India (pp. 84–87).

Bhattacharya, S., & Laha, S. (2013). Bengali text input interface design for mobile devices. Universal Access in the Information Society, 12(4), 441–451.

Samanta, D., Sarcar, S., & Ghosh, S. (2013). An approach to design virtual keyboards for text composition in Indian languages. International Journal of Human-Computer Interaction, 29(8), 516–540.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press.

Ward, D. J., & MacKay, D. J. C. (2002). Artificial intelligence: Fast hands-free writing by gaze direction. Nature, 418, 838.

Rough, D., Vertanen, K., & Kristensson, P. O. (2014). An evaluation of Dasher with a high-performance language model as a gaze communication method. In Proceedings of the International Working Conference on Advanced Visual Interfaces (pp. 169–176).

Cecotti, H. (2016). A multimodal gaze-controlled virtual keyboard. IEEE Transactions on Human-Machine Systems, 46(4), 601–606.

Sarcar, S., & Panwar, P. (2013). Eyeboard++: An enhanced eye gaze-based text entry system in Hindi. In Proceedings of the ACM International Conference on Computer-Human Interaction (pp. 354–363).

Anson, D., et al. (2006). The effects of word completion and word prediction on typing rates using on-screen keyboards. Assistive Technology, 18(2), 146–154.

Pouplin, S., et al. (2014). Effect of a dynamic keyboard and word prediction systems on text input speed in patients with functional tetraplegia. Journal of Rehabilitation Research & Development, 51(3), 467–480.

Jacob, R. J. K. (1990). What you look at is what you get: Eye movement-based interaction techniques. In Proceedings of the ACM International Conference on Human Factors in Computing Systems (pp. 11–18).

Prabhu, V., & Prasad, G. (2011). Designing a virtual keyboard with multimodal access for people with disabilities. In Proceedings of the IEEE International Conference on Information and Communication Technologies (pp. 1133–1138).

Singh, J. V., & Prasad, G. (2015). Enhancing an eye-tracker based human-computer interface with multi-modal accessibility applied for text entry. International Journal of Computers & Applications, 130(16), 16–22.

Cretual, A., & Chaumette, F. (2001). Application of motion-based visual servoing to target tracking. The International Journal of Robotics Research, 20(2), 169–182.

Gaskett, C., Fletcher, L., Zelinsky, A., et al. (2000). Reinforcement learning for visual servoing of a mobile robot. In Australian Conference on Robotics and Automation.

Bustamante, G., Danès, P., Forgue, T., & Podlubne, A. (2016). Towards information-based feedback control for binaural active localization. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.

Magassouba, A., Bertin, N., & Chaumette, F. (2018). Aural servo: Sensor-based control from robot audition. IEEE Transactions on Robotics, 34(1), 169–186.

Ghadirzadeh, A., Bütepage, J., Maki, A., Kragic, D., & Björkman, M. (2016). A sensorimotor reinforcement learning framework for physical human-robot interaction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 2682–2688).

Mitsunaga, N., Smith, C., Kanda, T., Ishiguro, H., & Hagita, N. (2006). Robot behavior adaptation for human-robot interaction based on policy gradient reinforcement learning. Journal of Robotic Systems, 23(10), 545–554.

Thomaz, A. L., Hoffman, G., & Breazeal, C. (2006). Reinforcement learning with human teachers: Understanding how people want to teach robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 352–357).

Cruz, F., Parisi, G. I., Twiefel, J., & Wermter, S. (2016). Multi-modal integration of dynamic audiovisual patterns for an interactive reinforcement learning scenario. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 759–766).

Rothbucher, M., Denk, C., & Diepold, K. (2012). Robotic gaze control using reinforcement learning. In Proceedings of the IEEE International Symposium on Haptic Audio-Visual Environments and Games.

Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2016). Robot gains social intelligence through multimodal deep reinforcement learning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 745–751).

Qureshi, A. H., Nakamura, Y., Yoshikawa, Y., & Ishiguro, H. (2017). Show, attend and interact: Perceivable human-robot social interaction through neural attention Q-network. In Proceedings of the IEEE International Conference on Robotics and Automation.

Vázquez, M., Steinfeld, A., & Hudson, S. E. (2016). Maintaining awareness of the focus of attention of a conversation: A robot-centric reinforcement learning approach. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.

Bennewitz, M., Faber, F., Joho, D., Schreiber, M., & Behnke, S. (2005). Towards a humanoid museum guide robot that interacts with multiple persons. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 418–423).

Ban, Y., Alameda-Pineda, X., Badeig, F., Ba, S., & Horaud, R. (2017). Tracking a varying number of people with a visually-controlled robotic head. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.

Yun, S.-S. (2017). A gaze control of socially interactive robots in multiple-person interaction. Robotica, 35(11), 2122–2138.