Introduction

Globally, at least 2.2 billion individuals are blind or have some degree of near or distance vision impairment, according to estimates from the World Health Organisation (World Health Organisation, 2021). Guide Dogs are a key mobility aid for people who are blind or have low vision (Lloyd, et al., 2021) and can support more fluid mobility for the user than other existing aids, including white canes and sensor-integrated electronic mobility aids. Guide Dogs Australia are the leading provider of Orientation and Mobility services for people who are blind or have low vision, including comprehensive Guide Dog breeding, training, and mobility services. Training Guide Dogs to be effective mobility aids is a time-consuming, costly process which requires highly specialised expertise. Internationally qualified Guide Dog Mobility Instructors (GDMIs) train every Guide Dog to recognise obstacles, move through large crowds, board public transportation, and get the user almost anywhere they need or want to go. This kind of mobility support from a trained Guide Dog allows the user to move freely and confidently regardless of their vision level. However, Guide Dogs are not universally available and may not be suitable for all users due to factors such as user expectations, travel patterns, living situation and attitude towards dogs. Moreover, despite the level of training that Guide Dogs receive, their performance may vary across climatic conditions and circumstances, depending on factors such as the dog's health and physical and mental condition.

Given the substantial advancements in modern technology, it is possible to provide consistent, vital support and increase the mobility of people who are blind or have low vision through intelligent robots with smart sensing and advanced artificial intelligence features. Experts from Guide Dogs Australia, in collaboration with researchers from the UTS Robotics Institute, aim to develop an intelligent robot guide to provide consistent mobility and navigation support for people who are blind or have low vision, similar to that offered by Guide Dogs, while recognising that such a device would not equal a real Guide Dog in terms of experience, appearance, behaviour and social support.

In order to provide consistency and support more people who are blind or have low vision, it is necessary to design a robot guide with key features that allow the robot to comprehend and react effectively in various environments and contexts. Therefore, in this article, we discuss commonly used sensor-assisted handheld electronic mobility aids and outline a set of fundamental functional mobility aid characteristics. We also compare existing robotic mobility aids and cutting-edge robotics technology against these key characteristics and discuss gaps in functionality. Finally, the article provides recommendations for next steps in the development of a minimum viable product robot mobility aid to support people who are blind or have low vision in everyday mobility and navigation tasks.

Commonly Used Sensor Integrated Hand-held Mobility Aids

In addition to white canes and Guide Dogs, there are low-cost, handheld, sensor-assisted technologies, such as smart canes, ultrasonic obstacle detectors and GPS navigation devices, that provide secondary support for the navigation and mobility of people who are blind or have low vision.

Sensor Assisted Long Canes

The long cane is the most widely used device by people who are blind or have low vision to identify obstacles during travel (Shiizu, et al., 2007). A comparative study was carried out by Santos and colleagues (2020) with blind and blindfolded participants to assess the performance of electronic long canes and traditional canes. The findings of the research suggest that electronic long canes are advantageous as they integrate technologies which can detect hanging obstacles, where a traditional long cane cannot.

Several developments in the area of electronic long cane systems have been reported over the years. For instance, in Global Positioning System (GPS) enabled environments, Guerrero et al. (2018) reported a smart cane which leverages ultrasonic sensors to detect obstacles from 0.5m-5m in front and 90° to the left and right of the user. Users are alerted to detected obstacles by an auditory alarm. Additionally, the GPS module in the white cane can report an estimate of the user’s location.

In non-GPS environments, Barra et al. (2019) enhanced the mobility of test subjects using a long cane with integrated sensors (inertial measurement unit [IMU] and Pulse-Doppler radar) which report the user’s location.

Researchers have also attached 3D printed interdigitated capacitor-based tilt sensors (Ozioko, et al., 2021), electromagnetic sensors (Cardillo, et al., 2018), optical sensors based on Light Detection and Ranging (LiDAR) (Yupa, et al., 2021), and proximity sensors (Wong, et al., 2001) to long canes to support the mobility of people who are blind or have low vision in different settings.

The primary advantages of sensor assisted long canes are that they are familiar, low-cost, easily replaceable, and provide information about the user’s immediate environment, such as ground surface, hazards and walkway conditions. This enables the user to make informed adjustments and maintain orientation in the environment. However, long canes have disadvantages resulting from their physical size. The cane detects environmental elements through contact, which may sometimes be unsafe or socially inappropriate, as in the case of other people or breakable obstacles, and can increase the difficulty of movement for the user. Additionally, weather conditions may affect how the user perceives the area.

Ultrasonic Obstacle Detectors

Ultrasonic obstacle detectors, such as the Miniguide or BuzzClip, are electronic devices that support the user to detect obstacles in a variety of environments. The user holds the device in the palm of their hand and scans from left to right as they move (Hill & Black, 2003). An ultrasonic sensor is used to detect obstacles and alert the user through haptic (vibration) or audio feedback (Petsiuk & Pearce, 2019).

There are various benefits to utilising ultrasonic devices. They are largely controlled via a single button and have a battery life that can last for several months. They are also effective for detecting obstacles without needing to make physical contact, including hanging obstructions, street furniture, parked automobiles, and tables and chairs in residential and commercial settings. However, in contrast to the long cane, ultrasonic devices are less able to detect changes in ground surface levels and trip hazards such as stairs, railway platform drop-offs, and road crossings. Therefore, the majority of individuals choose to utilise ultrasonic devices as an auxiliary device in addition to a long cane.

GPS Navigation Devices

GPS navigation devices, such as the Victor Reader Trek, combine GPS with an audio output to assist the mobility of the user (Deverell, et al., 2020). These devices provide real-time auditory feedback about the user’s location, direction of travel, points of interest in the vicinity and route guidance. They can be standalone devices or available through Smartphone applications. They can also provide additional audio features, such as podcasts and audiobooks. The pedestrian guidance function gives a greater verbosity of information to add to user awareness of their surroundings.

GPS aids, in a similar way to the ultrasonic aids, are used as complementary devices to the long cane to assist people who are blind or have low vision with navigation and mobility.

Remarks on Commonly Used Sensor Integrated Hand-held Mobility Aids

In the preceding sub-sections, we have discussed the commonly used sensor integrated hand-held mobility aids such as the smart long cane, ultrasonic devices and GPS devices. Although such systems are driven by smart sensing and powerful navigation technology, their features are restricted to a single functionality. As a result, combining multiple features into a single system which considers the latest technology available may considerably improve the ability of users to move around efficiently, safely and with confidence. An intelligent guide robot with many features is a prospective contender for overcoming the constraints of hand-held mobility aids.

Fundamental Functional Features of an Intelligent Guide Robot

An effective, intelligent guide robot must include as many key functional characteristics of mobility aids, including Guide Dogs, canes, human guides and technologies, as possible. Building on suggestions from past studies (Holmes & Prentice, 2015) to develop a cost-efficient strategy for expanding mobility services and increasing traveller autonomy, discussions were carried out with Guide Dog Mobility Instructors, Orientation and Mobility Specialists and people who are blind or have low vision. As a result of these discussions, we have identified eight fundamental functional requirements for the supported mobility of individuals who may use a robotic guide, grouped under four themes: (a) sensing and interpreting, (b) device interface, (c) fluid and adaptable movement, and (d) design factors. The functional requirements are discussed as follows:

Sensing and Interpreting

Sensing and interpreting is the functional requirement covering the ability to sense features in the path of travel, including significant changes in ground surface level and obstacles that would intersect the path of travel, to detect critical environmental cues, and to prioritise the resulting information in the context of the travel plan.

Obstacle Detection and Object Targeting are elements of sensing and interpreting the distance and motion of obstacles with the use of sensors and algorithms in order to travel safely through different environments (Shoval, et al., 1994). Sensors often employed include ultrasonic sensors (Shoval, et al., 1998), cameras (Rahman, et al., 2018), laser range scanners (Gomez & Sandnes, 2012), sonar sensors (Antoun & McKerrow, 2010), millimetre wave radars (Long, et al., 2019), and RGB-D sensors (Long, et al., 2019). Sensor data is analysed using a variety of signal processing methods, including cutting-edge machine learning algorithms (Vorapatratorn, et al., 2021; Gunethilake, 2020), in order to identify potential obstructions in the path of the sensing device.
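To make the detection step concrete, the range-thresholding logic common to ultrasonic aids and robot sensors can be sketched in a few lines. This is a minimal illustration only; the function name and alert levels are assumptions, and the range values simply reuse the 0.5m-5m detection window reported for the smart cane above, not the parameters of any cited system.

```python
def classify_reading(distance_m: float,
                     min_range_m: float = 0.5,
                     max_range_m: float = 5.0) -> str:
    """Map one ultrasonic distance reading to an alert level.

    Readings below the minimum range trigger an immediate warning,
    readings inside the detection band trigger a standard warning,
    and anything beyond the band is treated as clear.
    """
    if distance_m < min_range_m:
        return "imminent"   # too close: immediate alert
    if distance_m <= max_range_m:
        return "warning"    # obstacle within the detection band
    return "clear"          # nothing in range
```

In a device, the returned level would drive the auditory or haptic alarm; here it is just a label so the thresholding logic stays visible.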

In the case of guiding robots, obstacle detection and object targeting are vital functions for enabling the user to move freely and confidently throughout the environment (Capi & Toda, 2011). The guide robot would need to be capable of assessing the location, speed, direction, and distance of upcoming objects including ground hazards such as road signs, sloping surfaces such as steps or ramps, and off-ground impediments like trees or overhead signs before they contact the user.

Obstacle and object detection for this application requires the robot to be aware of and able to evaluate both its own and the user’s surroundings. The majority of commercially available guide robots employ front and back cameras to provide real-time 3D vision and modify their movements appropriately. Distance sensors are coupled on their bodies to detect barriers and make adjustments as needed. However, there have not been significant efforts to increase robots’ awareness of the surroundings of a separate entity (e.g. a blind user beside or behind the robot). In general, further research and comprehensive testing is required to ensure that obstacle detection is effective and fully reviewed.

Obstacle Avoidance

Obstacle avoidance is the ability to react quickly and safely to a threat by coordinating movement to avoid another object (Kayukawa, et al., 2019). Obstacle avoidance is enabled by obstacle detection, and the two functions often operate in parallel (Lacey & Dawson-Howe, 1998). Both obstacle detection and obstacle avoidance are critical abilities in the context of a guiding device.

Obstacle avoidance systems for robots are available in a range of configurations (Sharifi, et al., 2020; Guerreiro, et al., 2019; Rumipamba L., et al., 2019; Ulrich & Borenstein, 1998). In terms of functionality, this is the basic minimum requirement for general-purpose robotic guides. The Miniguide is similar in purpose to the robotic application, as ultrasonic sensing is utilised to notify users when they are in close proximity to an object in their environment.

Over the years, many obstacle detection and avoidance algorithms have been developed and implemented on a variety of platforms (Bruno, et al., 2019; Chuang, et al., 2018; Wei, et al., 2014; Miyazaki, et al., 2000). As a result, the technology can be considered practically mature. Nevertheless, it still has to be applied and evaluated in the context of a standalone robot guide.

Enables Fluid and Adaptable Movement

Fluid and adaptable movement refers to the ability of the user and robot guide pair to have a smooth natural gait and adapt to sudden changes in that gait pattern such as stopping.

Mobility can be described as the act or ability to move from one’s present position to one’s desired position in another part of the environment safely and comfortably (Blasch, Weiner & Welsh, 1997). A guiding aid should be able to keep up with the user's pace and react promptly to new circumstances, such as obstacles on the walking route, as they arise (Khandelwal & Stone, 2014).

The use of quadrupedal (four-legged) mobility in guiding robots has been suggested in recent research as a viable alternative when considering stability and manoeuvrability (Chen & Tsui, 2021; Raibert, et al., 2008). However, complicated, undulating terrains and friction-sensitive surfaces must be meticulously considered and rigorously tested to ensure that the robot's performance is synchronised with the user's pace during its operation. Further investigation is required into robot mobility, particularly the capacity to recognise a surface and employ countermeasures to remain stable, and to employ a mobility threshold to lower the risk of slipping or falling by alerting the user. Orientation, stability, and a consistent walking pattern are the bare minimum requirements for this functional characteristic. Use of quadrupedal legs has many advantages, including the ability to self-correct after a fall, a large number of movable joints (i.e. degrees of freedom) and enhanced reflexes (Boston Dynamics, 2021).

For the purpose of constructing a robot guide, fewer legs mean less adaptable movement. Conversely, more legs allow for more complex movements but are more difficult and time-consuming for the robot to compute. Current state-of-the-art mobility technology mentioned in this section is ready to deploy, scan and self-adjust with respect to its surroundings.

Navigation

Navigation can be described as knowing where you are, where you are going, and how you will get there (Lacey & Howe, 1997). To develop a guiding device capable of supporting independent travel, it is essential that the device can recognise the user’s location and points of interest in the vicinity, and can direct the user to a set location within a measured timeframe (Wei, et al., 2014).

Several stimuli must be recognised by the robot guide and calculated to inform route suggestions. For example, the density and potential duration of a human crowd and locations of interest. This would provide a considerable service to the user, particularly if the individual is unfamiliar with the surroundings and wants to explore with their robot companion. The robot must be extremely specialised in order to be able to recognise elements of interest in diverse situations and transmit relevant information to the user, allowing for flexibility of route and destination.

Localisation and GPS are a common technological combination that helps provide accurate location information, allowing for real-time navigation of the device (Endo, et al., 2017; Adagale & Mahajan, 2015; Yelamarthi, et al., 2010). Although technology for localisation and navigation is available, it will be necessary to evaluate it in the context of a robotic guide. There are still challenges in achieving the levels of accuracy, reliability, and availability necessary for a guiding aid. Functions enabled by GPS localisation may also be restricted in certain environments, such as a retail mall, private residence or other unmapped space. Indoor navigation requires a bespoke localisation system, such as beacon technology, which makes it a complex task (Megalingam, et al., 2019). Although several efforts have been made to accomplish indoor localisation, the technology has yet to mature for critical applications such as a robotic guide.
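The core arithmetic behind GPS route guidance, computing how far and in which direction the next waypoint lies from the user's current fix, can be sketched with the standard haversine and initial-bearing formulae. This is a generic illustration of the underlying geometry, not the implementation of any device discussed above, and the function name is our own.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in metres, initial bearing in degrees
    clockwise from north) from fix 1 to fix 2, both in decimal
    degrees, using the haversine great-circle formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine distance
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing
    y = math.sin(dlon) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```

A guiding device would convert the bearing into an instruction relative to the user's heading ("slightly left", "turn around") rather than reporting raw degrees.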

Interlink of Device Interface and Sensing and Interpreting (Communication with User)

For the user of a guiding aid, the amount and type of data obtained and the quality of the translation has a significant influence on the decision-making process. A guiding device must take into consideration how, when, and how much information should be given to the user, and provide it in a manner that is personalised to their individual needs and adapts to their current circumstances.

Taking a simple walk in a crowded area can be overwhelming; as a countermeasure, the robot must plan ahead and direct the user away from busy pathways by relaying haptic (touch) or audio information via voice output (Zhang, et al., 2021). The device may provide information on the environment and the user’s whereabouts, and suggest alternative routes.

Understanding and being aware of environmental variables may improve the handler's confidence and independence with a robot guide, as well as their capacity to prepare ahead of time for their own safety and the safety of others. Numerous design elements must be considered: a communication baseline of providing minimal, crucial information may help minimise cognitive overload and reduce stress levels in the user, while excessive information may lead to confusion, misunderstanding or frustration with the device.

In addition, it is necessary for the user to receive feedback with minimal delays. There may be a direct relationship between the amount of time necessary to gather and convert data from the robots' cameras and sensors, and the amount of time available to react to the present circumstances. For example, a robot guide with delayed information relay may not allow the user enough time to react to moving obstacles, such as other people intersecting the user’s path.

To support the user’s adaptation to a new guiding aid, such as a robot guide, it is important that information is communicated using the methods the user prefers. The method may be determined by previous experience, other health conditions or disabilities, the ability to process the information in a timely way to assist reactivity and understanding, or may be dictated by the environment (for example, noisy, congested or complex settings). In terms of functional efficacy, this is still in its early stages. For this to be possible, the robot guide's hardware must be capable of gathering, comparing, and transmitting information to the user in a cohesive manner. Moreover, because data can only be stored and used within a finite amount of storage space, even with the most advanced hardware and the highest level of construction quality, maintaining the required quality and speed of information input and output would be a difficult task.

Device Interface (Communication with Device)

Device Interface refers to the ability of artificial intelligence (AI) to comprehend and respond to communications from its users. The AI interprets information from the user and must choose the most relevant response from many possible solutions.

When using a robot guide, it is vital that the user can communicate with, and their intentions be understood by, the AI (Kalpana, et al., 2020; Kulyukin, et al., 1999). The robot must be able to take directions from the user and respond appropriately. A self-learning system that adapts to the user's preferences is required, as are established algorithms that identify the best responses to an inquiry after receiving an intent.

Real-time interactions may be possible via the use of speech recognition and voice synthesis. It is possible to employ a tactile or handheld method as a means of communicating intent, such as signalling direction changes and communicating status information between the robot and the user. Currently, autocorrection software, language translators, and voice input are examples of direct human-robot connection in commercially available products. It is hoped that technology will continue to develop and allow for more intricate communication between person and machine. While this functional characteristic has not been thoroughly investigated for the purpose of robotics, it provides a strong basis upon which to build by starting with basic orders such as “stop”, “go left/right”, or “go faster”.
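A minimal sketch of how such basic orders might be mapped to robot actions is shown below. A real system would sit behind speech recognition and intent-classification models; the phrase list and action names here are purely hypothetical, chosen only to mirror the example orders above.

```python
# Hypothetical phrase-to-action table for the basic orders discussed
# in the text. A production system would use trained intent models,
# not substring matching.
INTENTS = {
    "stop": "halt",
    "go left": "turn_left",
    "go right": "turn_right",
    "go faster": "increase_pace",
}

def parse_command(utterance: str) -> str:
    """Map a recognised phrase to a robot action, defaulting to a
    clarification request for anything unrecognised."""
    text = utterance.lower().strip()
    for phrase, action in INTENTS.items():
        if phrase in text:
            return action
    return "ask_to_repeat"  # unknown input: ask rather than guess
```

Defaulting to a clarification request rather than guessing reflects the communication principle above: for a safety-critical aid, asking again is preferable to acting on a misunderstood order.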

Form Factor

Form factor describes the optimal shape and size of the robot guide, which must be determined by the individual needs of the user. The robot's physical form must be easily transportable and socially acceptable. To begin with, if the device is designed in the shape of a dog, it may allow the public to identify the role of the robot as a guiding aid more easily.

The design base of a guiding robot may be similar in size and form to a typical Guide Dog but other forms will be explored. Over time, the movement technology in robotic dogs has developed from basic circular wheels to legs with multi-stage actuators that allow for joint flexibility. Robots on wheels have the benefit of being quicker, more stable, and simpler to handle than other types of robots. They do, however, need a somewhat level surface in order to function properly.

There are many advantages to using legs rather than wheels, including the ability to self-stabilise, self-right, and exert force in an opposite direction to physical contact. For successful autonomous mobility, almost every robot dog prototype that has been produced to date has been based on this design principle.

While the addition of legs enhances a robot’s agility and capacity to navigate uneven terrain, it also generates potential issues such as instability, sluggishness, and added complexity of movement and control. With current advanced technology, the suggested form factor is viable for development. However, as previously mentioned, the robot must also gain approval in terms of social acceptability.

Design

Endurance describes several characteristics which contribute to the short- and long-term longevity of a device. This includes hardware factors like battery life and software factors like the ability to safely transfer and store data. A robot guide must be able to manage its hardware and software autonomously, and adjust its performance in response to the expectations of the user.

The battery life should be long enough to allow the handler to complete their desired activities while using the robot guide. When in low battery phase, it should be able to conserve energy if instructed, or enter an autonomous recharging mode without the need for intervention by the user.

A robot with internet connectivity will necessitate privacy and security protocols. The robot may include a function to access security settings and updates, only accessible to the owner of the device. It must also have an autonomous security and update process which runs software checks and manages internal security.

Endurance is a fundamental need for people who use guiding devices, which necessitates the use of technologies that are self-charging and self-managing.

Existing Robotic Technologies

In this section, we will explore cutting-edge robotic technologies that may be considered in the development of a guide robot. Some listed devices are already commercially available, while others are still in prototype form.

Spot

Spot® is a mobile dog robot from Boston Dynamics, USA (Boston Dynamics, 2021). It currently comes in two models: SPOT EXPLORER and SPOT ENTERPRISE. Both models feature an auto-walk capability, which allows them to move around independently to complete tasks. SPOT EXPLORER can only travel up to 1000m for a single task, while SPOT ENTERPRISE can complete tasks at an unlimited distance. Both variants are capable of dynamically self-balancing while carrying payloads of up to 14 kilograms. They use 360-degree awareness to map terrain and avoid obstacles as they emerge. In addition, they are capable of travelling over loose gravel, grass, curbs and stairs.

Spot® robots come with a software development kit which allows the user to develop custom controls, program autonomous missions, and analyse collected sensory information with data analysis tools. Currently, this robot is employed in a variety of environments, including building sites, oil and gas facilities, mining fields, and locations that are heavily electrified or exposed to radiation.

Spot® robots have the ability to detect obstacles in their environment and avoid collisions with them. However, these capabilities are restricted to only the robot's surroundings, rather than the user’s. With pre-programming, the robot can move around in familiar environments, and can use Wi-Fi localisation and GPS to navigate in unfamiliar surroundings. Both models can be operated using a tablet computer that can be carried with the user. The tablet receives information from the robot via a real-time video transmission, which may not have relevance for a user who is blind or has low vision. The robot can also be directed to move forward or backward via the tablet. Though, again, this function may not be relevant in the context of a robot guide.

Similar to a typical Guide Dog, Spot® robots use four legs to walk. They are powered by rechargeable batteries, and in the case of the SPOT ENTERPRISE model, a self-charging function enables the robot to dock independently in a charging station. The operational time of the robot is limited to around 2 hours before recharging.

Due to the limited functional features of the commercially available prototype, the Spot® robot cannot be immediately employed as a guide robot to support the mobility of people who are blind or have low vision. However, some of the present capabilities of the system may be utilised for the development of a minimum viable product.

LIGHBOT

LIGHBOT™ is a guide robot for indoor navigation from NSK Ltd., Japan (NSK Ltd, 2015). In indoor environments, such as hospitals, LIGHBOT™ can detect and avoid obstacles during navigation. However, no information has been provided on its performance in terms of obstacle detection, avoidance, mobility, and navigation in outdoor settings. The robot does not share any information about its surroundings with the user. LIGHBOT™ is fitted with a control panel that allows sighted users to choose from a list of pre-set destinations. It also includes a remote control which can be used to stop the robot during navigation. The robot stands 110 centimetres tall and was designed only to navigate flat surfaces in indoor environments.

LIGHBOT™ was designed for indoor environments, and therefore has certain limitations in terms of obstacle recognition and avoidance. This has a significant negative influence on its mobility and navigation in outdoor environments, along with its inability to navigate non-flat surfaces. Additionally, the robot does not communicate information about the environment to the user and has only limited ability to recognise intentions through the control panel and remote. Finally, the robot’s endurance is limited by an internal battery. As a result, developing a guide robot with the current functional characteristics of LIGHBOT™ would be challenging.

ANYmal

ANYmal is an autonomous quadrupedal (four-legged) robot manufactured by ANYbotics in Switzerland (Anybotics, 2022). The robot stands at 70cm tall, 80cm long and 60cm wide, with legs that provide exceptional mobility in navigating stairs, as well as avoiding obstacles in its path. The Robotics Operating System on ANYmal allows for the development of custom software with the purpose of extending its capabilities to the user. The main body of the robot is water and dust resistant to IP67 standards.

ANYmal is equipped with sensors that can detect and avoid obstacles in its immediate surroundings. However, additional sensors would be required to detect impediments at the user's height. The robot's mobility is good in familiar environments, as shown by its ability to locate itself with centimetre precision in both confined and large open spaces. Additionally, its mobility and navigation may be supplemented by built-in 2.5 GHz or 5 GHz Wi-Fi, as well as 4G/LTE telecommunications with an add-on module, which can be purchased separately. In conjunction with on-board cameras, real-time video may be sent to a remote station. However, the information relayed may not be significant in the context of mobility for a person who is blind or has low vision. The robot may also be operated using a handheld joystick, though this may not be appropriate for guided mobility purposes. ANYmal has a payload capacity of up to 15 kilograms. When completely charged, the robot's battery has an operating duration of around 90 minutes. Additionally, it is able to dock autonomously for charging.

ANYmal possesses some characteristics necessary for use as a guide robot. However, it cannot be used as a fully operational guide robot in the current state of development. This robot may be a reasonable candidate for the foundation of a fully functioning guide robot in the future.

Aliengo

Aliengo is a sophisticated industrial grade quadrupedal robot by Unitree Robotics in China (Unitree Robotics, 2022). The robot can operate on a variety of terrains. The software control interface supports C/C++ coding language and uses the Robot Operating System, which allows for the development of user-specific software and analysis of sensor data. The robot features an external interface which includes Ethernet connectors, USB ports, and an HDMI port.

The mobility of Aliengo is excellent in familiar surroundings but restricted in unfamiliar ones. GPS may be used to navigate the robot through unfamiliar environments. Onboard sensors, such as depth cameras, are used to detect and avoid obstacles. However, detection and avoidance are only possible in the robot’s immediate space. In its current condition, it may be difficult for the robot to identify and avoid obstructions at a user's height. Video feedback from the robot’s sensors is provided to the user at a remote station, which may not be relevant or useful for a person using a guiding robot. The robot is equipped with a lithium battery pack that provides a maximum operational period of 4.5 hours.

Although Aliengo has some desired functional characteristics, it does not meet the minimal standards of a guiding robot. However, it has the potential to serve as a foundation for the development of a guide robot.

UC Berkeley Robotic Guide Dog

University of California (UC) Berkeley researchers recently published a paper (Xiao, et al., 2021) detailing the early advancements of a robotic guide dog. The robot consists of a quadrupedal Mini Cheetah robot (MIT) with custom additions. The robot can identify and avoid obstacles at close range and is able to navigate in familiar indoor areas. In addition, it can access GPS for navigation in unfamiliar outdoor environments. The robot interacts with the user via a leash, which can be used in “taut” and “slack” modes; the leash allows the robot to respond to the user in a dynamic manner, accept orders, and pass limited tactile information between the user and robot.

In the framework of a robot guide dog, this robot seems to have some essential characteristics. However, it also includes several limitations. Further improvement would be needed to utilise the product as a functional guide robot.

AlphaDog

AlphaDog is a quadrupedal robot developed by Weilan, a China-based start-up founded in 2019 (Weilan, 2022). There are four models: the AlphaDog C100, C200, E300 and E400L. AlphaDog robots include features such as artificial intelligence, Internet of Things connectivity, 5G, virtual reality, autonomous driving, and swarm intelligence. They are equipped with sensors that identify and avoid obstacles, but are unable to detect obstacles outside their immediate surroundings. AlphaDogs can be programmed to navigate familiar surroundings but may find it difficult to manoeuvre in unfamiliar ones. The robots’ 5G and GPS capabilities may facilitate navigation, but the robots need to be manually programmed for new environments. AlphaDog robots provide video relay of their surroundings and can be controlled wirelessly; however, those functions are not relevant in the context of a guiding robot. AlphaDogs are equipped with a rechargeable lithium-ion battery, which provides an average walking time of 1.5 hours for the C100, 3 hours for the C200 and E300, and 5 hours for the E400L.

AlphaDog robots possess some desired functional characteristics for robot guidance; however, they are not immediately usable as robot guides in their current state. These robots may be of interest as a foundational platform for a robot guide.

Jueying X20

DEEP Robotics, based in China, have developed the Jueying X20 industrial-grade quadrupedal robot (Deep Robotics, 2022). The Jueying X20 features an extensive array of sensors, including a depth-sensing camera and a lidar. The Robot Operating System included with the robot enables the development of customised software. With an on-board “intelligent perception” system, the robot can build a 3D map of an environment and use it to recognise and avoid obstacles in its path. However, this capability is restricted to the robot's immediate environment and may not be helpful in detecting barriers to the user's movement. The robot also uses Wi-Fi, 4G/5G, and GPS to facilitate its movement.

As with the other robots, the Jueying X20 provides wireless video relay to the user. It is not capable of recognising user intentions at this stage, and its working period is limited to 2 hours with a payload of up to 20 kilograms.

The Jueying X20 has some desired functional characteristics of a guiding robot but has several limitations in its current form. It may provide a foundation for future development of a robot guide.

BBeep

Researchers from Carnegie Mellon University, USA; the University of Tokyo, Japan; and the Waseda Research Institute for Science and Engineering, Japan, are developing BBeep, a battery-powered sonic collision-avoidance robot for use by people who are blind or have low vision in airport settings (Kayukawa, et al., 2019). This robot is still in the research phase. BBeep makes use of RGB-D sensors to detect the location of pedestrians, monitor them in real time, anticipate their future positions, and avoid colliding with them. Obstacle detection and avoidance are focused primarily on the robot rather than the user. Given that this robot was designed for structured environments, its mobility and navigation are limited and it may not be usable in open, unstructured environments. Using sound indications, this robot can provide information about obstacles to the user. The sounds are classified as “low urgency” sounds, “intermediate urgency” sounds, and a “stop” sound which occurs when the robot is within 0.7 metres of an obstacle. The ability of the robot to recognise the user's intent is quite restricted. The robot is similar in shape to a suitcase, which may not be suitable for navigating many environments.
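The graded sound warnings described above can be sketched as a simple distance-to-urgency mapping. The 0.7 m “stop” threshold is taken from the description of BBeep; the other threshold values below are illustrative assumptions, not figures from the BBeep implementation.

```python
# Minimal sketch of BBeep's sound-urgency idea: map the distance to a
# predicted collision into one of the urgency categories described in the
# text. Only the 0.7 m "stop" threshold comes from the source; the other
# thresholds are illustrative assumptions.

def urgency_for_distance(distance_m, stop_m=0.7, high_m=1.5, low_m=3.0):
    """Classify a predicted-collision distance into a sound category."""
    if distance_m <= stop_m:
        return "stop"
    if distance_m <= high_m:
        return "intermediate urgency"
    if distance_m <= low_m:
        return "low urgency"
    return None  # no sound: the obstacle is far enough away
```

Graded warnings of this kind give the user time to react before the hard stop is required, which aligns with the sensing requirement discussed later of detecting obstacles at an appropriate distance for user reaction.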

BBeep has several desired functional characteristics, such as obstacle identification, avoidance, and warning for potential hazards, but its design places serious limitations on its ability to operate on a range of terrains. Although it may have limitations for robot guide application, this robot's characteristics may be used to inform a minimum viable robot guide product.

Buddy

Buddy is an autonomous robot developed by the Israeli company Seamless Vision (Seamless Vision, 2022). It employs a variety of sensors to detect static and dynamic obstacles in urban environments. Buddy is best suited to flat surfaces, and its mobility and navigation are limited as a result. The robot is able to direct a user to known locations in metropolitan areas. It offers limited information about its surroundings and can only detect the user's intentions to a limited extent. There is no publicly available information about the robot’s endurance and battery life.

While Buddy cannot be used as a guide robot in its present state, some characteristics may be leveraged to develop a robot guide in future.

Ollie

Tencent Robotics, based in China, have recently unveiled Ollie – a wheel-legged robot (Ackerman, 2021). The two individual wheeled legs and “tail” of this robot enable it to balance, flip, and negotiate stairways with ease. The development of Ollie is still in early research phases, though there have been some experimental studies on the robot’s balance control (Wang, et al., 2021). More details about the robot's capabilities are not publicly accessible at this time. Future advances of this robot should be closely monitored.

Gaps in the Emerging Robotic Technologies and Way Forward

Functional gaps in emerging technologies are presented in this section and compared to the functional characteristics required to develop a minimum viable prototype guide robot. Table 1 shows a comparison of the attributes of the robots covered in the previous section.

Table 1:

Emerging Robotic Technologies - A Comparison of Their Features

There have been several attempts by research organisations and companies to develop guide robots in the past, with varied degrees of success. According to the findings of our study, existing robotic technologies are limited in scope and are focused on only some of the desired functional characteristics identified in this review. Each robot has benefits and drawbacks which must be considered when assessing its suitability as a foundation for a minimum viable guiding robot for people who are blind or have low vision. The quadrupedal robots now on the market have some of the specific characteristics that a guide robot needs, and may be used as a base robotic platform to which other features can be added in order to build a guide robot. The following list of essential system features, with minimum requirements to act as a guide robot, was identified with the assistance of Guide Dogs NSW/ACT:

1. Functions
(a) Sensing and Interpreting

Minimum Requirements: detects static and dynamic obstacles, from ground to head height, at an appropriate distance to allow user reaction.

(b) Device Interface

Minimum Requirements: informs the user of information necessary, at their preferred verbosity level, to move through their environment in a timely fashion and preferred method (e.g. audio or haptic).

Minimum Requirements: takes commands from the user either via tactile or auditory means and retains their preferences to enhance the travel experience.

(c) Fluid and Adaptable Movement

Minimum Requirements: negotiates various types of terrain, including stairs and surface level changes.

2. Outcomes
(d) Obstacle Avoidance and Object Targeting

Minimum Requirements: avoids both static and dynamic obstacles while returning to the intended line of travel when possible.

Minimum Requirements: targets key obstacles in the environments, stopping within hand or foot reaching distance and announcing the object to the user.

(e) Navigation

Minimum Requirements: guides a user from one location to another, in both known and unknown areas.

3. Form Factor/Design
(f) Design Factors

Minimum Requirements: socially acceptable design while being ergonomic and functional for guidance.

Minimum Requirements: characteristics which contribute to the short- and long-term longevity of the device, as well as battery capacity that allows the user to complete necessary tasks.
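The checklist above lends itself to the kind of feature comparison summarised in Table 1. The sketch below shows one hypothetical way to encode the minimum requirements and report the gaps for a candidate platform; the feature names and the example robot's capabilities are illustrative only, not an assessment of any specific product.

```python
# Hypothetical sketch of comparing a candidate platform against the minimum
# requirements listed above, in the spirit of Table 1. Feature names and the
# example robot's capabilities are illustrative assumptions.

MINIMUM_FEATURES = {
    "sensing_ground_to_head_height",
    "user_feedback_audio_or_haptic",
    "accepts_user_commands",
    "negotiates_stairs_and_terrain",
    "avoids_static_and_dynamic_obstacles",
    "targets_and_announces_objects",
    "navigates_known_and_unknown_areas",
    "socially_acceptable_ergonomic_design",
    "adequate_battery_and_durability",
}

def missing_features(robot_features):
    """Return the minimum requirements a candidate platform does not yet meet."""
    return sorted(MINIMUM_FEATURES - set(robot_features))

# Example: a generic quadrupedal platform with close-range obstacle avoidance
# and terrain negotiation, but none of the guide-specific features.
example_robot = {
    "avoids_static_and_dynamic_obstacles",
    "negotiates_stairs_and_terrain",
}
```

Applied to the platforms reviewed here, this kind of gap analysis makes explicit which guide-specific features would need to be added to each base robot.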

Summary and Prospects

In this article, we reviewed the functional aspects of cutting-edge technologies that assist people who are blind or have low vision with daily mobility tasks, with the goal of developing a minimum viable product of a robot guide. The findings of our research reveal that emerging technologies are restricted in scope, focus on only some of the identified desired characteristics, and do not individually achieve the functionality of a fully working robot guide. The majority of existing products are designed for general-purpose applications rather than specifically for guide robot applications; those which are designed for guiding have a limited feature set. With feedback from Guide Dogs Australia and people who are blind or have low vision, we also identified the functional gaps in the existing technologies and proposed a list of desired functions to inform development of a minimum viable product of a guide robot.

eISSN:
2652-3647
Language:
English
Publication timeframe:
Volume Open
Journal Subjects:
Medicine, Clinical Medicine, Physical and Rehabilitation Medicine