Introduction

Human activity recognition systems use data obtained from sensors to identify different actions carried out by people. These systems have a wide range of applications in fields such as occupational safety [12, 13, 16, 44], personal fitness and sports [68], rehabilitation [24, 67], elderly care [41, 46, 47, 56], telemedicine [11], and human-computer interaction [18].

Human activity recognition systems are broadly classified into two main groups based on their data collection approaches: vision-based methods [53] and sensor-based methods [36]. Vision-based activity recognition uses images or videos captured by optical sensors to recognize human activities. However, these methods are affected by poor lighting conditions and changing environments [36]. To avoid the shortcomings of vision-based methods, approaches that rely on sensors such as pressure, temperature, sound, and radar sensors embedded in the environment have been proposed. The most common example of such applications is the implementation of smart homes for assisted living [19, 48]. However, these methods require a large number of sensors, which makes them suitable for controlled environments but impractical in free-living environments. This limits the application of activity recognition in daily life.

In recent years, with the development of sensing and computing electronics, activity sensing systems that can be worn on the human body have been proposed. With the help of these wearable activity sensing systems, many disciplines have expanded their application from controlled environments to free-living environments, contributing to the recognition of activities in daily life. One such discipline is gait analysis, the systematic study of human locomotion [57]. It generally involves the measurement, estimation, and analysis of measurable parameters including absolute and relative body angles, positions, movement patterns, and joints' range of motion [55]. It also includes the gait study of different activities, which can contribute directly to activity recognition.

Since gait analysis is traditionally performed in controlled environments, it is mainly used for the diagnosis of locomotory-related abnormalities, but not for activity recognition purposes. With the help of wearable activity sensing systems, motion data for gait analysis can be collected in free-living environments. This makes it possible to use gait analysis to contribute to activity recognition in daily life.

Many surveys have been conducted over the years on wearable activity recognition systems. However, there are currently no existing reviews on wearable-gait-analysis-based (WGA-based) activity recognition systems. This study aims to close this gap by highlighting the WGA-based activity recognition systems.

Google Scholar was used to search for all papers included in this study. Different combinations of the following keywords were employed in the search: "activity recognition", "wearable sensors", "sensors", "gait analysis", "gait", "toe off", and "heel strike". The results from the keyword searches were then filtered to include articles published from 2012 to 2022. The 150 most relevant articles from each search were selected, from which wearable-gait-analysis-based activity recognition methods that met the selection criteria were chosen. The complete selection process, as well as the selection criteria, is shown in Fig. 2. As shown in Fig. 3, interest in this field has grown over the past 10 years.

Related Work

A number of surveys have been conducted on activity recognition systems in past years. There have been broader surveys on both vision-based and wearable activity recognition systems covering a range of topics including the type of sensors used, activities recognized, segmentation approaches, classification algorithms, applications, and advantages of the two approaches [6, 21, 37, 50, 58, 62]. Other surveys focused on either vision-based activity recognition methods [4, 7, 9, 54, 60, 65]; sensor-based activity recognition methods which include the use of environment embedded sensors, smartphones, and wearable sensors [17, 49]; or solely on wearable activity recognition methods.

Similar to broader activity recognition surveys, review studies, which focus solely on wearable-based activity recognition methods, have covered a wide range of topics concerning this field. Mukhopadhyay et al., for instance, discussed some challenges and advancements in wearable activity recognition systems, with a primary focus on the sensors employed by such systems [40]. Salam et al. employed a standardized evaluation benchmark to evaluate various wearable activity recognition methods with six publicly available data sets [3]. Similarly, Lara et al. evaluated twenty-eight wearable activity recognition systems in terms of recognition performance, energy consumption, obtrusiveness, and flexibility [25].

With growing interest in the application of wearable activity recognition systems in the healthcare sector, some reviews have highlighted wearable activity recognition methods with applications in the healthcare field. Rex et al., for instance, focused on wearable human activity recognition systems with applications in the healthcare field [26]. Topics such as sensor types, numbers, and placements, as well as classification algorithms, were discussed in this review study.

The use of machine learning algorithms for activity recognition has gained much attention in recent years. As such, a number of review studies have highlighted the various machine learning methods employed by wearable activity recognition systems. Zang et al., for example, focused on deep learning methods used by wearable activity recognition systems [64], highlighting current advancements, developing trends, and major challenges in this field.

Although a wide range of topics has been covered by existing literature reviews, there is currently no review study focused on wearable activity recognition systems that employ knowledge from the field of gait analysis in the activity recognition process. This study seeks to close this gap by highlighting the ways in which gait analysis is employed by current activity recognition systems, including the wearable sensor types and positions commonly used to realize gait-analysis-based segmentation and to extract gait-analysis-based features that distinguish various human activities.

WGA-based activity recognition

As shown in Fig. 1, WGA-based activity recognition systems generally involve four main processes: data collection with wearable sensors, data segmentation, feature extraction, and activity classification. WGA-based activity recognition techniques incorporate gait analysis in one or more of these processes. In this section, WGA-based activity recognition techniques are discussed from the perspective of these four processes.

Figure 1

Four main steps for WGA-based activity recognition. The pressure sensors and IMU in the "Data Collection" section represent the commonly used wearable sensors in WGA-based activity recognition systems. The plots in the "Data Segmentation" section represent the gait cycle-based method, which involves the segmentation of data through the detection of gait cycles, and the fixed non-overlapping sliding window approach, which involves the segmentation of data using fixed time windows. To extract features for activity recognition, knowledge-driven features and data-driven features are frequently used. The icons in the "Classification" section represent examples of activities that can be recognized by activity recognition systems during the classification phase.

Figure 2

Flow Chart of the Article Selection Process.

Figure 3

Distribution of the WGA-Based Activity Recognition Publications Over Time.

Wearable Sensors

Wearable sensors are sensors that can be worn on various parts of the human body, such as the feet, knees, or hips, to measure data related to human subjects. For wearable gait analysis and wearable activity recognition, wearable sensors that can measure human motion are necessary. As shown in Table 1, the two most frequently used sensors are Inertial Measurement Units (IMUs) and pressure sensors.

IMUs are the most popularly used sensors in WGA-based activity recognition systems. They are small, lightweight, and inexpensive, which makes them suitable to be worn on the human body. IMUs are mostly made up of a 3-axis accelerometer, a 3-axis gyroscope, and/or a 3-axis magnetometer. Accelerometers are used to measure acceleration, based on which velocity and distance can be calculated. Gyroscopes, on the other hand, are used to measure angular velocity and orientation. Magnetometers measure magnetic field or magnetic dipole moment [5]. By fusing the data from accelerometers, gyroscopes, and/or magnetometers, human motion can be measured with good accuracy.
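As an illustration of such sensor fusion, a minimal complementary filter can blend the gyroscope's integrated pitch rate with the gravity-based pitch estimate from the accelerometer. This is a common fusion approach rather than the method of any specific system surveyed here; the function name, the two-axis input layout, and the fixed weighting factor are illustrative assumptions:

```python
import math

def complementary_filter(acc_samples, gyro_rates, dt, alpha=0.98):
    """Fuse accelerometer and gyroscope readings into a pitch estimate (degrees).

    acc_samples: sequence of (a_forward, a_vertical) accelerations in m/s^2
    gyro_rates:  sequence of pitch angular rates in deg/s
    alpha:       weight given to the gyroscope's integrated estimate
    """
    # Initialize from the first gravity-based estimate.
    pitch = math.degrees(math.atan2(acc_samples[0][0], acc_samples[0][1]))
    estimates = []
    for (a_fwd, a_vert), rate in zip(acc_samples, gyro_rates):
        acc_pitch = math.degrees(math.atan2(a_fwd, a_vert))  # gravity reference
        # Blend short-term gyro integration (low drift over one step) with the
        # drift-free but noisy accelerometer estimate.
        pitch = alpha * (pitch + rate * dt) + (1 - alpha) * acc_pitch
        estimates.append(pitch)
    return estimates
```

The high `alpha` trusts the gyroscope over short intervals while the accelerometer term slowly corrects the drift that accumulates from integrating angular velocity.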

Different studies employed varying numbers of IMUs positioned at different body locations. The number of IMUs used by the studies in Table 1 ranged from 1 to 3. Locations on the lower limb, including the thigh, shank, ankle, and foot, are popular choices for many WGA-based activity recognition systems, generally because most human activities are performed with the support of the lower limbs. Therefore, the data necessary for gait analysis and activity recognition can be acquired with IMUs placed on the lower limbs. For example, as shown in Fig. 4b, Lopez-Nava et al. proposed the use of one IMU placed on the right ankle to collect data for the detection of gait events, such as toe-off and heel-strike, and for the recognition of activities, such as level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent [29]. Similarly, the use of a single IMU worn on the ankle to collect data for the recognition of walking, stair ascent, and stair descent activities was presented by McCalmont et al. [35]. As shown in Fig. 4a, Martinez-Hernandez et al. [32] proposed using data from three IMUs attached to the thigh, shank, and foot for the recognition of level-ground walking, ramp ascent, and ramp descent activities.

Figure 4

Commonly used IMU sensor positions. a) Three IMU sensors positioned at the thigh, shank, and foot to capture data for activity recognition [32]. b) A single IMU sensor worn at the ankle for activity recognition [29].

Table 1: Summary of WGA-Based Activity Recognition Techniques

References | Recognized Activities | Wearable Sensors | Data Segmentation | Extracted Features | Classification Method
Martinez et al. [32] | Level-ground walking, ramp ascent, and ramp descent | 3-axis gyroscope and pressure sensors | Gait cycle-based method | Time-domain features | Adaptive Bayesian inference method
McCalmont et al. [35] | Slow walking, normal walking, fast walking, stair ascent, and stair descent | 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer, and pressure sensor array | Gait cycle-based method | Time-domain features and gait-based features | Artificial neural network, K-nearest neighbour (KNN), and Random Forest
Ng et al. [42] | Walking, sitting, lying, and falling | Sensor tags | Gait cycle-based method | Raw sensor data | KNN and Random Forest
Lopez et al. [29] | Level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer | Gait cycle-based method | Time-domain features and frequency-domain features | KNN
Chen et al. [14] | Walking, running, standing, sitting, stair ascent, and stair descent | 3-axis accelerometer, 3-axis gyroscope, and pressure sensor array | Gait cycle-based method | Gait-based features | Support vector machine (SVM)
Jeong et al. [23] | Level-ground walking, stair ascent, and stair descent | Pressure sensors | Gait cycle-based method | Raw sensor data | SVM
Truong et al. [59] | Level-ground walking, stair ascent, and stair descent | Pressure sensors | Gait cycle-based method | Time-domain features | SVM
Martinez et al. [33] | Level-ground walking, ramp ascent, and ramp descent | 3-axis accelerometer, 3-axis gyroscope, and pressure sensors | Gait cycle-based method | Time-domain features | Bayesian formulation-based approach
Achkar et al. [38] | Level-ground walking, standing, sitting, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer, pressure sensors, and barometric sensor | Gait cycle-based method | Gait-based features | Rule-based method
Zhao et al. [66] | Level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent | Pressure sensors and electromyography sensors | Gait cycle-based method | Time-domain features | SVM
Mazumder et al. [34] | Level-ground walking, fast walking, standing, sitting, stair ascent, stair descent, and ramp ascent | 3-axis accelerometer, 3-axis gyroscope, and pressure sensors | Gait cycle-based method | Time-domain features and polynomial coefficients extracted from hip angle trajectory and centre-of-pressure (CoP) trajectory | SVM
Camargo et al. [10] | Level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer, 3-axis gyroscope, goniometer, and electromyography sensor | Gait cycle-based method | Time-domain features and frequency-domain features | Dynamic Bayesian network
Ershadi et al. [20] | Toe level-ground walking, normal level-ground walking, sitting, and standing | Pressure sensors | Gait cycle-based method | Time-domain features | Rule-based method
Martindale et al. [31] | Level-ground walking, sitting, stair ascent, stair descent, jogging, running, cycling, and jumping | 3-axis accelerometer, 3-axis gyroscope, and pressure sensors | Gait cycle-based method | Raw sensor data | Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN)
Benson et al. [8] | Normal running and fast running | 3-axis accelerometer and 3-axis gyroscope | Gait cycle-based method | Time-domain features, frequency-domain features, and wavelet-based features | SVM
Hamdi et al. [22] | Level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer and 3-axis gyroscope | Gait cycle-based method | Gait-based features, time-domain features, frequency-domain features, and wavelet-based features | Random Forest
Achkar et al. [39] | Level-ground walking, standing, sitting, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer, pressure sensors, and barometric sensor | Gait cycle-based method | Gait-based features and time-domain features | Rule-based method
Xiuhua et al. [27] | Level-ground walking, ramp ascent, and ramp descent | 3-axis accelerometer, 3-axis gyroscope, and pressure sensors | Gait cycle-based method | Gait-based features | Class-incremental learning method
Ngo et al. [2] | Level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent | 3-axis accelerometer and 3-axis gyroscope | Gait cycle-based method | Time-domain features | KNN and SVM

Pressure sensors are another commonly used sensor type in WGA-based activity recognition systems. They are usually placed beneath the foot and are used to capture foot plantar pressure during the execution of activities. Variations in plantar pressure during various activities provide insights for gait analysis and activity recognition [14, 23, 45].

As with IMUs, varying sensor numbers and positions have been employed in existing works. For example, as shown in Fig. 5a, Chen et al. [14] proposed the use of an insole-shaped pressure sensor array, with 96 pressure sensors evenly distributed on it, to capture plantar pressure with high spatial resolution. The plantar pressure data was used to calculate 26 gait parameters and recognize 6 daily activities. As shown in Fig. 5b, Jeong et al. [23] and Truong et al. [59] proposed using eight pressure sensors distributed at the big toe, metatarsal, and heel positions to collect data for detecting gait cycles and recognizing level-ground walking, stair ascent, and stair descent activities. Mazumder et al., as shown in Fig. 5c, proposed the use of five pressure sensors, placed at the heel, toe, and metatarsal positions, to detect gait cycles and recognize level-ground walking, fast walking, standing, sitting, stair ascent, stair descent, and ramp ascent activities [34]. Similarly, Martinez-Hernandez et al. proposed using four pressure sensors embedded in insoles to detect gait cycles for the recognition of level-ground walking, ramp ascent, and ramp descent activities [33].

Figure 5

Different numbers and locations of pressure sensors used in WGA-based activity recognition systems. a) A pressure sensor array with 96 pressure sensors evenly distributed on it [14]. b) Eight pressure sensors distributed at the big toe, metatarsal, and heel [23, 59]. c) Five pressure sensors placed at the toe, metatarsal, and heel [34].

Data Segmentation

Sensor data segmentation is an important step in the activity recognition process. It can influence the real-time performance and accuracy of activity recognition systems. There are generally two types of data segmentation techniques: the sliding window method and the gait cycle-based method.

The sliding window method is one of the most popularly used segmentation approaches, especially in non-WGA-based activity recognition methods. It uses a fixed or dynamic time interval to segment time-series sensor data. However, with the sliding window approach, there is a higher likelihood of capturing two different activities in one data segment, especially during transition phases. When this happens, data segments will most likely be wrongly labeled as one activity, degrading the accuracy of the activity recognition system.
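A minimal sketch of fixed-interval sliding-window segmentation (the function name and parameters are illustrative, not taken from any surveyed system):

```python
def sliding_windows(signal, window_size, step):
    """Segment a 1-D sensor stream into fixed-length windows.

    Non-overlapping when step == window_size; overlapping when step < window_size.
    Trailing samples that do not fill a full window are discarded.
    """
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]
```

For example, a 10-sample stream split into windows of 4 with step 4 yields two non-overlapping segments; reducing the step to 2 yields four overlapping ones. Note that nothing in this scheme prevents a window from straddling an activity transition, which is exactly the mislabeling risk described above.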

To address this problem, most WGA-based activity recognition systems use gait cycles to segment sensor data [14, 23, 35, 42]. One full gait cycle begins with one repetitive gait event (e.g., heel-strike or toe-off) and continues until the next occurrence of the same gait event on the same foot. For activities performed with both feet, since a human subject can only perform one activity during one gait cycle, the gait cycle serves as a natural unit for different activities [14]. Therefore, the gait cycle can be used to clearly separate different activities. Toe-off and heel-strike are two frequently used gait events for gait cycle detection.

Both IMUs and pressure sensors can be used to detect gait cycles. With pressure sensors, since the plantar pressure increases significantly at heel-strike and decreases significantly at toe-off, a threshold can be used to recognize these two gait events and thereby detect gait cycles. For example, Martinez-Hernandez et al. [32] proposed a threshold-crossing method for the recognition of toe-off and heel-strike events and the detection of gait cycles. Acceleration data captured by IMUs placed on the ankle can also be used to detect toe-off and heel-strike events: the toe-off event can be detected as the initial acceleration in a characteristic peak of the x-axis or z-axis acceleration, and the heel-strike can be detected as the deceleration in the characteristic peak of the x-axis acceleration [29, 30].
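As an illustration, a minimal threshold-crossing detector of the kind described, assuming a single uniformly sampled plantar-pressure stream, could look like the following; the function name and threshold value are illustrative assumptions, not the authors' implementation:

```python
def detect_gait_cycles(pressure, threshold):
    """Detect heel-strike samples and segment the stream into gait cycles.

    A heel-strike is taken as the pressure rising through the threshold;
    a toe-off would be the symmetric falling crossing. Each gait cycle
    spans from one heel-strike to the next.
    """
    heel_strikes = [i for i in range(1, len(pressure))
                    if pressure[i - 1] < threshold <= pressure[i]]
    cycles = [(heel_strikes[k], heel_strikes[k + 1])
              for k in range(len(heel_strikes) - 1)]
    return heel_strikes, cycles
```

In practice the threshold would be calibrated per subject (e.g., as a fraction of body weight), and some hysteresis or minimum-duration check added to reject sensor noise near the threshold.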

Feature Extraction

After segmenting the sensor data, features usually need to be extracted for the classification of activities. Generally, there are two main feature categories: data-driven features and knowledge-driven features.

Data-driven features are the most frequently used features for activity recognition and can be extracted either automatically or manually. Deep learning algorithms can extract data-driven features automatically; for example, Convolutional Neural Networks (CNN) are able to learn complex structures and patterns from the segmented data and automatically extract features for the recognition of activities. Manually extracted data-driven features are also widely used. Such features include time-domain features and frequency-domain features, which are mostly based on characteristic differences, such as differences in frequency, acceleration, and cycle time, between the activities to be recognized [61]. Examples of time-domain features include the mean, variance, median, skewness, percentiles, and interquartile range. McCalmont et al. [35] used 30 features, consisting of time-domain features such as the mean and standard deviation of the acceleration and angular velocity signals, in the recognition of slow walking, normal walking, fast walking, stair ascent, and stair descent activities. Frequency-domain features include the signal power, spectral entropy, auto-correlation coefficients, mean frequency, and median frequency. Lopez-Nava et al. [29] extracted the power of the acceleration signal, a frequency-domain feature, from the segmented sensor data to assist in the recognition of level-ground walking, stair ascent, stair descent, ramp ascent, and ramp descent activities. Although data-driven features have proved effective in controlled research environments, their performance relies heavily on the collected training dataset; it is challenging for data-driven features to recognize activity varieties that are not included in the training dataset.
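The manually extracted features listed above can be computed from one data segment in a few lines of NumPy. This is a generic sketch with illustrative names, not code from any surveyed system:

```python
import numpy as np

def extract_features(segment, fs):
    """Compute common time- and frequency-domain features from one segment.

    segment: 1-D sequence of sensor samples
    fs:      sampling frequency in Hz
    """
    segment = np.asarray(segment, dtype=float)
    feats = {
        "mean": segment.mean(),
        "std": segment.std(),
        "median": np.median(segment),
        "iqr": np.percentile(segment, 75) - np.percentile(segment, 25),
    }
    # Frequency-domain features from the one-sided power spectrum.
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    feats["signal_power"] = spectrum.sum() / len(segment)
    p = spectrum / spectrum.sum()                 # normalized spectral mass
    feats["spectral_entropy"] = -(p * np.log2(p + 1e-12)).sum()
    feats["mean_frequency"] = (freqs * p).sum()   # spectral centroid
    return feats
```

One such feature vector per segment (per gait cycle or per window) is then passed to the classifier.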

Knowledge-driven features are complementary to data-driven features. They are representative features extracted based on existing knowledge about the observed data. For activity recognition, knowledge-driven features based on gait analysis have been shown to be effective. Two examples of such features, which can be used to recognize daily activities, are the "foot contact pitch" and the "double support time". The "foot contact pitch" is the pitch angle of the foot at the moment it initially contacts the ground (i.e., heel-strike). The "double support time" is the period when both feet are in contact with the ground. According to existing knowledge, the "foot contact pitch" values of stair ascent, stair descent, and level-ground walking are −4.7° ± 6.4°, −16.6° ± 4.7°, and 19.0° ± 4.4°, respectively [51]. Therefore, these three activities can be discriminated based on the "foot contact pitch". In addition, since the "double support time" of stair ascent, stair descent, level-ground walking, and running accounts for 13.6% ± 1.9%, 11.2% ± 2.3%, 11.1% ± 1.7%, and 0.0% of a whole gait cycle, respectively, it is easy to discriminate running from the other three activities [43, 51]. Studies have demonstrated the effectiveness of knowledge-driven features. Chen et al. [14], for example, extracted three features ("foot contact pitch", "percentage of double support time", and "pitch angle at midstance") based on knowledge of human gait characteristics to recognize walking, stair ascent, stair descent, and running activities. As shown in Fig. 6a, during heel-strike in walking, the forefoot is significantly higher than the hindfoot. In stair descent (Fig. 6c), the forefoot is significantly lower than the hindfoot, and in stair ascent (Fig. 6b), the foot is almost flat. These posture differences lead to significant differences in the "foot contact pitch", which enables these three activities to be distinguished. The "percentage of double support time" is the percentage of the "double support time" over the total gait cycle time. It can be used to discriminate running from the other three activities because, unlike walking activities, running has no "double support time"; instead, there is a phase known as the "double float" phase, when both feet are off the ground (Fig. 6d). This research achieved 99.8% accuracy in recognizing these activities.
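To illustrate how such knowledge-driven features can separate activities, the following rule-based sketch applies the reported mean values. The decision thresholds are illustrative assumptions, placed roughly midway between the reported means; they are not values from the cited studies:

```python
def classify_by_gait_knowledge(foot_contact_pitch, double_support_pct):
    """Rule-based activity discrimination from two gait features.

    foot_contact_pitch: pitch angle at heel-strike, in degrees
    double_support_pct: double support time as % of the gait cycle

    Reported means: stair ascent -4.7 deg, stair descent -16.6 deg,
    level walking 19.0 deg; running has 0% double support.
    """
    if double_support_pct < 5.0:       # running: no double support phase
        return "running"
    if foot_contact_pitch > 7.0:       # forefoot well above hindfoot
        return "level-ground walking"
    if foot_contact_pitch > -10.0:     # foot roughly flat
        return "stair ascent"
    return "stair descent"             # forefoot well below hindfoot
```

Because each rule encodes a physical property of gait rather than a pattern learned from one dataset, such a classifier needs no training data, though the reported standard deviations mean the fixed thresholds would misclassify some boundary cases.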

Figure 6

Foot contact pitch during (a) walking, (b) stair ascent, (c) stair descent, and (d) the double float phase during running. This gait-analysis-based parameter was used by Chen et al. [14] in the recognition of activities.

Classification

Classification is the final step in activity recognition. Artificial intelligence models are the most popularly used methods: about 63% of the WGA-based activity recognition systems reviewed in this study used them. Models such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naive Bayes, and Convolutional Neural Networks (CNN) are among the most common. Lopez-Nava et al. used a KNN classifier in the recognition of level-ground walking, ramp ascent, ramp descent, stair ascent, and stair descent activities with an accuracy of 85.5% [29]. Jeong et al. employed an SVM classifier in the recognition of level-ground walking, stair descent, and stair ascent activities [23]; this classifier attained an accuracy of 95.2%. Similarly, Chen et al. achieved an overall accuracy of 99.8% in the recognition of walking, running, sitting, standing, stair ascent, and stair descent activities with an SVM classifier [14]. McCalmont et al. conducted a comparative study of ANN, KNN, and Random Forest classifiers [35]. In that study, the ANN classifier achieved the highest accuracy of 80%, with both the KNN and Random Forest classifiers achieving an accuracy of 70%, in the recognition of slow, normal, and fast walking, stair ascent, and stair descent activities.
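As an illustration of the classification step, a minimal K-nearest-neighbour classifier over extracted feature vectors can be written in a few lines. This is a generic sketch of the algorithm, not the implementation used by any of the surveyed systems:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

Here each element of `train_X` would be a feature vector computed from one gait cycle (e.g., mean acceleration and double support percentage) and `train_y` the corresponding activity label; in practice features are normalized first so no single feature dominates the distance.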

Discussion

This study demonstrates how gait analysis can be used to contribute to wearable activity recognition. In this section, the limitations of current research and the potential opportunities for future research in the field of WGA-based activity recognition will be discussed.

Wearable Sensor Types, Numbers, and Locations

The most frequently used wearable sensors in WGA-based activity recognition systems are IMUs and pressure sensors; other sensor types have not yet been explored. Although IMUs and pressure sensors have been demonstrated to be effective for the recognition of simple daily activities [14, 35], both have shortcomings. For example, the accuracy of IMUs is affected by drift [15], especially during long-term measurement, and the accuracy of pressure sensors is affected by the contact environment (e.g., soft or hard surfaces).

One way to improve the performance of WGA-based activity recognition systems is to use different types of wearable sensors. For instance, to capture data for the recognition of activities that involve a change in altitude and posture, the use of barometers could be explored. Barometers (Fig. 7a) are generally used to capture changing atmospheric pressure, from which changes in altitude can be detected [52]. Rodriguez-Martin et al. [52], for example, proposed the use of barometers together with accelerometers in the recognition of activities; the addition of the barometers increased the accuracy of detected posture transitions and falls by up to 11%. Another type of wearable sensor that can be used in WGA-based activity recognition systems is the strain sensor (Fig. 7b). For wearable activity recognition systems that need to be worn for a long time, people prefer sensors embedded in their clothing or accessories rather than wearing the system separately. Strain sensors have gained much attention due to their flexibility, light weight, and ability to be integrated into clothing or mounted directly on the skin [63]. In addition, they can be used to detect elbow, wrist, and finger joint movements and can thus be employed in the recognition of more complex activities that involve these body parts.
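For a sense of how barometric pressure maps to altitude, the standard international barometric formula can be applied to the raw sensor reading. This sketch assumes a sea-level reference pressure and is independent of any specific system surveyed here:

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Convert barometric pressure (hPa) to altitude above the reference
    level (m) using the international barometric formula."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

For activity recognition, the absolute altitude matters less than its short-term change: a steady climb of a few metres over several gait cycles suggests stair or ramp ascent, which is how a barometer complements accelerometer data.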

Figure 7

Other wearable sensor types which can be employed in activity recognition. a) Barometer [1] b) Strain sensor [28].

As discussed in the section "Wearable Sensors", different studies applied different numbers of sensors at different body locations for activity recognition. However, for wearable systems, low cost, high accuracy, and long battery life are important performance parameters, and there is currently no standard to follow for wearable activity recognition system design. More research will be necessary to explore the optimal sensor numbers and locations that achieve the best trade-off between cost, accuracy, and battery life.

Gait Features

Gait-related features can be used to effectively recognize human activities [14]. These knowledge-based features help improve the generalization performance of activity recognition models. For example, in most scenarios running is faster than walking, so a data-driven model for discriminating walking from running might put a high weight on speed. However, walking and running are properly discriminated not by speed but by double support time [43]. When tested on new data, the performance of models built with data-driven features might decrease, whereas a model built on the knowledge-based feature, the double support time, would not. In recent years, more research has applied the gait cycle for data segmentation. However, for feature extraction, findings from this study indicate that very few existing recognition systems employ gait analysis-based features for the recognition of activities. In a study by Chen et al. [14], the use of gait features enabled the recognition of human activities with relatively fewer features than activity recognition systems that employed non-gait-analysis-based features. Considering the advantages of gait features in generalization and efficiency, they can be further explored and applied in the future to contribute to activity recognition-related applications.

Conclusion

In this study, existing WGA-based activity recognition systems from the past decade were reviewed. Important topics related to WGA-based activity recognition, including wearable sensors, data segmentation, feature extraction, and classification, were discussed. The ways in which gait analysis can be used to assist activity recognition were summarized and highlighted. Finally, limitations of the current research and potential opportunities for future work were discussed to help inform future research endeavors in this field.
