A system for controlling human gait by machine learning methods, suitable for robotic prostheses in the case of double transfemoral amputation. Candidate of Sciences dissertation and abstract, VAK RF specialty 05.13.17. Author: Roman Igorevich Chereshnev

  • Roman Igorevich Chereshnev
  • Candidate of Sciences
  • 2019, National Research University Higher School of Economics
  • VAK RF specialty: 05.13.17
  • Number of pages: 119

Chereshnev, Roman Igorevich. A system for controlling human gait by machine learning methods, suitable for robotic prostheses in the case of double transfemoral amputation: Candidate of Sciences dissertation: 05.13.17, Theoretical Foundations of Computer Science. National Research University Higher School of Economics, 2019. 119 pp.

Table of contents of the dissertation by Candidate of Sciences Roman Igorevich Chereshnev

Contents

1 Introduction

1.1 The relevance of research

1.2 Aims and objectives of research

1.3 Current approaches

1.4 Importance of work

1.5 Novelty and summary of the Author's main results

1.6 Publications

2 Literature review

2.1 Overview of the history of HAR

2.1.1 Machine learning in HAR

2.1.2 Seeking the best classifiers for HAR

2.1.3 Ensemble methods for HAR

2.1.4 Era of Smartphones

2.2 Review of HAR datasets

2.3 Literature review of methods for human gait inference

2.4 Review of prosthetic leg controllers

2.4.1 High-level controller

2.4.2 Mid-level controllers

2.4.2.1 Phase-based approach

2.4.2.2 Non phase-based approach

3 Overview of the Gain system

3.1 Gain system design principles and objectives

3.2 Main results

4 HuGaDB: A human gait database for gait inference

4.1 Motivation and design goals

4.2 Sensor network topology

4.3 Data acquisition programs

4.4 Participants

4.5 Data format

4.6 HuGaDB issues

4.7 Noise

4.8 Conclusions

4.9 Availability

5 Gain high-level controller

5.1 RapidHARe classification method

5.2 Activity mode recognition using RapidHARe

5.2.1 Continuous activity recognition

5.2.2 Directional features

5.2.3 State-of-the-art methods

5.2.4 Comparison to the state-of-the-art methods

5.3 The high-level controller for Gain

5.3.1 Sitting-standing module

5.3.2 Sitting-down module

5.3.3 Standing-up module

5.3.4 The controller algorithm

5.3.5 Experimental evaluation of the high-level controller in Gain

5.3.6 Implementation details

5.4 Conclusions

6 Gait mid-level controller

6.1 LSTM

6.2 Feature extraction methods

6.3 Gait inference results

6.4 Variance in different phases

6.5 Inference error around activity changes

6.6 Gait inference results for one leg

6.7 Conclusions

7 Conclusions

7.1 Main results of this thesis

Acknowledgments

List of abbreviations and conventions

Bibliography

List of figures

List of tables

Appendix A. Gain system inference visualization


Introduction to the dissertation (part of the abstract) on the topic "A system for controlling human gait by machine learning methods, suitable for robotic prostheses in the case of double transfemoral amputation"

1 Introduction

1.1 The relevance of research

Machine learning (ML) methods provide a general framework to adapt algorithms to certain tasks using a large collection of data. ML-based methods excel in several tasks, including image recognition [140], speech recognition [95], and product or service recommendation [132]. This dissertation introduces novel machine learning methods for human activity recognition (HAR) problems.

Broadly speaking, HAR is a field that focuses on recognizing or analyzing the activities performed by humans [90]. Activity recognition may be useful in public surveillance for security reasons, in fall detection in elder care, as well as in gesture recognition, virtual reality, homeland security, robotics, exoskeletons, smart environments, etc. [88, 16]. Human activity analysis can be useful in healthcare, for instance, in inpatient recovery monitoring after surgery, exoskeleton control, monitoring performance improvement, analyzing athletes' technique in sports, etc.

HAR methods are primarily based on two types of data: visual or sensory. In the first group, HAR methods are mostly based on images or videos captured by cameras. In the second group, the prediction of HAR methods is based on sensory data obtained from inertial sensors, such as accelerometers and gyroscopes of mobile phones or specifically mounted sensors on certain parts of the human body. The topic of this thesis falls into the second category and focuses on sensory data-based HAR.

There are three main areas of HAR: (1) gesture recognition, (2) recognition of activities of daily living, and (3) human gait analysis.

Gesture recognition (GR) mainly focuses on recognizing hand-drawn gestures in the air [96, 91, 45, 55]. Patterns to be recognized may include numbers, circles, boxes, or Latin alphabet letters [45]. Prediction is usually made on data obtained from smartphone sensors or special gloves equipped with inertial sensors, such as 3-axis accelerometers, 3-axis gyroscopes, and occasionally electromyography (EMG) sensors, to measure the electrical potential on the human skin during muscular activities [2].

Recognition of activities of daily living (ADL), on the other hand, aims at recognizing daily lifestyle activities [167, 164, 166]. For instance, an interesting research topic is recognizing activities in or around the kitchen, such as cooking; loading the dishwasher or washing machine; preparing brownies or salads; scrambling eggs; light cleaning; opening or closing drawers, the fridge, or doors; and so on. Often, these activities can be interrupted by, for example, answering phones [137, 70, 68, 116, 33, 24, 122]. In this topic, on-body inertial sensors are usually worn on the wrist, back, or ankle; however, additional sensors, such as temperature sensors, proximity sensors, water consumption sensors, heart rate sensors, etc., can be employed as well.

Human gait analysis (HGA), in contrast, focuses not only on the identification of activities performed by the user but also on how the activities are performed [30]. This can be useful in healthcare systems for monitoring patients recovering after surgery, fall detection, and diagnosing the state of, for example, Parkinson's disease [125, 124], and even for increasing typing accuracy on touch screens during walking [105]. An unusual gait cycle can be evidence of disease; therefore, gait analysis is important in evaluating gait disorders, as well as neurodegenerative diseases such as multiple sclerosis, cerebellar ataxia, brain tumors, etc. Multiple sclerosis patients show alterations in step size and walking speed [44]. The severity of Parkinson's disease and stroke shows a strong correlation with stride length [123]. Wearable sensors can be used to detect and measure gait-related disorders, to monitor a patient's recovery, and to improve athletic performance. For instance, EMG sensors can be used to evaluate muscle contraction force to improve performance [154, 152] in running [146] and other sports [34]. Emergency fall events can be detected with tri-axial accelerometers attached to elderly people's waists [21]. Accelerometers installed on the hips and legs of people with Parkinson's disease can be used to detect freezing of gait and can prevent falling incidents [10, 125, 124, 31].

Figure 1: Phases of a full gait cycle during locomotion. Image source: [139].

Human gait inference (HGI), also referred to as human gait trajectory prediction, aims at predicting what the movements of amputated or injured leg parts (thigh, shank, or foot) would be during walking-related activities [139]. HGI methods are often hierarchical and consist of three layers. The first, called the high-level controller, aims at recognizing the current activity performed by the patient. Once the activity or the intention of the user is recognized, the high-level controller commands the mid-level controller to infer the appropriate gait. The need for a high-level controller is explained by the fact that, in most cases, each locomotion task requires its own mid-level controller. Mid-level controllers generate the gait trajectory patterns for robotic prosthetic legs or exoskeletons. They can be categorized into two types: phase-based and non-phase-based. Phase-based mid-level controllers consist of several models, each of which infers gait for a particular gait phase. The phases of the human gait are shown in Figure 1. After recognizing the current phase, the mid-level controller performs the appropriate actions. Non-phase-based mid-level controllers directly aim at predicting the desired gait trajectory, usually based on the physiological motion of the other leg, using linear regression models. The low-level controller carries out the physical control of the robotic legs close to the hardware level. Even though HGI is closely related to HAR, advanced ML methods are not routinely used in this field, especially in mid-level controller design.

Figure 2: Concept of robotic prosthetic legs for patients suffering from double trans-femoral amputation. Circles show the location of EMG sensors, and boxes show the location of accelerometers and gyroscopes.
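The three-layer hierarchy described above can be sketched in code. The following Python fragment is purely illustrative: the function names, the threshold, and the toy linear thigh-to-shank mapping are assumptions made for this example, not part of any published controller.

```python
# Hypothetical sketch of the three-layer prosthesis control hierarchy.
# All names, thresholds, and mappings below are illustrative assumptions.

def high_level_controller(gyro_window):
    """Recognize the current activity mode from a window of gyroscope data."""
    mean_rate = sum(gyro_window) / len(gyro_window)
    if abs(mean_rate) < 0.05:          # almost no rotation: assume standing
        return "standing"
    return "walking"

def mid_level_controller(activity, thigh_angle):
    """Generate a target shank angle for the recognized activity
    (a toy non-phase-based mapping, linear only for illustration)."""
    if activity == "standing":
        return -90.0                   # keep the shank vertical
    return -90.0 + 0.5 * thigh_angle

def low_level_controller(target_angle, current_angle, kp=2.0):
    """Proportional command toward the target joint angle (hardware level)."""
    return kp * (target_angle - current_angle)

# One tick of the control loop:
activity = high_level_controller([0.4, 0.6, 0.5])           # gyro rates, rad/s
target = mid_level_controller(activity, thigh_angle=20.0)   # degrees
command = low_level_controller(target, current_angle=-85.0)
```

A real controller would, of course, replace each of these placeholder functions with the learned models discussed in Chapters 5 and 6.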

In this thesis, the Author introduces a new system called GaIn (standing for Gait Inference) that is suitable for controlling the robotic prostheses of patients suffering from at most double transfemoral amputation by means of machine learning techniques. The concept is illustrated in Figure 2. The Author's idea is based on the observation that the correlation between the movements of the leg parts of healthy people (people without functional gait disorders during usual activities) is high, yet nonlinear. Figure 3 shows the nonlinear correlation between the thigh and shank angles (of the same leg) over several gait cycles, measured during walking-related activities. The angles of the thigh and shank are measured relative to the horizontal line. Consequently, it is possible to infer the movements of both lower legs from the movements of both thighs using machine learning methods. The GaIn system could be installed on microchip- or smartphone-controlled robotic leg prostheses that could be attached to patients in a non-invasive way to infer the movements of the lower limbs, as illustrated in Figure 2. Therefore, the GaIn system could help patients suffering from partial or double lower limb amputation to move and walk by themselves. The GaIn system consists of two controllers: (1) a high-level controller, based on the RapidHARe method, for activity mode and patient intention recognition, and (2) a non-phase-based mid-level controller for gait inference. Both controllers were developed by the Author of this thesis. The first component is based on a dynamic Bayesian model and recognizes whether the patient is sitting, standing, or moving. In a sitting position, GaIn does not allow any gait inference to be performed, so the legs remain motionless. However, when thigh muscle activity is detected by electromyography sensors, the controller performs the standing-up activity. When the patient is standing and starts swinging one of his legs, GaIn activates the gait inference procedure. When a person stands and wants to sit down, the high-level controller can predict this intention based on signals from his muscles. Because human movement is produced by neural mechanisms in the motor cortex of the human brain or spinal neural circuits [106], the Author believes that neurally inspired artificial neural networks could be suitable models for gait inference. Therefore, GaIn uses recurrent neural networks to infer human gait. In addition, GaIn was designed to be fast and computationally inexpensive, with low prediction latency. These features are necessary for application in mobile devices, where energy consumption matters [28]. The Author notes that turning while walking involves rotating the torso, the hips, and the thighs at the hip joints but not the shanks [59]; therefore, this analysis does not involve examination of turning strategies. It should be noted that the methods related to the low-level controller and the actual construction of such robotic prosthetic legs are not part of this thesis.

Figure 3: Correlation between shank and thigh movement over several gait cycles in different activities. The angles of the thigh and shank are measured relative to the horizontal line.

The inference of the position of the shanks is based on the position (angle) and motion (angular velocity) of the residual thigh limbs, where the position and motion are determined using 3-axis accelerometers and 3-axis gyroscopes. In addition, as shown in the experiments, the GaIn system is capable of producing smooth inference across changes between several activity modes, including walking at various speeds, taking stairs up and down, sitting down and standing up, as well as running. It should be noted that EMG sensors are used only to recognize standing-up and sitting-down intentions, not for gait inference.
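The role of the thigh angular velocity can be illustrated with a small synthetic experiment. The data below are artificial (not taken from HuGaDB), but they show why the thigh angle alone is ambiguous over a gait cycle (the thigh-shank curve in Figure 3 forms a loop), while adding the angular velocity makes even a simple regressor well-posed:

```python
import numpy as np

# Synthetic gait cycle: the shank lags the thigh by a fixed phase shift,
# so the shank angle is NOT a function of the thigh angle alone.
t = np.linspace(0.0, 2.0 * np.pi, 200)
thigh = 20.0 * np.sin(t)                    # thigh angle, degrees
thigh_vel = 20.0 * np.cos(t)                # thigh angular velocity
shank = -120.0 + 25.0 * np.sin(t - 0.8)     # phase-shifted shank angle

def fit_rmse(X, y):
    """RMSE of the best least-squares linear fit of y from features X."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - y) ** 2)))

ones = np.ones_like(t)
err_angle_only = fit_rmse(np.column_stack([ones, thigh]), shank)
err_with_vel = fit_rmse(np.column_stack([ones, thigh, thigh_vel]), shank)
print(err_angle_only, err_with_vel)   # large error vs. essentially zero
```

On this toy signal the angle-plus-velocity model is exact; on real gait data the relation is nonlinear and history-dependent, which is why GaIn uses recurrent neural networks rather than a linear fit.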

1.2 Aims and objectives of research

The GaIn system could potentially be installed on microchip-controlled robotic leg prostheses that could be attached to patients in a non-invasive way to infer the movements of the lower limbs. In order to make GaIn efficient for use in portable real-time prediction systems, it should meet the following requirements:

A-1: Low prediction latency. GaIn should respond quickly to sudden changes in user behavior in real time.

A-2: Fast and energy-efficient. In order to be suitable for mobile and portable systems, GaIn must be energy-efficient and computationally inexpensive.

A-3: Smooth recognition. GaIn should provide consistent recognition within a given activity mode and rapid transitions between activity modes.

A-4: Generalization. GaIn should be accurate for new patients whose data were not seen during training.

A-5: Accuracy. The GaIn system should be developed using machine learning techniques, because these methods have demonstrated the ability to adapt to problem-specific tasks with high accuracy.

The GaIn method carries out gait inference using accelerometer, gyroscope, and EMG sensors mounted on both thighs. These sensors are inexpensive and widely available.
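For illustration, one standard way to obtain a segment angle from a 3-axis accelerometer and gyroscope is a complementary filter. The sketch below shows a typical textbook implementation under simplifying assumptions (planar motion, two accelerometer axes); it is not the exact sensor-fusion procedure used in GaIn.

```python
import math

# Illustrative complementary filter: fuse gyro integration (smooth, but
# drifts over time) with the accelerometer's gravity direction (noisy,
# but drift-free). All parameter values are example assumptions.
def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    accel_angle = math.degrees(math.atan2(ax, az))   # tilt from gravity
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
for _ in range(300):                                 # 3 s of data at 100 Hz
    # Stationary segment tilted 10 degrees: no rotation, gravity split
    # between the two accelerometer axes.
    ax, az = math.sin(math.radians(10)), math.cos(math.radians(10))
    angle = complementary_filter(angle, gyro_rate=0.0, ax=ax, az=az, dt=0.01)
print(round(angle, 1))   # converges to ~10.0 degrees
```

The weight `alpha` trades gyro drift against accelerometer noise; values near 0.98 are a common starting point in hobbyist and research IMU code.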

1.3 Current approaches

Here, the methods most relevant to this dissertation are discussed briefly. The reader is referred to Chapter 2 for a detailed review of all relevant methods. The first artificial neural network (ANN)-based system to aid patients with spinal cord injuries was developed in 1995 by Sepulveda and his colleagues [126], who showed that ANNs are plausible models for restoring muscle signals based on the joint flexion and extension at the hip, knee, and ankle. In addition, the proposed system obeyed voice commands to switch between activity modes. The main conclusions of the article are that (1) two separate neural networks are needed for the swing and stance phases and that (2) the ANN model requires calibration to the patient. In this PhD thesis, the Author shows that these conclusions are incorrect. First, the Author has built a single neural network that is capable of inferring the gait in both the swing and stance phases. The Author notes that higher natural variance was observed in the swing phase than in the stance phase for the same person; however, very accurate gait prediction is not strictly needed for the swing phase, because the more important supporting work is done by the standing leg. Also, voice-based control is not necessary to switch between walking and other activity modes. In addition, sitting down and standing up can be recognized from thigh muscle activity using EMG sensors. As for the second point, the Author suspects that Sepulveda and his colleagues used data from too few patients, and thus their model did not generalize well. In this PhD research work, an adequate number of people provided data, and the GaIn system obtained good generalization performance during training.

Perhaps the best-known non-phase-based mid-level controller is complementary limb motion estimation (CLME), developed by the group led by Martin Buss [142]. CLME is based on the idea that the trajectory of a missing leg can be mapped from the movement and position of the whole sound leg using linear transformations. Therefore, CLME uses information from the state of the whole sound leg and provides an inference method for patients suffering from at most single-leg trans-femoral amputation. Unfortunately, this work has serious limitations: (1) the system was trained and tested on the same patient; therefore, its generalization performance is unknown; (2) the system was trained and tested only for walking on a treadmill and ascending stairs; (3) sitting down and standing up were not considered or investigated. In contrast to the CLME method, the GaIn system uses information from only the movements of the thighs, and it can be used with patients suffering from double leg amputation; GaIn was tested in natural environments in several walking-related activity modes, including the transitions between activity modes, sitting down, and standing up. The GaIn system is demonstrated to have low generalization error for new users.
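The core CLME assumption, that the missing leg's trajectory is a linear function of the sound leg's state, can be sketched on synthetic data as follows. The features and fitted map below are illustrative assumptions, not the transformations from [142]:

```python
import numpy as np

# Synthetic sound-leg state over one gait cycle: two kinematic features
# plus a bias column (illustrative, not the CLME feature set).
t = np.linspace(0.0, 2.0 * np.pi, 100)
sound_leg = np.column_stack([np.sin(t), np.cos(t), np.ones_like(t)])

# In regular gait the contralateral leg is roughly half a cycle out of
# phase, so a linear map of the sound-leg state can reproduce it:
missing_leg = 15.0 * np.sin(t + np.pi) + 5.0

W, *_ = np.linalg.lstsq(sound_leg, missing_leg, rcond=None)  # fit linear map
err = float(np.sqrt(np.mean((sound_leg @ W - missing_leg) ** 2)))
print(err)   # essentially zero: the map is exactly linear here
```

The limitation discussed above is visible even in this sketch: the map is fitted to one subject's (here, one synthetic signal's) data, and nothing guarantees that the same matrix transfers to another person or another activity.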

A reinforcement learning (RL)-based method for gait inference was published in January 2019 by Wen et al. [151], after the Author of this thesis had completed his research project. Wen and his co-authors divided the full gait cycle into four sections and used a reinforcement learning algorithm to tune the parameters of each of the four mid-level controllers. The authors achieved a root-mean-square error of 3.99 ± 0.62° (compared to the target) for two participants (one healthy and one one-side transfemoral amputee). The authors hypothesized that the feedback from knee kinematics and optimization state was reasonable as a first step towards autonomous mid-level gait control, but questions regarding the appropriate control objective remain open. Unfortunately, the authors tested their system only in laboratory conditions, on a treadmill, at a steady pace, on a flat surface. One of the biggest drawbacks of their approach is that real-life systems must be trained for several ambulatory tasks, such as walking on grass, going up and down stairs, stopping and starting to walk, and so on. In addition, this approach requires a good high-level controller to switch the impedance function from one locomotion activity mode to another.

The Author has compared the results of Wen et al. to those obtained with GaIn; the comparison is summarized in Table 1. Note that, unfortunately, a direct comparison cannot be performed because the methods were tested on different data. On the one hand, Wen and his colleagues used one patient to calibrate and test their model in the walking activity on a treadmill, and they obtained a 3.99° root-mean-square error (RMSE) for one leg. On the other hand, the Author of this thesis tested the GaIn system to predict the gait trajectory of one leg and used data from several participants in various ambulatory activity modes performed in real environments. GaIn was tested in walking with a subject whose data were not seen during training, and it achieved a 4.75° RMSE in this harder scenario. However, when GaIn was trained and tested with the same subjects, it achieved an error as low as 3.58° during the walking activity, on average over several subjects. It should be noted that the smallest error GaIn achieved was 2.37°, with participant ID = 6. It should also be noted that the GaIn system encapsulates gait trajectory prediction for several ambulatory activity modes, including starting and stopping walking, into one neural network model, while the method by Wen et al. has been tested only in the walking scenario.

Based on an excellent review by Tucker et al. [139] from 2015, the main drawbacks of current gait inference methods are as follows:

B-1: Some methods assume a fully periodic gait process. It has been shown that this assumption is incorrect [60].

Table 1: Comparison of RL by Wen et al. [151] against GaIn in the walking activity.

                              RL from [151]¹   GaIn, new subject²   GaIn, same subject³
Root-mean-square error⁴       3.99             4.75                 3.58

¹ Wen et al. trained and tested their model with the same subject.
² GaIn was tested on a subject whose data were not seen during training.
³ GaIn was trained and tested on the data of the same subjects.
⁴ The difference between the true and the predicted angles of the shanks, in degrees.
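The RMSE figures in Table 1 are computed from the true and predicted shank angles, in degrees. A minimal implementation of the metric, with made-up angle values for illustration:

```python
import math

# Root-mean-square error between true and predicted shank angles (degrees),
# the metric reported in Table 1. The sample angles below are invented.
def rmse(true_angles, predicted_angles):
    sq = [(t - p) ** 2 for t, p in zip(true_angles, predicted_angles)]
    return math.sqrt(sum(sq) / len(sq))

print(rmse([-100.0, -110.0, -120.0], [-103.0, -113.0, -123.0]))  # 3.0
```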

B-2: Several methods have been developed for only one activity. Moreover, these methods cannot adapt to changes in terrain, and they do not provide procedures to handle starting or terminating an ambulatory activity [86].

B-3: One complication with inference methods is whether they can handle gait inference safely when the activity changes between gait phases [139].

B-4: Gait inference methods often require information about the subject, such as length of limbs, the position of the center of mass, and pelvis direction [8, 29, 73, 153].

B-5: The desired impedance function depends on the locomotion task, as the dynamics and kinematics of the joints vary across different locomotion modes [41].

B-6: Current lower limb prosthesis controllers are not capable of transitioning automatically and seamlessly between locomotion modes, such as walking on level ground, stairs, and slopes [161].

1.4 Importance of work

Limb losses occur due to (a) vascular disease (54%), including diabetes and peripheral arterial disease; (b) trauma (45%); and (c) cancer (less than 2%) [168]. Up to 55% of people with a lower extremity amputation due to diabetes will require amputation of the second leg within 2-3 years [110]. In the USA, about 2 million people live with limb loss [168]. In the last 18 years in Italy, there were 4877 arteriopathic patients who needed lower limb amputations as a consequence of their illness. Sixty-six percent of these were major amputations, of which 73% were transfemoral, while only 34% were partial foot or toe amputations [38].

The Author hopes that such prostheses will be a useful tool in combating disability discrimination, as called for under several human rights treaties, such as the United Nations Convention on the Rights of Persons with Disabilities [61] and Equality Acts [144, 17] in jurisdictions worldwide, which also mandate access to goods, services, education, transportation, and employment. The Author expects that the GaIn tool will be effective in helping patients tackle common obstacles such as stairs and curbs in urban areas.

The Author assumes that the GaIn system can potentially be useful for exoskeleton control. Exoskeletons can provide augmented physical power or assistance in gait rehabilitation. In the former case, exoskeletons can be used to help firefighters and rescue workers in dangerous environments, nurses moving heavy patients [78], or soldiers carrying heavy loads [79]. Rehabilitation exoskeletons can provide walking support for elderly people or can be applied in the rehabilitation of stroke or spinal cord injury [147, 133]. The neuromuscular disease cerebral palsy, which affects the symmetry and the variability of walking, represents the main pathology that requires the use of exoskeletons/prostheses to rehabilitate walking [103].

1.5 Novelty and summary of the Author's main results

In this thesis, the Author introduces a new method called GaIn for predicting the movements of amputated leg parts for walking-related activities such as walking, taking stairs, sitting down, standing up, etc. This dissertation is supported by three articles, all of them published in international research journals as original articles. A summary of the supporting articles can be found in Table 24.

The GaIn system comprises three main parts: (1) a dataset suitable for training and testing, (2) a high-level controller to recognize the patient's activity modes and intentions, and (3) a gait inference method to generate the trajectory for robotic prosthetic legs. Below, the novelty and the Author's results are summarized in three thesis points.

1. HuGaDB: the dataset for training the GaIn system [26]. Unfortunately, existing datasets for HGA and HAR were not adequate for the aim of this research project, because they did not contain detailed information on the movements of the parts of the legs. This dataset is unique in the sense that HuGaDB is the first to provide human gait data in great detail, mainly from inertial sensors, and contains segmented annotations for studying the transitions between different activities. The Author constructed the HuGaDB dataset, of which the main and novel characteristics are the following:

(a) The HuGaDB dataset provides information about each part of the human leg during several walking-related activities in great detail, from inertial and EMG sensors. Six inertial sensors (each sensor consisted of one 3D-axis accelerometer and one 3D-axis gyroscope) were mounted on the left and right thigh, shin, and foot, respectively, and a pair of EMG sensors were mounted on the left and right thighs. Therefore, HuGaDB gives detailed information on how each part of the legs moves and how the parts move relative to each other.

(b) The HuGaDB dataset contains continuous recordings of combinations of activities, and the data are segmented and annotated with the label of the activity currently performed. Thus, this dataset is suitable for analyzing both human gait and transition activities.

(c) The data were collected from 18 participants in total. These participants were healthy young adults: four females and 14 males, with an average age of 23.67 (STD: 3.69) years, an average height of 179.06 (STD: 9.85) cm, and an average weight of 73.44 (STD: 16.67) kg. In total, they provided around 10 hours of data recordings.

(d) The HuGaDB article was published in Springer's Lecture Notes in Computer Science (ranked Q2 by Scopus) [26], and it has become quite popular among researchers: HuGaDB has been cited by [129, 80, 134, 13, 12] as of 29 March 2019.

2. RapidHARe: the Author developed a novel activity mode and intention recognition method, called RapidHARe, used in GaIn as a high-level controller [28]. This method is also suitable for HAR tasks in general.

(a) RapidHARe is based on a dynamic Bayesian network. RapidHARe has low prediction latency (A-1),¹ is fast and computationally inexpensive (A-2), provides smooth recognition (A-3), and generalizes well to new users (A-4).

(b) RapidHARe outperforms all other state-of-the-art HAR methods in accuracy and speed (A-5). RapidHARe reduces the F1-score error rate by 45%, 65%, and 63% and the accuracy error rate by 41%, 55%, and 62% when it is compared to artificial neural networks, recurrent neural networks, and hidden Markov models, respectively.

(c) RapidHARe is used in the high-level controller to predict the patient's intention to stand up or sit down, mainly from data obtained by EMG sensors placed on the skin over the vastus lateralis thigh muscles. The controller achieved 99% precision and 99% recall in recognizing the standing-up intention, and 99% precision and 68% recall in recognizing the sitting-down intention.

3. GaIn: a gait inference system that is suitable for controlling robotic prosthetic legs [27].

(a) The GaIn framework can be used in lower limb prostheses for patients suffering from double transfemoral amputation, in exoskeleton design, etc. In contrast, most other methods are only suitable for controlling one prosthetic leg.

(b) The GaIn system is based on the observation that the movement of the thigh and shin is highly but non-linearly correlated during regular walking-related activities. This is illustrated in Figure 3. No other method relies on this assumption; in fact, other methods usually extract more data from the sound leg as well.

(c) GaIn infers the shin position based on the position and movement of the thighs using recurrent neural networks with long short-term memory (LSTM) units. GaIn achieves a prediction error as low as 4.55° on average on natural terrain and generalizes well to new users. In contrast, other methods are often calibrated and tested on the same patient on treadmills.

(d) The GaIn system does not assume a fully periodic gait (B-1);² it can infer gait for several ambulatory activities (B-2, B-6), has a small prediction error during activity transitions (B-3), and does not rely on information about the patients, such as limb lengths, weight, etc. (B-4). This is in contrast to some other methods in the field.

(e) The gait inference model for several ambulatory modes is encapsulated into one single neural network. Other approaches often use different mid-level controllers for different gait phases and activity modes.

(f) The GaIn article was published in the Sensors journal, which is ranked as Q2 by Scopus.

¹ Cf. the list of requirements in section 1.2.

² Cf. the list of drawbacks in section 1.3.
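Since RapidHARe is described above only as being based on a dynamic Bayesian network, the following sketch shows the general forward-filtering recursion that such models use for continuous activity recognition. The states, transition matrix, and Gaussian emission parameters below are illustrative placeholders, not the fitted RapidHARe model:

```python
import math

# Generic forward filtering in a dynamic Bayesian network (illustrative
# parameters only). "Sticky" near-diagonal transitions favor smooth
# recognition within an activity mode (cf. requirement A-3).
STATES = ["sitting", "standing", "walking"]
TRANS = {  # P(next state | current state)
    "sitting":  {"sitting": 0.98, "standing": 0.02, "walking": 0.00},
    "standing": {"sitting": 0.01, "standing": 0.98, "walking": 0.01},
    "walking":  {"sitting": 0.00, "standing": 0.02, "walking": 0.98},
}
EMIT = {"sitting": (0.0, 0.05), "standing": (0.1, 0.05), "walking": (0.8, 0.2)}

def gauss(x, mu, sigma):
    """Gaussian emission density (made-up means/stds per state)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def filter_step(belief, observation):
    """One forward step: predict through TRANS, then weight by evidence."""
    predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES}
    weighted = {s: predicted[s] * gauss(observation, *EMIT[s]) for s in STATES}
    z = sum(weighted.values())
    return {s: w / z for s, w in weighted.items()}

belief = {s: 1.0 / 3.0 for s in STATES}       # uniform prior
for obs in [0.75, 0.82, 0.79]:                # gyro-magnitude-like features
    belief = filter_step(belief, obs)
print(max(belief, key=belief.get))            # walking
```

Because each step reuses the previous belief, the per-sample cost is constant, which is consistent with the low-latency, low-power requirements (A-1, A-2) stated for the high-level controller.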

1.6 Publications

The PhD candidate is the main author of all of these articles. All articles have been published in international research journals in English as original research papers. Rankings are based on Scopus and Web of Science. Independent citation counts are given as of April 2019.

First-tier publications.

1. Chereshnev R., Kertesz-Farkas A.: HuGaDB: Human gait database for activity recognition from wearable inertial sensor networks, Lecture Notes in Computer Science, Springer, 2017, pp. 131-141. The journal is ranked Q4 by Web of Science and Q2 by Scopus. This article has obtained five independent citations. The HuGaDB article was presented at the 6th International Conference on Analysis of Images, Social Networks and Texts and won the best talk award.

2. Chereshnev R., Kertesz-Farkas A.: GaIn: Human gait inference for lower limbic prostheses for patients suffering from double trans-femoral amputation, Sensors, 2018, Vol. 18, No. 12. The journal is ranked Q2 by Web of Science and Q2 by Scopus. This article was published recently and has not obtained any citations yet.

Second-tier publications.

3. Chereshnev R., Kertesz-Farkas A.: RapidHARe: A computationally inexpensive method for real-time human activity recognition from wearable sensors, Journal of Ambient Intelligence and Smart Environments, 2018, Vol. 10, No. 5, pp. 377-391. The journal is ranked Q4 by Web of Science and Q3 by Scopus. This article has obtained one independent citation.

Other publications.

4. Kertesz-Farkas A., Sulimov P., Sukmanova E., Chereshnev R.: Guided Layer-wise Learning for Deep Models using Side information, Annals of Mathematics and Artificial Intelligence. This paper is under review.

Reports at conferences and seminars.

5. Roman Chereshnev: An energy-efficient method for real-time human activity recognition using inertial sensors and dynamic Bayesian networks, Research Seminar of the Graduate School of Computer Science, CS HSE, June 1, 2017.

6. Roman Chereshnev: Using hidden Markov models for real-time human activity recognition, Annual Interuniversity Scientific and Technical Conference of Students, Postgraduates and Young Specialists named after E.V. Armensky, MIEM HSE, February 19, 2018.


Conclusion of the dissertation on the topic "Theoretical Foundations of Computer Science", by Chereshnev Roman Igorevich

7.1 Main results of this thesis

The GaIn system comprises three main parts: (1) a dataset suitable for training and testing, (2) a high-level controller to recognize the patient's activity modes and intentions, and (3) a gait inference method to generate the trajectory for robotic prosthetic legs. Below, the novelty and the Author's results are summarized in three thesis points, and a summary of the supporting articles can be found in Table 24.

1. HuGaDB: the dataset for training the GaIn system [26]. Unfortunately, existing datasets for HGA and HAR were not adequate for the aim of this research project, because they did not contain detailed information on the movements of the parts of the legs. This dataset is unique in the sense that HuGaDB is the first to provide human gait data in great detail, mainly from inertial sensors, and contains segmented annotations for studying the transitions between different activities. The Author constructed the HuGaDB dataset, of which the main and novel characteristics are the following:

(a) The HuGaDB dataset provides detailed information, from inertial and EMG sensors, about each part of the human leg during several walking-related activities. Six inertial sensors (each consisting of a 3-axis accelerometer and a 3-axis gyroscope) were mounted on the left and right thighs, shins, and feet, and a pair of EMG sensors was mounted on the left and right thighs. Therefore, HuGaDB gives detailed information on how each part of the legs moves and how the parts move relative to each other.

(b) The HuGaDB dataset contains continuous recordings of combinations of activities, and the data are segmented and annotated with the label of the activity currently performed. Thus, this dataset is suitable for analyzing both human gait and transition activities.

(c) The data were collected from 18 participants in total. These participants were healthy young adults: four females and 14 males, with an average age of 23.67 (STD: 3.69) years, an average height of 179.06 (STD: 9.85) cm, and an average weight of 73.44 (STD: 16.67) kg. In total, they provided around 10 hours of data recordings.

(d) The HuGaDB article [26] was published in Springer's Lecture Notes in Computer Science series (ranked Q2), and it has become popular among researchers: HuGaDB has been cited by [129, 80, 134, 13, 12] as of 29 March 2019.
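The sensor topology described in point (a) fixes the width of each data record. As a quick plausibility check, the following sketch enumerates the implied data channels; the channel-naming scheme here is an illustrative assumption, not HuGaDB's exact column names.

```python
# Channel layout implied by the sensor topology in point (a): six IMUs
# (left/right thigh, shin, foot), each with a 3-axis accelerometer and a
# 3-axis gyroscope, plus one EMG sensor per thigh.
sides = ["right", "left"]
segments = ["thigh", "shin", "foot"]
channels = [
    f"{mod}_{side[0]}{seg[0]}_{axis}"  # e.g. "acc_rt_x"; this naming is an
    for side in sides                  # assumption, not HuGaDB's exact
    for seg in segments                # column names
    for mod in ("acc", "gyro")
    for axis in "xyz"
] + [f"EMG_{side[0]}" for side in sides]

print(len(channels))  # 36 inertial channels + 2 EMG channels = 38
```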

2. RapidHARe: the Author developed a novel activity mode and intention recognition method used in GaIn as a high-level controller called RapidHARe [28]. This method is also suitable for HAR tasks in general.

(a) RapidHARe is based on a dynamic Bayesian network. It has low prediction latency (A-1),3 is fast and computationally inexpensive (A-2), provides smooth recognition (A-3), and generalizes well to new users (A-4).

(b) RapidHARe outperforms all other state-of-the-art HAR methods in accuracy and speed (A-5). RapidHARe reduces the F1-score error rate by 45%, 65%, and 63% and the accuracy error rate by 41%, 55%, and 62% when compared to artificial neural networks, recurrent neural networks, and hidden Markov models, respectively.

(c) RapidHARe is used in the high-level controller to predict the patient's intention to stand up or sit down, mainly from data obtained by EMG sensors placed on the skin over the vastus lateralis thigh muscles. The controller achieved 99% precision and recall in recognizing the standing-up intention, and 99% precision and 68% recall in recognizing the sitting-down intention.
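The relative error-rate reductions quoted in point (b) follow the usual definition: baseline error minus new error, divided by baseline error. A small sketch, with hypothetical error rates chosen only to illustrate the arithmetic:

```python
def error_rate_reduction(baseline_error: float, new_error: float) -> float:
    """Relative reduction of an error rate when replacing a baseline
    classifier with a new one: (e_base - e_new) / e_base."""
    return (baseline_error - new_error) / baseline_error

# Hypothetical example: a baseline that misclassifies 10% of frames versus
# a new model that misclassifies 5.5% gives a 45% error-rate reduction.
print(f"{error_rate_reduction(0.100, 0.055):.0%}")  # 45%
```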

3. GaIn: a gait inference system that is suitable for controlling robotic prosthetic legs [27].

(a) The GaIn framework can be used in lower limb prostheses for patients suffering from double transfemoral amputation, in exoskeleton design, etc. In contrast, most other methods are only suitable for controlling one prosthetic leg.

(b) The GaIn system is based on the observation that the movement of the thigh and shin is highly but non-linearly correlated during regular walking-related activities. This is illustrated in Figure 3. No other method relies on this assumption; in fact, other methods usually extract more data from the sound leg as well.

(c) GaIn infers the shin position from the position and movement of the thighs using recurrent neural networks with long short-term memory units. GaIn achieves a prediction error as low as 4.55° on average on natural terrain and generalizes well to new users. In contrast, other methods are often calibrated and tested on the same patient on treadmills.

(d) The GaIn system does not assume a fully periodic gait (B-1)4; it can infer gait for several ambulatory activities (B-2, B-6), has a small prediction error during activity transitions (B-3), and does not rely on information about the patient, such as limb lengths, weight, etc. (B-4). These are in contrast to some other methods in the field.

3Cf. the list in section 1.2.

4Cf. the list of drawbacks in section 1.3.

(e) The gait inference model for several ambulatory modes is encapsulated into one single neural network. Other approaches often use different mid-level controllers for different gait phases and activity modes.

(f) The GaIn article was published in the journal Sensors, which is ranked as Q2 by Scopus.
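As a rough illustration of the model class named in point (c), the following sketch runs a single-layer LSTM cell over a window of synthetic thigh readings and reads out two shin angles. The layer sizes, random untrained weights, and feature layout are assumptions chosen for illustration, not the architecture from the GaIn article.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One long short-term memory (LSTM) cell step: gates are computed
    from the current input x and previous hidden state h; c is the cell
    state carried across time steps."""
    z = W @ x + U @ h + b                # stacked gate pre-activations
    n = h.size
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2*n]), sig(z[2*n:3*n])
    g = np.tanh(z[3*n:])                 # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 12, 16   # 12 thigh features: 2 thighs x (accel + gyro) x 3 axes
W = rng.normal(0.0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0.0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
V = rng.normal(0.0, 0.1, (2, n_hid))   # linear readout: left/right shin angle

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(50, n_in)):  # 50 frames of synthetic thigh data
    h, c = lstm_step(x, h, c, W, U, b)
shin_angles = V @ h                    # two predicted shin angles
print(shin_angles.shape)               # (2,)
```

In a trained model, W, U, b, and V would be fitted to recorded gait data so that the readout tracks the measured shin trajectory.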

Table 24: Summary of the support publications for this dissertation.

Article | Title | Authors1 | Journal name2 | Scopus quartile3 | Citations4 | Reference

HuGaDB | HuGaDB: human gait database for activity recognition from wearable inertial sensor networks (Best talk award) | Roman Chereshnev and Attila Kertesz-Farkas | Lecture Notes in Computer Science | Q2 | 5 | [26]

RapidHARe | RapidHARe: a computationally inexpensive method for real-time human activity recognition from wearable sensors | Roman Chereshnev and Attila Kertesz-Farkas | Journal of Ambient Intelligence and Smart Environments | Q3 | 1 | [28]

GaIn | GaIn: human gait inference for lower limbic prostheses for patients suffering from double trans-femoral amputation | Roman Chereshnev and Attila Kertesz-Farkas | Sensors | Q2 | N/A5 | [27]

1The PhD candidate is the main author of all of these articles.

2All articles have been published in international research journals, in English. 3Ranking is based on Scopus. 4Independent citations only, as of March 2019. 5It has been published very recently.

Список литературы диссертационного исследования кандидат наук Черешнев Роман Игоревич, 2019 год

Bibliography

[1] Kerem Altun, Billur Barshan, and Orkun Tuncel. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recognition, 43(10):3605-3620, 2010.

[2] Christoph Amma, Marcus Georgi, and Tanja Schultz. Airwriting: A wearable handwriting recognition system. Personal and Ubiquitous Computing, 18(1):191-203, 2014.

[3] Javier Andreu and Plamen Angelov. Real-time human activity recognition from wireless sensors using evolving fuzzy systems. In Fuzzy Systems (FUZZ), 2010 IEEE International Conference on, pages 1-8. IEEE, 2010.

[4] Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge L Reyes-Ortiz. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In International Workshop on Ambient Assisted Living, pages 216-223. Springer, 2012.

[5] Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge Luis Reyes-Ortiz. A public domain dataset for human activity recognition using smartphones. In ESANN, 2013.

[6] Evyatar Arad, Ronny P Bartsch, Jan W Kantelhardt, and Meir Plotnik. Performance-based approach for movement artifact removal from electroencephalographic data recorded during locomotion. PloS one, 13(5):e0197153, 2018.

[7] Louis Atallah, Benny Lo, Rachel King, and Guang-Zhong Yang. Sensor positioning for activity recognition using wearable accelerometers. Biomedical Circuits and Systems, IEEE Transactions on, 5(4):320-329, 2011.

[8] Samuel K Au, Paolo Bonato, and Hugh Herr. An emg-position controlled system for an active ankle-foot prosthesis: an initial experimental study. In 9th International Conference on Rehabilitation Robotics, 2005. ICORR 2005, pages 375-379. IEEE, 2005.

[9] Akin Avci, Stephan Bosch, Mihai Marin-Perianu, Raluca Marin-Perianu, and Paul Havinga. Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: A survey. In Architecture of computing systems (ARCS), 2010 23rd international conference on, pages 1-10. VDE, 2010.

[10] Marc Bachlin, Meir Plotnik, Daniel Roggen, Inbal Maidan, Jeffrey M Hausdorff, Nir Giladi, and Gerhard Troster. Wearable assistant for Parkinson's disease patients with the freezing of gait symptom. IEEE Transactions on Information Technology in Biomedicine, 14(2):436-446, 2010.

[11] Marc Bachlin, Daniel Roggen, Gerhard Troster, Meir Plotnik, Noit Inbar, Inbal Meidan, Talia Herman, Marina Brozgol, Eliya Shaviv, Nir Giladi, et al. Potentials of enhanced context awareness in wearable assistants for Parkinson's disease patients with the freezing of gait syndrome. In 2009 International Symposium on Wearable Computers, pages 123-130. IEEE, 2009.

[12] Abeer A Badawi, Ahmad Al-Kabbany, and Heba Shaban. Daily activity recognition using wearable sensors via machine learning and feature selection. In 2018 13th International Conference on Computer Engineering and Systems (ICCES), pages 75-79. IEEE, 2018.

[13] Abeer A Badawi, Ahmad Al-Kabbany, and Heba Shaban. Multimodal human activity recognition from wearable inertial sensors using machine learning. In 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), pages 402-407. IEEE, 2018.

[14] Oresti Banos, Miguel Damas, Hector Pomares, Fernando Rojas, Blanca Delgado-Marquez, and Olga Valenzuela. Human activity recognition based on a sensor weighting hierarchical classifier. Soft Computing, 17(2):333-343, 2013.

[15] Ling Bao and Stephen S Intille. Activity recognition from user-annotated acceleration data. In Pervasive computing, pages 1-17. Springer, 2004.

[16] Akram Bayat, Marc Pomplun, and Duc A Tran. A study on human activity recognition using accelerometer data from smartphones. Procedia Computer Science, 34:450-457, 2014.

[17] David Bell and Axel Heitmueller. The disability discrimination act in the uk: Helping or hindering employment among the disabled? Journal of health economics, 28(2):465-480, 2009.

[18] Selim R Benbadis and Diego Rielo. EEG artifacts. Distribution, 12:1-23, 2010.

[19] Hamid Benbrahim and Judy A Franklin. Biped dynamic walking using reinforcement learning. Robotics and Autonomous Systems, 22(3-4):283-302, 1997.

[20] Yoshua Bengio et al. Learning deep architectures for AI. Foundations and trends in Machine Learning, 2(1):1-127, 2009.

[21] Alan K Bourke, Pepijn Van De Ven, Mary Gamble, Raymond O'Connor, Kieran Murphy, Elizabeth Bogan, Eamonn McQuade, Paul Finucane, Gearoid OLaighin, and John Nelson. Assessment of waist-worn tri-axial accelerometer based fall-detection algorithms using continuous unsupervised activities. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pages 2782-2785. IEEE, 2010.

[22] Gabriele Bovi, Marco Rabuffetti, Paolo Mazzoleni, and Maurizio Ferrarin. A multiple-task gait analysis approach: kinematic, kinetic and emg reference data for healthy young and adult subjects. Gait & Posture, 33(1):6-13, 2011.

[23] Andreas Bulling, Ulf Blanke, and Bernt Schiele. A tutorial on human activity recognition using body-worn inertial sensors. ACM Computing Surveys (CSUR), 46(3):33, 2014.

[24] Ricardo Chavarriaga, Hesam Sagha, Alberto Calatroni, Sundara Tejaswi Digumarti, Gerhard Troster, Jose del R. Millan, and Daniel Roggen. The opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognition Letters, 34(15):2033-2042, 2013.

[25] Liming Chen, Jesse Hoey, Chris D Nugent, Diane J Cook, and Zhiwen Yu. Sensor-based activity recognition. Systems, man, and cybernetics, Part C: Applications and reviews, IEEE Transactions on, 42(6):790-808, 2012.

[26] Roman Chereshnev and Attila Kertesz-Farkas. Hugadb: Human gait database for activity recognition from wearable inertial sensor networks. In International Conference on Analysis of Images, Social Networks and Texts, pages 131-141. Springer, 2017.

[27] Roman Chereshnev and Attila Kertesz-Farkas. Gain: Human gait inference for lower limbic prostheses for patients suffering from double trans-femoral amputation. Sensors, 18(12):4146, 2018.

[28] Roman Chereshnev and Attila Kertesz-Farkas. Rapidhare: A computationally inexpensive method for real-time human activity recognition from wearable sensors. Journal of Ambient Intelligence and Smart Environments, 10(5):377-391, 2018.

[29] Christine Chevallereau, Dalila Djoudi, and Jessy W Grizzle. Stable bipedal walking with foot rotation through direct regulation of the zero moment point. IEEE Transactions on Robotics, 24(2):390-401, 2008.

[30] Prudhvi Tej Chinmilli, Sangram Redkar, Wenlong Zhang, and Tom Sugar. A review on wearable inertial tracking based human gait analysis and control strategies of lower-limb exoskeletons. Int Rob Auto J, 3(7):00080, 2017.

[31] Laura Comber, Rose Galvin, and Susan Coote. Gait deficits in people with multiple sclerosis: a systematic review and meta-analysis. Gait & posture, 51:25-35, 2017.

[32] Jose L Contreras-Vidal and Robert G Grossman. Neurorex: A clinical neural interface roadmap for eeg-based brain machine interfaces to a lower body robotic exoskeleton. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 1579-1582. IEEE, 2013.

[33] Fernando De la Torre, Jessica Hodgins, Adam Bargteil, Xavier Martin, Justin Macey, Alex Collado, and Pep Beltran. Guide to the carnegie mellon university multimodal activity (cmu-mmac) database. Robotics Institute, page 135, 2008.

[34] Buddhika de Silva, Anirudh Natarajan, Mehul Motani, and Kee-Chaing Chua. A real-time exercise feedback utility with body sensor networks. In Medical Devices and Biosensors, 2008. ISSS-MDBS 2008. 5th International Summer School and Symposium on, pages 49-52. IEEE, 2008.

[35] Max Donath. Proportional EMG control for above knee prostheses. PhD thesis, Massachusetts Institute of Technology, 1974.

[36] Matthieu Duvinage, Thierry Castermans, Rene Jimenez-Fabian, Thomas Hoellinger, Caty De Saedeleer, Mathieu Petieau, Karthik Seetharaman, Guy Cheron, Olivier Verlinden, and Thierry Dutoit. A five-state p300-based foot lifter orthosis: Proof of concept. In 2012 ISSNIP Biosignals and Biorobotics Conference: Biosignals and Robotics for Better and Safer Living (BRC), pages 1-6. IEEE, 2012.

[37] Jochen Fahrenberg. Ambulatory assessment: Issues and perspectives. Ambulatory assessment: Computer-assisted psychological and psychophysiological methods in monitoring and field studies, pages 3-20, 1996.

[38] Maurizio Falso, Silvia Zani, Eleonora Cattaneo, Marco Zucchini, and Franco Zucchini. Tria-mf protocol as an innovative tool in the comprehensive treatment and outcome evaluation of lower limb amputees before and after prosthesis use. Journal of Novel Physiotherapy and Rehabilitation, 3:1-24, 2019.

[39] Kevin Fite, Jason Mitchell, Frank Sup, and Michael Goldfarb. Design and control of an electrically powered knee prosthesis. In 2007 IEEE 10th International conference on rehabilitation robotics, pages 902-905. IEEE, 2007.

[40] Christian Fleischer and Gunter Hommel. A human-exoskeleton interface utilizing electromyography. IEEE Transactions on Robotics, 24(4):872-882, 2008.

[41] Woodie C Flowers and Robert W Mann. An electrohydraulic knee-torque controller for a prosthesis simulator. Journal of biomechanical engineering, 99(1):3-8, 1977.

[42] Friedrich Foerster, Manfred Smeja, and Jochen Fahrenberg. Detection of posture and motion by accelerometry: a validation study in ambulatory monitoring. Computers in Human Behavior, 15(5):571-583, 1999.

[43] Jeremi Gancet, Michel Ilzkovitz, Elvina Motard, Yashodhan Nevatia, Pierre Letier, David De Weerdt, Guy Cheron, Thomas Hoellinger, Karthik Seetharaman, Mathieu Petieau, et al. Mindwalker: Going one step further with assistive lower limbs exoskeleton for sci condition subjects. In 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pages 1794-1800. IEEE, 2012.

[44] Gale Gehlsen, Karen Beekman, Nikki Assmann, Donald Winant, Michael Seidle, and Arnold Carter. Gait characteristics in multiple sclerosis: progressive changes and effects of exercise on parameters. Archives of physical medicine and rehabilitation, 67(8):536-539, 1986.

[45] Marcus Georgi, Christoph Amma, and Tanja Schultz. Recognizing hand and finger gestures with IMU based motion and EMG based muscle activity sensing. In Proceedings of the International Conference on Bio-inspired Systems and Signal Processing, pages 99-108, 2015.

[46] Matteo Giuberti and Gianluigi Ferrari. Simple and robust BSN-based activity classification: Winning the first bsn contest. In Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies, page 34. ACM, 2011.

[47] Hristijan Gjoreski, Simon Kozina, Matjaz Gams, Mitja Lustrek, Juan Antonio Alvarez-Garcia, Jin-Hyuk Hong, Anind K Dey, Maurizio Bocca, and Neal Patwari. Competitive live evaluations of activity-recognition systems. IEEE Pervasive Computing, 14(1):70-77, 2015.

[48] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning, volume 1. MIT Press, Cambridge, 2016.

[49] Dawud Gordon, Jurgen Czerny, Takashi Miyaki, and Michael Beigl. Energy-efficient activity recognition using prediction. In Wearable Computers (ISWC), 2012 16th International Symposium on, pages 29-36. IEEE, 2012.

[50] Daniel Graupe and Hubert Kordylewski. Artificial neural network control of fes in paraplegics for patient responsive ambulation. IEEE transactions on biomedical engineering, 42(7):699-707, 1995.

[51] Raffaele Gravina, Parastoo Alinia, Hassan Ghasemzadeh, and Giancarlo Fortino. Multisensor fusion in body sensor networks: State-of-the-art and research challenges. Information Fusion, 35:68-80, 2017.

[52] Robert D Gregg and Anne E Martin. Prosthetic leg control in the nullspace of human interaction. In 2016 American Control Conference (ACC), pages 4814-4821. IEEE, 2016.

[53] Donghai Guan, Tinghuai Ma, Weiwei Yuan, Young-Koo Lee, and AM Jehad Sarkar. Review of sensor-based activity recognition systems. IETE Technical Review, 28(5):418-433, 2011.

[54] Farid Gulmammadov. Analysis, modeling and compensation of bias drift in mems inertial sensors. In Recent Advances in Space Technologies, 2009. RAST'09. 4th International Conference on, pages 591-596. IEEE, 2009.

[55] Isabelle Guyon and Vassilis Athitsos. Demonstrations and live evaluation for the gesture recognition challenge. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on, pages 461-462. IEEE, 2011.

[56] Norbert Gyorbiro, Akos Fabian, and Gergely Homanyi. An activity recognition system for mobile phones. Mobile Networks and Applications, 14(1):82-91, 2009.

[57] Kevin H Ha, Huseyin Atakan Varol, and Michael Goldfarb. Volitional control of a prosthetic knee using surface electromyography. IEEE Transactions on Biomedical Engineering, 58(1):144-151, 2011.

[58] Nils Y Hammerla, Shane Halloran, and Thomas Ploetz. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv preprint arXiv:1604.08880, 2016.

[59] Kazunori Hase and R.B. Stein. Turning strategies during human walking. Journal of Neurophysiology, 81(6):2914-2922, 1999.

[60] Jeffrey M Hausdorff, CK Peng, Zvi Ladin, Jeanne Y Wei, and Ary L Goldberger. Is walking a random walk? evidence for long-range correlations in stride interval of human gait. Journal of Applied Physiology, 78(1):349-358, 1995.

[61] Aart Hendricks. Un convention on the rights of persons with disabilities. Eur. J. Health L., 14:273, 2007.

[62] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

[63] Neville Hogan. Impedance control: An approach to manipulation: Part ii—implementation. Journal of dynamic systems, measurement, and control, 107(1):8-16, 1985.

[64] Carl D Hoover, George D Fulk, and Kevin B Fite. The design and initial experimental validation of an active myoelectric transfemoral prosthesis. Journal of Medical Devices, 6(1):011005, 2012.

[65] Carl D Hoover, George D Fulk, and Kevin B Fite. Stair ascent with a powered transfemoral prosthesis under direct myoelectric control. IEEE/ASME Transactions on Mechatronics, 18(3):1191-1200, 2013.

[66] He Huang, Todd A Kuiken, Robert D Lipschutz, et al. A strategy for identifying locomotion modes using surface electromyography. IEEE Transactions on Biomedical Engineering, 56(1):65-73, 2009.

[67] He Huang, Fan Zhang, Levi J Hargrove, Zhi Dou, Daniel R Rogers, and Kevin B Englehart. Continuous locomotion-mode identification for prosthetic legs based on neuromuscular-mechanical fusion. IEEE Transactions on Biomedical Engineering, 58(10):2867-2875, 2011.

[68] Tam Huynh, Mario Fritz, and Bernt Schiele. Discovery of activity patterns using topic models. In Proceedings of the 10th international conference on Ubiquitous computing, pages 10-19. ACM, 2008.

[69] Tam Huynh and Bernt Schiele. Analyzing features for activity recognition. In Proceedings of the 2005 joint conference on Smart objects and ambient intelligence: innovative context-aware services: usages and technologies, pages 159-163. ACM, 2005.

[70] Stephen S Intille, Kent Larson, JS Beaudin, Jason Nawyn, E Munguia Tapia, and Pallavi Kaushik. A living laboratory for the design and evaluation of ubiquitous computing technologies. In CHI'05 extended abstracts on Human factors in computing systems, pages 1941-1944. ACM, 2005.

[71] Pyeong-Gook Jung, Gukchan Lim, Seonghyok Kim, and Kyoungchul Kong. A wearable gesture recognition device for detecting muscular activities based on air-pressure sensors. IEEE Transactions on Industrial Informatics, 11(2):485-494, 2015.

[72] Holger Junker, Oliver Amft, Paul Lukowicz, and Gerhard Troster. Gesture spotting with body-worn inertial sensors to detect user activities. Pattern Recognition, 41(6):2010-2024, 2008.

[73] Shuuji Kajita, Fumio Kanehiro, Kenji Kaneko, Kiyoshi Fujiwara, Kensuke Harada, Kazuhito Yokoi, and Hirohisa Hirukawa. Biped walking pattern generation by using preview control of zero-moment point. In ICRA, volume 3, pages 1620-1626, 2003.

[74] Nobuo Kawaguchi, Nobuhiro Ogawa, Yohei Iwasaki, Katsuhiko Kaji, Tsutomu Terada, Kazuya Murao, Sozo Inoue, Yoshihiro Kawahara, Yasuyuki Sumi, and Nobuhiko Nishio. Hasc challenge: Gathering large scale human activity corpus for the real-world activity understandings. In Proceedings of the 2nd Augmented Human International Conference, page 27. ACM, 2011.

[75] Nobuo Kawaguchi, Hodaka Watanabe, Tianhui Yang, Nobuhiro Ogawa, Yohei Iwasaki, Katsuhiko Kaji, Tsutomu Terada, Kazuya Murao, Hisakazu Hada, Sozo Inoue, et al. Hasc2012corpus: Large scale human activity corpus and its application. In Proceedings of the IPSN, volume 12, 2012.

[76] Nobuo Kawaguchi, Ying Yang, Tianhui Yang, Nobuhiro Ogawa, Yohei Iwasaki, Katsuhiko Kaji, Tsutomu Terada, Kazuya Murao, Sozo Inoue, Yoshihiro Kawahara, et al. Hasc2011corpus: towards the common ground of human activity recognition. In Proceedings of the 13th International Conference on Ubiquitous Computing, pages 571-572. ACM, 2011.

[77] Hiroaki Kawamoto, Shigehiro Kanbe, and Yoshiyuki Sankai. Power assist method for hal-3 estimating operator's intention based on motion information. In The 12th IEEE International Workshop on Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003, pages 67-72. IEEE, 2003.

[78] Hiroaki Kawamoto, Stefan Taal, Hafid Niniss, Tomohiro Hayashi, Kiyotaka Kamibayashi, Kiyoshi Eguchi, and Yoshiyuki Sankai. Voluntary motion support control of robot suit hal triggered by bioelectrical signal for hemiplegia. In Engineering in Medicine and Biology Society (EMBC), 2010 Annual International Conference of the IEEE, pages 462-466. IEEE, 2010.

[79] Hami Kazerooni, J-L Racine, Lihua Huang, and Ryan Steger. On the control of the berkeley lower extremity exoskeleton (bleex). In Robotics and automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE international conference on, pages 4353-4360. IEEE, 2005.

[80] A Kegeci, A Yildirak, K Ozyazici, G Ayluctarhan, O Agbulut, and I Zincir. Gait recognition via machine learning. In International Conference on Cyber Security and Computer Science (ICONCS'18), 2018.

[81] Attila Kertesz-Farkas, Somdutta Dhir, Paolo Sonego, Mircea Pacurar, Sergiu Netoteia, Harm Nijveen, Arnold Kuzniar, Jack AM Leunissen, Andras Kocsor, and Sandor Pongor. Benchmarking protein classification algorithms via supervised cross-validation. Journal of biochemical and biophysical methods, 70(6):1215-1223, 2008.

[82] Adil Mehmood Khan, Young-Koo Lee, Sungyoung Y Lee, and Tae-Seong Kim. A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer. Information Technology in Biomedicine, IEEE Transactions on, 14(5):1166-1172, 2010.

[83] Siddhartha Khandelwal and Nicholas Wickstrom. Evaluation of the performance of accelerometer-based gait event detection algorithms in different real-world scenarios using the marea gait database. Gait & Posture, 51:84-90, 2017.

[84] Tony Khoshaba, Kambiz Badie, and RM Hashemi. Emg pattern classification based on back propagation neural network for prosthesis control. In Engineering in Medicine and Biology Society, 1990., Proceedings of the Twelfth Annual International Conference of the IEEE, pages 1474-1475. IEEE, 1990.

[85] Atilla Kilicarslan, Saurabh Prasad, Robert G Grossman, and Jose L Contreras-Vidal. High accuracy decoding of user intentions using eeg to control a lower-body exoskeleton. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 5606-5609. IEEE, 2013.

[86] Taisuke Kobayashi, Kosuke Sekiyama, Yasuhisa Hasegawa, Tadayoshi Aoyama, and Toshio Fukuda. Unified bipedal gait for autonomous transition between walking and running in pursuit of energy minimization. Robotics and Autonomous Systems, 103:27-41, 2018.

[87] Andreas Krause, Matthias Ihmig, Edward Rankin, Derek Leong, Smriti Gupta, Daniel Siewiorek, Asim Smailagic, Michael Deisher, and Uttam Sengupta. Trading off prediction accuracy and power consumption for context-aware wearable computing. In Wearable Computers, 2005. Proceedings. Ninth IEEE International Symposium on, pages 20-26. IEEE, 2005.

[88] Miguel A Labrador and Oscar D Lara Yejas. Human Activity Recognition: Using Wearable Sensors and Smartphones. CRC Press, 2013.

[89] Oscar D Lara and Miguel A Labrador. A mobile platform for real-time human activity recognition. In Consumer Communications and Networking Conference (CCNC), 2012 IEEE, pages 667-671. IEEE, 2012.

[90] Oscar D Lara and Miguel A Labrador. A survey on human activity recognition using wearable sensors. Communications Surveys & Tutorials, IEEE, 15(3):1192-1209, 2013.

[91] Gregoire Lefebvre, Samuel Berlemont, Franck Mamalet, and Christophe Garcia. Inertial gesture recognition with blstm-rnn. In Artificial Neural Networks, pages 393-410. Springer, 2015.

[92] Jonathan Lester, Tanzeem Choudhury, Nicky Kern, Gaetano Borriello, and Blake Hannaford. A hybrid discriminative/generative approach for modeling human activities. In IJCAI, volume 5, pages 766-772, 2005.

[93] Yifan David Li and Elizabeth T Hsiao-Wecksler. Gait mode recognition and control for a portable-powered ankle-foot orthosis. In 2013 IEEE 13th International Conference on Rehabilitation Robotics (ICORR), pages 1-8. IEEE, 2013.

[94] Lin Liao. Location-based activity recognition. PhD thesis, University of Washington, 2006.

[95] Chee Peng Lim, Siew Chan Woo, Aun Sim Loh, and Rohaizan Osman. Speech recognition using artificial neural networks. In Web Information Systems Engineering, 2000. Proceedings of the First International Conference on, volume 1, pages 419-423. IEEE, 2000.

[96] Jiayang Liu, Lin Zhong, Jehan Wickramasuriya, and Venu Vasudevan. uwave: Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing, 5(6):657-675, 2009.

[97] Li Liu, Yuxin Peng, Ming Liu, and Zigang Huang. Sensor-based human activity recognition system with a multilayered model using time series shapelets. Knowledge-Based Systems, 90:138-152, 2015.

[98] Ines P Machado, A Luisa Gomes, Hugo Gamboa, Vitor Paixao, and Rui M Costa. Human activity data discovery from triaxial accelerometer sensor: Non-supervised learning sensitivity to feature extraction parametrization. Information Processing & Management, 51(2):204-214, 2015.

[99] Andrea Mannini and Angelo Maria Sabatini. Machine learning methods for classifying human physical activity from on-body accelerometers. Sensors, 10(2):1154-1175, 2010.

[100] Jani Mantyjarvi, Johan Himberg, and Tapio Seppanen. Recognizing human motion with multiple acceleration sensors. In Systems, Man, and Cybernetics, 2001 IEEE International Conference on, volume 2, pages 747-752. IEEE, 2001.

[101] Sinziana Mazilu, Michael Hardegger, Zack Zhu, Daniel Roggen, Gerhard Troster, Meir Plotnik, and Jeffrey M Hausdorff. Online detection of freezing of gait with smartphones and machine learning techniques. In Pervasive Computing Technologies for Healthcare (PervasiveHealth), 2012 6th International Conference on, pages 123-130. IEEE, 2012.

[102] Daniela Micucci, Marco Mobilio, and Paolo Napoletano. Unimib shar: A dataset for human activity recognition using acceleration data from smartphones. Applied Sciences, 7(10):1101, 2017.

[103] Ilaria Mileti, Juri Taborri, Stefano Rossi, Maurizio Petrarca, Fabrizio Patane, and Paolo Cappa. Evaluation of the effects on stride-to-stride variability and gait asymmetry in children with cerebral palsy wearing the wake-up ankle module. In Medical Measurements and Applications (MeMeA), 2016 IEEE International Symposium on, pages 1-6. IEEE, 2016.

[104] David Minnen, Thad Starner, Jamie A Ward, Paul Lukowicz, and Gerhard Troster. Recognizing and discovering human actions from on-body sensor data. In Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on, pages 1545-1548. IEEE, 2005.

[105] Josip Music, Daryl Weir, Roderick Murray-Smith, and Simon Rogers. Modelling and correcting for the impact of the gait cycle on touch screen typing accuracy. mUX: The Journal of Mobile User Experience, 5(1):1, 2016.

[106] Kimitaka Nakazawa, Hiroki Obata, and Shun Sasagawa. Neural control of human gait and posture. The Journal of Physical Fitness and Sports Medicine, 1(2):263-269, 2012.

[107] Domen Novak, Peter Rebersek, Stefano Marco Maria De Rossi, Marco Donati, Janez Podobnik, Tadej Beravs, Tommaso Lenzi, Nicola Vitiello, Maria Chiara Carrozza, and Marko Munih. Automated detection of gait initiation and termination using wearable sensors. Medical engineering & physics, 35(12):1713-1720, 2013.

[108] Daniel Olguin Olguin and Alex Sandy Pentland. Human activity recognition: Accuracy across common locations for wearable sensors. In Proceedings of 2006 10th IEEE International Symposium on Wearable Computers, Montreux, Switzerland, pages 11-14. Citeseer, 2006.

[109] Francisco Javier Ordonez and Daniel Roggen. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors, 16(1):115, 2016.

[110] G Pandian. Rehabilitation of the patient with peripheral vascular disease and diabetic foot problems. Rehabilitation medicine: principles and practice, pages 1517-1544, 1998.

[111] Juha Parkka, Miikka Ermes, Panu Korpipaa, Jani Mantyjarvi, Johannes Peltola, and Ilkka Korhonen. Activity classification using realistic data from wearable sensors. Information Technology in Biomedicine, IEEE Transactions on, 10(1):119-128, 2006.

[112] Donald J Patterson, Dieter Fox, Henry Kautz, and Matthai Philipose. Fine-grained activity recognition by aggregating abstract object usage. In Wearable Computers, 2005. Proceedings. Ninth IEEE International Symposium on, pages 44-51. IEEE, 2005.

[113] Mark Pedley. Tilt sensing using a three-axis accelerometer. Freescale semiconductor application note, 1:2012-2013, 2013.

[114] Louis Peeraer, B Aeyels, and Georges Van der Perre. Development of EMG-based mode and intent recognition algorithms for a computer-controlled above-knee prosthesis. Journal of Biomedical Engineering, 12(3):178-182, 1990.

[115] Serge Pfeifer, Heike Vallery, Robert Riener, Renate List, and Eric J Perreault. Finding best predictors for the control of transfemoral prostheses. AUTOMED Fortschritt-Berichte VDI. Zurich, CH: VDI Verlag GmbH, 2010.

[116] Cuong Pham and Patrick Olivier. Slice&dice: Recognizing food preparation activities using embedded accelerometers. In European Conference on Ambient Intelligence, pages 34-43. Springer, 2009.

[117] Thomas Plotz, Nils Y Hammerla, and Patrick Olivier. Feature learning for activity recognition in ubiquitous computing. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011.

[118] Hugo Quintero, Ryan Farris, Clare Hartigan, Ismari Clesson, and Michael Goldfarb. A powered lower limb orthosis for providing legged mobility in paraplegic individuals. Topics in spinal cord injury rehabilitation, 17(1):25-33, 2011.

[119] Nishkam Ravi, Nikhil Dandekar, Preetham Mysore, and Michael L Littman. Activity recognition from accelerometer data. In AAAI, volume 5, pages 1541-1546, 2005.

[120] Attila Reiss and Didier Stricker. Creating and benchmarking a new dataset for physical activity monitoring. In Proceedings of the 5th International Conference on Pervasive Technologies Related to Assistive Environments, page 40. ACM, 2012.

[121] Attila Reiss and Didier Stricker. Introducing a new benchmarked dataset for activity monitoring. In 2012 16th International Symposium on Wearable Computers, pages 108-109. IEEE, 2012.

[122] Hesam Sagha, Sundara Tejaswi Digumarti, Jose del R Millan, Ricardo Chavarriaga, Alberto Calatroni, Daniel Roggen, and Gerhard Troster. Benchmarking classification techniques using the Opportunity human activity dataset. In Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on, pages 36-40. IEEE, 2011.

[123] Arash Salarian, Heike Russmann, Francois JG Vingerhoets, Catherine Dehollain, Yves Blanc, Pierre R Burkhard, and Kamiar Aminian. Gait assessment in Parkinson's disease: toward an ambulatory system for long-term monitoring. IEEE Transactions on Biomedical Engineering, 51(8):1434-1443, 2004.

[124] Anita Sant'Anna. A symbolic approach to human motion analysis using inertial sensors: Framework and gait analysis study. PhD thesis, Halmstad University, 2012.

[125] Anita Sant'Anna, Arash Salarian, and Nicholas Wickstrom. A new measure of movement symmetry in early Parkinson's disease patients using symbolic processing of inertial sensor data. IEEE Transactions on Biomedical Engineering, 58(7):2127-2135, 2011.

[126] Francisco Sepulveda and Alberto Cliquet Jr. An artificial neural system for closed loop control of locomotion produced via neuromuscular electrical stimulation. Artificial organs, 19(3):231-237, 1995.

[127] Francisco Sepulveda, Derek M Wells, and Christopher L Vaughan. A neural network representation of electromyography and joint dynamics in human gait. Journal of biomechanics, 26(2):101-109, 1993.

[128] Muhammad Shoaib, Stephan Bosch, Ozlem Durmaz Incel, Hans Scholten, and Paul JM Havinga. Fusion of smartphone motion sensors for physical activity recognition. Sensors, 14(6):10146-10176, 2014.

[129] Pekka Siirtola, Heli Koskimaki, and Juha Roning. OpenHAR: A MATLAB toolbox for easy access to publicly open human activity data sets. In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, pages 1396-1403. ACM, 2018.

[130] Jørgen Skotte, Mette Korshøj, Jesper Kristiansen, Christiana Hanisch, and Andreas Holtermann. Detection of physical activity types using triaxial accelerometers. Journal of Physical Activity and Health, 11(1):76-84, 2014.

[131] S. Srinivasan, Robert E. Gander, and Hugh C. Wood. A movement pattern generator model using artificial neural networks. IEEE Transactions on Biomedical Engineering, 39(7):716-722, 1992.

[132] Charles Stack. System and method for providing recommendation of goods or services based on recorded purchasing history, 2004. US Patent 6,782,370.

[133] Katherine A Strausser and H Kazerooni. The development and testing of a human machine interface for a mobile medical exoskeleton. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 4911-4916. IEEE, 2011.

[134] Yingnan Sun, G Yang, and Benny Lo. An artificial neural network framework for lower limb motion signal estimation with foot-mounted inertial sensors. 2018.

[135] Frank Sup, Amit Bohara, and Michael Goldfarb. Design and control of a powered transfemoral prosthesis. The International Journal of Robotics Research, 27(2):263-273, 2008.

[136] Frank Sup, Huseyin Atakan Varol, and Michael Goldfarb. Upslope walking with a powered knee and ankle prosthesis: initial results with an amputee subject. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(1):71-78, 2011.

[137] Emmanuel Munguia Tapia, Stephen S Intille, Louis Lopez, and Kent Larson. The design of a portable kit of wireless sensors for naturalistic data collection. In International Conference on Pervasive Computing, pages 117-134. Springer, 2006.

[138] Warren Tryon. Activity measurement in psychology and medicine. New York: Plenum Press, 1991.

[139] Michael R Tucker, Jeremy Olivier, Anna Pagel, Hannes Bleuler, Mohamed Bouri, Olivier Lambercy, Jose del R Millan, Robert Riener, Heike Vallery, and Roger Gassert. Control strategies for active lower extremity prosthetics and orthotics: a review. Journal of neuroengineering and rehabilitation, 12(1):1, 2015.

[140] Matthew A Turk and Alex P Pentland. Face recognition using eigenfaces. In Computer Vision and Pattern Recognition, 1991. Proceedings CVPR'91., IEEE Computer Society Conference on, pages 586-591. IEEE, 1991.

[141] Heike Vallery, Rainer Burgkart, Cornelia Hartmann, Jürgen Mitternacht, Robert Riener, and Martin Buss. Complementary limb motion estimation for the control of active knee prostheses. Biomedizinische Technik/Biomedical Engineering, 56(1):45-51, 2011.

[142] Heike Vallery and Martin Buss. Complementary limb motion estimation based on interjoint coordination using principal components analysis. In 2006 IEEE Conference on Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, pages 933-938. IEEE, 2006.

[143] Huseyin Atakan Varol, Frank Sup, and Michael Goldfarb. Multiclass real-time intent recognition of a powered lower limb prosthesis. IEEE Transactions on Biomedical Engineering, 57(3):542-551, 2010.

[144] United States Congress. Americans with Disabilities Act of 1990, 1990.

[145] Michalis Vrigkas, Christophoros Nikou, and Ioannis A Kakadiaris. A review of human activity recognition methods. Frontiers in Robotics and AI, 2:28, 2015.

[146] Yufridin Wahab and Norantanum Abu Bakar. Gait analysis measurement for sport application based on ultrasonic system. In Consumer Electronics (ISCE), 2011 IEEE 15th International Symposium on, pages 20-24. IEEE, 2011.

[147] Letian Wang, Shiqian Wang, Edwin HF van Asseldonk, and Herman van der Kooij. Actively controlled lateral gait assistance in a lower limb exoskeleton. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pages 965-970. IEEE, 2013.

[148] Shuangquan Wang, Jie Yang, Ningjiang Chen, Xin Chen, and Qinfeng Zhang. Human activity recognition with user-free accelerometers in the sensor networks. In Neural Networks and Brain, 2005. ICNN&B'05. International Conference on, volume 2, pages 1212-1217. IEEE, 2005.

[149] Zhelong Wang, Ming Jiang, Yaohua Hu, and Hongyi Li. An incremental learning method based on probabilistic neural networks and adjustable fuzzy clustering for human activity recognition by using wearable sensors. Information Technology in Biomedicine, IEEE Transactions on, 16(4):691-699, 2012.

[150] Gary M Weiss and Jeffrey W Lockhart. The impact of personalization on smartphone-based activity recognition. In AAAI Workshop on Activity Context Representation: Techniques and Languages, pages 98-104, 2012.

[151] Yue Wen, Jennie Si, Andrea Brandt, Xiang Gao, and He Huang. Online reinforcement learning control for the personalization of a robotic knee prosthesis. IEEE transactions on cybernetics, 2019.

[152] Eva Wentink, V.G.H. Schut, Erik Prinsen, Johan S Rietman, and Peter H Veltink. Detection of the onset of gait initiation using kinematic sensors and EMG in transfemoral amputees. Gait & Posture, 39(1):391-396, 2014.

[153] Bruce J West and Nicola Scafetta. Nonlinear dynamical model of human gait. Physical review E, 67(5):051917, 2003.

[154] Scott C White and David A Winter. Predicting muscle forces in gait from EMG signals and musculotendon kinematics. Journal of Electromyography and Kinesiology, 2(4):217-231, 1992.

[155] David A Winter. The biomechanics and motor control of human gait: normal, elderly and pathological. 2nd edition. University of Waterloo Press, Waterloo, 1991.

[156] Danny Wyatt, Matthai Philipose, and Tanzeem Choudhury. Unsupervised activity recognition using automatically mined common sense. In AAAI, volume 5, pages 21-27, 2005.

[157] Zhixian Yan, Vigneshwaran Subbaraju, Dipanjan Chakraborty, Archan Misra, and Karl Aberer. Energy-efficient continuous activity recognition on mobile phones: An activity-adaptive approach. In Wearable Computers (ISWC), 2012 16th International Symposium on, pages 17-24. IEEE, 2012.

[158] Allen Y Yang, Philip Kuryloski, and Ruzena Bajcsy. WARD: A wearable action recognition database. 2009.

[159] Maxime Yochum and Stephane Binczak. A wavelet based method for electrical stimulation artifacts removal in electromyogram. Biomedical Signal Processing and Control, 22:1-10, 2015.

[160] Aaron J Young, Ann M Simon, Nicholas P Fey, and Levi J Hargrove. Classifying the intent of novel users during human locomotion using powered lower limb prostheses. In 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER), pages 311-314. IEEE, 2013.

[161] Aaron J Young, Ann M Simon, Nicholas P Fey, and Levi J Hargrove. Intent recognition in a powered lower limb prosthesis using time history information. Annals of biomedical engineering, 42(3):631-641, 2014.

[162] Piero Zappi, Clemens Lombriser, Thomas Stiefmeier, Elisabetta Farella, Daniel Roggen, Luca Benini, and Gerhard Troster. Activity recognition from on-body sensors: accuracy-power trade-off by dynamic sensor selection. In Wireless sensor networks, pages 17-33. Springer, 2008.

[163] Mi Zhang and Alexander A Sawchuk. USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pages 1036-1043. ACM, 2012.

[164] Mi Zhang and Alexander A Sawchuk. Human daily activity recognition with sparse representation using wearable sensors. IEEE journal of Biomedical and Health Informatics, 17(3):553-560, 2013.

[165] Chun Zhu and Weihua Sheng. Human daily activity recognition in robot-assisted living using multi-sensor fusion. In Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, pages 2154-2159. IEEE, 2009.

[166] Chun Zhu and Weihua Sheng. Multi-sensor fusion for human daily activity recognition in robot-assisted living. In Proceedings of the 4th ACM/IEEE international conference on Human robot interaction, pages 303-304. ACM, 2009.

[167] Chun Zhu and Weihua Sheng. Wearable sensor-based hand gesture and daily activity recognition for robot-assisted living. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on, 41(3):569-573, 2011.

[168] Kathryn Ziegler-Graham, Ellen J MacKenzie, Patti L Ephraim, Thomas G Travison, and Ron Brookmeyer. Estimating the prevalence of limb loss in the united states: 2005 to 2050. Archives of physical medicine and rehabilitation, 89(3):422-429, 2008.
