Abstract

Heading estimation using the inertial sensors built into smartphones is a central problem for indoor pedestrian navigation. For practical daily use, heading estimation must allow unconstrained use of the smartphone, that is, varying device carrying positions and orientations. As a result, three special human body motion states are introduced: random hand movements, carrying position transitions, and user turns. Most existing heading estimation approaches neglect these three motion states, which may cause large estimation errors. We propose a robust heading estimation system that adapts to the unconstrained use of smartphones. A novel detection and classification method is developed to detect the three motion states in a timely manner and discriminate them accurately. During normal walking, the user heading is estimated by a PCA-based approach. If a user turn occurs, the heading is estimated by adding the horizontal heading change directly to the previous user heading. If one of the other two motion states occurs, the heading is obtained by averaging the estimates of the adjacent normal walking steps. Finally, an outlier filtering algorithm is developed to smooth the estimation results. Experimental results show that our approach handles the unconstrained use of smartphones and outperforms previous approaches in terms of accuracy and applicability.

1. Introduction

For Global Navigation Satellite System- (GNSS-) denied environments, various indoor pedestrian navigation solutions [1-5], such as wireless local area positioning systems, ultrawideband, and radio frequency identification, have been proposed. The usage of these infrastructure-based solutions is limited to spaces where special equipment or infrastructure is available. Pedestrian dead reckoning (PDR) [6-8] using the inertial sensors built into a smartphone may overcome this limitation, since the inertial sensors are continuously available. PDR achieves location estimation by combining step detection and step length estimation with user heading estimation. Once a step is detected, the user position is updated by adding the currently estimated relative displacement to the position estimated at the previous step. Since smartphones are ubiquitous and carried almost everywhere we go, it has become increasingly practicable to deploy the PDR approach in daily life.

User heading estimation is a central problem for PDR. Most existing heading estimation solutions employ traditional attitude-estimation-based approaches [9, 10]. The user heading is estimated by adding a fixed user heading offset to the estimated device forward heading. This approach performs well if the device is constrained to a fixed carrying position and device orientation. However, for a smartphone in the trouser pocket or swinging in hand, the heading offset changes all the time and is hard to estimate. Traditional attitude-estimation-based approaches may therefore produce large heading estimation errors due to the biased estimation of the heading offset.

To address the heading estimation problem with devices placed in the trouser pocket or swinging in hand, the uDirect approach [11] and PCA-based approaches, including the conventional PCA approach [12] and our previously proposed RMPCA approach [13], which combines a rotation matrix (RM) with PCA, have been proposed. The uDirect approach extracts the local walking direction directly in a specific region where the forward acceleration dominates the acceleration in the horizontal plane. In contrast, PCA-based approaches are more robust to body locomotion dynamics, since all acceleration samples are exploited to extract the direction of maximum variance of the horizontal acceleration signals, which is assumed to be parallel to the walking direction. Recently, to remove the restrictions on device carrying positions, we proposed a heading estimation approach independent of the carrying position. The carrying position is recognized by a position classifier based on distinguishable acceleration patterns, while relatively stable device orientations are assumed under the same carrying position.

To enable truly unconstrained use of smartphones for heading estimation, not only the device carrying position but also varying device orientations should be considered. Thus, several critical problems still need to be solved to enhance the reliability and applicability of heading estimation.

Firstly, current heading estimation solutions are sensitive and not robust to random hand movements. For PCA-based approaches, the fundamental assumption is seriously violated by random hand movements, which make the direction of maximum variance of the acceleration signals in the horizontal plane deviate from the walking direction.

Secondly, three motion states, namely, hand movements, carrying position transitions, and user turns, and their impacts on heading estimation have received little attention. Missed detection and confusion among the three motion states may cause large heading estimation errors.

To solve the above problems, we propose a robust heading estimation system using unconstrained smartphones. The main idea is to select the related optimal heading estimation strategy based on the detection and discrimination of three special motion states, namely, random hand movements, carrying position transitions, and user turns. A novel detection and classification method is developed to detect the three motion states in a timely manner and discriminate them accurately. If a user turn occurs, the user heading is estimated by computing the heading change in the horizontal plane directly. If one of the other two motion states occurs, the user heading is assumed to be smooth and is obtained by averaging the estimates of the neighboring normal walking steps.

The proposed estimation system deploys our previous RMPCA approach for heading estimation during normal walking. Finally, the estimation results are further smoothed by filtering outliers caused by undetected hand movements or hand movements misclassified as user turns. For simplicity, in this work, we only consider the two main pedestrian activities, namely, walking and standing, and assume that they have already been accurately recognized according to their different acceleration patterns [14, 15].

Experiments show that our heading estimation system outperforms existing approaches in terms of accuracy, reliability, and applicability under unconstrained situations. In summary, our work makes the following contributions:
(i) We propose a robust heading estimation system independent of carrying positions and device orientations.
(ii) We point out that a large heading estimation error may be introduced by the PCA-based approach if the following three motion states cannot be detected and discriminated: random hand movements, carrying position transitions, and user turns.
(iii) We propose a novel detection and classification method to detect and discriminate the above three motion states.
(iv) We develop an outlier filtering algorithm to remove the outliers in the user heading estimation results.
(v) We report the evaluation of the proposed detection and classification technique based on extensive samples collected from four participants and compare the accuracy of our heading estimation approach with existing approaches.

In the rest of this paper, we first give an overview of the proposed robust heading estimation system using unconstrained smartphones in Section 2. Section 3 describes the detection and classification methods for the three special motion states in detail. Section 4 describes the robust user heading estimation approaches adapting to the special motion states caused by the unconstrained use of smartphones. The experimental evaluation is reported in Section 5. Conclusions are presented in the last section.

2. System Overview

Figure 1 shows an overview of the proposed robust user heading estimation system using unconstrained smartphones. The proposed system selects the user heading estimation scheme based on continuous motion state detection and classification results. The basic idea of the motion state detection and discrimination method is to exploit the distinguishable acceleration and device attitude change patterns of the motion states. The attitude change patterns include the accumulated pitch and roll angle changes and the horizontal device heading change, which can be derived from a continuous device attitude tracking model. For practical smartphone use, we assume that a position transition and a user turn cannot occur simultaneously. A similar assumption is made for the hand movement and user turn motion states. These assumptions are reasonable for two reasons. First, the probability of two motion states happening at the same time is rather low when the probability of each individual motion state is low. Second, performing a device carrying position transition or a significant hand movement while turning is unnatural and complicated for pedestrians.

For heading estimation scheme selection, during normal walking without any special motion state detected, we deploy our previous RMPCA approach, since the fundamental assumption of the PCA-based approach still holds. If random hand movements or carrying position transitions are detected, the user heading is assumed to be smooth and is calculated by averaging the heading estimates of the neighboring normal walking steps. If user turns are detected, the user heading is obtained by adding the horizontal heading change to the previous user heading. Besides, to avoid introducing estimation errors caused by undetected hand movements, the estimation results are further smoothed by an outlier filtering algorithm.

In order to describe the heading estimation problem, we define two coordinate systems, namely, the global coordinate system (GCS) and the device coordinate system (DCS). GCS is defined by the three axes $X_G$, $Y_G$, and $Z_G$ of the Earth coordinate system, which point east, north, and in the opposite direction of the gravity vector, respectively. The user heading estimation problem can be considered as seeking the expression of the walking direction in GCS. All raw inertial signals, including acceleration and angular velocity samples, are initially measured in DCS. DCS is defined by the axes $X_D$, $Y_D$, and $Z_D$. The $X_D$ axis points rightward and $Y_D$ points forward, while both axes are parallel to the device screen. The $Z_D$ axis points out of the screen and is the cross-product of $X_D$ and $Y_D$.

In this work, we only investigate the four most common device carrying positions [17-20], namely, held in hand (hand-held), against the ear during a phone call (phone-call), placed in the trouser pocket (in-pocket), and swinging in hand (swinging-hand). For pedestrian activities, we only consider two situations, namely, normal walking and standing, which may be easily recognized by their different acceleration patterns [14, 15]. For simplicity, we assume that the two activities are already accurately recognized.

3. Detection and Classification of the Three Special Motion States

Three special motion states, namely, hand movements, carrying position transitions, and user turns, may seriously corrupt the fundamental assumption of PCA-based approaches. Therefore, we develop a detection and classification method for the three motion states, as shown in Figure 2. We collect acceleration signals from the accelerometer and obtain the device attitude data from a continuous device attitude tracking model, which will be presented in Section 4.1. Then, the collected data are divided into small segments by a sliding window for subsequent detection and discrimination. We select a window twice the size of the user walking step period, with 50% overlap between consecutive windows, which is appropriate for detecting and discriminating the motion states accurately and in a timely manner.

As shown in Figure 2, four parameters, namely, the accumulated absolute acceleration change ΔAcc, the device heading change ΔYaw, the accumulated absolute pitch angle change ΔPitch, and the accumulated absolute roll angle change ΔRoll, are calculated over each window as

$$\Delta \mathrm{Acc}=\sum_{i=2}^{N}\left(\left|a_x^i-a_x^{i-1}\right|+\left|a_y^i-a_y^{i-1}\right|+\left|a_z^i-a_z^{i-1}\right|\right),$$

$$\Delta \mathrm{Yaw}=\left|\psi_{\mathrm{end}}-\psi_{\mathrm{start}}\right|,\qquad \Delta \mathrm{Pitch}=\sum_{i=2}^{N}\left|\theta_i-\theta_{i-1}\right|,\qquad \Delta \mathrm{Roll}=\sum_{i=2}^{N}\left|\phi_i-\phi_{i-1}\right|,$$

where $a_x^i$, $a_y^i$, and $a_z^i$ are the $i$th measured three-dimensional acceleration signals at DCS; $\theta_i$ and $\phi_i$ are the $i$th pitch and roll angle values of the smartphone at GCS, derived from the attitude tracking model in Section 4.1; $\psi_{\mathrm{start}}$ and $\psi_{\mathrm{end}}$ are the yaw values of the smartphone at GCS at the start time and end time of a sliding window; and $N$ is the number of samples in the window.
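To make the window computation concrete, the following Python sketch computes the four parameters from a window of accelerometer samples and the tracked attitude angles. It is an illustration under our own naming conventions, not the authors' implementation.

import numpy as np

def window_parameters(acc, pitch, roll, yaw):
    # acc: (N, 3) array of accelerations at DCS; pitch, roll, yaw: (N,) arrays of
    # attitude angles (rad) at GCS taken from the attitude tracking model.
    d_acc = np.sum(np.abs(np.diff(acc, axis=0)))        # accumulated absolute acceleration change
    d_pitch = np.sum(np.abs(np.diff(pitch)))            # accumulated absolute pitch angle change
    d_roll = np.sum(np.abs(np.diff(roll)))              # accumulated absolute roll angle change
    yaw_unwrapped = np.unwrap(yaw)                      # avoid +/-180 degree wrap-around
    d_yaw = abs(yaw_unwrapped[-1] - yaw_unwrapped[0])   # heading change over the window
    return d_acc, d_yaw, d_pitch, d_roll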

After obtaining the four parameters, we deploy them to detect and discriminate the three special motion states by comparing the parameters with related threshold values. Firstly, we detect and recognize the position transition motion by exploiting its significant acceleration change pattern. Figure 3 shows a typical example of acceleration signals when a position transition occurs between a pair of device carrying positions. Clearly, all kinds of position transitions involve substantial acceleration changes over all three axes. Moreover, the acceleration changes are significantly larger than those of normal walking, the user turn motion state, and most hand movements, which have a relatively small magnitude. This may be attributed to two reasons. First, position transitions involve significant relative displacement and subsequently introduce extra acceleration. Second, the components of the gravity vector projected onto the three axes change continuously due to the changing device attitude. Therefore, we set a threshold value for the accumulated absolute acceleration change ΔAcc, denoted as Th(ΔAcc). When the accumulated absolute acceleration change exceeds the threshold value, a position transition is assumed to occur. To avoid false detection, we deploy the Random Forest based carrying position classifier proposed in [16] to confirm whether the carrying position has changed. The carrying position classifier uses the mean, variance, maximum, and minimum over windowed acceleration samples as input features. For detailed information about the carrying position classifier, please see [16]. If the classifier indicates that a position transition has not occurred and the carrying position is not in-pocket, a hand movement with a large magnitude is detected instead.

Secondly, after excluding the position transition motion state, we detect and discriminate the remaining two motion states. We observe that user turns only render a device heading change in the horizontal plane, while most hand movements also introduce significant roll and pitch angle changes. Therefore, we set threshold values for the accumulated absolute pitch angle change ΔPitch and the accumulated absolute roll angle change ΔRoll, denoted as Th(ΔPitch) and Th(ΔRoll), respectively. When the accumulated absolute pitch angle change or roll angle change exceeds its threshold value and the carrying position is not in-pocket, a hand movement is assumed to occur.

Finally, if both carrying position transitions and hand movements are excluded, we infer that a user turn occurs when the device heading change ΔYaw exceeds a predefined threshold value Th(ΔYaw). Otherwise, no special motion state is detected and the heading estimation is achieved by the RMPCA approach. Generally, the choice of an appropriate threshold value differs among device carrying positions. For the in-pocket and swinging-hand positions, the threshold values are usually significantly larger than those of the hand-held and phone-call positions, due to the stronger intensity of body locomotion.
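The decision cascade described above can be summarized by the following Python sketch. The threshold values are illustrative placeholders (their units depend on the sensor scaling) and the position_changed callback stands in for the Random Forest carrying-position classifier of [16]; none of these values come from the paper.

def classify_motion_state(d_acc, d_yaw, d_pitch, d_roll,
                          carrying_position, position_changed,
                          th_acc=15.0, th_pitch=0.8, th_roll=0.8, th_yaw=0.5):
    # Returns 'position_transition', 'hand_movement', 'user_turn', or 'normal'.
    if d_acc > th_acc:
        if position_changed():                      # confirm with the carrying-position classifier [16]
            return 'position_transition'
        if carrying_position != 'in-pocket':
            return 'hand_movement'                  # large-magnitude hand movement
    if (d_pitch > th_pitch or d_roll > th_roll) and carrying_position != 'in-pocket':
        return 'hand_movement'
    if d_yaw > th_yaw:
        return 'user_turn'
    return 'normal'                                 # normal walking; heading estimated by RMPCA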

It should be noted that two false situations may happen, though their probabilities are rather low. One is that a hand movement may be missed when it only introduces a small amount of extra acceleration in the horizontal plane and has little impact on the pitch, roll, and yaw angles. The other is that a hand movement may be misclassified as a user turn when it only renders a device heading change in the horizontal plane and has little impact on the pitch and roll angles. Fortunately, large heading estimation errors caused by the two false situations may be avoided by an outlier filtering algorithm, which will be given in Section 4.3.

4. User Heading Estimation

The proposed user heading estimation approach consists of three strategies: averaging the heading estimates of the neighboring normal walking steps, adding the horizontal heading change to the user heading of the previous step, and the RMPCA approach. Based on the detection and discrimination results for the three special motion states, the most suitable strategy is selected for heading estimation, as shown in Figure 1. The first strategy is selected when hand movements or carrying position transitions are detected, while the second strategy is employed when user turns occur. During normal walking, the RMPCA approach is used as the last strategy. Besides, to avoid heading estimation errors caused by undetected hand movements, the estimation results of RMPCA are further smoothed by an outlier filtering algorithm.

Two basic assumptions are used for heading estimation. First, the initial heading offset between the user heading and the device forward heading is zero, since users need to hold the device in hand and gaze at it when they start the localization application. Second, the initial user heading and the related device attitude are assumed to be known a priori [21, 22]; they can be obtained by Global Positioning System (GPS) tracking when the user enters a building, by landmarks, or by WiFi localization.

For both the RMPCA approach and the detection of the three special motion states, continuous device attitude information is required. Therefore, we first present the device attitude tracking model based on quaternions in Section 4.1. Then, we describe the heading estimation strategies in detail in Section 4.2. An outlier filtering algorithm is developed in Section 4.3.

4.1. Attitude Tracking Model

An Extended Kalman Filter (EKF) is employed to fuse the inertial sensors and the magnetometer for device attitude tracking. We deploy the quaternion [23, 24] to describe the device attitude tracking model, since it can avoid the singularity problems. The relationship between GCS and the real-time device attitude is constructed by the quaternion vector and the related rotation matrix. The rotation matrix may transform any inertial signal represented at GCS into DCS as

$$\mathbf{v}^{D}(t)=\mathbf{C}(t)\,\mathbf{v}^{G}(t),$$

where $\mathbf{C}(t)$ is the rotation matrix of DCS with respect to GCS at time $t$, and $\mathbf{v}^{G}(t)$ and $\mathbf{v}^{D}(t)$ are the same vector represented at GCS and DCS, respectively. We may establish a one-to-one mapping between the rotation matrix and a quaternion vector:

$$\mathbf{C}=\left(q_0^{2}-\mathbf{q}_v^{T}\mathbf{q}_v\right)\mathbf{I}_3+2\,\mathbf{q}_v\mathbf{q}_v^{T}-2\,q_0\left[\mathbf{q}_v\times\right],$$

where $\mathbf{q}=\left[q_0,\ \mathbf{q}_v^{T}\right]^{T}$ is the normalized quaternion vector with the scalar part $q_0$ and the vector part $\mathbf{q}_v$, $\left[\mathbf{q}_v\times\right]$ is the skew-symmetric cross-product matrix of $\mathbf{q}_v$, and the time parameter $t$ is omitted for simplicity.
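A minimal Python sketch of this quaternion-to-rotation-matrix mapping follows, assuming a scalar-first quaternion; the helper names are ours.

import numpy as np

def skew(v):
    # skew-symmetric cross-product matrix [v x]
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def quat_to_rotmat(q):
    # q = [q0, qv] (scalar first); returns the GCS-to-DCS rotation matrix under this convention
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)                 # keep the quaternion normalized
    q0, qv = q[0], q[1:]
    return (q0**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * q0 * skew(qv)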

For device attitude estimation, we firstly construct the state equation of the EKF. Based on the kinematic law of rigid bodies [24], the discrete-time state model of quaternions is given by

$$\mathbf{q}_{k+1}=\exp\!\left(\frac{\Delta t}{2}\,\boldsymbol{\Omega}\!\left(\boldsymbol{\omega}_k\right)\right)\mathbf{q}_k,$$

where $\mathbf{q}_k$ and $\mathbf{q}_{k+1}$ are the quaternion vectors at time instants $t_k$ and $t_{k+1}$, $\Delta t$ is the time interval of the state model, $\boldsymbol{\omega}_k$ is the raw angular velocity vector measured at DCS, and $\boldsymbol{\Omega}(\boldsymbol{\omega})$ is given by

$$\boldsymbol{\Omega}\!\left(\boldsymbol{\omega}\right)=\begin{bmatrix}0 & -\boldsymbol{\omega}^{T}\\ \boldsymbol{\omega} & -\left[\boldsymbol{\omega}\times\right]\end{bmatrix}.$$

In order to construct the state equation of the EKF, we deploy a first-order linearized approximation of the state model:

$$\mathbf{q}_{k+1}\approx\boldsymbol{\Phi}_k\,\mathbf{q}_k+\frac{\Delta t}{2}\,\boldsymbol{\Xi}\!\left(\mathbf{q}_k\right)\mathbf{n}_\omega,$$

where the linearized transition matrix $\boldsymbol{\Phi}_k=\mathbf{I}_4+\left(\Delta t/2\right)\boldsymbol{\Omega}\!\left(\boldsymbol{\omega}_k\right)$, $\mathbf{I}_4$ is an identity matrix, and

$$\boldsymbol{\Xi}\!\left(\mathbf{q}\right)=\begin{bmatrix}-\mathbf{q}_v^{T}\\ q_0\mathbf{I}_3+\left[\mathbf{q}_v\times\right]\end{bmatrix},$$

where $q_0$ and $\mathbf{q}_v$ are the scalar and vector parts of the quaternion vector $\mathbf{q}$, $\mathbf{n}_\omega$ is the white Gaussian noise of the gyroscope outputs, and $\left[\,\cdot\,\times\right]$ is a standard vector cross-product operator. The real-time quaternion vector is determined once the initial value of the quaternion vector is known. Based on the two basic assumptions of our heading estimation approach, the related initial quaternion can be easily calculated from the initial user heading and device attitude that are already known.
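A first-order propagation step consistent with this model might look as follows; this is a sketch, and the variable names and the explicit renormalization are our additions.

import numpy as np

def omega_matrix(w):
    # 4x4 matrix Omega(w) for the quaternion kinematics (scalar-first convention)
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate_quaternion(q, w, dt):
    # one EKF prediction step of the quaternion state using the linearized transition matrix
    phi = np.eye(4) + 0.5 * dt * omega_matrix(w)
    q_next = phi @ q
    return q_next / np.linalg.norm(q_next)    # renormalize to keep a unit quaternion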

Then, we construct the measurement model of the EKF based on the observed acceleration and magnetic field samples measured at DCS:

$$\mathbf{z}_k=\begin{bmatrix}\mathbf{a}_k^{D}\\ \mathbf{m}_k^{D}\end{bmatrix}=\begin{bmatrix}\mathbf{C}\!\left(\mathbf{q}_k\right)\mathbf{g}^{G}\\ \mathbf{C}\!\left(\mathbf{q}_k\right)\mathbf{m}^{G}\end{bmatrix}+\begin{bmatrix}\mathbf{n}_a\\ \mathbf{n}_m\end{bmatrix}=h\!\left(\mathbf{q}_k\right)+\mathbf{n}_k,$$

where $\mathbf{a}_k^{D}$ and $\mathbf{g}^{G}$ are the observed acceleration sample at DCS and the related gravity vector at GCS, $\mathbf{m}_k^{D}$ and $\mathbf{m}^{G}$ are the observed magnetic field sample at DCS and the related magnetic field vector at GCS, and $\mathbf{n}_a$ and $\mathbf{n}_m$ are the white Gaussian measurement noises of the acceleration and magnetic field values. The covariance matrices of the acceleration and magnetic field measurements are set according to [25].

In order to construct the measurement equation of the EKF, we linearize the nonlinear measurement function $h(\mathbf{q})$ by computing the related Jacobian matrix:

$$\mathbf{H}_k=\left.\frac{\partial h\!\left(\mathbf{q}\right)}{\partial\mathbf{q}}\right|_{\mathbf{q}=\hat{\mathbf{q}}_k^{-}},$$

where $\hat{\mathbf{q}}_k^{-}$ is the a priori state estimate of the unavailable true state and $\hat{\mathbf{q}}_{k-1}$ is the EKF estimation result of the state available at the previous time instant.
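For illustration only, the following sketch forms the predicted measurements and a Jacobian approximation. The paper's analytic Jacobian is replaced here by numerical finite differences for brevity, and the quat_to_rotmat helper from the earlier sketch is assumed; the reference vectors g_ref and m_ref are the gravity and magnetic field vectors at GCS.

import numpy as np

def measurement_function(q, g_ref, m_ref):
    # predicted gravity and magnetic field at DCS, given the quaternion state
    C = quat_to_rotmat(q)                       # GCS-to-DCS rotation (sketch defined earlier)
    return np.concatenate([C @ g_ref, C @ m_ref])

def measurement_jacobian(q, g_ref, m_ref, eps=1e-6):
    # 6x4 finite-difference Jacobian of h(q) evaluated at the a priori estimate q
    h0 = measurement_function(q, g_ref, m_ref)
    H = np.zeros((6, 4))
    for i in range(4):
        dq = np.zeros(4)
        dq[i] = eps
        H[:, i] = (measurement_function(q + dq, g_ref, m_ref) - h0) / eps
    return H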

With the state equation, the measurement equation, and the related parameters, the EKF model for estimating the real-time quaternion vector is constructed. Detailed procedures for executing the EKF model may be found in [26]. Therefore, real-time device attitude tracking is achieved. We may calculate the real-time device yaw, pitch, and roll values at GCS from the elements of the rotation matrix $\mathbf{C}(\mathbf{q})$ as

$$\psi=\operatorname{atan2}\!\left(C_{12},\,C_{11}\right),\qquad \theta=-\arcsin\!\left(C_{13}\right),\qquad \phi=\operatorname{atan2}\!\left(C_{23},\,C_{33}\right),$$

where $C_{ij}$ denotes the element in the $i$th row and $j$th column of $\mathbf{C}(\mathbf{q})$.
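A sketch of this extraction in Python, assuming the ZYX (yaw-pitch-roll) Euler convention and a GCS-to-DCS rotation matrix:

import numpy as np

def euler_from_rotmat(C):
    # yaw (psi), pitch (theta), roll (phi) under the ZYX convention
    yaw = np.arctan2(C[0, 1], C[0, 0])
    pitch = -np.arcsin(np.clip(C[0, 2], -1.0, 1.0))
    roll = np.arctan2(C[1, 2], C[2, 2])
    return yaw, pitch, roll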

4.2. Heading Estimation Strategy Selection

During normal walking, we deploy the RMPCA approach for heading estimation. The user heading is estimated step by step. The walking steps are detected by a classic peak detection algorithm [27-29], which recognizes the peak acceleration caused by the unique heel strike during each walking step. We firstly project all acceleration samples within a walking step measured at DCS into GCS:

$$\mathbf{a}^{G}(t)=\mathbf{C}^{T}(t)\,\mathbf{a}^{D}(t),$$

where $\mathbf{a}^{G}(t)$ is the acceleration sample represented at GCS at time $t$, $\mathbf{a}^{D}(t)$ is the related acceleration sample measured at DCS, and $\mathbf{C}(t)$ is the related rotation matrix between GCS and DCS. All acceleration samples at GCS in the horizontal plane can be given as

$$\mathbf{A}_{h}=\left\{\mathbf{a}_{h}^{1},\mathbf{a}_{h}^{2},\ldots,\mathbf{a}_{h}^{N}\right\},$$

where $N$ is the number of acceleration samples within the walking step and $\mathbf{a}_{h}^{i}$ is the $i$th acceleration sample in the horizontal plane. Then, we deploy PCA and extract the first eigenvector $\mathbf{v}=\left[v_{E},\,v_{N}\right]^{T}$ of the acceleration samples in the horizontal plane as the ultimate global walking direction vector. The ambiguity problem of PCA [30] can be solved by exploiting the phase relationship between the user walking direction and the vertical acceleration [13]. The ultimate user heading estimation can be given as

$$\hat{\psi}_{\mathrm{user}}=\operatorname{atan2}\!\left(v_{E},\,v_{N}\right),$$

where $v_{E}$ and $v_{N}$ are the east and north components of the walking direction vector. For more details about RMPCA, we refer readers to our previous work [13].
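The core PCA step can be sketched as follows. The sign-ambiguity resolution of [13, 30] is omitted, and the names and the north-clockwise heading convention are our assumptions.

import numpy as np

def pca_heading(acc_horizontal):
    # acc_horizontal: (N, 2) array of east/north acceleration samples of one walking step at GCS
    centered = acc_horizontal - acc_horizontal.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]          # first eigenvector: direction of maximum variance
    east, north = v
    return np.arctan2(east, north)              # heading measured clockwise from north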

When the three special motion states occur, extra acceleration may be introduced into the acceleration in the horizontal plane. Therefore, the principal component of the acceleration in the horizontal plane may deviate from the user walking direction. For example, when a hand movement occurs, the first eigenvector of PCA deviates from the walking direction and moves towards the hand movement direction. Similar observations may be made when position transitions or user turns occur. Therefore, when hand movements or position transitions occur, instead of deploying the RMPCA approach, we obtain the user heading by averaging the user heading estimates of the neighboring normal walking steps:

$$\hat{\psi}_{k}=\frac{1}{M}\sum_{j\in\mathcal{N}_{k}}\hat{\psi}_{j},$$

where $\hat{\psi}_{j}$ is the user heading of the $j$th walking step and $\mathcal{N}_{k}$ denotes the $M$ neighboring normal walking steps of the $k$th step. Usually, without a user turn, the user heading is smooth and setting $M$ equal to four obtains a reasonable heading estimation result.
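A small sketch of this averaging; the paper simply averages the neighboring estimates, and the circular mean used here is our precaution against angle wrap-around.

import numpy as np

def average_heading(neighbor_headings):
    # neighbor_headings: headings (rad) of the M adjacent normal walking steps
    s = np.mean(np.sin(neighbor_headings))
    c = np.mean(np.cos(neighbor_headings))
    return np.arctan2(s, c)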

When a user turn occurs, we calculate the user heading of the current walking step by adding the user heading change to the user heading of the previous walking step. The horizontal device heading change is computed from the yaw angle change at GCS:

$$\Delta\psi=\sum_{k=1}^{K}\omega_{z}^{G,k}\,\Delta t,$$

where $\Delta\psi$ is the horizontal device heading change within one walking step, $\omega_{z}^{G,k}$ is the $k$th angular velocity component rotating around the $Z_G$ axis at GCS, $K$ is the total number of angular velocity samples within the step, and $\Delta t$ is the related sampling interval. The angular velocity sample at GCS can be obtained from the angular velocity sample measured at DCS and the related rotation matrix as

$$\boldsymbol{\omega}^{G}(t)=\mathbf{C}^{T}(t)\,\boldsymbol{\omega}^{D}(t),$$

where $\boldsymbol{\omega}^{G}(t)$ is the angular velocity sample vector with respect to GCS at time $t$, $\boldsymbol{\omega}^{D}(t)$ is the related sample at DCS, and $\mathbf{C}(t)$ is the related rotation matrix.
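The turn-handling strategy can be sketched as below; the rotation matrices come from the attitude tracking model, and the sign of the integrated yaw rate relative to the heading convention is an assumption of this illustration.

import numpy as np

def heading_after_turn(prev_heading, gyro_dcs, rotmats, dt):
    # gyro_dcs: list of angular velocity samples at DCS within the step;
    # rotmats: matching GCS-to-DCS rotation matrices; dt: sampling interval (s)
    d_yaw = 0.0
    for C, w in zip(rotmats, gyro_dcs):
        w_gcs = C.T @ w                 # rotate the sample into GCS
        d_yaw += w_gcs[2] * dt          # accumulate the component about the vertical axis
    return prev_heading + d_yaw         # sign depends on the chosen heading convention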

4.3. Outlier Filtering for Heading Estimation Results

This section describes a postprocessing outlier filtering algorithm to remove the outliers caused by undetected hand movements. This fault situation can be divided into two classes. The first class is that a hand movement is undetected and recognized as normal walking. The second class is that a hand movement is misclassified as a user turn. During normal walking without significant user turns, the heading estimation results of RMPCA are smooth and do not change significantly between neighboring steps. If a user turn occurs, the user heading of the current walking step may vary significantly from that of the previous step while remaining close to that of the next step if no further turn occurs. In contrast, if an undetected hand movement occurs, the heading estimation result may vary significantly from the heading estimates of both the previous step and the next step, as shown in Figure 4. Based on these observations, we develop an outlier filtering algorithm that detects the fault situation by comparing the differences of heading estimates between adjacent walking steps. If one heading estimate is significantly larger (smaller) than that of the previous walking step and smaller (larger) than that of the next walking step, and both differences exceed the related threshold values, an outlier is detected and removed. Then, we correct the user heading by averaging the estimates of the adjacent normal walking steps.
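A postprocessing sketch of this filter follows; the 30-degree threshold is an assumed placeholder, not a value from the paper.

import numpy as np

def filter_outliers(headings, threshold=np.deg2rad(30.0)):
    # headings: per-step heading estimates (rad); returns a smoothed copy
    headings = np.asarray(headings, dtype=float).copy()
    for k in range(1, len(headings) - 1):
        d_prev = headings[k] - headings[k - 1]
        d_next = headings[k + 1] - headings[k]
        # a jump away from the previous step followed by a jump back toward the next step
        if abs(d_prev) > threshold and abs(d_next) > threshold and np.sign(d_prev) != np.sign(d_next):
            headings[k] = 0.5 * (headings[k - 1] + headings[k + 1])
    return headings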

5. Evaluation

5.1. Experimental Setup

We performed the user heading estimation experiments in an indoor test environment, as seen in Figure 5. In order to test the heading estimation performance on both straight and nonstraight walking paths, the test path consisted of two symmetric parabolas, one straight line, and a half circle, with a total length of 76.9 m. Four subjects participated in the experiments with a smartphone. Before each experimental run, each subject initially held the phone in hand for a few seconds to start the developed user heading estimation application. We also performed all necessary calibrations to make the measurement outputs of all inertial sensors built into the smartphone as precise as possible. As in many other works [31, 32], we assume that the initial user heading was known. To label the ground truth, we also used a camera to record the entire walking path of each subject. The user heading estimates were obtained and compared with the true values for each walking step.

In order to study the classification accuracy of the proposed detection and discrimination algorithm, each subject walked along the path for one hundred walking steps, including seventy steps of normal walking and thirty steps equally distributed among the three special motion states. Each subject repeated the procedure ten times.

In order to study the effects of the three special motion states on heading estimation accuracy individually, we compared the heading estimation errors of RMPCA and the proposed robust heading estimation approach when each special motion state occurred individually. For hand movements, various kinds of hand locomotion with different magnitudes were performed during walking. For carrying position transitions, we performed all twelve kinds of position transitions among the four classic device carrying positions. For user turns, we performed turns varying from 20 degrees to 180 degrees. Each special motion state was carried out one thousand times. We assume that each special motion state is surrounded by normal walking steps, as during practical pedestrian walking.

5.2. Classification Accuracy Results

Table 1 shows the classification results of the proposed detection and discrimination algorithm for the three motion states. The results show that the proposed algorithm can classify all three special motion states with high accuracy; an average classification accuracy of 97.6% is obtained. For hand movements, though the probability is rather low, some hand movements may involve only small pitch and roll angle changes while the yaw angle change exceeds the threshold. This kind of hand movement may therefore be misclassified as a user turn. Fortunately, these false classifications do not ultimately affect the user heading estimation performance, since their negative effects can be eliminated by the subsequent outlier filtering algorithm of the proposed user heading estimation approach. For carrying position transitions, the accumulated absolute acceleration changes are always large enough to distinguish them from the other motion states, and the confusion between position transitions and the other motion states can be neglected.

5.3. Heading Estimation Results

Figures 6 and 7 show the mean and standard deviation of absolute estimation errors of RMPCA approach and the proposed approach. The heading estimation errors were calculated over the walking steps when the special motion states occurred. Compared with RMPCA approach, the proposed approach reduces the mean absolute estimation error by 43.1 percent (10.9 degrees), 48.0 percent (15.4 degrees), and 67.4 percent (38.2 degrees), for hand movement, user turn, and position transition motion states, respectively. Compared with RMPCA approach, the proposed approach reduces the standard deviation of absolute estimation error by 35.8 percent (2.4 degrees), 40.7 percent (3.3 degrees), and 46.3 percent (4.4 degrees), for hand movement, user turn, and position transition motion states, respectively. Significant absolute heading estimation error reduction can be achieved by the proposed robust heading estimation approach. Among three special motion states, the error reduction of the position transition motion state is the largest, since the extra acceleration signals introduced by a position transition motion are always significantly larger than those of the other two motion states. For the proposed robust heading estimation approach, when three special motion states are detected and accurately discriminated, an appropriate heading estimation scheme is selected rather than using RMPCA.

Furthermore, we study the overall heading estimation accuracy of the proposed robust heading estimation approach with all three special motion states involved. Each subject walked along the whole walking path, which is divided into four subpaths by different colors, as seen in Figure 5. The device carrying position was changed to one of the remaining positions with equal probability once the subpath changed. The whole walking path required about 128 walking steps for each subject. Each subject consciously made random hand movements fifteen times over the whole walking path. Each subject repeated the procedure twenty times. We performed user heading estimation for each walking step, so that a total of more than ten thousand samples could be used. We also implemented and compared the proposed approach without outlier filtering, the RMPCA approach [13], and the uDirect approach [11].

Figure 8 shows the cumulative error distribution of the compared heading estimation approaches. As seen in Figure 8, the 75th percentile absolute error of the proposed approach is 13.8 degrees, while those of the proposed approach without outlier filtering, RMPCA, and uDirect are 14.2, 26.2, and 34.4 degrees, respectively. The 90th percentile absolute error of the proposed approach is 24.9 degrees, while those of the proposed approach without outlier filtering, RMPCA, and uDirect are 28.1, 42.7, and 54.9 degrees, respectively. As seen in Figure 9, compared with the proposed approach without outlier filtering, RMPCA approach, and uDirect approach, the proposed approach reduces the mean absolute estimation error by 13.9 percent (1.5 degrees), 46.9 percent (8.2 degrees), and 59.4 percent (13.6 degrees), respectively. For standard deviation of absolute estimation error, the proposed approach reduces it by 13.6 percent (0.6 degrees), 34.5 percent (2.0 degrees), and 38.7 percent (2.4 degrees), respectively.

The proposed approach achieves a significant heading estimation accuracy improvement over the compared approaches. This is because, when the three special motion states occur, the proposed approach can detect and discriminate them accurately and in a timely manner and then select the related suitable heading estimation strategy. The traditional RMPCA and uDirect approaches may produce large heading estimation errors, since extra acceleration caused by irregular body locomotion may violate their fundamental assumptions. These large heading estimation errors can largely be avoided by the proposed approach. If one of the three special motion states is detected, the related optimal heading estimation strategy is carried out. For hand movements and position transitions, the user heading estimates are obtained by averaging the heading estimates of the neighboring normal walking steps. For user turns, the user headings are estimated by adding the yaw angle change to the user heading of the previous step. Besides, the outlier filtering algorithm also slightly improves the heading estimation accuracy, since outliers in the heading estimates are removed.

6. Conclusions

This paper proposes a robust user heading estimation approach adapting to three special motion states, namely, random hand movements, position transitions, and user turns. When these three motion states occur, traditional approaches exploiting acceleration patterns may produce large heading estimation errors. Therefore, we first develop a motion state detection and discrimination algorithm to determine the special motion state. Then, the most suitable strategy is selected for user heading estimation. Finally, an outlier filtering algorithm is developed to smooth the user heading estimation results. Experimental results show that the proposed robust heading estimation approach improves the user heading estimation accuracy significantly. Compared with the RMPCA and uDirect approaches, the proposed approach reduces the mean absolute user heading estimation error by 46.9 percent (8.2 degrees) and 59.4 percent (13.6 degrees), respectively.

Data Availability

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research is supported by National Natural Science Foundation of China (Grants Nos. 61301132, 61300188, 61671168, and 61301131) and the Fundamental Research Funds for the Central Universities (Grants Nos. HEUCFJ180801 and HEUCF180801).