1 Introduction

Wearable sensors that combine an accelerometer and a gyroscope have existed for decades, but have only recently become feasible for use in population research on physical activity and sleep. Population research requires that the wearable sensors are worn for at least a week, while high-resolution data is logged for later offline analysis. The R package GGIR originated in the 2010s from the need to process accelerometer-only data. However, now that gyroscope data is becoming available, it is important that GGIR is expanded to also facilitate gyroscope data.

For health researchers with no background in sensor technology it may be good to know that a gyroscope captures angular velocity. This information can help us to achieve a more accurate estimate of the sensor’s magnitude of acceleration and its orientation in space. The magnitude of acceleration is related to the force caused by muscle contractions, which in turn relates to energy expenditure. Accelerometer-only applications have always had the fundamental limitation that they are poor at estimating the magnitude of acceleration while the sensor is moving and rotating relative to gravity; see also the work by my colleagues and me in our 2013 PLOS ONE publication. A gyroscope can help to address this problem.

Extracting this improved estimate of sensor acceleration and orientation is a non-trivial task, as we need to combine information from both the accelerometer and gyroscope signals, a process typically referred to as sensor fusion.

2 Requirements

From the perspective of typical GGIR use-cases the following criteria apply to a possible sensor fusion algorithm:

  • The algorithm should be fast enough to process a week's worth of 100 Hertz sensor data. A couple of minutes of processing is fine, but it would be unacceptable if processing takes hours.
  • The primary focus for GGIR is to quantify acceleration related to movement. A more elaborate kinematic description including estimates of orientation relative to the poles (called yaw angle) seems less urgent within the context of current GGIR use-cases.

3 Initial exploration of existing algorithms

Various sensor fusion algorithms have been proposed. I spent a couple of days exploring the algorithm proposed by Luinge and Veltink in 2005 and the algorithm proposed by Madgwick in 2009. However, I struggled to get them to run within an R environment at an acceptable speed. Further, I found it difficult to translate their output to an estimate of local (gravity-free) acceleration. As this has only been a side project for me, I decided to develop my own algorithm tailored to the needs of GGIR. In this way I challenged myself to learn about the topic, and it gave me a starting point for facilitating sensor fusion in GGIR. GGIR is an open-source project, so I welcome help and/or financial support to further improve this functionality. Please get in touch if you are interested: .

4 Algorithm description

In this chapter I describe the algorithm as implemented in the GGIR function separategravity.

4.1 Input

The input arguments of the function are:

  • acc, a three-column matrix with the accelerometer values in g-units.
  • gyr, a three-column matrix with the gyroscope values in radians per second.
  • sf, the sample frequency in Hertz. The algorithm assumes a constant sample frequency throughout the recording. Please note that GGIR has a function resample to aid in converting irregularly sampled data to a regular sample frequency. A minimal sketch of how these inputs could be prepared is shown below.
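
The following sketch is purely illustrative; the data frame imu and its column names are assumptions and not a GGIR requirement:

# Hypothetical input preparation: imu is assumed to hold accelerometer columns
# ax, ay, az (in g-units) and gyroscope columns gx, gy, gz (in radians per second),
# sampled at a constant 100 Hertz.
acc = as.matrix(imu[, c("ax", "ay", "az")])
gyr = as.matrix(imu[, c("gx", "gy", "gz")])
sf = 100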

4.2 Coordinate system

All calculations will be done within the local Cartesian coordinate system of the sensor. I am not attempting to calculate the roll, pitch or yaw of the sensor, because calculating those would require transitioning between multiple coordinate systems. For example, pitch derived from a gyroscope is expressed in the local coordinate system of the sensor, while pitch derived from an accelerometer is relative to the orientation of gravity. Instead, I will derive the orientation of gravitational acceleration relative to the local coordinate system of the sensor, and use the angular velocity vector in that same coordinate system to rotate the gravity orientation during time segments involving movement. By tracking the orientation of gravity, which I will refer to as the gravity vector, within the local coordinate system of the sensor over time, I will be able to subtract gravity from the original acceleration signals.
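
To make this concrete, here is a purely illustrative example with made-up numbers: for a sensor lying still with its z-axis pointing up, the gravity vector expressed in the sensor's own coordinate system is approximately (0, 0, 1) in g-units, and the local acceleration is what remains after subtracting it from the measured acceleration.

# Illustration only (made-up numbers): a static sensor with its z-axis pointing up
acc_sample = c(0.02, -0.01, 1.03) # measured acceleration in g-units
gravity_vector = c(0, 0, 1) # gravity expressed in the sensor's coordinate system
acc_local = acc_sample - gravity_vector # movement-related (local) acceleration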

4.3 Angular velocity vector

The gyroscope signals represent angular velocity in radians per second per sensor axis. As I want to rotate the gravity vector, I need to split the gyroscope signals into an angular velocity vector with a magnitude (theta in the code) per time step and three coordinates for the vector orientation (object OV in the code) relative to the sensor's coordinate system.

N = nrow(gyr) # number of samples
theta = sqrt(rowSums(gyr ^ 2)) # magnitude of angular velocity around orientation vector OV
OV = matrix(0, N, 3) # orientation vector per time step
nozero = which(theta > 0)
OV[nozero,] = gyr[nozero,] / theta[nozero] # normalise to unit length

Next, theta is divided by the sample frequency to get radians per time step.

theta = theta / sf

4.4 Defining fusion weights

Weights are used to decide, for each time step, the extent to which we want to rely on the accelerometer or on the gyroscope for assessing the gravity orientation. For this I use the high-frequency component of the acceleration signal as an indicator of movement. However, later on I will also need a low-pass filtered signal as an indicator of gravity orientation during static (non-movement) periods. So, to save computational time, I apply a low-pass filter once and subtract the result from the original signal to obtain the high-pass filtered signals.

lb = 0.5 # cut-off frequency for the filter in Hertz
lowpf = signal::butter(n=4,c(lb/(sf/2)),type=c("low")) #creating filter coefficients
acc_lf = acc_hf = matrix(NA, nrow(acc), ncol(acc)) # initialize matrices
for (i in 1:3) {
    # note: acc_lf will also be used for orientation of gravity in the
    # absence of movement further down, so this calculation serves two purposes
    acc_lf[,i] <- signal::filter(lowpf, acc[,i]) # low-pass filtered
}
acc_hf <- acc - acc_lf # high-pass filtered

Weights are expressed on a scale between 0 and 1, where 0 indicates full dependence on the accelerometer and 1 indicates full dependence on the gyroscope. When the summed absolute high-pass filtered acceleration of the three axes is less than 0.04 g the weight is set to 0. The threshold of 0.04 g reflects the assumed combined noise of the three acceleration signals. The following code ramps the weight linearly from 0 to 1 between 0.04 and 0.05 g; when the summed acceleration is above 0.05 g the weight is set to 1.

weight = pmin(pmax((rowSums(abs(acc_hf)) - 0.04),0) / 0.01, 1) 
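
To illustrate the ramp with a few made-up values of the summed absolute high-pass filtered acceleration:

# Illustration only: summed |acc_hf| values of 0.03, 0.045 and 0.06 g
# map to weights 0, 0.5 and 1, respectively.
pmin(pmax(c(0.03, 0.045, 0.06) - 0.04, 0) / 0.01, 1) # returns 0.0 0.5 1.0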

4.4.1 Maximum weight value

By setting the maximum weight value to 1-(0.5/sf) I ensure that there is always a negative exponential drift from the gyroscope estimate towards the accelerometer estimate. This is to counteract possible drift in the gyroscope-derived orientation change. Here, I make the assumption that, even under dynamic conditions, the low-pass filtered acceleration signal averaged across multiple seconds provides some indication of the average orientation of the sensor during that period. This would not hold true if you attach the sensor to a spinning wheel, but for human movement I consider it a fair assumption. Note that the maximum weight value is expressed relative to the sample rate, to ensure that the process behaves similarly across data sets with different sample rates.

maxweight = 1-(0.5/sf)
weight = ifelse(weight > maxweight, yes = maxweight, no = weight)
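
To give a feel for what this cap implies, here is a small numerical illustration (the sample frequency of 100 Hertz is just an example): the maximum weight is then 0.995, so during continuous movement the influence of the gyroscope-based estimate decays by a factor of roughly 0.61 per second, corresponding to a time constant of about two seconds.

sf_example = 100 # hypothetical sample frequency in Hertz
maxweight_example = 1 - (0.5 / sf_example) # 0.995
maxweight_example ^ sf_example # gyroscope influence left after one second, approximately 0.61
-1 / (sf_example * log(maxweight_example)) # corresponding time constant, approximately 2 seconds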

4.4.2 Minimum non-zero weight value

By setting the minimum non-zero weight value to 0.01, and rounding all weights below 0.01 down to 0, I ensure that the fusion step is skipped when the gyroscope contribution is small. This speeds up the algorithm.

weight = ifelse(weight < 0.01, yes = 0, no = weight)

4.5 Rotation vector

To rotate the gravity vector we need to convert theta and OV into a rotation matrix. Note that this is a standard textbook procedure (the axis-angle, or Rodrigues, rotation formula):

RotArr = array(dim = c(N, 3, 3)) # one rotation matrix for every time step
costheta = cos(theta)
sintheta = sin(theta)
RotArr[, 1, 1:3] = cbind(costheta + OV[,1]^2 * (1 - costheta),
                         OV[,1] * OV[,2] * (1 - costheta) - OV[,3] * sintheta,
                         OV[,1] * OV[,3] * (1 - costheta) + OV[,2] * sintheta)
RotArr[, 2, 1:3] = cbind(OV[,2] * OV[,1] * (1 - costheta) + OV[,3] * sintheta,
                         costheta + OV[,2]^2 * (1 - costheta),
                         OV[,2] * OV[,3] * (1 - costheta) - OV[,1] * sintheta)
RotArr[, 3, 1:3] = cbind(OV[,3] * OV[,1] * (1 - costheta) - OV[,2] * sintheta,
                         OV[,3] * OV[,2] * (1 - costheta) + OV[,1] * sintheta,
                         costheta + OV[,3]^2 * (1 - costheta))
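
As a quick sanity check of the formula above (illustration only, not part of the GGIR code): for a rotation of pi/2 radians around the z-axis, i.e. OV = (0, 0, 1), the general expression reduces to the familiar planar rotation matrix, which maps the x-axis onto the y-axis.

th = pi / 2 # hypothetical rotation angle in radians
R_check = matrix(c(cos(th), -sin(th), 0,
                   sin(th),  cos(th), 0,
                   0,        0,       1), nrow = 3, byrow = TRUE)
round(R_check %*% c(1, 0, 0), 10) # approximately (0, 1, 0)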

4.6 Rotating the gravity vector

As a final step, I loop over the non-zero weight values to rotate the gravity vector within the local coordinate system. Note that this is an iterative process: each step depends on the previous step, which is why I use a loop and not an apply statement in R. Nonetheless, if you have suggestions on how I could speed this up then I would love to hear from you. Object acclocal will hold the local (gravity-free) acceleration of the accelerometer, and object gvector can be used as an improved estimate of the orientation of the sensor relative to the horizontal plane / direction of gravity.

gvector = acc_lf # initialize gvector with the low-pass filtered acceleration
weight_not_zero = which(weight > 0)
# skip the first sample, because there is no previous gravity estimate to rotate
if (length(weight_not_zero) > 0 && weight_not_zero[1] == 1) weight_not_zero = weight_not_zero[2:length(weight_not_zero)]
for (j in weight_not_zero) {
  # weighted combination of the rotated previous gravity estimate (gyroscope)
  # and the low-pass filtered acceleration (accelerometer)
  gvector[j,] = (crossprod(RotArr[j-1,,],  gvector[j-1,]) * weight[j]) +
    (acc_lf[j,] * (1-weight[j]))
}
acclocal = acc - gvector
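
An optional, purely illustrative way to inspect the result is to check that the magnitude of the estimated gravity vector stays reasonably close to 1 g throughout the recording (this check is not part of the function):

summary(sqrt(rowSums(gvector ^ 2))) # magnitude of the estimated gravity vector in g-units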

5 Using the algorithm with GGIR

To use the function separategravity on its own, do:

library("GGIR")
# acc: a 3-column matrix with your accelerometer data
# gyr: a 3-column matrix with your gyroscope data
# sf: sample frequency in Hertz
output = separategravity(acc = acc, gyr = gyr, sf = sf)
acclocal = output$acclocal
gvector = output$gvector

To use separategravity within the context of GGIR do:

library("GGIR")
datadir = "/your/data/directory" # with .cwa files from the AX6 sensor (Axivity Ltd)
outputdir = "/your/output/directory"
g.shell.GGIR(datadir=datadir, outputdir=outputdir, acc.metric="sgAccEN")

Note that GGIR function g.shell.GGIR is the central interface to all GGIR functionality. Use this function as you would with accelerometer-only data and the code will automatically use the gyroscope data. For the moment the sensor fusion only works for .cwa data collected with AX6 sensors by Axivity Ltd. Here, GGIR will extract the metric sgAccEN as an epoch-level summary of the data, which represents the average magnitude of acceleration after separation of the gravitational component for that epoch. To skip the procedure and use accelerometer-only data, set argument do.sgAccEN = FALSE, for example as shown below.
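
Using the same directories as in the example above:

g.shell.GGIR(datadir = datadir, outputdir = outputdir, do.sgAccEN = FALSE)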

6 Reflections on performance and next steps

The separategravity function is able to process 24 hours of 100 Hertz gyroscope and accelerometer data in 30 seconds (Ubuntu 20, 8 cores, 16 GB RAM), which seems promising. I have been doing some pilot experiments to get an initial idea of whether the algorithm provides plausible output, for example by swinging the AX6 like a pendulum in the vertical plane and comparing the amplitude of the rotation in the data with the amplitude I applied. However, a thorough benchmark test has not been done yet. If you are interested in doing or supporting such work then please let me know: . I also welcome suggestions and support for implementing other fusion algorithms in GGIR.
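
To give an idea of how such a pendulum check could be scripted, here is a minimal sketch. It assumes the output object gvector from separategravity and that the swing mainly tilts the sensor's z-axis away from the vertical; neither assumption is part of the GGIR code.

gnorm = gvector / sqrt(rowSums(gvector ^ 2)) # normalise the gravity vector to unit length
angle_z = acos(pmin(pmax(gnorm[, 3], -1), 1)) * 180 / pi # angle between z-axis and gravity in degrees
diff(range(angle_z)) # compare this range with the swing amplitude that was applied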