Machine Learning with Conexio Stratus and Edge Impulse

banner

Preface

The concept of Tiny Machine Learning (TinyML) has been around for a while, but only recently, with the popularization of more efficient frameworks such as TensorFlow Lite and platforms like Edge Impulse, have we started to see more applications being easily created and deployed on low-power, resource-constrained embedded devices. Edge Impulse is the leading development platform for machine learning on edge devices, and it is free for developers.

This tutorial…

Walks through how to build and run machine learning models using the Stratus kit and the Edge Impulse Studio. The Conexio Stratus from Conexio Technologies is a versatile cellular IoT platform built around Nordic Semiconductor's nRF9160 with an Arm Cortex-M33 core. With 1 MB of flash, 256 KB of RAM, and 500 MB of cellular data, it is a great fit for edge computing and for executing machine learning models right on the device, without needing any external MCU or carrier boards.

The complete source code for this tutorial including the accelerometer data injector, ML model, and classification code can be found in this GitHub repo.

Required Toolchains

This tutorial assumes you have already installed and set up the nRF Connect SDK v1.7.0 or later, the main toolchain required for building and compiling applications for the Stratus device. If not, please refer to this tutorial to get up and running with the Stratus platform.

On the software side, create an account and a new project with Edge Impulse, and install the edge-impulse-cli by running the following command in a terminal:

npm install -g edge-impulse-cli

The Edge Impulse CLI is used to control local devices, acts as a proxy to synchronize data for devices that don’t have an internet connection, and uploads and converts local files.

Note: the edge-impulse-cli requires Node.js v12 or later.

So let’s dive in.

Forwarding Data from Stratus to Edge Impulse

Before we can start training or generating any ML models, we need to collect some data and create labels. For this, we will use the Edge Impulse data forwarder to relay sensor data from the Stratus to the Edge Impulse Studio over serial. The Stratus kit already comes with an onboard LIS2DH accelerometer from STMicroelectronics. We will use the accelerometer data to create a simple classification model that analyzes movement over a period of time using one of the following gestures:

  • Idle (no motion)
  • Circle
  • Motion in the shape of the letter “W”

Overview of the Data Forwarder

The sample data forwarder application periodically performs the following operations:

  • Samples and reads the accelerometer sensor data for X, Y, and Z axes at a pre-defined sampling frequency.
  • Forwards this data to the Edge Impulse studio through the UART interface using the protocol specified by Edge Impulse’s data forwarder.

Building and Running the Data Forwarder Application

The data forwarder sample can be found under:

conexio_stratus_firmware/samples/edge_impulse/data_forwarder

To compile the application, open a terminal window in the application directory and issue the following west command:

west build -b conexio_stratus_ns

If you do not want to recall the west commands every time, we have also included a Python script (generate_firmware.py) to generate the Stratus device firmware. Simply run the following in the terminal and it will take care of the rest:

python3 ./generate_firmware.py

Once the application has compiled successfully, connect the Stratus device and put it into DFU mode.

Flash the compiled binary using newtmgr:

newtmgr -c serial image upload build/zephyr/app_update.bin

Next, open up a serial console with a baud rate of 115200 and reset the Stratus device. The following serial UART output will be displayed in the terminal indicating that the Stratus has started sampling the accelerometer sensor.

[2021-11-28 18:49:24] SPM: NS image at 0x20200
[2021-11-28 18:49:24] SPM: NS MSP at 0x20015d78
[2021-11-28 18:49:24] SPM: NS reset vector at 0x23945
[2021-11-28 18:49:24] SPM: prepare to jump to Non-Secure image.
[2021-11-28 18:49:24] -0.04,-0.04,10.07
[2021-11-28 18:49:24] -0.04,0.08,10.00
[2021-11-28 18:49:24] 0.00,0.08,10.11
[2021-11-28 18:49:25] -0.11,0.08,10.00
[2021-11-28 18:49:25] -0.08,0.08,9.92
[2021-11-28 18:49:25] -0.04,0.04,10.00
[2021-11-28 18:49:25] -0.08,0.04,10.04
[2021-11-28 18:49:25] -0.15,0.19,10.15
[2021-11-28 18:49:25] -0.08,0.00,10.07
[2021-11-28 18:49:25] -0.11,-0.04,10.04
[2021-11-28 18:49:25] -0.11,0.00,10.00
[2021-11-28 18:49:25] -0.08,0.00,10.04
[2021-11-28 18:49:25] -0.08,0.00,9.96
[2021-11-28 18:49:25] -0.08,0.04,10.11
[2021-11-28 18:49:26] -0.04,0.00,10.07
[2021-11-28 18:49:26] -0.08,0.08,9.96
[2021-11-28 18:49:26] -0.08,0.00,10.00

The next step is to forward these readings to the Edge Impulse Studio to capture the various gestures that we want to label and classify. To do so, we will start the Edge Impulse data forwarder using the command-line tool. Run the following command from a terminal:

edge-impulse-data-forwarder

and follow the step-by-step prompts to log into your Edge Impulse account.

fig 1

After logging in, select a project and name the three sensor axes x, y, and z. This matches the format in which data is streamed from the accelerometer sensor.

Now head over to the Edge Impulse Studio; under Devices, you should see your device with a green status, indicating that it is active and communicating with the Studio.

fig 2

Next, go to the Data acquisition tab. In the "Record new data" window, select the Stratus under "Device", set the label for your gesture, and set the "Sample length (ms)". For this example, we have chosen the label "circle" with a sample length of 2000 ms (2 seconds). Once you are happy with the configuration, click "Start sampling" to acquire the raw data from the Stratus device.

fig 3

At this point, the Edge Impulse Studio sends a command down to the data forwarder (CLI) running on your machine and instructs it to capture a 2 s sample from the Stratus accelerometer, as shown:

fig 4

Once the sample is acquired and uploaded, you will observe the received sample data on the raw data graph as follows:

fig 5

Before building and training your machine learning model, capture enough data for each gesture that you want to classify in your application. The richer the dataset, the better your ML model will be. For this example, we have collected over 120s of data.

Building and Training the ML Model

After collecting enough data, you're now ready to design and build your ML model in the Edge Impulse Studio. To do so, under the Impulse design menu, click Create impulse. An impulse takes the raw data, slices it up into smaller windows, uses signal processing blocks to extract features, and then uses a learning block to classify new data.

For this tutorial, we'll use the "Spectral Analysis" processing block. This block applies a filter, performs spectral analysis on the signal, and extracts frequency and spectral power data. To add it, click Add a processing block, then select and add Spectral Analysis.

fig 6

Then, for classification, we’ll use the “Classification (Keras)” block that takes these spectral features and learns to distinguish between the three classes (idle, circle, “W”).

fig 7

Once the impulse pipeline is complete, click Save Impulse.

fig 8

Next, under the “Spectral features” tab we will keep the default parameters and click Save parameters.

fig 9

This will take you to the Training set window. Click Generate features to start the process. Once the features are generated, the Feature explorer will load, as shown:

fig 10

The plot will show all the extracted features against all the generated windows. Here you can pan, zoom, and scroll around the plot to drill into your sensor data.

Once you are happy, it's time to train a neural network. Click the "NN Classifier" tab, set the Number of training cycles to 300 and the Learning rate to 0.0005, and then click Start training. Training may take a while, so sit back and relax while the Edge Impulse Studio does all the heavy lifting.

fig 11

Once training is complete, the performance of your model will be displayed, together with the on-device performance.

fig 12

Hip hip hooray! You have now successfully generated and trained your ML model.

Classifying New Data

We will now test how well the trained model works on new data. For this, we will use the Live classification feature of the Edge Impulse Studio. Make sure your Stratus device is connected and the Edge Impulse data forwarder CLI is running. In the Edge Impulse Studio, click the Live classification menu, select the Stratus device, set the "Sample length", click Start sampling, and start performing gestures. Afterward, you will get a full report of what the network thinks you did, evaluated against the model.

fig 13

Deploying the Model to Conexio Stratus

With a working model in place, we are now ready to deploy it back to the Stratus device. This lets the model run locally on the embedded device, without internet connectivity.

To export the model, click Deployment in the left-hand menu. Then, under Create library, select C++ library and click Build. Edge Impulse will build a model package containing the Edge Impulse C++ SDK, your impulse, and all the required external dependencies. When prompted, download the .zip file and place its contents in the folder

conexio_stratus_firmware/samples/edge_impulse/standalone-inferencing

The final standalone-inferencing folder structure should now look like this:

standalone-inferencing
 ├── CMakeLists.txt
 ├── edge-impulse-sdk
 ├── model-parameters
 ├── prj.conf
 ├── README.md
 ├── sample.yaml
 ├── src
 ├── tflite-model
 └── utils

At this point, we need to make some minor changes to the inference application so that it works with the Edge Impulse SDK. To verify that the Zephyr application performs the same classification when running locally on your board, we need to feed it the same raw inputs that Live classification received for a given timestamp. To do so, click the Copy to clipboard button next to "Raw features". This copies the raw values of that validation sample, before any signal processing or inferencing has happened.

fig 14

Next, open the src/main.cpp in the conexio_stratus_firmware/samples/edge_impulse/standalone-inferencing directory and paste the raw features inside the static const float features[] definition.

For example:

static const float features[] = {
    0.4100, 0.5600, 0.5100, 0.5200, ...
};

Then run west build -b conexio_stratus_ns to build the application.

Once the application has compiled successfully, connect the Stratus device and put it into DFU mode.

Flash the compiled firmware using newtmgr:

newtmgr -c serial image upload build/zephyr/app_update.bin

Open up a serial console, set the baud rate to be 115200, and reset the Stratus device. The serial UART output will be displayed in the terminal showing the signal processing pipeline and the results of the classification. This output should match the values that you got in the Edge Impulse Studio under Live classification.

Congratulations! 🎉

Conclusion

This is just the starting point for running ML models on the edge with the Conexio Stratus cellular IoT kit. In upcoming tutorials, we will demonstrate how to perform continuous classification on the Stratus and send the results to the cloud over cellular connectivity.

If you want to create and connect your own IoT applications without having to worry about cellular data or contracts, grab your own Conexio Stratus kit today and support open hardware. 🙏

Thank you and happy hacking!