
· 12 min read

Background

K3s is a lightweight, open source Kubernetes distribution developed by SUSE. It can run at the edge where computing power is limited, which makes it ideal for IoT scenarios.

As a Kubernetes-native open source IoT development framework, Shifu abstracts each IoT device into a K8s pod and exposes its capabilities to applications as APIs.

Overall Architecture

How-to Guide

Prerequisites

Software

name       version
K3s        v1.24.4+k3s1
WireGuard  v1.0.200513

Hardware

name         architecture     CPU         RAM           HDD   network
master node  amd64/arm/arm64  at least 2  at least 2GB  32GB  networked, with a public IP or otherwise reachable by worker nodes
worker node  amd64/arm/arm64  at least 1  at least 2GB  16GB  networked, with access to the master node

Steps

  1. Deploy the WireGuard server on the server side

    a. Install

    https://github.com/angristan/wireguard-install

    b. Run the following command:

    curl -O https://raw.githubusercontent.com/angristan/wireguard-install/master/wireguard-install.sh
    chmod +x wireguard-install.sh
    ./wireguard-install.sh

    c. Enter the public IP of the server and add users as needed. The following is actual output; adjust the values for your environment.

    root@localhost:~# ./wireguard-install.sh
    Welcome to the WireGuard installer!
    The git repository is available at: https://github.com/angristan/wireguard-install

    I need to ask you a few questions before starting the setup.
    You can leave the default options and just press enter if you are ok with them.

    IPv4 or IPv6 public address: 192.168.0.1 # Change this to your public IP, you can get it by "curl ip.sb"
    Public interface: ens5
    WireGuard interface name: wg0
    Server's WireGuard IPv4: 10.66.66.1 # IPv4 address of wireguard server interface, use the default value if there is no special requirement
    Server's WireGuard IPv6: fd42:42:42::1 # IPv6 address of the wireguard server interface, use the default value if there is no special requirement
    Server's WireGuard port [1-65535]: 64191 # Change this to your port, you need to allow UDP in the host's firewall after opening the port
    First DNS resolver to use for the clients: 114.114.114.114
    Second DNS resolver to use for the clients (optional): 119.29.29.29

    Okay, that was all I needed. We are ready to set up your WireGuard server now.
    .................................
    The output here is omitted
    .................................
    Tell me a name for the client.
    The name must consist of alphanumeric character. It may also include an underscore or a dash and can't exceed 15 chars.
    Client name: client1 # After installation prompt for a username, customize it
    Client's WireGuard IPv4: 10.66.66.2 # IPv4 address of the wireguard client interface, use the default value if there is no special requirement
    Client's WireGuard IPv6: fd42:42:42::2 # The IPv6 address of the wireguard client interface, use the default value if there is no special requirement
    .................................
    The output here is omitted
    .................................
    It is also available in /home/ubuntu/wg0-client-client1.conf # Generate a configuration file for the worker node

    d. Save the configuration file /home/ubuntu/wg0-client-client1.conf generated by the script; it will be used on the worker node.

    e. After the script has run and the interface has been successfully added, you can check its status by running wg show all:

    root@localhost:~# wg show all
    interface: wg0
    public key: adsdadhkaskdhadkjhs12312kl3j1l2o
    private key: (hidden)
    listening port: 64191

    peer: adsdadhkaskdhadkjhs12312kl3j1l2odsada2
    preshared key: (hidden)
    allowed ips: 10.66.66.2/32, fd42:42:42::2/128

    f. At this point, the server-side configuration is complete. If you need more clients, just run ./wireguard-install.sh again.

  2. Deploy K3s server on the server side

    a. Once step 1 is done, you can deploy K3s on the server side over the WireGuard interface with the following command:

    curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_TOKEN=token INSTALL_K3S_EXEC="--advertise-address=10.66.66.1 --flannel-iface=wg0" sh -

    b. Configuration items

    • K3S_TOKEN=token, where token can be any value you choose, but worker nodes must use the same token when joining

    • INSTALL_K3S_EXEC="--advertise-address=10.66.66.1 --flannel-iface=wg0", here we configure two items:

      • --advertise-address=10.66.66.1, use the WireGuard interface address for connections instead of the server's public IP
      • --flannel-iface=wg0, inform the flannel component of K3s to use the wg0 interface

    c. The output should be as follows:

    [INFO] Finding release for channel stable
    [INFO] Using v1.24.4+k3s1 as release
    [INFO] Downloading hash rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/sha256sum-amd64.txt
    [INFO] Downloading binary rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Skipping installation of SELinux RPM
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s.service
    [INFO] systemd: Enabling k3s unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
    [INFO] systemd: Starting k3s
    root@localhost:~#

    d. Run kubectl get pods -A to ensure all pods are running:

    ~# kubectl get pods -A
    NAMESPACE NAME READY STATUS RESTARTS AGE
    kube-system coredns-b96499967-hs6bn 1/1 Running 0 4m14s
    kube-system local-path-provisioner-7b7dc8d6f5-8szzd 1/1 Running 0 4m14s
    kube-system helm-install-traefik-crd-9bhdp 0/1 Completed 0 4m14s
    kube-system helm-install-traefik-h5q4h 0/1 Completed 1 4m14s
    kube-system metrics-server-668d979685-tlvzc 1/1 Running 0 4m14s
    kube-system svclb-traefik-99c87d41-cqcnb 2/2 Running 0 3m49s
    kube-system traefik-7cd4fcff68-b6cjj 1/1 Running 0 3m49s

    e. Check master node status by running kubectl get nodes:

    #kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    ip-172-31-37-138 Ready control-plane,master 8m35s v1.24.4+k3s1

    f. At this point, K3s has been successfully deployed on the server side.

  3. Configure WireGuard on the worker node

tip

This tutorial uses an Ubuntu 20.04.5 LTS server running on ARM64 for demonstration purposes

a. Update the software list and install resolvconf and wireguard:

apt-get update && apt-get install resolvconf wireguard -y

b. Fill the following configuration in /etc/wireguard/wg0.conf:

note

In the generated configuration file, the last line AllowedIPs defaults to 0.0.0.0/0,::/0; you need to change it to the WireGuard subnet 10.66.66.0/24.

[Interface]
PrivateKey = casasdlaijo()(hjdsasdasdihasddad
Address = 10.66.66.2/32,fd42:42:42::2/128
DNS = 114.114.114.114,119.29.29.29

[Peer]
PublicKey = asdasd21edawd3resaedserw3rawd
PresharedKey = dasda23e134e3edwadw3reqwda
Endpoint = 192.168.0.1:64191 # This should be the public IP of the server and the open UDP port
AllowedIPs = 10.66.66.0/24 # Note that the default here is 0.0.0.0/0 and needs to be changed

c. Run the following command to bring up the wg0 interface:

wg-quick up /etc/wireguard/wg0.conf 

d. Test the interface by pinging 10.66.66.1; if the ping succeeds, the tunnel is in effect.

root@k3s:~# ping 10.66.66.1
PING 10.66.66.1 (10.66.66.1) 56(84) bytes of data.
64 bytes from 10.66.66.1: icmp_seq=1 ttl=64 time=12.9 ms
64 bytes from 10.66.66.1: icmp_seq=2 ttl=64 time=13.1 ms
64 bytes from 10.66.66.1: icmp_seq=3 ttl=64 time=18.9 ms
64 bytes from 10.66.66.1: icmp_seq=4 ttl=64 time=8.21 ms
64 bytes from 10.66.66.1: icmp_seq=5 ttl=64 time=13.3 ms
64 bytes from 10.66.66.1: icmp_seq=6 ttl=64 time=7.66 ms
^C
--- 10.66.66.1 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5316ms
rtt min/avg/max/mdev = 7.659/12.345/18.863/3.729 ms
  4. Configure the K3s agent on worker nodes

    a. Install K3s and join the cluster:

    curl -sfL https://rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_TOKEN=token K3S_URL=https://10.66.66.1:6443 INSTALL_K3S_EXEC="--node-ip=10.66.66.3 --flannel-iface=wg0" sh -

    b. Configuration items:

    - `K3S_TOKEN=token`, where `token` needs to be the same as the server's token

    - `K3S_URL=https://10.66.66.1:6443`, the address of the master node; here it is the master's WireGuard IP 10.66.66.1

    - `INSTALL_K3S_EXEC="--node-ip=10.66.66.3 --flannel-iface=wg0"`, where we configure two items:

      - `--node-ip=10.66.66.3`, use the worker's WireGuard interface address as its node IP instead of the worker's physical IP
      - `--flannel-iface=wg0`, inform the flannel component of K3s to use the wg0 interface

    c. The output should be as follows:

    [INFO] Finding release for channel stable
    [INFO] Using v1.24.4+k3s1 as release
    [INFO] Downloading hash rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/sha256sum-arm64.txt
    [INFO] Downloading binary rancher-mirror.oss-cn-beijing.aliyuncs.com/k3s/v1.24.4-k3s1/k3s-arm64
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s
    [INFO] Skipping installation of SELinux RPM
    [INFO] Creating /usr/local/bin/kubectl symlink to k3s
    [INFO] Creating /usr/local/bin/crictl symlink to k3s
    [INFO] Creating /usr/local/bin/ctr symlink to k3s
    [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
    [INFO] Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
    [INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
    [INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
    [INFO] systemd: Enabling k3s-agent unit
    Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
    [INFO] systemd: Starting k3s-agent
    root@k3s:~#

    d. On the server side, you can check whether the node has been added by kubectl get nodes:

    #kubectl get nodes
    NAME STATUS ROLES AGE VERSION
    ip-172-31-37-138 Ready control-plane,master 24m v1.24.4+k3s1
    k3s Ready <none> 2m52s v1.24.4+k3s1
  5. Deploy Shifu with cloud-edge collaboration

    a. Clone Shifu:

    git clone https://github.com/Edgenesis/shifu.git

    Modify the kube-rbac-proxy image in the controller manifest (the default image may not be pullable from within China):

    vim shifu/pkg/k8s/crd/install/shifu_install.yml

    Replace line 428 with

    image: bitnami/kube-rbac-proxy:latest
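    If you prefer not to edit the file by hand, a sed one-liner can make the same change. This is a sketch that assumes the manifest's existing kube-rbac-proxy image line; verify with grep first:

    grep -n 'kube-rbac-proxy' shifu/pkg/k8s/crd/install/shifu_install.yml
    sed -i 's|image: .*kube-rbac-proxy.*|image: bitnami/kube-rbac-proxy:latest|' shifu/pkg/k8s/crd/install/shifu_install.yml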

    b. Install Shifu:

    kubectl apply -f shifu/pkg/k8s/crd/install/shifu_install.yml

    c. Label the worker node of K3s:

    kubectl label nodes k3s type=worker
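    To confirm the label was applied (standard kubectl usage):

    kubectl get nodes k3s --show-labels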

    d. Try to run the Pod on the specified node, e.g. an nginx Pod:

    kubectl run nginx --image=nginx -n deviceshifu --overrides='{"spec": { "nodeSelector": { "type": "worker"}}}'

    e. Run kubectl get pods -n deviceshifu -owide; we can see that the pod is running on the edge node k3s:

    #kubectl get pods -n deviceshifu -owide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
    nginx 1/1 Running 0 42s 10.42.1.3 k3s
  6. Deploy a digital twin of the Hikvision camera

    a. Log in to shifu.cloud

    Shifu Cloud is a PaaS built by Edgenesis on top of the open source IoT development framework Shifu; it provides convenient tools that let developers integrate IoT devices by simply filling in basic information. Without Shifu Cloud, developers have to write the YAML configuration files by hand and then deploy the virtual devices.

    b. Add device

    c. Select Private Protocol --> Select Hikvision in the drop-down menu, then click Next

    d. Add basic information about the device, such as device name, manufacturer, and model

    e. Fill in the IP address, RTSP username and password

    f. Click Access and go to Add Interface

    g. Finally click Upload and Shifu Cloud will automatically generate the YAML file for deviceShifu (the digital twin)

    h. Copy the command and run it on the master node of K3s

    i. The output should be as follows:

    root@localhost:~# kubectl apply -f https://******.com/*****/******.yaml
    configmap/deviceshifu-devicehikvision-configmap created
    service/deviceshifu-devicehikvision-service created
    deployment.apps/deviceshifu-devicehikvision-deployment created
    edgedevice.shifu.edgenesis.io/edgedevice-devicehikvision created
    root@localhost:~#
    • What happens behind the scenes: Shifu Cloud automatically generates the digital twin's YAML files (EdgeDevice, ConfigMap, Deployment, and Service) from the basic information filled in by the user. If you want to dig deeper, see the deployment files on GitHub.

    j. Shifu Cloud does not support adding a nodeSelector yet (stay tuned), so the device twin is deployed on the master node by default. We need to update the spec in the deployment to schedule the Pod onto the worker node:

    • Get the name of the current deployment with the following command:
      root@localhost:~# kubectl get deployment -n deviceshifu
      NAME READY UP-TO-DATE AVAILABLE AGE
      deviceshifu-devicehikvision-deployment 0/1 1 0 16m
    • Then edit the deployment with the kubectl edit deployment -n deviceshifu deviceshifu-devicehikvision-deployment command, add the following two lines under the Pod template's spec (placement sketched below), and save:
      ......
        nodeSelector:
          type: worker
      ......
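    For orientation, the two lines sit under spec.template.spec in the Deployment, roughly as follows (a sketch; all other fields elided):

    spec:
      template:
        spec:
          nodeSelector:
            type: worker
          containers:
          - ...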

    k. At this point we look again and see that the digital twin has been deployed to the edge node k3s:

    root@localhost:~# kubectl get pods -n deviceshifu -owide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx 1/1 Running 0 165m 10.42.1.4 k3s <none> <none>
    deviceshifu-devicehikvision-deployment-5f56fb56d9-2ph5s 2/2 Running 0 21s 10.42.1.6 k3s <none> <none>

Outcome

  1. Now we can try to interact with the camera. Let's run an nginx container on the master node to simulate an application talking to deviceShifu: kubectl run nginx-master -n deviceshifu --image=nginx. We can see that nginx-master is indeed running on the master node:

    root@localhost:~# kubectl get po -n deviceshifu -owide
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx 1/1 Running 0 169m 10.42.1.4 k3s <none> <none>
    deviceshifu-devicehikvision-deployment-5f56fb56d9-2ph5s 2/2 Running 0 3m53s 10.42.1.6 k3s <none> <none>
    nginx-master 1/1 Running 0 32s 10.42.0.11 localhost <none> <none>
  2. We can use kubectl exec -it -n deviceshifu nginx -- bash to get a shell in the container, so that we can interact directly with the digital twin and obtain its meta information:

    root@localhost:~# kubectl exec -it -n deviceshifu nginx -- bash
    root@nginx:/# curl deviceshifu-devicehikvision-service/info
    <?xml version="1.0" encoding="UTF-8"?>
    <DeviceInfo version="2.0" xmlns="http://www.hikvision.com/ver20/XMLSchema">
    <deviceName>IP CAMERA</deviceName>
    <deviceID>*****</deviceID>
    <deviceDescription>IPCamera</deviceDescription>
    <deviceLocation>hangzhou</deviceLocation>
    <systemContact>Hikvision.China</systemContact>
    <model>DS-2DE3Q140CN-W</model>
    <serialNumber>DS-*****</serialNumber>
    <macAddress>c8:02:8f:c8:86:11</macAddress>
    <firmwareVersion>V5.5.800</firmwareVersion>
    <firmwareReleasedDate>build 210816</firmwareReleasedDate>
    <encoderVersion>V7.3</encoderVersion>
    <encoderReleasedDate>build 200601</encoderReleasedDate>
    <bootVersion>V1.3.4</bootVersion>
    <bootReleasedDate>100316</bootReleasedDate>
    <hardwareVersion>0x0</hardwareVersion>
    <deviceType>IPCamera</deviceType>
    <telecontrolID>88</telecontrolID>
    <supportBeep>true</supportBeep>
    <supportVideoLoss>false</supportVideoLoss>
    <firmwareVersionInfo>B-R-E7-0</firmwareVersionInfo>
    </DeviceInfo>

    The camera can be controlled directly by the following command:

    curl deviceshifu-devicehikvision-service/move/{up/down/left/right}
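    For example, to pan the camera upward (same endpoint pattern as above):

    curl deviceshifu-devicehikvision-service/move/up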

    If we want to see what the camera is currently shooting, or the live video stream, we need to forward a local port to the service with kubectl port-forward -n deviceshifu service/deviceshifu-devicehikvision-service 30080:80 --address 0.0.0.0. The image/video stream can then be viewed by entering the server's IP and port number in the browser:

    <SERVER_IP>:30080/capture
    <SERVER_IP>:30080/stream

Summary

In this article, we shared how to run Shifu in a K3s cluster, and achieve cloud-edge collaborative device control and data collection.

In the future, Shifu Cloud will continue to integrate with Kubernetes, add deployment control for edge nodes and automatically deploy device twins to the cluster without manual replication.

Thank you very much for reading. We look forward to your feedback; don't hesitate to leave a comment if you like this article or have any suggestions.

· 6 min read

On 2022.10.18, Shifu Cloud was officially announced to the public, and users can quickly integrate their devices into Shifu by simply filling in fields in the UI.

In that day's live online demonstration, Yang Xijie from Edgenesis showed how to use Shifu Cloud to access three physical devices and develop applications based on them. Let's review the process together!

Creating the cluster & installing Shifu

Before using Shifu Cloud, we need to make sure we have already installed Shifu on our computer.

# Start the k8s cluster locally using kind
$ sudo kind create cluster --image="kindest/node:v1.24.0"

# Clone the shifu repository and install it
$ git clone https://github.com/Edgenesis/shifu.git
$ cd shifu
$ sudo kubectl apply -f pkg/k8s/crd/install/shifu_install.yml

If you would like to know how to install Shifu and test it locally, you may want to check out the download and install and local installation testing guides.

Connecting to thermometer and LED

The devices we want to connect are an RS485 thermohygrometer and an RS485 LED display. The thermohygrometer is connected to the host PC (computer) via a serial server, and the LED display is connected to the host PC via an RS485-to-USB chip. We'll skip the wiring details here; once the host PC starts the HTTP service:

  • You may visit localhost:23330/temperature to get the temperature from the thermohygrometer
  • You may visit localhost:23330/humidity to get the humidity from the thermohygrometer
  • You may visit localhost:23331/setfloat?value=123.4, setting value to the number you want to display on the LED
curl localhost:23330/temperature
curl localhost:23330/humidity
curl localhost:23331/setfloat\?value=123.4

Next we're going to integrate the two devices into Shifu, which means that the two physical devices (edgeDevices) are converted to digital twins (deviceShifus) in the k8s cluster.

Generating configuration file with one click

Shifu Cloud can easily generate configuration files for deviceShifu.

After logging in, click "All projects" to add devices. Both devices use the HTTP protocol, so choose Public Protocol > HTTP. The IP address of the device cannot be written as localhost; you need to find your computer's local IP in its network settings. For the demonstration, we used 192.168.0.123:23330 and 192.168.0.123:23331.

Once the information is filled in, a command pops up on the website; click the button on the right to copy it, then execute it in the terminal to deploy the device to the local k8s cluster. This saves the time of writing YAML configuration files by hand and is more convenient.

Testing if the device has been successfully integrated

We can go into the cluster and check whether the digital twins of both devices are reachable at their in-cluster addresses:

# Run an nginx container
$ sudo kubectl run --image=nginx:1.21 nginx
# Enter the nginx container
$ sudo kubectl exec -it nginx -- bash
# Interact with the device
$ curl http://deviceshifu-mythermometer-service.deviceshifu.svc.cluster.local/humidity
$ curl http://deviceshifu-mythermometer-service.deviceshifu.svc.cluster.local/temperature
$ curl http://deviceshifu-myled-service.deviceshifu.svc.cluster.local/setfloat?value=321

You can see that the thermometer readings and the LED display settings work properly, which means the devices have been successfully converted to digital twins.

Packaging the application as an image

We are going to develop the application based on the thermometer and LED, here's the Python program we wrote:

main.py

import time
import requests
import json

isLocal = False
localIp = "192.168.0.123"
flag = -1
while True:
    flag += 1

    # [get data]
    if flag % 2 == 0:
        # Get temperature
        url = f"http://{localIp}:23330/temperature" if isLocal else "http://deviceshifu-mythermometer-service.deviceshifu.svc.cluster.local/temperature"
    else:
        # Get humidity
        url = f"http://{localIp}:23330/humidity" if isLocal else "http://deviceshifu-mythermometer-service.deviceshifu.svc.cluster.local/humidity"
    res = requests.get(url)

    # [convert data]
    try:
        value = json.loads(res.text)['value']
        print("DEBUG", value)
        # [display data]
        led_url = f"http://{localIp}:23331/setfloat?value={value}" if isLocal else f"http://deviceshifu-myled-service.deviceshifu.svc.cluster.local/setfloat?value={value}"
        requests.get(led_url)
    except:
        print("DEBUG", res.text)

    time.sleep(2)

The program reads the temperature and humidity of the thermohygrometer alternately every 2 seconds and displays the readings on the LED display.

Next we want to package this program as an image, so that we can load it into the cluster and run it:

requirements.txt

requests

Dockerfile

FROM python:3.9-slim-bullseye

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py .
CMD ["python3", "main.py"]
# Build the image
$ sudo docker build -t xxx/connection:v0.0.1 .
# Make sure the image has been built
$ sudo docker images | grep connection
xxx/connection v0.0.1 a9526147ddad 2 minutes ago 125MB
# Load the image into the cluster
$ sudo kind load docker-image xxx/connection:v0.0.1
# Run the container as an instance of the image
$ sudo kubectl run --image=xxx/connection:v0.0.1 connection-name

Accessing the camera

Next we want to access the Hikvision camera. The camera is not nearby; it is connected wirelessly over Wi-Fi.

You can see that Shifu Cloud supports this Hikvision model: click on it, configure the camera's IP address, username, and password, and access it with one click. The device name in the demo is mycamera.

# Enter the nginx container
$ sudo kubectl exec -it nginx -- bash
# Check camera information
$ curl http://deviceshifu-mycamera-service.deviceshifu.svc.cluster.local/info

This means that the Hikvision camera is integrated into Shifu and has been converted to a digital twin.

Controlling the position of a camera

Immediately after that, we want to adjust the camera's position; its orientation can be controlled using APIs like move/up, move/down, move/left, and move/right.

$ curl http://deviceshifu-mycamera-service.deviceshifu.svc.cluster.local/move/up

To check the outcome, we open a video player on our computer and open the streaming address rtsp://<user_name>:<password>@<ip_address>. On macOS, we can open IINA.app, go to menu bar > Open URL... > paste and press Enter, and we can see the live surveillance video stream.

As you can see, the camera position has changed, and we managed to reorient the camera to the angle we need (from pointing at the ceiling to pointing at the back of the computer screen on the desk).

Summary

In this demonstration, we used Shifu Cloud to access three devices, and if you compare it to our first Meetup, you will see that we have achieved faster integration and lowered the threshold of integrating a device.

Shifu Cloud includes an Aha Moment to help you get familiar with this website.

In the future, Shifu Cloud will offer:

  • More protocol support (both Edgenesis and the open source community will continue to support more protocols to improve the coverage of fast access)
  • Application development support (for now we still need to package applications we develop locally, but in the future we will be able to develop applications on Shifu Cloud)
  • App store support (which contains apps uploaded by developers or third-party plugins that users can install with one click)

Thank you for reading this; let's stay tuned for future progress of Shifu Cloud!

· 8 min read

At the offline Shifu Meetup event held on 2022.9.29, Yang Xijie from Edgenesis demonstrated how multiple physical IoT devices are integrated into Shifu, showing in an intuitive way that the Shifu framework enables fast device access, good isolation, and easy app development with no single point of failure.

Five devices were shown at this event: an MQTT server, an RS485 thermohygrometer, an RS485 LED display, a Siemens S7 PLC, and a Hikvision camera, all relatively common IoT devices. Let's review the integration process below.

Create a Cluster and Install Shifu

First we need to start Docker locally. Just open Docker Desktop using Windows or macOS Search and minimize it to the background.

After that we need to create a k8s cluster with kind. Later on, Shifu and the digital twins of the IoT devices will exist in this cluster as Pods.

# Create a Cluster
$ sudo kind create cluster --image="kindest/node:v1.24.0"

# Prepare the image in advance for import into the cluster
$ sudo docker pull bitnami/kube-rbac-proxy:0.13.1
$ sudo docker pull edgehub/shifu-controller:v0.1.1
$ sudo docker pull nginx:1.21
$ sudo kind load docker-image bitnami/kube-rbac-proxy:0.13.1 edgehub/shifu-controller:v0.1.1 nginx:1.21

Shifu supports one-click installation, just clone Shifu repository first, and deploy it later with one single command:

# Install shifu
$ git clone https://github.com/Edgenesis/shifu.git
$ cd shifu
$ sudo kubectl apply -f pkg/k8s/crd/install/shifu_install.yml

# Run an application that will be used later
$ sudo kubectl run --image=nginx:1.21 nginx

You can also check out a more detailed tutorial on installing Shifu locally.

Device Integration

MQTT

Test MQTT Server

We have deployed an MQTT server and can test it by first opening two shells:

# shellA
$ mosquitto_sub -h 82.157.170.202 -t topic0

# shellB
$ mosquitto_pub -h 82.157.170.202 -t topic0 -m "lol"

You can see that the message sent can be received correctly.

Integrate the Device

Next we can modify the corresponding configuration, download the corresponding image, and then use the kubectl apply command to integrate the MQTT server into Shifu as a digital twin.

Modify spec.address to 82.157.170.202:1883 and spec.protocolSettings.MQTTSetting.MQTTTopic to topic0 in examples/my_mqtt/mqtt_deploy.
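The relevant fields in that EdgeDevice YAML look roughly like this (a sketch based on the field paths above; all other fields elided):

spec:
  address: "82.157.170.202:1883"
  protocolSettings:
    MQTTSetting:
      MQTTTopic: topic0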

$ sudo docker pull edgehub/deviceshifu-http-mqtt:v0.1.1
$ sudo kind load docker-image edgehub/deviceshifu-http-mqtt:v0.1.1
$ sudo kubectl apply -f examples/my_mqtt/mqtt_deploy

Read the Data

We can interact with the digital twin by starting an nginx application in the cluster.

$ sudo kubectl exec -it nginx -- bash

$ curl http://deviceshifu-mqtt.deviceshifu.svc.cluster.local/mqtt_data

Connect Thermometer and LED

Connect Device to Computer

  • The thermometer is connected to the computer using a serial server via a network cable
  • LEDs are connected to the computer using an RS485 to USB chip

Start HTTP Service Locally

Since Shifu does not support the Modbus protocol yet, we need to convert the data read over Modbus into HTTP.

$ cd api_thermometer
$ uvicorn --host 0.0.0.0 --port 23330 main:app

$ cd api_led
$ uvicorn --host 0.0.0.0 --port 23331 main:app
contents in api_thermometer
main.py
from fastapi import FastAPI
from typing import List
from pymodbus.client.sync import ModbusTcpClient

app = FastAPI()

def getHumidityAndTemperature() -> List[float]:
    """
    Return the temperature and humidity data retrieved from TAS-LAN-460
    """
    client = ModbusTcpClient(host='192.168.0.80', port=10123)  # the port of TAS-LAN-460
    client.connect()
    SLAVE = 0x01
    r = client.read_holding_registers(address=0x0000, count=2, unit=SLAVE)
    print("collected data", r.registers)
    client.close()

    result = [r.registers[0] / 10, r.registers[1] / 10]
    return result

@app.get("/")
def root():
    return { "message": "Hello World" }

@app.get("/temperature")
def getTemperature():
    temperature = getHumidityAndTemperature()[1]
    return { "value": f"{temperature}" }

@app.get("/humidity")
def getHumidity():
    humidity = getHumidityAndTemperature()[0]
    return { "value": f"{humidity}" }
requirements.txt
fastapi
pymodbus
contents in api_led
main.py
from fastapi import FastAPI
from pymodbus.client.sync import ModbusSerialClient
from typing import List, Dict

app = FastAPI()

class ZhongshengLed:
    """
    DEVICE_NAME = "ZhongshengLed"
    """

    def __init__(self, device_address: int = 0x01, port: str = '/dev/tty.usbserial-14120') -> None:
        self.device_address = device_address
        self.client = ModbusSerialClient(method='rtu', port=port, stopbits=1, bytesize=8, parity='N', baudrate=9600, timeout=2.0)

    def setLedCharacter(self, position: int, character: str):
        self.setLedAscii(position=position, ascii_value=ZhongshengLed.character2ascii[character])

    def setLedAscii(self, position: int, ascii_value: int):
        self.client.connect()
        self.client.write_register(address=position, value=ascii_value, unit=self.device_address)
        self.client.close()

    def setFourLedsString(self, string: str):
        self.setFourLedsAsciis(ascii_values=[ZhongshengLed.character2ascii[string[0]], ZhongshengLed.character2ascii[string[1]], ZhongshengLed.character2ascii[string[2]], ZhongshengLed.character2ascii[string[3]]])

    def setFourLedsAsciis(self, ascii_values: List[int]):
        self.client.connect()
        self.client.write_registers(address=ZhongshengLed.LedPosition.one, values=ascii_values, unit=self.device_address)
        self.client.close()

    class LedPosition:
        one = 0
        two = 1
        three = 2
        four = 3

    character2ascii: Dict[str, int] = {
        "0": 0x30, "1": 0x31, "2": 0x32, "3": 0x33, "4": 0x34,
        "5": 0x35, "6": 0x36, "7": 0x37, "8": 0x38, "9": 0x39,
        ".": 0x2e, "-": 0x2d, " ": 0x20
    }

    def setDot(self, count: int = 1):
        self.client.connect()
        self.client.write_register(address=16, value=count, unit=self.device_address)
        self.client.close()

    def setNegative(self, isNegative: bool = False):
        self.client.connect()
        self.client.write_register(address=17, value=1 if isNegative else 0, unit=self.device_address)
        self.client.close()

    def setFloat(self, value: float):
        """
        display one decimal place
        """
        self.setDot(count=1)
        if value < 0:
            self.setNegative(True)
        else:
            self.setNegative(False)

        data = int(abs(value) * 10)

        self.client.connect()
        self.client.write_register(address=7, value=data, unit=self.device_address)
        # self.client.write_register(address=16, value=value, unit=self.device_address)
        self.client.close()

    def setBrightness(self, brightness: int = 7):
        self.client.connect()
        self.client.write_register(address=14, value=brightness, unit=self.device_address)
        self.client.close()

device = ZhongshengLed()

@app.get("/")
def root():
    return { "message": "Hello World" }

@app.get("/setfloat/{value}")
def setFloatPath(value: float):
    device.setFloat(value=value)
    return { "OK": "OK" }

@app.get("/setfloat")
def setFloatQuery(value: float = 0.0):
    device.setFloat(value=value)
    return { "OK": "OK" }
requirements.txt
fastapi
pymodbus

Local Verification

$ curl http://localhost:23330/temperature
$ curl http://localhost:23330/humidity
$ curl http://localhost:23331/setfloat\?value\=123.4

Device Integration

  • Modify the IP address in http_thermometer/deployment/http_edgedevice.yaml
  • Modify the IP address in http_led/deployment/http_edgedevice.yaml
$ sudo docker pull edgehub/deviceshifu-http-http:v0.1.1
$ sudo kind load docker-image edgehub/deviceshifu-http-http:v0.1.1
$ sudo kubectl apply -f examples/my_http_led/deployment
$ sudo kubectl apply -f examples/my_http_thermometer/deployment

Interact with Devices

Start nginx to interact with the thermohygrometer:

$ sudo kubectl exec -it nginx -- bash

$ curl http://my-thermometer.deviceshifu.svc.cluster.local/temperature
$ curl http://my-thermometer.deviceshifu.svc.cluster.local/humidity
$ curl http://my-led.deviceshifu.svc.cluster.local/setfloat?value=23.4

Application Development

Read the temperature and humidity and then display it alternately on the LED.

$ sudo docker build -t yangxijie/connection:v0.0.1 .
$ sudo docker images | grep connection
yangxijie/connection v0.0.1 a9526147ddad 2 minutes ago 125MB
$ sudo kind load docker-image yangxijie/connection:v0.0.1
$ sudo kubectl run --image=yangxijie/connection:v0.0.1 connection-name

The illustration of this application is as follows:

You can see the temperature and humidity displayed alternately on the LED once the application is running.

files in current folder
main.py
import time
import requests
import json

isLocal = False
localIp = "192.168.31.138"
flag = -1
while True:
    flag += 1

    # [get data]
    if flag % 2 == 0:
        # get temperature
        url = f"http://{localIp}:23330/temperature" if isLocal else "http://my-thermometer.deviceshifu.svc.cluster.local/temperature"
    else:
        # get humidity
        url = f"http://{localIp}:23330/humidity" if isLocal else "http://my-thermometer.deviceshifu.svc.cluster.local/humidity"
    res = requests.get(url)

    # [convert data]
    try:
        value = json.loads(res.text)['value']
        print("DEBUG", value)
        # [display data]
        led_url = f"http://{localIp}:23331/setfloat?value={value}" if isLocal else f"http://my-led.deviceshifu.svc.cluster.local/setfloat?value={value}"
        requests.get(led_url)
    except:
        print("DEBUG", res.text)

    time.sleep(2)
requirements.txt
requests
Dockerfile
FROM python:3.9-slim-bullseye

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY main.py .
CMD ["python3", "main.py"]

Siemens PLC

Device Integration

First change the IP address to that of the PLC.

$ sudo docker pull edgehub/deviceshifu-http-http:v0.1.1
$ sudo docker pull edgehub/plc-device:v0.0.1
$ sudo kind load docker-image edgehub/deviceshifu-http-http:v0.1.1 edgehub/plc-device:v0.0.1
$ sudo kubectl apply -f examples/my_plc/plc-deployment

Interact with the Device

Here we modify one bit on the PLC, and you can see the indicator light on the PLC turn on. In actual scenarios, the PLC will control large equipment such as a robotic arm.

$ sudo kubectl run nginx --image=nginx:1.21 -n deviceshifu 
$ sudo kubectl exec -it nginx -n deviceshifu -- bash

$ curl "deviceshifu-plc/sendsinglebit?rootaddress=Q&address=0&start=0&digit=1&value=1"; echo

Hikvision Camera

Device Integration

Get the IP address of the camera and then replace the IP address in rtsp/camera-deployment/deviceshifu-camera-deployment.yaml.

$ sudo docker pull edgehub/deviceshifu-http-http:v0.1.1
$ sudo docker pull edgehub/camera-python:v0.0.1
$ sudo kind load docker-image edgehub/deviceshifu-http-http:v0.1.1 edgehub/camera-python:v0.0.1
$ sudo kubectl apply -f examples/my_rtsp/camera-deployment

Interact with Device

Check device information in the cluster using nginx:

# Interact through `curl` in cluster
$ sudo kubectl exec -it nginx -- bash

$ curl http://deviceshifu-camera.deviceshifu.svc.cluster.local/info

The camera's digital twin supports the commands info, capture, move, and stream.

To view the images captured by the camera, we need to forward the port to the local machine and access it in the browser.

# Local browser access
$ sudo kubectl port-forward svc/deviceshifu-camera -n deviceshifu 8080:80
# Visit localhost:8080/info to check device information
# Visit localhost:8080/capture to fetch a still picture
# Visit localhost:8080/move/{up|down|left|right} to move the camera
# Visit localhost:8080/stream?timeout=0 to view the real-time stream

Summary

The Shifu Meetup event was a huge success. As you can see, Shifu enables developers to quickly access devices and unify various protocols into HTTP for easy management and subsequent application development. Shifu also has many other advantages, including no single point of failure and good isolation.

If you are interested in Shifu, please visit the Shifu official website to learn more. You are also welcome to give the project a star at Shifu's GitHub repository!

· 4 min read

This article will briefly describe how to integrate WasmEdge into Shifu to cleanse data collected from IoT devices.

Background 🌇

When we use Shifu to collect data, it often happens that the data collected from the device is in a different format than the data we need. To solve this problem, we can use Shifu + WasmEdge: the data collected by Shifu is processed by WasmEdge and then returned to our application.

The simple logic is as follows.

WasmEdge Introduction 🏬

WasmEdge is a lightweight, high-performance WebAssembly (WASM) virtual machine optimized for the edge. WasmEdge can be used in a variety of scenarios such as serverless cloud functions, SaaS, blockchain smart contracts, IoT, and automotive real-time software applications.

Prepare 🗂

  1. kubectl v1.24.2
  2. docker 20.10.16
  3. kind v0.14.0
  4. git 2.36.1

Deployment 🔨

To save you time, you can download the demo program from GitHub with the following command. 🚀

git clone https://github.com/Edgenesis/wasm-shifu-demo.git
cd wasm-shifu-demo

Create a K8s Cluster 🐝

Use the following command to create a k8s cluster.

$ kind delete cluster && kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Build Shifu image 🪞

Use the following command to build a Shifu image.

$ make -f shifu/Makefile build-image-deviceshifu
$ kind load docker-image edgehub/deviceshifu-http-http:v0.0.6
$ docker images | grep edgehub/deviceshifu-http-http
edgehub/deviceshifu-http-http v0.0.6 1d6b3544b8ad 54 minutes ago 36.1MB

Run Virtual Devices 🔌

To make things easier to try out, here we use a virtual device to simulate a real one.

Install and run the virtual device, which listens on port 8099:

$ docker build -f mockDevice/dockerfile -t mockdevice:v0.0.1 .
$ docker run -p 8099:8099 -itd mockdevice:v0.0.1
bdfd2b1323be mockdevice:v0.0.1 "./mockDevice" 19 seconds ago Up 18 seconds 0.0.0.0:8099->8099/tcp admiring_feistel

Write Rules & Compile Wasm

You can write rules by using JavaScript. If you are not familiar with JavaScript, you can just use the default rules. 🥮

Rule file path: wasmEdge/js-func/src/js/run.js. You can achieve different behavior by modifying the rule.

$ docker build -t wasm:v0.0.1 -f wasmEdge/js.dockerfile .
$ kind load docker-image wasm:v0.0.1
$ kubectl apply -f wasmEdge/k8s

You can check the status of the WasmEdge pod with the following command:

$ kubectl get pod -n wasmedge
NAME READY STATUS RESTARTS AGE
wasm-deployment-fbc9564d8-td428 1/1 Running 0 1s

Install and Run Shifu

Install Shifu.

$ kubectl apply -f shifuConfig/shifu_install.yml
$ kubectl get pod -n shifu-crd-system
NAME READY STATUS RESTARTS AGE
shifu-crd-controller-manager-5bbdb4d786-s6h4m 2/2 Running 0 1s

Install deviceShifu to connect with mockDevice. Before doing so, please change the address in the shifuConfig/task3/task3.yaml file to the IP of your computer:

spec:
  sku: "E93"
  connection: Ethernet
  address: "192.168.14.163:8099"

Deploy and run deviceShifu with the following command. 🏖

$ kubectl apply -f shifuConfig/task3
$ kubectl get pod -n deviceshifu
NAME READY STATUS RESTARTS AGE
deviceshifu-demodevice-deployment-5589b55569-l5nb2 1/1 Running 0 4s

Experience 🕹

You can start an nginx pod to communicate with deviceShifu:

$ kubectl run nginx --image=nginx:1.21
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3s

With the following command, you can interact with Shifu to clean the data collected from IoT devices. 🛁

$ kubectl exec -it nginx -- curl -v http://deviceshifu-demodevice-service.deviceshifu.svc.cluster.local/get_info;echo
[
  {
    "code": 375287,
    "name": "atmospheric temperature",
    "val": "24.56",
    "unit": "°C",
    "exception": "Temperature is too high"
  },
  {
    "code": 375287,
    "name": "Atmospheric Humidity",
    "val": "81.63",
    "unit": "%RH",
    "exception": "Humidity too high"
  }
]

Also we can use the following command to check the data generated by the IoT device.

$ curl localhost:8099/getInfo
{
  "statusCode": "200",
  "message": "success",
  "entity": [
    {
      "dateTime": "2022-09-09 09:46:45",
      "eUnit": "℃",
      "eValue": "23.87",
      "eKey": "e1",
      "eName": "Atmospheric temperature",
      "eNum": "101"
    },
    {
      "dateTime": "2022-09-09 09:46:45",
      "eUnit": "%RH",
      "eValue": "80.62",
      "eKey": "e2",
      "eName": "Atmospheric Humidity",
      "eNum": "102"
    }
  ],
  "deviceId": 950920,
  "deviceName": "950920",
  "deviceRemark": "2022-09-09 09:46:45"
}

Comparing the two outputs, we can see that we have successfully collected and cleaned the data to get exactly what we want. The comparison chart is as follows:

comparison chart

· 8 min read

OpenYurt is a cloud-edge computing platform that converts existing Kubernetes clusters into OpenYurt clusters, extending the capabilities of Kubernetes to the edge. OpenYurt provides diverse features for cloud-edge collaborative development, such as YurtTunnel to bridge cloud-edge communication, Yurt-App-Manager for easy management of node-unit application deployment and operations, and YurtHub to provide edge autonomy.

Developers can focus on application development for cloud-edge products without worrying about operating and maintaining the underlying infrastructure. As a Kubernetes-native open source IoT development framework, Shifu is compatible with various IoT device protocols and abstracts devices into microservice software objects. The two complement each other very well: in particular, once YurtDeviceController is added to OpenYurt, Shifu can abstract devices in a way that is native to OpenYurt, greatly improving efficiency for IoT developers.

With OpenYurt and Shifu, we can transform the complex IoT, cloud-side collaborative development into simple Web development.

Introduction

This article is a guide to integrating RTSP cameras into an OpenYurt cluster using Shifu, covering basic operations of Shifu, Docker, Linux, Kubernetes, and OpenYurt. Any developer can use this article to learn how to develop with Shifu.

The Shifu architecture in this article is as follows

Northbound, Shifu exposes an HTTP API via deviceshifu-http-http; southbound, it interacts with the actual device via the rtsp-driver.

Objectives

  1. Deploy OpenYurt on the Server side and Edge side via yurtctl, and join the Edge side to the Server-side cluster
  2. Deploy the digital twin of the webcam on the Edge side
  3. Achieve remote automated control of the webcam via HTTP

Required devices

  1. Two virtual machines running Linux; the Server and Edge machines should have 4 cores with 16GB RAM and 2 cores with 8GB RAM respectively
  2. A webcam supporting the RTSP protocol; the camera model used in this article is a Hikvision DS-2DE3Q140CN-W

Software environment

  • CentOS 7.9.2009
  • Go v1.17.1
  • yurtctl v0.6.1
  • kubectl: v1.19.8 (installed by yurtctl)

Step 1 Install and deploy the OpenYurt cluster

This article refers to the official tutorial for OpenYurt

First let's download OpenYurt and clone the project directly from the official GitHub:

git clone https://github.com/openyurtio/openyurt.git

Next, let's download the v0.6.1 version of yurtctl:

curl -LO https://github.com/openyurtio/openyurt/releases/download/v0.6.1/yurtctl 
chmod +x yurtctl

Server-side deployment

Create the OpenYurt cluster on the Server side:

./yurtctl init --apiserver-advertise-address <SERVER_IP> --openyurt-version latest --passwd 123 

The cluster is successfully created when you see the following message; note the --token value, which is used to join Edge nodes to the cluster.

Next, take a look at the status of each Pod by running kubectl get pods -A:

Problems

If you encounter errors in kubectl logs yurt-hub-server -n kube-system:

Please try kubectl apply -f config/setup/yurt-controller-manager.yaml (method from https://github.com/openyurtio/openyurt/issues/872#issuecomment-1148167419)

In addition, if you encounter the following output in kubectl logs yurt-hub-server -n kube-system:

Please try kubectl apply -f config/setup/yurthub-cfg.yaml

If similar logs are encountered in yurt-tunnel-server and yurt-tunnel-agent, fix the RBAC issue in yurt-tunnel with the following commands:

kubectl apply -f config/setup/yurt-tunnel-agent.yaml 
kubectl apply -f config/setup/yurt-tunnel-server.yaml

Untaint the master node so that it can run the Shifu controller:

kubectl taint nodes server node-role.kubernetes.io/master-

At this point, the Server-side deployment has succeeded.

Deploy the Edge side

First, join the cluster using the token from the Server-side initialization:

./yurtctl join <MASTER_IP>:6443 --token <MASTER_INIT_TOKEN> --node-type=edge --discovery-token-unsafe-skip-ca-verification --v=5 

Verify Node status by kubectl get nodes.

At this point, a Server-side + an Edge-side cluster is created.

Step 2 Deploy Shifu in the cluster

Next, let's deploy Shifu to the OpenYurt cluster

On the Server side, clone the Shifu project locally.

git clone https://github.com/Edgenesis/shifu.git 
cd shifu/

Next, install Shifu.

kubectl apply -f pkg/k8s/crd/install/shifu_install.yml

Check the Pod status with kubectl get pods -A.

You should see the Pod running in the shifu-crd-system namespace.

At this point, Shifu is successfully installed.

Step 3 Deploy the camera's digital twin deviceShifu

OpenYurt provides a very convenient node pool (NodePool) feature that allows us to manage clusters of nodes and deploy them.

Create a beijing node pool:

export WORKER_NODEPOOL="beijing"
export EDGE_NODE="edge"
cat <<EOF | kubectl apply -f -
apiVersion: apps.openyurt.io/v1alpha1
kind: NodePool
metadata:
  name: $WORKER_NODEPOOL
spec:
  type: Edge
EOF

The output is as follows.

Next, label the Edge node into the beijing NodePool:

kubectl label node $EDGE_NODE apps.openyurt.io/desired-nodepool=beijing

Check the status of the NodePool; READYNODES should now be 1:

kubectl get nodepool

Since IoT edge nodes are usually distributed within the same scenario, you can use OpenYurt's UnitedDeployment feature to automate the deployment based on the NodePool.

Install Yurt-app-manager:

git clone https://github.com/openyurtio/yurt-app-manager.git
cd yurt-app-manager
kubectl apply -f config/setup/all_in_one.yaml

Use UnitedDeployment to deploy a virtual Hikvision camera with the following YAML files:

deviceshifu-camera-unitedDeployment.yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: deviceshifu-hikvision-camera-deployment
spec:
  selector:
    matchLabels:
      app: deviceshifu-hikvision-camera-deployment
  workloadTemplate:
    deploymentTemplate:
      metadata:
        labels:
          app: deviceshifu-hikvision-camera-deployment
        name: deviceshifu-hikvision-camera-deployment
        namespace: default
      spec:
        selector:
          matchLabels:
            app: deviceshifu-hikvision-camera-deployment
        template:
          metadata:
            labels:
              app: deviceshifu-hikvision-camera-deployment
          spec:
            containers:
            - image: edgehub/deviceshifu-http-http:v0.0.1
              name: deviceshifu-http
              ports:
              - containerPort: 8080
              volumeMounts:
              - name: deviceshifu-config
                mountPath: "/etc/edgedevice/config"
                readOnly: true
              env:
              - name: EDGEDEVICE_NAME
                value: "deviceshifu-hikvision-camera"
              - name: EDGEDEVICE_NAMESPACE
                value: "devices"
            - image: edgenesis/camera-python:v0.0.1
              name: camera-python
              ports:
              - containerPort: 11112
              volumeMounts:
              - name: deviceshifu-config
                mountPath: "/etc/edgedevice/config"
                readOnly: true
              env:
              - name: EDGEDEVICE_NAME
                value: "deviceshifu-hikvision-camera"
              - name: EDGEDEVICE_NAMESPACE
                value: "devices"
              - name: IP_CAMERA_ADDRESS
                value: "<CAMERA_IP>"
              - name: IP_CAMERA_USERNAME
                value: "<CAMERA_USERNAME>"
              - name: IP_CAMERA_PASSWORD
                value: "<CAMERA_PASSWORD>"
              - name: IP_CAMERA_CONTAINER_PORT
                value: "11112"
              - name: PYTHONUNBUFFERED
                value: "1"
            volumes:
            - name: deviceshifu-config
              configMap:
                name: deviceshifu-hikvision-camera-configmap-0.0.1
            serviceAccountName: edgedevice-sa
  topology:
    pools:
    - name: beijing
      nodeSelectorTerm:
        matchExpressions:
        - key: apps.openyurt.io/nodepool
          operator: In
          values:
          - beijing
      replicas: 1
  revisionHistoryLimit: 5
deviceshifu-camera-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: deviceshifu-hikvision-camera-deployment
  name: deviceshifu-hikvision-camera
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: deviceshifu-hikvision-camera-deployment
  type: LoadBalancer
camera-edgedevice.yaml
apiVersion: shifu.edgenesis.io/v1alpha1
kind: EdgeDevice
metadata:
  name: deviceshifu-hikvision-camera
  namespace: devices
spec:
  sku: "HikVision Camera"
  connection: Ethernet
  address: 0.0.0.0:11112
  protocol: HTTP
deviceshifu-camera-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: deviceshifu-hikvision-camera-configmap-0.0.1
  namespace: default
data:
  driverProperties: |
    driverSku: HikVision
    driverImage: edgenesis/camera-python:v0.0.1
  instructions: |
    capture:
    info:
    stream:
    move/up:
    move/down:
    move/left:
    move/right:
  telemetries: |
    device_health:
      properties:
        instruction: info
        initialDelayMs: 1000
        intervalMs: 1000

Put the four files into a directory as follows.

camera-unitedDeployment/
├── camera-edgedevice.yaml
├── deviceshifu-camera-configmap.yaml
├── deviceshifu-camera-service.yaml
└── deviceshifu-camera-unitedDeployment.yaml

Next, deploy:

kubectl apply -f camera-unitedDeployment/

Check the UnitedDeployment status via kubectl get ud.

Confirm that Pod is deployed in the Edge server in beijing NodePool with kubectl get pods -owide.

We can check Shifu's virtual devices in the cluster via kubectl get edgedevices -n devices.

Then use kubectl describe edgedevices -n devices to see the details of the devices such as configuration, status, etc.

At this point, the digital twin of the camera is deployed.

Running results

Next we control the camera; here we use an nginx Pod to represent the application.

kubectl run nginx --image=nginx

When nginx is running, go to the nginx command line with kubectl exec -it nginx -- bash.

The camera can be controlled directly with the following command.

curl deviceshifu-hikvision-camera/move/{up/down/left/right}

If we want to see the current image or the video stream, we need to proxy the camera's service locally via kubectl port-forward service/deviceshifu-hikvision-camera 30080:80 --address='0.0.0.0'.

You can view the image/video stream directly by entering the server's IP plus the port number in the browser:

<SERVER_IP>:30080/capture
<SERVER_IP>:30080/stream

Conclusion

In this article, we showed you how to deploy Shifu in an OpenYurt cluster to support RTSP cameras.

In the future, we will try to integrate Shifu with OpenYurt's YurtDeviceController, and extend the capabilities of OpenYurt to manage more IoT devices in a way that is native to OpenYurt.

· 3 min read

EMQX is a popular MQTT broker with a cloud-native architecture based on Kubernetes, making it extremely suitable for increasingly complex IoT scenarios and for more efficient transmission of device messages. Shifu, as a Kubernetes-native framework, can be combined perfectly with EMQX to provide EMQX with intelligent multi-protocol device linkage capabilities.

Introduction

This article will show you how to deploy EMQX and Shifu in a cluster, integrate a thermometer with MQTT as the communication method and a Hikvision camera with RTSP as the transmission protocol, and add an application to interact with Shifu so that every time the thermometer detects a body temperature above 37 degrees it will ask the camera to take a photo.

The simple architecture used in this article is as follows.

Preparation

The following services and tools are used in this article.

  1. Kubernetes: 1.20.10
    • kubectl
    • kubeadm
    • kubelet
  2. Golang: 1.16.10
  3. Docker: 19.03.9
  4. EMQX: 4.1-rc1

Step 1 Deploy Kubernetes

For this step, you can refer to the official Kubernetes tutorial for deployment:

https://kubernetes.io/docs/setup/

After the deployment is complete we should see the following message printed on the terminal.

Step 2 Deploy Shifu

Clone the GitHub repository for Shifu locally:

git clone https://github.com/Edgenesis/shifu.git

You can then deploy Shifu with the following command.

kubectl apply -f shifu/pkg/k8s/crd/install/shifu_install.yml

Once deployed we should see that the CRD controller for Shifu has been deployed.

Step 3 Deploy EMQX

First you need to install EMQX Operator Controller.

$ curl -f -L "https://github.com/emqx/emqx-operator/releases/download/1.1.6/emqx-operator-controller.yaml" | kubectl apply -f -

Then we write the simplest deployment.yaml:

Then it's time to deploy an EMQX:

kubectl apply -f deployment.yaml

Step 4 Integrate the devices

For the thermometer, we just need to adjust its MQTT settings so that it can post MQTT messages to EMQX.

(For thermometers outside the cluster, we can expose an external IP for access via a Kubernetes Service.)

As for the camera, Shifu's repository already includes a configuration file for Hikvision cameras using RTSP; we simply change the IP, username, and password in the config file to integrate the camera into Shifu.
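In the RTSP example, the camera's connection details are passed as environment variables in the deployment; the fields to change look like this (the values are placeholders for your own camera):

env:
  - name: IP_CAMERA_ADDRESS
    value: "<CAMERA_IP>"
  - name: IP_CAMERA_USERNAME
    value: "<CAMERA_USERNAME>"
  - name: IP_CAMERA_PASSWORD
    value: "<CAMERA_PASSWORD>"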

At this point, our device is connected and we are ready to start linking below.

Linking Applications

We can write a Python application to implement the following logic.

The app subscribes to the temperature-shifu-mqtt topic on EMQX; each message carries only a number giving the current temperature. If the current temperature is higher than 37 degrees, the camera is asked to take a picture and save it locally.

The application code is as follows.
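A minimal sketch of this logic, assuming the paho-mqtt client library and a hypothetical in-cluster camera service named deviceshifu-camera (both assumptions, not the original listing):

import sys
import time

import paho.mqtt.client as mqtt
import requests

CAMERA_CAPTURE_URL = "http://deviceshifu-camera/capture"  # hypothetical service name


def capture():
    # Ask the camera's digital twin for a photo and save it locally
    resp = requests.get(CAMERA_CAPTURE_URL)
    with open(f"capture-{int(time.time())}.jpg", "wb") as f:
        f.write(resp.content)


def on_message(client, userdata, msg):
    # Each message carries only a number: the current temperature
    if float(msg.payload.decode()) > 37:
        capture()


client = mqtt.Client()
client.on_message = on_message
client.connect(sys.argv[1], 1883)  # broker IP passed on the command line
client.subscribe("temperature-shifu-mqtt")
client.loop_forever()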

Add a capture function to encapsulate all camera actions. Then we can deploy it to the cluster and start monitoring.

python3 app.py 10.244.0.33

Summary

This article shows how EMQX empowers Shifu with more efficient MQTT broker capabilities, and how Shifu works with MQTT to provide linkage capabilities to devices. In a real-world scenario, we can use a cheap combination of an IR thermometer and a camera to replace thermal cameras that cost thousands of dollars and are unstable, saving huge costs in large-scale deployments.

· 4 min read

The popular open source project KubeEdge provides developers with a cloud-edge collaboration solution based on Kubernetes. It successfully integrates the cluster orchestration capabilities of Kubernetes into the IoT edge scenario, making the scheduling and management of edge computing power lighter and more efficient.

As an open source IoT development framework, Shifu is also based on Kubernetes, and its compatibility with diverse devices plus its virtualization capability help KubeEdge be applied at the edge. In fact, the two complement each other very well: besides being compatible with many devices, Shifu running on KubeEdge can easily manage the lightweight Pods running on the edge.

With the powerful combination of KubeEdge + Shifu, we can abstract IoT devices into APIs and turn the traditional, complex IoT development model into a simple web development model!

Let's see how to make Shifu run on KubeEdge and provide value for developers!

Introduction

This article will briefly describe the steps to deploy Shifu on KubeEdge and integrate a Hikvision camera (using RTSP for video streaming), so that the KubeEdge architecture can support Hikvision cameras.

The architecture used in this article is as follows.

Preparation

The following services and tools are used in this article.

  1. Kubernetes: 1.21.5
    • kubectl
    • kubeadm
    • kubelet
  2. Golang: 1.16.10
  3. Docker: 19.03.9
  4. KubeEdge: 1.7.2

Meanwhile, the Cloud side and Edge side of KubeEdge run on separate Linux instances, both on Ubuntu Server 20.04.

You need to install all of the above services and tools on the Cloud side, but only Docker and KubeEdge on the Edge side.

Step 1 Deploy Kubernetes on the Cloud side

You can follow the official Kubernetes tutorial to finish the deployment.

After the deployment is complete we should see the terminal print out the following message.

Step 2 Deploy Shifu on Cloud side

Clone the Github repository of Shifu to your computer.

git clone https://github.com/Edgenesis/shifu.git

Then you can deploy Shifu with the following command:

kubectl apply -f shifu/pkg/k8s/crd/install/shifu_install.yml

Once deployed we should see that the CRD controller for Shifu has been deployed.

Step 3 Deploy KubeEdge on Cloud side

For this step, you can refer to KubeEdge's official tutorial and use keadm for deployment.

After the deployment, we should see the following message printed on the terminal.

Step 4 Get the token on the Cloud side

Run the following command.

keadm gettoken

Please save the token you got for the Edge side.

Now that the Cloud side configuration is completed, we switch to the Edge side machine and add it to the cluster.

Step 5 Join the cluster on the Edge side

Run the following command on the Edge side.

keadm join --cloudcore-ipport="<Cloud-side advertise-address>:10000" --token=<token obtained in step 4>

After the deployment is complete we should see the terminal print out the following message.

At this point switch back to the Cloud side and look at the nodes.

We can see that both the Cloud side and the Edge side have been deployed.

Now we can start deploying the device.

With KubeEdge, we can perform Kubernetes operations on the Cloud side only and deploy to the Edge side, while keeping the Edge side free of Kubernetes components so it stays lightweight.

Step 6 Modify the Hikvision camera configuration file on the Cloud side

Shifu needs only a simple configuration file to generate a digital twin. In Shifu, the digital twin is called deviceShifu and runs in the cluster as a Pod.

Shifu provides a configuration file for accessing Hikvision cameras; the path is https://github.com/Edgenesis/shifu/tree/main/examples/rtspDeviceShifu/.

By default, Shifu deploys deviceShifu on machines with a full Kubernetes instance. In a KubeEdge environment there is no need to run full Kubernetes on the edge, so Shifu also provides a lightweight deviceShifu for cloud-edge collaboration. We can change deviceshifu-camera-deployment.yaml to use the edge-side deviceShifu and add nodeName to deploy it to the edge node, as sketched below:
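Here edge-node is a placeholder for the Edge-side node name shown by kubectl get nodes (an assumption; only the added field is shown, the rest of the Deployment stays as in the example file):

# deviceshifu-camera-deployment.yaml (excerpt)
spec:
  template:
    spec:
      nodeName: edge-node  # pin the deviceShifu Pod to the edge node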

Step 7 Deploy the Hikvision Camera Pod

On the Cloud side, run the following command.

kubectl apply -f shifu/examples/rtspDeviceShifu/camera-deployment

At this point, we can look at the Pod associated with the camera.

Final step Confirmation on the Edge side

On the Edge side, we can see that the camera-related Docker container is already running:

We can simply call capture/stream/info/move and a host of other HTTP APIs provided by deviceShifu to operate the camera, as in the following animation.

The related command:

curl edgedevice-camera/move
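The other APIs are called the same way; for example, a hypothetical capture call that saves a snapshot locally:

curl edgedevice-camera/capture -o capture.jpg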

This completes all the steps we need to run Shifu on KubeEdge.

· 13 min read

In this article, we will first write a GPIO driver to control LED using Python, and then connect it to Shifu for interaction and management.

Create the driver

Objective

  • Complete simple LED circuit connection
  • Basic Raspberry Pi/SSH configuration
  • Basic Python syntax and knowledge of GPIO libraries

Devices

  • Raspberry Pi 3B+ running 64-bit Raspberry Pi OS
  • 1 breadboard
  • 3 LED bulbs (red, yellow, green)
  • 1 x 330 ohm resistor

Basic knowledge

  • Simple Python syntax
  • Basic Linux command line operations (create files, install apps, SSH, run programs)

Step 1 Circuit design

First let's design the circuit. We need to design a circuit that will allow the GPIO output of the Raspberry Pi to control the on/off of a single LED bulb. In this article, we have used the most straightforward approach, i.e., using GPIO directly to power the LED bulbs.

The circuit diagram is as follows.

GPIO pins 22, 23, and 19 control the red, green, and yellow LEDs respectively. 330 ohm resistors are connected in series at the end to prevent the LEDs from burning out due to excessive current.

Step 2 Circuit Implementation

According to the Pin Layout in the official Raspberry Pi documentation, we can see the exact location of the pins, here pins 15, 16, 35 and 39 are used.

Connect these four pins to the female port of the cable, and then connect the remaining circuits on the breadboard too.

The red, green, yellow, and gray cables in the diagram correspond to GPIO22, GPIO23, GPIO19, and ground, respectively.

This concludes the circuit design and connectivity section.

Step 3 Raspberry Pi Preparation

First, install an operating system on the Raspberry Pi; the one used in this article is Raspberry Pi OS (64-bit) (download link).

Insert the SD card into a reader, connect it to your computer's USB port, and flash the downloaded image onto the SD card via balenaEtcher (balenaEtcher link).

Insert the SD card into the Raspberry Pi, plug in the power and monitor cable, and we can start the configuration.

First, for development/debugging we need to enable SSH, open a terminal from the desktop and enter sudo raspi-config to access the configuration screen and select Interface Options.

Select SSH:

Press enter, then press left to select Yes to enable the SSH service:

After that, press right and select Finish, then enter to exit.

At this point, the SSH service is on, but we need to know the IP of the Raspberry Pi in order to SSH, so we can check it with the built-in ip addr command.

As you can see, the IP address is 192.168.15.122.

Back on the computer, you can remotely access the Raspberry Pi from the command line via ssh pi@192.168.15.122.

This concludes the preparation of the Raspberry Pi and the hardware.

note

Earlier versions of Raspberry Pi OS may need GPIO to be enabled manually.

Step 4 Write a Driver

Everything is in place, now let's write the first driver!

First make sure that Python is installed on your system; if not, you can run the following command.

$ sudo apt-get update && sudo apt install python3 -y

After installation, you can check the status of the installation with python -V, if it shows the version, then the installation is successful:

$ python -V
Python 3.9.2

Next let's start by controlling a single LED bulb; we want to toggle the red LED on/off using the following code.

The modules used in the driver are:

  • RPi.GPIO to control the GPIO of the Raspberry Pi
  • argparse is used to parse command line input

First set the GPIO mode to GPIO.BCM; in this mode the pin numbers are the GPIO numbers, not the physical pin order on the Raspberry Pi board.

GPIO.setmode(GPIO.BCM)

Then turn warnings off, since in this article the Raspberry Pi is controlled by this driver alone.

GPIO.setwarnings(False)

Next, process the program's input; this driver accepts two arguments.

  1. -p, --pin, the GPIO pin that the program manipulates
  2. -o, --operate, the operation to perform on the GPIO pin: on represents 1, i.e. 3.3V in the circuit; off represents 0, i.e. 0V in the circuit

The code is as follows

parser = argparse.ArgumentParser()
parser.add_argument("-p", "--pin", type=int, default=None, help="Specify the GPIO pin to operate, e.g.: '17'")
parser.add_argument("-o", "--operate", type=str, default=None, help="Specify the GPIO output, e.g.: 'on/off'")
args = parser.parse_args()

Next we need to handle errors: the arguments are passed into the turnOnLed function when both the pin and the operation are non-empty; otherwise a warning is printed.

if args.pin and args.operate:
    turnOnLed(args.pin, args.operate)
else:
    print("need to specify both pin and operate arguments, type --help for more information")

The main part of the program ends, so let's look at the function turnOnLed that controls the LED bulb.

The first thing is to determine whether the operate variable is on or off; anything else causes the function to return. The output variable gpio_out is set to GPIO.HIGH when the variable is on and to GPIO.LOW when it is off; these two levels turn the LED on or off.

if operate == "on":
    gpio_out = GPIO.HIGH
elif operate == "off":
    gpio_out = GPIO.LOW
else:
    print("operate is neither on/off, quitting...")
    return

The last thing is to set the mode of the pin to output: GPIO.setup(pin, GPIO.OUT)

and switch the pin's output to on/off: GPIO.output(pin, gpio_out)
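Assembled from the fragments above, the complete driver reads as follows (a sketch; identical in substance to the pieces already shown):

# led_driver.py
import argparse

import RPi.GPIO as GPIO


def turnOnLed(pin, operate):
    # map on/off to the GPIO output levels, quit on anything else
    if operate == "on":
        gpio_out = GPIO.HIGH
    elif operate == "off":
        gpio_out = GPIO.LOW
    else:
        print("operate is neither on/off, quitting...")
        return
    GPIO.setup(pin, GPIO.OUT)    # set the pin mode to output
    GPIO.output(pin, gpio_out)   # switch the pin's output


if __name__ == "__main__":
    GPIO.setmode(GPIO.BCM)       # pin numbers are GPIO numbers, not board order
    GPIO.setwarnings(False)
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--pin", type=int, default=None, help="Specify the GPIO pin to operate, e.g.: '17'")
    parser.add_argument("-o", "--operate", type=str, default=None, help="Specify the GPIO output, e.g.: 'on/off'")
    args = parser.parse_args()
    if args.pin and args.operate:
        turnOnLed(args.pin, args.operate)
    else:
        print("need to specify both pin and operate arguments, type --help for more information")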

Outcome

The program can be executed using python led_driver.py -p {pin #} -o {operate}

If we want to make the red bulb light up, we run python led_driver.py -p 22 -o on

And that's it: a simple LED-control driver for the Raspberry Pi!

note

This driver essentially manipulates the Raspberry Pi's GPIO pins, so we can also use it to drive any circuit that can be controlled by 3.3V; the pins are not limited to the 22, 23, and 19 used in this article. You can use your imagination to create a variety of test circuits.

Integrate into Shifu

Next, we use the driver we wrote to access the Shifu framework for interaction and management.

The Shifu architecture in this article is as follows.

The northbound side exposes the HTTP API via deviceshifu-http-http, and the southbound side interacts with the actual device via rpi-gpio-driver.

Objectives

  1. Install a k3s cluster on the Raspberry Pi and install the Shifu framework
  2. Package the Raspberry Pi LED driver into a container image
  3. Deploy the digital twin of the Raspberry Pi LED in Shifu
  4. Achieve remote automated control of the Raspberry Pi LEDs

Devices

  • Raspberry Pi 3B+ running 64-bit Raspberry Pi OS

Basic knowledge

  • Basic operation of Docker/containerd
  • Basic operation of K8s/K3s

Step 1 Install k3s

First we need a Kubernetes cluster running on the Raspberry Pi. There is no restriction on which distribution you use, but to save resources this article uses k3s (Installation Tutorial).

After installation, run kubectl version to see the current Kubernetes version.

Use kubectl get nodes to see the status of the current cluster, showing Ready means the cluster is available:

At this point, the k3s installation is complete.

Step 2 Install Shifu

First clone the Shifu project repository on your computer:

$ git clone https://github.com/Edgenesis/shifu.git

You can deploy Shifu to the k3s cluster with one click by using kubectl apply -f shifu/pkg/k8s/crd/install/shifu_install.yml.

Execute kubectl get pods -A again to see the Shifu Framework controller deployed to the cluster:

We can also manage device resources (currently there are no devices) via the edgedevices CRD:

At this point, Shifu is successfully installed.

Step 3 Package the driver

We need a small utility provided by Shifu that lets us operate the local driver remotely; it converts HTTP requests sent by the user/program into commands executed on the local command line.

A sample driver is provided inside the tutorial at https://github.com/Edgenesis/shifu/blob/main/examples/driver_utils/simple-alpine/Dockerfile.sample

The content is as follows.

You can see that the Dockerfile has two parts: the first uses the golang image to compile the http_to_ssh_stub.go provided by Shifu, which converts HTTP commands into SSH commands. The second uses a plain alpine image to configure SSH for the demonstration.

Next let's practice it.

Considering the limited resources of the Raspberry Pi, compilation will be done on the computer side, and the built image will be pushed to Docker Hub so it can be pulled remotely.

First, create a new folder (here we use dev) and save the Raspberry Pi LED driver created earlier into this directory:

dev/
└── led_driver.py

The driver content remains the same.

Copy Dockerfile.sample from the examples/driver_utils/simple-alpine/ directory of the Shifu project to the dev directory.

dev/
├── Dockerfile.sample
└── led_driver.py

Change the following fields: switch the second stage of the image from alpine to python:alpine and install the RPi.GPIO Python library.

Finally, copy the Python driver into the runtime container. The new Dockerfile looks like this, with the changes marked by comments.

FROM golang:1.17.1 as builder

WORKDIR /

ENV GOPROXY=https://goproxy.cn,direct
ENV GO111MODULE=on
ENV GOPRIVATE=github.com/Edgenesis

COPY driver_util driver_util

WORKDIR /driver_util
RUN go mod download

# Build the Go app
RUN CGO_ENABLED=0 GOOS=$(go env GOOS) GOARCH=$(go env GOARCH) go build -a -o /output/http2ssh-stub http_to_ssh_stub.go

# modified: use python:alpine instead of alpine
FROM python:alpine

RUN apk add --no-cache --update openrc openssh \
    && mkdir -p /run/openrc \
    && touch /run/openrc/softlevel \
    && sed -ie "s/#PubkeyAuthentication/PubkeyAuthentication/g" /etc/ssh/sshd_config \
    && sed -ie "s/#PasswordAuthentication yes/PasswordAuthentication no/g" /etc/ssh/sshd_config \
    && sed -ie "s/AllowTcpForwarding no/AllowTcpForwarding yes/g" /etc/ssh/sshd_config \
# modified: accept ssh-rsa keys
    && echo "PubkeyAcceptedKeyTypes=+ssh-rsa" >> /etc/ssh/sshd_config \
    && ssh-keygen -A \
    && passwd -d root \
    && mkdir ~/.ssh \
    && while ! [ -e /etc/ssh/ssh_host_rsa_key.pub ]; do sleep 1; done \
    && cp /etc/ssh/ssh_host_rsa_key.pub ~/.ssh/authorized_keys

# modified: install the RPi.GPIO Python library
RUN apk add --no-cache -Uu --virtual .build-dependencies libffi-dev openssl-dev build-base musl \
    && pip3 install --no-cache --upgrade RPi.GPIO \
    && apk del --purge .build-dependencies \
    && apk add --no-cache --purge curl ca-certificates musl \
    && rm -rf /var/cache/apk/* /tmp/*

WORKDIR /root/

COPY --from=builder /output/http2ssh-stub http2ssh-stub
COPY --from=builder /driver_util/examples/simple-alpine/docker-entrypoint.sh docker-entrypoint.sh
# modified: copy the Python driver into the image
COPY dev/led_driver.py led_driver.py
RUN chmod +x docker-entrypoint.sh

# Command to run the executable
ENTRYPOINT ["./docker-entrypoint.sh"]

Next we package the Docker image. Because the Raspberry Pi has an ARM64 CPU while the computer used for compiling in this article is x86-64, we need Docker's buildx feature to build the image; a buildx tutorial is out of scope here, see https://docs.docker.com/buildx/working-with-buildx/.

Use docker buildx build --platform=linux/arm64 -f dev/Dockerfile.sample . -t edgehub/rpi-gpio-driver:v0.0.1 --push to build the image and push it to Docker Hub.

At this point, the image packaging part is complete.

Step 4 Deploy device twin to Raspberry Pi

Once we have the image, we can deploy the digital twin to the cluster, so let's prepare the files we need for the deployment.

First is a Kubernetes Deployment YAML file to run the deviceShifu and driver containers as a Pod.

deviceshifu-rpi-gpio-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: edgedevice-rpi-gpio-deployment
  name: edgedevice-rpi-gpio-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edgedevice-rpi-gpio-deployment
  template:
    metadata:
      labels:
        app: edgedevice-rpi-gpio-deployment
    spec:
      containers:
        - image: edgehub/deviceshifu-http-http:v0.0.1
          name: deviceshifu-http
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: edgedevice-config
              mountPath: "/etc/edgedevice/config"
              readOnly: true
          env:
            - name: EDGEDEVICE_NAME
              value: "edgedevice-rpi-gpio"
            - name: EDGEDEVICE_NAMESPACE
              value: "devices"
        - image: edgehub/rpi-gpio-driver:v0.0.1
          name: driver
          volumeMounts:
            - mountPath: /dev/gpiomem
              name: gpiomem
          securityContext:
            privileged: true
          ports:
            - containerPort: 11112
          env:
            - name: EDGEDEVICE_DRIVER_SSH_KEY_PATH
              value: "/etc/ssh/ssh_host_rsa_key"
            - name: EDGEDEVICE_DRIVER_HTTP_PORT
              value: "11112"
            - name: EDGEDEVICE_DRIVER_EXEC_TIMEOUT_SECOND
              value: "5"
            - name: EDGEDEVICE_DRIVER_SSH_USER
              value: "root"
      volumes:
        - name: edgedevice-config
          configMap:
            name: rpi-gpio-configmap-0.0.1
        - name: gpiomem
          hostPath:
            path: /dev/gpiomem
      serviceAccountName: edgedevice-sa

Please note that in the Deployment file we need to add privileged: true to the driver container's securityContext so that we can use the Raspberry Pi's GPIO inside the container, and mount the Raspberry Pi's /dev/gpiomem into the container as a volume.

Write a Kubernetes Service YAML file to proxy requests from the deviceShifu to the real Pod from the domain.

deviceshifu-rpi-gpio-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: edgedevice-rpi-gpio-deployment
  name: edgedevice-rpi-gpio
  namespace: default
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: edgedevice-rpi-gpio-deployment
  type: LoadBalancer

A Kubernetes ConfigMap YAML file to configure deviceShifu.

deviceshifu-rpi-gpio-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rpi-gpio-configmap-0.0.1
  namespace: default
data:
  driverProperties: |
    driverSku: RaspberryPiB+
    driverImage: edgenesis/rpi-gpio-python:v0.0.1
    driverExecution: "python led_driver.py"
  instructions: |
    pin:
    operate:
    help:
  # Telemetries are configurable health checks of the EdgeDevice.
  # Developers/users can configure certain instructions to be used as health checks
  # of the device. In this example, the device_health telemetry is mapped to
  # the "help" instruction, executed every 1000 ms.
  telemetries: |
    device_health:
      properties:
        instruction: help
        initialDelayMs: 1000
        intervalMs: 1000

In the ConfigMap we need to configure the driver's execution command. Because we put the Python file directly under the default path when building the image, python led_driver.py is all that's needed here. If the driver is a binary file, fill in the path to the binary instead.

Write a Shifu EdgeDevice YAML file to generate the device twin.

edgedevice-rpi-gpio-edgedevice.yaml
apiVersion: shifu.edgenesis.io/v1alpha1
kind: EdgeDevice
metadata:
  name: edgedevice-rpi-gpio
  namespace: devices
spec:
  sku: "RaspberryPi 3B+"
  connection: Ethernet
  address: 0.0.0.0:11112
  protocol: HTTPCommandline

Put these four files into the Raspberry Pi with the following directory contents.

led-deploy/
├── deviceshifu-rpi-gpio-configmap.yaml
├── deviceshifu-rpi-gpio-deployment.yaml
├── deviceshifu-rpi-gpio-service.yaml
└── edgedevice-rpi-gpio-edgedevice.yaml

Use kubectl apply -f <dir> to deploy deviceShifu to the k3s cluster:

Then check the running status with kubectl get pods.

View all device twins in the cluster with kubectl get edgedevices -n devices.

The details of the digital twin can be viewed via kubectl describe.

Next we can interact with the device. Here we deploy an nginx container to stand in for the application in a real-world scenario; the deployment command is kubectl run nginx --image=nginx.

Then execute kubectl exec -it nginx -- bash to get a shell in the nginx container.

Finally, use curl to send commands to the device. The driver accepts commands in the format python led_driver.py --pin <x> --operate <on/off>.

Shifu converts the HTTP request into that command line; the request address is written as http://edgedevice-rpi-gpio/pin?flags_no_parameter=<pin>,--operate,<on/off>
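For example, a request that lights the red LED on GPIO 22 from inside the nginx container (pin number taken from the circuit above):

curl "http://edgedevice-rpi-gpio/pin?flags_no_parameter=22,--operate,on"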

Outcome

The program can control the LED bulb on/off by sending an HTTP request directly to the device's domain name.

At this point, we have successfully plugged the Raspberry Pi driver into Shifu.