How to Deploy a Custom-Trained Model in HUSKYLENS 2

This guide shows how to deploy custom-trained visual recognition models on HUSKYLENS 2, covering both code-free methods using the Mind+ AI tools and Python-based YOLO model training, and notes the firmware update required for successful deployment.

1. Deploy Custom-Trained Models

Besides the built-in visual recognition functions of HUSKYLENS 2, users can also train their own models and deploy them to HUSKYLENS 2 to create their own unique visual recognition projects. To use this feature, please ensure HUSKYLENS 2 is updated to the latest firmware. Click Firmware Update to view the tutorial.

The following tutorials will explain:

  • How to train a model in a code-free way using the Mind+ AI tool and deploy it to HUSKYLENS 2 with one click (requires an internet connection on the computer).

  • How to train a model in a code-free way using the Mind+ AI tool, convert the model using the local computer's deployment environment, and deploy it to HUSKYLENS 2.

  • How to train a YOLO model using Python code and deploy it to HUSKYLENS 2.

If you have no experience training YOLO models and want to try it quickly, please refer directly to Section 1.1 No-Code Model Training and Deployment - Mind+ Server.

If you already have a trained YOLOv8n ONNX model, please read 1.2.4 Deploy the Model to HUSKYLENS 2.

1.1 No-Code Model Training and Deployment - Mind+ Server

Complete the model training according to this tutorial: MindPlus 2.0 Wiki - 4.2.2 [Object Detection] Quick Experience.

Once model training is complete, click 'Deploy to HUSKYLENS 2'.

Interface Diagram

  • Step 1: Select an icon

First, choose an appropriate icon for your model application. Several pre-selected icons are provided; you can select your preferred icon from the previews, or click + to upload a custom icon image. We recommend PNG images with transparent backgrounds, white-line artwork, and a resolution of 60×60 pixels.
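If you upload a custom icon, a quick way to verify it meets the 60×60 size requirement is to read the width and height straight out of the PNG header. This is a standard-library sketch only; the transparent background and white-line style still need a visual check.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_size(path):
    """Return (width, height) of a PNG by reading its IHDR chunk."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != PNG_SIG:
        raise ValueError("not a PNG file")
    # Bytes 16-24 hold width and height as big-endian uint32 in the IHDR chunk
    return struct.unpack(">II", header[16:24])
```

For example, `png_size("my_icon.png")` should return `(60, 60)` for a suitable icon.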

  • Step 2: Enter App name and Title

Set the App Name and Title based on the model's functionality. For example, if you want to use this model for supermarket product recognition, set App Name and Title to 'Product Recognition'.

  • Step 3: Click Start Conversion

Click 'Start Conversion' to begin the process.

Interface Diagram

Wait for the message 'model converted successfully' to appear, then click 'Download to Computer'.

Interface Diagram

Save the converted model compressed file to your computer.

Interface Diagram

Connect your computer to HUSKYLENS 2 using a Type-C cable. Once connected, a disk named 'Huskylens' will appear on your computer.

Copy the generated model ZIP file to the following directory on the 'Huskylens' disk: '\storage\installation_package'.

Interface Diagram

Tap the HUSKYLENS 2 screen to wake it up (if the screen is off), then navigate and tap to enter the 'Model Installation' menu.

Interface Diagram

Select 'Local Installation' from the menu. After a successful installation, the screen will display the interface shown in the diagram below.

Interface Diagram

At this point, observe the HUSKYLENS 2 screen. If a new function named "Product Recognition" appears, the self-trained model has been successfully imported into HUSKYLENS 2.

Interface Diagram

You can tap to enter the "Product Recognition" function and observe the recognition effect of the self-trained model.

Interface Diagram

1.2 Code-Trained Model Deployment to HUSKYLENS 2

1.2.1 Environment Setup

To train a YOLO model via Python code, a recent Python version and the relevant dependency libraries are required. We recommend using Miniconda to manage each Python project in its own environment and avoid conflicts between multiple Python versions.
Click to download and install Miniconda.
After installation, locate Anaconda PowerShell on your computer and launch it to open a terminal.

Interface Diagram

Copy the following command: conda --version

Press Enter. If a Conda version number is displayed, the installation was successful. If an error occurs, troubleshoot using the error message.

Interface Diagram

Enter the following command in Anaconda PowerShell: mkdir Custom_Model

Press Enter; this creates a folder named "Custom_Model". The following output will appear after the command is executed.

Interface Diagram

Next, enter cd followed by the folder path.

For example: cd "C:\Users\Alla.Fang\Custom_Model"

Press Enter, using the actual path displayed on your computer. After successful execution, you will be in the Custom_Model folder directory.

Interface Diagram

Enter the command: conda create --name myenv312 python=3.12

Press Enter to create an isolated Python 3.12 environment.

Interface Diagram

The successful execution result is as follows:

Interface Diagram

Enter the command: conda activate myenv312

Press Enter to activate the newly created environment.

Interface Diagram

Enter the command: pip install ultralytics

Press Enter to install the dependency libraries required for training the YOLO model.

Interface Diagram

After installation is complete, enter yolo to verify if the installation was successful. If the following screen appears, the installation is successful.

Interface Diagram

Do not close this PowerShell window; we will train the YOLO model in it next.

1.2.2 Prepare the YOLO Dataset

To train a YOLOv8 model, you need to prepare a dataset in YOLO format.

The YOLO format dataset is a standard data format for training YOLO models. The file structure of a standard YOLO dataset is as follows:

dataset/
  images/
    train/
      image1.jpg
      image2.jpg
      ...
    val/
      image3.jpg
      image4.jpg
      ...
  labels/
    train/
      image1.txt
      image2.txt
      ...
    val/
      image3.txt
      image4.txt
      ...
  data.yaml

Here, take a YOLO-format dataset named dataset as an example. It includes a folder named images for storing images, a folder named labels for storing annotation files corresponding to the images, and a configuration file named data.yaml that describes the dataset's configuration information.

Interface Diagram
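The empty skeleton above can also be created programmatically when assembling your own dataset. A minimal sketch using only the standard library:

```python
from pathlib import Path

def make_yolo_skeleton(root="dataset"):
    """Create the empty YOLO dataset folder structure shown above."""
    root = Path(root)
    for split in ("train", "val"):
        (root / "images" / split).mkdir(parents=True, exist_ok=True)
        (root / "labels" / split).mkdir(parents=True, exist_ok=True)
    # Placeholder config file; fill in paths, class count, and names later
    (root / "data.yaml").touch()
    return root
```

After running `make_yolo_skeleton()`, copy your images and annotation .txt files into the matching train/ and val/ subfolders.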

data.yaml is a key configuration file that tells the model where the data is located and what the classes are.
In data.yaml, you need to specify the file paths of the training and validation sets, the number of classes, and the list of class names. This lets the model "understand" the dataset structure, including the storage location of the data and the class information to be recognized.
During training and validation, the model reads images and annotations according to this file. Refer to the data.yaml in the dataset folder below to see what information a YOLO dataset's YAML file needs to include.

Interface Diagram
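For reference, an illustrative data.yaml for a small two-class dataset might look like the following (the class names here are examples, not from the sample dataset):

```yaml
path: ./dataset        # dataset root directory
train: images/train    # training images, relative to path
val: images/val        # validation images, relative to path
nc: 2                  # number of classes
names:
  0: apple
  1: banana
```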

To create a dataset for object detection training, refer to the official Ultralytics tutorial.

If you don’t have an existing dataset but want to quickly experience YOLO model training, we recommend clicking here to download the sample dataset as a test dataset.

This is a test dataset containing only a small number of images, intended for testing purposes; models trained on it will likely have very low accuracy. Extract the downloaded ZIP file to the Custom_Model folder.

Interface Diagram

1.2.3 Train the Model with Python Code

Return to the Anaconda PowerShell window you opened earlier, enter the following command, and press Enter to start model training:

yolo detect train data=./dataset/coco8.yaml model=yolov8n.pt imgsz=320 epochs=10 project=output name=my_yolov8_run

Let’s take a look at the composition of this model training command:

yolo detect train This command initiates the object detection training process for the YOLO model. yolo is the tool command for the YOLO series, and detect indicates that this is a training task for an object detection model.

data=./dataset/coco8.yaml This parameter specifies the path to the configuration file of the dataset used for training.

model=yolov8n.pt This parameter specifies the path to the pre-trained model. yolov8n.pt means using the YOLOv8n pre-trained model.

Note: Currently, only the pre-trained model of yolov8n.pt is supported.

imgsz=320 This parameter specifies the input image size. imgsz=320 indicates that the input images will be resized to 320x320 pixels, which meets the requirements of the YOLOv8 model. The input image size determines the scaling ratio of images during training; generally, a smaller image size speeds up training but may reduce model accuracy.

Note: Currently, only the sizes 320 and 640 are supported.

epochs=10 This parameter specifies the number of training epochs. epochs=10 means the model will undergo 10 complete training cycles, i.e., training on the entire training dataset 10 times.

project=output This parameter specifies the folder for saving training results. project=output means the training output (model weights, logs, images, etc.) will be saved to a folder named output.

name=my_yolov8_run This parameter specifies the name of the current training experiment; this name is used as the directory name for saving model weights, logs, and other related files.
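When scripting several training runs with different settings, the command above can be composed programmatically. This is a minimal sketch (pure standard library); the two assertions mirror the yolov8n.pt and image-size notes above:

```python
def yolo_train_cmd(data, model="yolov8n.pt", imgsz=320, epochs=10,
                   project="output", name="my_yolov8_run"):
    """Compose the `yolo detect train ...` command line used above."""
    # HUSKYLENS 2 currently supports only the yolov8n.pt base model
    assert model == "yolov8n.pt", "only yolov8n.pt is supported"
    # Only 320 or 640 input sizes are supported
    assert imgsz in (320, 640), "imgsz must be 320 or 640"
    return (f"yolo detect train data={data} model={model} "
            f"imgsz={imgsz} epochs={epochs} project={project} name={name}")

print(yolo_train_cmd("./dataset/coco8.yaml"))
```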

Interface Diagram

After successful training, check the Custom_Model folder—an output folder will be generated.

The trained model files are located in the Custom_Model\output\my_yolov8_run\weights directory.

Interface Diagram

Enter the following command and press Enter to convert the .pt model to an ONNX model.

Command: yolo export model=output/my_yolov8_run/weights/best.pt format=onnx imgsz=320 dynamic=False

Interface Diagram

Once completed, the best.onnx file will appear in the following path: Custom_Model\output\my_yolov8_run\weights

Interface Diagram

1.2.4 Deploy the Model to HUSKYLENS 2

  • Install the convert software

Download and install .NET 7.0.
Select the version appropriate for your computer's operating system; for 64-bit Windows, choose Windows x64.

Interface Diagram

Download: onnx2kmodel.

Interface Diagram

Place the downloaded compressed package in the Custom_Model folder directory and extract it.

Interface Diagram

Ensure the directory structure matches the following diagram.

Interface Diagram

Enter the following command in PowerShell to navigate to the directory where the model conversion tool is located.
Command: cd ".\onnx2kmodel-master"

Interface Diagram

Then enter the following command to install the required dependencies.
Command: pip install -r requirements.txt

Interface Diagram

Enter the following command in the terminal to install the package: pip install nncase_kpu-2.10.0-py2.py3-none-win_amd64.whl

The following output indicates a successful installation.

Interface Diagram

Next, enter the following command to launch the software: python app.py

Interface Diagram

After launching the software, select Custom Model in the top-right corner.

Interface Diagram

  • Upload the Model

Locate the Custom_Model folder on your computer and create a new subfolder named application within it. Then follow these steps:
Copy the coco8.yaml file and the images folder from Custom_Model\dataset to the application folder.

Copy the best.onnx file from Custom_Model\output\my_yolov8_run\weights to the same application folder.

Finally, rename coco8.yaml to data.yaml.

Interface Diagram

Note: The names of these three items in the application folder must match those shown in the above diagram; do not modify them.

Ensure the format of the images folder is consistent with that of the image folder in the YOLO dataset.

  images/
    train/
      image1.jpg
      image2.jpg
      ...
    val/
      image3.jpg
      image4.jpg
      ...
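The copy-and-rename steps above can be sketched as a small helper; the paths are the ones used in this tutorial, so adjust them if your layout differs:

```python
import shutil
from pathlib import Path

def build_application_dir(dataset_dir, weights_dir, app_dir):
    """Assemble the 'application' folder expected by the conversion tool.

    dataset_dir: folder holding coco8.yaml and an images/ subfolder
    weights_dir: folder holding the exported best.onnx
    app_dir:     destination 'application' folder (created if missing)
    """
    app = Path(app_dir)
    app.mkdir(parents=True, exist_ok=True)
    # Copy the dataset config and rename it to data.yaml
    shutil.copy(Path(dataset_dir) / "coco8.yaml", app / "data.yaml")
    # Copy the images/ tree (train/ and val/ splits)
    shutil.copytree(Path(dataset_dir) / "images", app / "images",
                    dirs_exist_ok=True)
    # Copy the exported ONNX model
    shutil.copy(Path(weights_dir) / "best.onnx", app / "best.onnx")
    return app
```

For example, `build_application_dir("Custom_Model/dataset", "Custom_Model/output/my_yolov8_run/weights", "Custom_Model/application")` produces the same result as the manual steps.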

Click the button named Custom Directory in the model conversion software, then select the application folder we just created to upload it.

Interface Diagram

  • Select Icon

Click the Select Icon button, locate the icon image in the onnx2kmodel-master folder, and click to upload it. This image will serve as the icon for the model when uploaded to HUSKYLENS 2. You can also upload a custom icon image; we recommend a PNG image with a resolution of 60×60 pixels, a transparent background, and white-line artwork.

Interface Diagram

  • Set Name

In the AppName section, set the Chinese and English names for the model application. For example, if you want the trained YOLO model to be mainly used for identifying supermarket products as shown in the diagram below, you can name it "商品识别" (Chinese) with the corresponding English name "Product Recognition".

Interface Diagram

Click the Convert and Package button. When the "please wait" prompt disappears and the "Convert and Package" button reverts to its original state, it indicates that the model conversion and packaging process is complete.

Interface Diagram

A new ZIP file will appear in the Custom_Model\onnx2kmodel-master folder; this is the packaged model application file.

Interface Diagram

Connect your computer to HUSKYLENS 2 using a Type-C cable. Once connected, a disk named 'Huskylens' will appear on your computer.

Copy the generated model ZIP file to the following directory on the 'Huskylens' disk: \storage\installation_package

Interface Diagram

Tap the HUSKYLENS 2 screen to wake it up (if the screen is off), then navigate and tap to enter the Model Installation menu.

Interface Diagram

Select "Local Installation" from the menu. After a successful installation, the screen will display the interface shown in the diagram below.

Interface Diagram

At this point, observe the HUSKYLENS 2 screen. If a new function named "Product Recognition" appears, the self-trained model has been successfully imported into HUSKYLENS 2.

Interface Diagram

You can tap to enter the "Product Recognition" function and observe the recognition effect of the self-trained model.

Interface Diagram
