HUSKYLENS 2 Object Recognition Function Description
HUSKYLENS 2's Object Recognition feature identifies 80 types of objects, provides confidence scores, and allows parameter customization. This article explains how to use the feature effectively, set its parameters, and manage models for precise object identification.
1. Introduction to Object Recognition
The Object Recognition feature of HUSKYLENS 2 can identify 80 preset object categories. During detection, it automatically frames the target object and displays its name along with a confidence score (e.g. "potted plant: 25%").
The underlying object detection model is trained on the MS COCO dataset (80 categories). Actual objects may differ from the model's training data, which can lead to lower confidence scores during recognition. (Confidence indicates the model's certainty in its prediction under the current input conditions, representing the relative credibility the model assigns to the target belonging to a given category.)
This function provides general recognition of common everyday objects and cannot distinguish between objects within the same category. For example, it can only recognize "cat", not specific breeds. To distinguish between objects of the same category, use the "Self-Learning Classifier" function or a self-trained object detection model.
The identifiable objects are as follows:
person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat;
dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard;
sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple;
sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop;
mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush
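For use in project code, the 80 recognizable categories above can be kept as a list. The sketch below follows the ordering shown above (the standard MS COCO class order); note that the IDs HUSKYLENS 2 reports are the sequential learning IDs described later, not these list positions.

```python
# The 80 MS COCO category names recognized by HUSKYLENS 2's
# Object Recognition function, in the order listed above.
COCO80_CLASSES = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
    "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
    "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard",
    "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
    "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork",
    "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange",
    "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair",
    "couch", "potted plant", "bed", "dining table", "toilet", "tv",
    "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave",
    "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
    "scissors", "teddy bear", "hair drier", "toothbrush",
]

def is_recognizable(name: str) -> bool:
    """Return True if `name` is one of the 80 recognizable categories."""
    return name in COCO80_CLASSES
```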
2. Object Recognition Usage Guide
In this section, we will learn how to use the Object Recognition feature of HUSKYLENS 2 to identify objects in the image and recognize specified objects.
2.1 Selecting the Object Recognition Function
Power on HUSKYLENS 2, wait for successful startup, then locate and tap the "Object Recognition" function.
2.2 Observing Object Recognition Effect
Direct the HUSKYLENS 2 toward identifiable objects and view its screen. All recognizable objects on the screen will be enclosed by white bounding boxes, with their corresponding object names and confidence levels displayed. In machine learning, confidence level refers to the model's "certainty" in its prediction result. For example, "potted plant: 25%" indicates that under the object recognition function, the model considers there is a 25% probability that the object in the current screen is a "potted plant".
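The on-screen label is simply the predicted class name followed by the model's confidence. As a minimal sketch of that formatting (illustrative only; the `format_label` helper here is hypothetical and not part of the HUSKYLENS 2 API):

```python
def format_label(name: str, confidence: float) -> str:
    """Format a detection the way the screen displays it,
    e.g. 'potted plant: 25%'. `confidence` is a probability in [0, 1]."""
    return f"{name}: {round(confidence * 100)}%"
```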
2.3 Learning Object Observation Results
Users can learn any of the 80 object categories. The system will assign a unique ID number sequentially as objects are learned, which can then be used in project programs to perform logical judgments.
Learning Step: Point HUSKYLENS 2 at the target object (must be within the 80 categories listed above). When the white box frames the object, adjust the orientation of HUSKYLENS 2 so that the crosshair in the middle of the screen is inside the white box. Press Button-A on the top-right corner of HUSKYLENS 2 to learn this object. (If the target object is not framed by the white box, refer to the "Detection Threshold" in the "Parameter Settings" of this function and lower the threshold before learning.)
Once learning is complete, whenever a learned object is recognized, the screen will frame it with a colored bounding box and display "name: IDx confidence" above it, for example, "potted plant: ID1 56%". To learn additional objects, repeat the learning steps above.
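The sequential ID numbering described above can be modeled as follows. This is an illustration of the numbering convention only, not the device's internal implementation; on the device itself, an ID is assigned when Button-A is pressed.

```python
class LearnedObjects:
    """Model of HUSKYLENS 2's sequential ID assignment: the first
    learned category becomes ID1, the second ID2, and so on."""

    def __init__(self):
        self._ids = {}  # category name -> assigned ID

    def learn(self, name: str) -> int:
        """Assign the next sequential ID to `name` (or return its
        existing ID if it was already learned)."""
        if name not in self._ids:
            self._ids[name] = len(self._ids) + 1
        return self._ids[name]

    def id_of(self, name: str):
        """Return the ID of a learned category, or None if not learned."""
        return self._ids.get(name)
```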
3. Object Recognition Parameter Settings
The default parameters of HUSKYLENS 2 are sufficient for basic use, but each parameter can be adjusted manually to customize behavior. The parameters below all belong to the "Object Recognition" function, so first make sure you have entered the "Object Recognition" function, as illustrated in the figure.
The parameter to be modified can be selected by sliding left or right on the parameter labels at the bottom of the screen.
3.1 Forgetting the ID
To verify that an object has been forgotten, point HUSKYLENS 2 at the previously learned object. The screen will display a white box with the object's name and confidence level, but without an ID number, indicating that the "forget" action is complete.
3.2 Detection Threshold
The detection threshold controls the strictness of object recognition: a smaller threshold requires a lower confidence score to recognize an object, while a larger threshold requires a higher confidence score.
Setting steps: Click "Detection Threshold", and a parameter adjustment slider will appear above it. Sliding left decreases the threshold value, and sliding right increases it. See the figure below for the effect.
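The threshold's effect can be sketched as a simple confidence filter (an illustration of the behavior; the device applies this filtering internally):

```python
def filter_detections(detections, threshold):
    """Keep only detections whose confidence meets the detection
    threshold. `detections` is a list of (name, confidence) pairs,
    with confidence in [0, 1]."""
    return [(name, conf) for name, conf in detections if conf >= threshold]

# Example: raising the threshold drops low-confidence detections.
raw = [("potted plant", 0.25), ("cat", 0.80), ("chair", 0.55)]
```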
3.3 NMS Threshold
NMS Threshold is a common parameter in visual recognition used to filter detection boxes. In visual recognition tasks, models often predict multiple overlapping detection boxes around the same target object. Without filtering, a single object may be boxed by multiple overlapping frames, which can be resolved by adjusting the NMS threshold to remove duplicate boxes and retain only the optimal one.
In simple terms, the NMS threshold defines the overlap ratio at which two boxes are considered "duplicates". A low threshold (e.g. 0.3) marks boxes as duplicates even at modest overlap, removing more of them; a high threshold (e.g. 0.7) marks boxes as duplicates only when they overlap significantly, potentially retaining more boxes.
A high threshold suits dense or occluded scenes, capturing more objects but possibly producing multiple boxes per object. A low threshold suits clear scenes with well-separated objects, ensuring one box per object but potentially suppressing boxes of closely overlapping targets.
In the parameter settings of this function: Tap "NMS Threshold", and a parameter adjustment slider will appear above it. Sliding left decreases the value, sliding right increases it.
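For reference, the standard NMS procedure that this threshold controls can be sketched in a few lines. This is a generic illustration of the algorithm, not the device's exact implementation; boxes are (x1, y1, x2, y2) rectangles, each with a confidence score.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold):
    """Keep the highest-scoring box, then drop any remaining box whose
    IoU with an already-kept box exceeds `threshold` (a "duplicate").
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in kept):
            kept.append(i)
    return kept
```

Note how a lower threshold suppresses more aggressively: two boxes with IoU 0.68 survive together at threshold 0.7 but are merged to one at threshold 0.5.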
3.4 Restore Default
This parameter restores all settings to their default state and forgets the learned IDs, but does not clear the exported models (see below for model export).
Setting steps: Click "Restore Default", and after the "Restore Default Configuration" pop-up appears, click "Yes".
3.5 Export Model
This parameter allows saving and exporting the currently set parameters and learned IDs to the local memory of HUSKYLENS 2. It is suitable for scenarios such as migrating parameters to another HUSKYLENS 2. This operation does not require inserting a TF card.
Export steps: Tap "Export Model". When the "Save configuration to" pop-up appears, slide the number selector up or down to choose the model number to save to (up to 5 models can be saved), then tap the "Yes" button in the lower-left corner of the pop-up. After confirmation, the export will start automatically.
View Exported Model: After the "Exporting" pop-up disappears, you can view the exported model file on your computer.
First, connect the HUSKYLENS 2 to your computer's USB port.
Next, on your computer, using the path shown in the figure below, access the memory of HUSKYLENS 2. You will find two model-related files with the extensions .json and .bin. The number before the extension is the "Model Number" selected when saving the configuration. Both files can be copied and pasted to other locations.
3.6 Importing a Model
This parameter allows importing a model from one HUSKYLENS 2 (HUSKYLENS A) to another (HUSKYLENS B), enabling HUSKYLENS B to recognize the objects learned by HUSKYLENS A and display their IDs and names without re-teaching.
Import Steps:
Step 1: Connect HUSKYLENS A to the computer, then copy the two exported files to the desktop.
Step 2: Connect HUSKYLENS B to the computer, then paste the files from Step 1 into the specified folder on HUSKYLENS B, as shown in the figure. (If the "object-detection" folder is not found, first create the folder manually, then paste the files into it.)
Step 3: First, confirm that you have entered the "Object Recognition" function, then tap "Import Model". Once the "Load Configuration" pop-up appears, adjust the number slider to select the model to load; this should match the model number used when the files were saved. For example, if the model files you pasted onto HUSKYLENS B are config0.json and repo0.bin, select number 1. Finally, tap "Confirm" to import. Once the "Loading..." pop-up disappears, the import is complete.
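The example above (files config0.json and repo0.bin corresponding to slider number 1) suggests the on-screen model number is one greater than the number in the filenames. Assuming that mapping generalizes to all five slots (an assumption inferred from the single example in this article), it can be written as:

```python
def expected_filenames(model_number: int):
    """Expected model file names for an on-screen model number (1-5),
    assuming filenames are zero-indexed as in the config0/repo0 example."""
    if not 1 <= model_number <= 5:
        raise ValueError("HUSKYLENS 2 stores up to 5 models, numbered 1-5")
    idx = model_number - 1  # assumed off-by-one mapping to the filename
    return (f"config{idx}.json", f"repo{idx}.bin")
```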
Then you can perform an object recognition test. The left image below shows the result of HUSKYLENS B before model import, while the right image shows its result after importing the model.
