HUSKYLENS 2 Eye Gaze Function Description
The HUSKYLENS 2 provides an Eye Gaze function that detects human gaze angles and directions. Users can adjust settings such as the Detection Threshold and Recognition Threshold, and manage models through the export/import features. This guide covers selecting the function, observing detection results, and learning gaze directions, along with the available parameter adjustments. It is intended for applications that need precise gaze detection.
1. Introduction to Eye Gaze
HUSKYLENS 2 is equipped with the Eye Gaze function. The Eye Gaze function can detect the gaze angle and viewing direction of human eyes, and determine which direction in physical space a person is looking at and where the line of sight falls.
2. Eye Gaze Function Description
In this section, we will learn how to use HUSKYLENS 2 to detect human eye gaze direction and the point of regard.
2.1 Select the Eye Gaze function
Power on HUSKYLENS 2. After startup is successful, swipe the screen to find the Eye Gaze function.
2.2 Observe Eye Gaze Detection Result
Point HUSKYLENS 2 at a scene containing a human face. An arrow will appear near the eyes; the arrow's direction represents the eye gaze direction, and the arrow points toward the target of the line of sight.
At the same time, an angle value and a numerical value will appear on the screen, representing the angle and length of the gaze point projected onto the plane. For example, the screen display (350°, 247) means that after the eye gaze direction is projected onto the detection plane, the angle with the baseline is 350°, and the normalized length value along this projection direction is 247.
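The on-screen pair can be read like polar coordinates: an angle against the baseline and a normalized projection length. As a sketch, the pair can be converted to x/y offsets on the detection plane (the axis convention, counter-clockwise from a horizontal baseline, is an assumption, not documented behavior):

```python
import math

def gaze_to_xy(angle_deg: float, length: float):
    """Convert the on-screen (angle, length) pair to x/y offsets.

    Assumes the angle is measured counter-clockwise from the horizontal
    baseline -- the actual axis convention of HUSKYLENS 2 may differ.
    """
    rad = math.radians(angle_deg)
    return length * math.cos(rad), length * math.sin(rad)

# The example reading (350°, 247) from the screen:
x, y = gaze_to_xy(350, 247)
print(round(x, 1), round(y, 1))  # a point just below the baseline
```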
2.3 Learn Eye Gaze and observe the result
Point HUSKYLENS 2 at a person looking in the gaze direction you want to learn, then press Button-A on the top-right corner of HUSKYLENS 2 to learn this gaze direction.
After learning is complete, when a learned gaze direction is recognized, a colored arrow will appear, and the ID and confidence level will be displayed after the angle and length values. Example: 345° 243 ID1 96%.
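If you log such readings (for example over the serial output), they can be split back into fields. A small sketch, assuming the display format reads like "345° 243 ID1 96%" (the exact on-screen format is an assumption):

```python
import re

# Hypothetical helper: pull angle, length, ID, and confidence out of a
# reading like "345° 243 ID1 96%".
PATTERN = re.compile(r"(\d+)°\s+(\d+)\s+ID(\d+)\s+(\d+)%")

def parse_reading(text: str):
    m = PATTERN.fullmatch(text.strip())
    if m is None:
        return None  # no learned ID recognized (white arrow, no ID shown)
    angle, length, gaze_id, confidence = map(int, m.groups())
    return {"angle": angle, "length": length,
            "id": gaze_id, "confidence": confidence}

print(parse_reading("345° 243 ID1 96%"))
```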
3. Eye Gaze Parameter Settings
The default factory parameters of HUSKYLENS 2 meet basic usage requirements. If you need more refined functions, you can manually adjust each parameter. All the following parameters are based on the Eye Gaze function. Therefore, first make sure you have entered the Eye Gaze function as shown in the figure.
Select the parameter you want to modify by swiping left or right on the parameter text at the bottom of the screen.
3.1 Forget ID
To forget all previously learned eye gaze directions:
First, tap Forget ID on the screen.
Second, when the pop-up Forget all IDs and names appears, tap Yes.
Now point HUSKYLENS 2 at the gaze direction that was previously learned and then forgotten. A white arrow will appear on the HUSKYLENS 2 screen without an ID or confidence level, indicating the forget operation succeeded.
3.2 Detection Threshold
Detection Threshold controls the sensitivity of Eye Gaze detection: the lower the threshold, the looser the detection conditions for eye gaze direction; the higher the threshold, the higher the confidence score required (stricter judgment conditions).
Setting steps: Tap Detection Threshold. A parameter adjustment slider will appear above it.
Swipe left to decrease the value, swipe right to increase the value, as shown in the figure.
3.3 Recognition Threshold
Recognition Threshold controls the sensitivity for recognizing learned gaze directions: The lower the threshold, the higher the recognition sensitivity — slight sight deviations near the learned gaze direction can be accurately matched and recognized. The higher the threshold, the lower the recognition sensitivity; only when the confidence level exceeds the recognition threshold will the gaze direction be judged as a valid match and output.
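Functionally, both thresholds act as confidence cut-offs. A minimal sketch of the rules described in sections 3.2 and 3.3 (the 0–100 score scale and the exact comparison operators are assumptions):

```python
def passes_detection(detection_score: float, detection_threshold: float) -> bool:
    """An eye/gaze candidate is detected only if its score clears
    the detection threshold -- lower threshold, looser detection."""
    return detection_score >= detection_threshold

def matches_learned(confidence: float, recognition_threshold: float) -> bool:
    """A learned gaze direction is reported only if the confidence
    exceeds the recognition threshold -- lower threshold, more
    permissive matching of slight sight deviations."""
    return confidence > recognition_threshold

# With a low recognition threshold, a slightly deviating gaze still matches:
print(matches_learned(confidence=80, recognition_threshold=60))  # True
print(matches_learned(confidence=80, recognition_threshold=90))  # False
```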
3.4 NMS Threshold
NMS Threshold is a common parameter in visual recognition, used to filter detection boxes. In visual recognition tasks, the model often predicts multiple overlapping detection boxes around the same target. Without filtering, one object may be enclosed by multiple overlapping boxes.
You can adjust the NMS Threshold to remove duplicate overlapping boxes and keep only the optimal one. Simply put, the NMS Threshold determines how much overlap between two boxes counts as "duplicate".
For example: If the threshold is low (e.g., 0.3): even slight overlap between two boxes will be regarded as duplicate, and one will be removed. If the threshold is high (e.g., 0.7): only heavy overlap will be regarded as duplicate, so more boxes may be kept.
Setting steps: Tap NMS Threshold. A parameter adjustment slider will appear above it. Swipe left to decrease the value, swipe right to increase the value.
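The overlap test behind NMS is normally Intersection over Union (IoU). The filtering described above can be sketched as greedy NMS over score-sorted boxes (the (x1, y1, x2, y2) box format is an assumption for illustration):

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold):
    """Greedy NMS: keep the highest-scoring box, drop any box whose
    overlap with a kept box exceeds `threshold`, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores, 0.3))  # low threshold: the overlapping pair collapses
print(nms(boxes, scores, 0.7))  # high threshold: both overlapping boxes kept
```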
3.5 Set Name
This parameter allows you to assign a name to learned entries, supporting both Chinese and English.
Setting steps: Tap Set Name. Slide the number in the top-left corner up or down to select which ID to name. Use the on-screen keyboard to enter the name, as shown in the left image.
When finished, tap the √ button in the bottom-right corner to save. A green checkmark will appear in the top-right corner to confirm successful saving.
Note: After setting a name, the recognition result will show the custom name instead of the default ID label. To restore the default display, you must forget the ID.
3.6 Show Name
This parameter controls whether the name is displayed when a learned gaze direction is recognized. It is enabled by default.
Setting steps: Tap Show Name. When the switch above it is blue (enabled), the name will be displayed when a learned gaze direction is recognized, as shown in the left image.
Tap the switch to turn it white (disabled); the name and related data will no longer be shown, as shown in the right image.
3.7 Reset Default
This parameter restores all settings to their default state and forgets learned IDs and names, but does not clear exported models (see below for details on exported models).
Setting steps: Tap Reset Default. When the Restore default configuration pop-up appears, tap Yes.
3.8 Export Model
This parameter can save and export the current settings, learned IDs, and custom names to the local memory of HUSKYLENS 2. It is suitable for scenarios such as migrating parameters to another HUSKYLENS 2. No TF card is required for this operation.
Export steps: Tap Export Model. When the Save configuration to pop-up (left image) appears, slide the number up or down to select which model slot to save to (up to 5 models can be saved). Then tap the Yes button at the bottom-left of the pop-up to save. After confirmation, it will be exported automatically, as shown in the right image.
After the “Exporting” pop-up disappears, you can view the exported model files on a computer.
First, connect HUSKYLENS 2 to a USB port on your computer.
Next, use your computer to access the internal memory of HUSKYLENS 2 via the path shown in the image. You will find two model-related files with the extensions .json and .bin. The number before the file extension corresponds to the model number you selected when saving the configuration. Both files can be copied and pasted to other locations.
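Since an export is just two ordinary files, backing them up can be scripted. A sketch with a hypothetical helper, `backup_model` (the mount point below is a placeholder; substitute the real path shown in the image):

```python
import shutil
from pathlib import Path

def backup_model(device_dir: Path, backup_dir: Path, slot: int):
    """Copy the two exported files for the given model slot from the
    device folder to a backup folder; returns the names copied."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for name in (f"config{slot}.json", f"repo{slot}.bin"):
        src = device_dir / name
        if src.exists():  # skip slots that were never exported
            shutil.copy2(src, backup_dir / name)
            copied.append(name)
    return copied

# Placeholder paths -- replace with the real mount point of HUSKYLENS 2
# and your own backup directory:
# backup_model(Path("/media/HUSKYLENS2/eye-gaze"), Path.home() / "backup", 0)
```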
3.9 Import Model
This parameter allows you to import a model exported from one HUSKYLENS 2 (hereafter called "Husky A") to another HUSKYLENS 2 (hereafter called "Husky B"), so that Husky B can recognize objects learned by Husky A and display their IDs and names without relearning.
Import steps:
1. Connect Husky A to the computer, then copy the two exported files to the desktop.
2. Connect Husky B to the computer, then paste the two files into the specified folder on Husky B, as shown in the path. (If the eye-gaze folder cannot be found, perform Step 3 first; the folder will be created automatically after one import, then return to Step 2.)
3. First, make sure you have entered the Eye Gaze function. Then tap Import Model. When the Load configuration pop-up appears, slide the number up or down to select which model to load — it must match the model number you saved earlier.
For example: If the model files you pasted into Husky B are config0.json and repo0.bin, select the number 0. Finally, tap Yes to import. Wait for the “Loading” pop-up to disappear; the import is complete.
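To avoid selecting the wrong slot, the number can be read straight off the copied filenames. A small sketch, assuming the files follow the config<N>.json / repo<N>.bin pattern from the example above:

```python
import re

def slot_from_filename(filename):
    """Extract the model slot number from an exported file name,
    e.g. 'config0.json' -> 0, 'repo3.bin' -> 3; None if no match."""
    m = re.fullmatch(r"(?:config|repo)(\d+)\.(?:json|bin)", filename)
    return int(m.group(1)) if m else None

print(slot_from_filename("config0.json"))  # 0
print(slot_from_filename("repo0.bin"))     # 0
```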
Then you can run an eye gaze direction test. The left image below shows the recognition result on Husky B before the model import, and the right image shows the result after the import.
