Fruit Recognition Using ESP32-S3 AI CAM and Edge Impulse
This project uses the ESP32-S3 AI CAM module and Edge Impulse to recognize apples and oranges. Through this project, you will learn how to train your own model with Edge Impulse and deploy it on the ESP32-S3 AI CAM module.
- Edge Impulse official website: https://edgeimpulse.com/
- Edge Impulse project link: https://studio.edgeimpulse.com/public/571380/live
Data Collection
- Flash the "CameraWebServer" example sketch to the ESP32-S3 AI CAM module.
- Open the serial monitor to check the IP address.
- Access the IP address through a browser on a device within the same local network. Click the “Start” button to view the camera feed.
- Save images to your computer by clicking the upper-right corner of the video frame. (It is recommended to save images of different objects in separate folders for easier data labeling during training.)
Collect as much image data as possible to improve model accuracy. For this project, around 50 images of apples and oranges were used.
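The per-folder convention above can be automated with a small helper that sorts the saved frames into per-label subfolders and counts them. This is only a sketch: the `apple_`/`orange_` filename prefixes and the `.jpg` extension are assumptions about how you named the saved images, not something the CameraWebServer page produces.

```python
from pathlib import Path
import shutil

def sort_images_by_label(src_dir: str, labels: list[str]) -> dict[str, int]:
    """Move images whose filenames start with a label (e.g. 'apple_01.jpg')
    into a per-label subfolder, and return how many were found per label."""
    src = Path(src_dir)
    counts = {label: 0 for label in labels}
    # Materialize the listing first, since we move files while iterating.
    for image in list(src.glob("*.jpg")):
        for label in labels:
            if image.name.startswith(label):
                dest = src / label
                dest.mkdir(exist_ok=True)
                shutil.move(str(image), str(dest / image.name))
                counts[label] += 1
                break
    return counts
```

Running it once over your download folder yields the `apple/` and `orange/` folders recommended above, plus a quick sanity check that both classes have a similar number of images.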
Collected image dataset:
Data Labeling
- Create a new project in Edge Impulse.
- Click “Add existing data” to upload the collected images.
- Select “Upload data”, upload the image files, and enter the corresponding label for each set of images.
- In “Data acquisition → Labeling queue”, mark the object of interest in each image and save.
Example: Labeling oranges
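As an alternative to the “Upload data” dialog, images can also be pushed to the project from a script via the Edge Impulse ingestion API. The sketch below is an illustration under assumptions: it uses the third-party `requests` package, and `api_key` stands for your own project API key from Edge Impulse Studio.

```python
from pathlib import Path

# Public Edge Impulse ingestion endpoint for training data.
INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"

def build_upload_request(api_key: str, label: str):
    """Return the (url, headers) pair for uploading files with a given label."""
    headers = {"x-api-key": api_key, "x-label": label}
    return INGESTION_URL, headers

def upload_folder(api_key: str, folder: str, label: str) -> None:
    """Upload every .jpg in `folder` to the project, pre-labeled as `label`."""
    # Imported lazily so build_upload_request works without the package.
    import requests
    url, headers = build_upload_request(api_key, label)
    for image in sorted(Path(folder).glob("*.jpg")):
        with image.open("rb") as f:
            r = requests.post(url, headers=headers,
                              files={"data": (image.name, f, "image/jpeg")})
            r.raise_for_status()
```

Images uploaded this way arrive already labeled, so only the bounding boxes still need to be drawn in the labeling queue.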
Training the Model
- Once all data is labeled, navigate to “Impulse design → Create impulse” to create and save your impulse. (The processing blocks are explained in the Edge Impulse documentation.)
- Go to the “Image” page and click “Save parameters”.
- Navigate to the “Generate features” page and click “Generate features” to extract image features.
- Go to “Object detection” and click “Save & train” to train the model.
- Once training is complete, review the model's performance. Adjust parameters and retrain if necessary.
- To retrain, go to the “Retrain model” page and click “Train model”.
Deploying the Model
- Extract the trained model files into the "libraries" folder of your Arduino installation directory.
- Download the files conv.cpp, depthwise_conv.cpp, and the edge_camera folder.
- Copy the downloaded conv.cpp and depthwise_conv.cpp into your trained model's folder, replacing the existing files if prompted.
  - Path: Your_Trained_Model_Folder\src\edge-impulse-sdk\tensorflow\lite\micro\kernels
- Copy the entire edge_camera folder into your trained model's directory.
  - Path: Your_Trained_Model_Folder\examples
- Open the Arduino IDE and locate the edge_camera example sketch. Replace the header file with your model's header, and enter your Wi-Fi credentials (SSID and password).
- Compile and upload the sketch. Open the Serial Monitor to find the IP address and view the classification results, then access the IP address in a web browser to view the camera feed.
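Instead of watching the Serial Monitor by hand, the classification output can be logged from a script. This is only a sketch under assumptions: the `label: score` line format is a guess at what the example prints over serial, and the port name differs per machine, so adapt the pattern to what your Serial Monitor actually shows.

```python
import re

# Assumed result-line format, e.g. "apple: 0.93"; adjust to your sketch's output.
RESULT_RE = re.compile(r"^\s*(\w+):\s*([01]\.\d+)")

def parse_result_line(line: str):
    """Return (label, confidence) if the line looks like a result, else None."""
    m = RESULT_RE.match(line)
    if m:
        return m.group(1), float(m.group(2))
    return None

# To read from the board (requires the third-party pyserial package):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#       for raw in port:
#           result = parse_result_line(raw.decode(errors="ignore"))
#           if result:
#               print(result)
```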
