Project Overview
This project uses the ESP32-S3 AI CAM module and EdgeImpulse to recognize apples and oranges. Through this project, you will learn how to train your own model with EdgeImpulse and deploy it on the ESP32-S3 AI CAM module.

- EdgeImpulse Official Website: https://edgeimpulse.com/
- EdgeImpulse Project Link: https://studio.edgeimpulse.com/public/571380/live
Data Collection
- Flash the “CameraWebServer” example sketch to the ESP32-S3 AI CAM module (the Wi-Fi lines you need to edit are shown in the sketch after this list).
- Open the serial monitor to check the IP address.
- Access the IP address through a browser on a device within the same local network. Click the “Start” button to view the camera feed.
- Save images to your computer by clicking the upper-right corner of the video frame. (It is recommended to save images of different objects in separate folders for easier data labeling during training.)
Note: Collect as much image data as possible to improve model accuracy. For this project, around 50 images of apples and oranges were used.
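
Before flashing, the CameraWebServer sketch needs your Wi-Fi credentials, and it prints the camera's IP address to the serial monitor once connected. Below is a minimal sketch of the relevant parts, assuming the standard Arduino-ESP32 CameraWebServer layout; the stock example also contains the full camera_config_t setup and the app_httpd.cpp server code, which are omitted here.

```cpp
#include "esp_camera.h"
#include <WiFi.h>

// Wi-Fi credentials: replace with your own network name and password.
const char *ssid = "YOUR_SSID";
const char *password = "YOUR_PASSWORD";

void startCameraServer();  // provided by the example's app_httpd.cpp

void setup() {
  Serial.begin(115200);

  // ... camera_config_t setup and esp_camera_init() as in the stock example ...

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }

  startCameraServer();

  // This is the address you open in the browser to view and save frames.
  Serial.print("Camera Ready! Use 'http://");
  Serial.print(WiFi.localIP());
  Serial.println("' to connect");
}

void loop() {
  delay(10000);
}
```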

Collected image dataset:

Data Labeling
- Create a new project in EdgeImpulse.

- Click “Add existing data” to upload collected images.

- Select “Upload data” and upload image files. Enter corresponding labels for the images.

- In “Data acquisition -> Labeling queue”, mark the object of interest in the images and save.

Example: Labeling oranges:

Training the Model
- Once all data is labeled, navigate to “Impulse design -> Create impulse” to create and save your impulse. (Processing blocks explanation: EdgeImpulse Documentation)
- Go to the “Image” page and click “Save parameters”.
- Navigate to the “Generate features” page and click “Generate features” to extract image features.
- Go to “Object detection”, then click “Save & train” to train the model.
- Once training is complete, review model performance. Adjust parameters and retrain if necessary.
- Go to the “Retrain model” page and click “Train model”.
Deploying the Model
- On the “Deployment” page:
  - Select “Arduino library” under “DEFAULT DEPLOYMENT.”
  - Choose “TensorFlow Lite” under “MODEL OPTIMIZATIONS.”
  - Click “Build” to download the library.

- Extract the library into the “Arduino->libraries” folder.
- Replace the files in “src\edge-impulse-sdk\tensorflow\lite\micro\kernels” with the modified depthwise_conv.cpp and conv.cpp files.
- Move the edge_camera folder to the library's examples directory.
- Open the edge_camera example in the Arduino IDE. Update the code to include the library's .h file and enter your Wi-Fi credentials (a minimal sketch of these edits follows this list).
- Compile and upload the code to the ESP32-S3 AI CAM module.
- Open the serial monitor to view the IP address and recognition results. Access the camera feed through the IP address in your browser.
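
For reference, the sketch below shows the kinds of edits the edge_camera example typically needs: including the exported library's header (the header name below is a placeholder; use the one generated for your Edge Impulse project) and setting Wi-Fi credentials. The detection-printing helper illustrates how the Edge Impulse SDK exposes object-detection results; the shipped edge_camera example already contains equivalent logic, so treat this as a sketch rather than a drop-in replacement.

```cpp
// Placeholder header name - use the <your-project>_inferencing.h header
// from the Arduino library you exported from Edge Impulse.
#include <apples_oranges_inferencing.h>
#include <WiFi.h>

// Wi-Fi credentials: replace with your own.
const char *ssid = "YOUR_SSID";
const char *password = "YOUR_PASSWORD";

// Print object-detection results the way the Edge Impulse SDK reports them:
// run_classifier() fills an ei_impulse_result_t whose bounding_boxes array
// holds the detected objects (label, confidence, and box position).
void print_detections(const ei_impulse_result_t &result) {
  for (size_t ix = 0; ix < result.bounding_boxes_count; ix++) {
    const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[ix];
    if (bb.value == 0) continue;  // skip empty entries
    Serial.printf("%s (%.2f) at x=%u, y=%u, w=%u, h=%u\n",
                  bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
  }
}

void setup() {
  Serial.begin(115200);

  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  // The IP address to open in your browser for the camera feed.
  Serial.println(WiFi.localIP());
}

void loop() {
  // The edge_camera example captures a frame, wraps it in a signal_t,
  // calls run_classifier(), serves the feed over HTTP, and prints the
  // detections - see the example for the full capture/inference flow.
  delay(1000);
}
```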
