This week, I focused on getting the computer vision model running on the Jetson and switching from my laptop's built-in camera to the external webcam. It went pretty well, and here is where I am now.
- Used SSH to load my model and the facial recognition model onto the Jetson
- Configured a virtual environment on the Jetson so my model can run. This was the most complicated part of the integration.
- The Jetson’s stock software is somewhat dated, so finding compatible package versions was a real challenge.
- Sourced compatible PyTorch, torchvision, and OpenCV packages
- Moved the model to run on the Jetson’s GPU (see the sketch after this list).
- This required a lot of configuration on the Jetson, including installing the correct CUDA drivers and ensuring compatibility with our chosen version of PyTorch.
- Formatted the model’s output so it sends properly structured requests to both the web server and the bracelet (also sketched below).
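
Moving the model onto the GPU mostly comes down to a single device transfer once a CUDA-enabled PyTorch build is installed. Here is a minimal sketch; `resnet18` stands in for our actual emotion model, and the input shape is a placeholder:

```python
import torch
import torchvision

# Stand-in for the real emotion model; any nn.Module moves to the GPU the same way.
model = torchvision.models.resnet18(num_classes=7)

# Use the Jetson's GPU only when the CUDA build of PyTorch is actually available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# Input tensors must live on the same device as the model.
frame = torch.rand(1, 3, 224, 224, device=device)  # placeholder for a webcam frame
with torch.no_grad():
    logits = model(frame)
print(logits.shape)  # torch.Size([1, 7]): one score per emotion class
```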
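
For the output formatting, the sketch below shows the general shape of what I set up. It assumes, purely for illustration, that both the web server and the bracelet accept HTTP POSTs; the URLs and JSON schema are placeholders, not our real ones:

```python
import requests

# Placeholder addresses; the real endpoints and schema live in our project config.
WEB_SERVER_URL = "http://example.local:8000/emotion"
BRACELET_URL = "http://bracelet.local:5000/update"

def publish_result(emotion: str, confidence: float) -> None:
    """Send one detection to both consumers using a shared JSON payload."""
    payload = {"emotion": emotion, "confidence": round(confidence, 3)}
    for url in (WEB_SERVER_URL, BRACELET_URL):
        try:
            requests.post(url, json=payload, timeout=1.0)
        except requests.RequestException:
            pass  # a dropped consumer shouldn't stall the inference loop

publish_result("happy", 0.87)
```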
I am ahead of schedule, and my part of the project is mostly done. I’ve hit the MVP and will be making small improvements where I see fit.
Goals for next week:
- Calibrate the model so it falls back to a neutral state when it is not confident in the detected emotion (first sketch below).
- Average the detected emotions over the past 3 seconds, which should increase the confidence of our predictions (second sketch below).
- Add an additional output indicating whether a face is present at all (folded into the first sketch below).
- Look into exporting the CV model to ONNX, an open model format that TensorRT can consume on the Jetson, so that inference has lower latency (third sketch below).
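
A rough sketch of the first goal and the face-present flag together, assuming a FER-style label set and a hypothetical 0.5 cutoff that will need tuning:

```python
import torch
import torch.nn.functional as F

# Assumed label set (FER-style classes) and a placeholder threshold.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
CONFIDENCE_THRESHOLD = 0.5

def classify(logits: torch.Tensor, face_detected: bool) -> dict:
    """Turn raw logits into a label, defaulting to neutral on low confidence."""
    if not face_detected:
        # New output: tell downstream consumers when no face is in frame.
        return {"face_present": False, "emotion": None, "confidence": 0.0}
    probs = F.softmax(logits, dim=-1)
    confidence, idx = probs.max(dim=-1)
    emotion = EMOTIONS[int(idx)] if confidence >= CONFIDENCE_THRESHOLD else "neutral"
    return {"face_present": True, "emotion": emotion, "confidence": float(confidence)}

print(classify(torch.rand(7), face_detected=True))
```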
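
For the 3-second averaging, one simple approach is to keep a timestamped window of per-class probability vectors and average them; a sketch:

```python
import time
from collections import deque

import torch

WINDOW_SECONDS = 3.0  # per the plan above
_history: deque = deque()  # (timestamp, probability tensor) pairs

def smoothed_probs(probs: torch.Tensor) -> torch.Tensor:
    """Average per-class probabilities over the trailing 3-second window."""
    now = time.monotonic()
    _history.append((now, probs))
    while _history and now - _history[0][0] > WINDOW_SECONDS:
        _history.popleft()  # drop entries older than the window
    return torch.stack([p for _, p in _history]).mean(dim=0)

print(smoothed_probs(torch.rand(7)))
```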
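
And for the ONNX goal, the export itself is a single PyTorch call; the resulting file can then be handed to TensorRT on the Jetson. As before, `resnet18` and the file names are stand-ins for our actual model:

```python
import torch
import torchvision

# Stand-in model again; export traces the graph with a dummy input.
model = torchvision.models.resnet18(num_classes=7).eval()
dummy = torch.rand(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "emotion_net.onnx",  # hypothetical output path
    input_names=["frame"],
    output_names=["logits"],
    opset_version=13,
)
# On the Jetson, TensorRT can then build an optimized engine from the file, e.g.:
#   trtexec --onnx=emotion_net.onnx --saveEngine=emotion_net.trt
```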