After some cross-debugging (setting up a backup Jetson and trying different MIPI/USB cameras), we have finally sorted out the camera issue. On Sunday I got the USB camera working and capturing videos and images. On Monday, the whole team double-checked the setup and we decided to switch to USB cameras. During lab, we took measurements to determine the height and angle of the camera.

Ting and I also worked on integrating the camera code with the YOLO inference code (detect.py). We can now capture images and trigger detect.py from the camera capture script. We have done initial rounds of testing, and the results make sense. We are working on writing detection results to text files so that we can parse them in code and pipe them to the Arduino. We can already extract the classifications and confidence values and direct them to text files. As I write this on Friday, we are still working on the logic for outputting a single result when multiple items are identified.

Overall, progress is much smoother this week compared to last. We have also taken initial mechanical measurements to better prepare for the interim demo.
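The single-result logic could be sketched roughly as follows. This is a hypothetical sketch, assuming a YOLOv5-style label file where each line is "class x_center y_center width height confidence" (the format detect.py emits with --save-txt --save-conf); the function name and input format are assumptions, not our final implementation.

```python
def best_detection(lines):
    """Return (class_id, confidence) for the highest-confidence detection."""
    best = None
    for line in lines:
        parts = line.split()
        if len(parts) < 6:
            continue  # skip malformed or truncated lines
        cls, conf = int(parts[0]), float(parts[5])
        if best is None or conf > best[1]:
            best = (cls, conf)
    return best

if __name__ == "__main__":
    # Two detections in one frame; only the more confident one is kept.
    sample = [
        "0 0.51 0.43 0.22 0.31 0.62",
        "2 0.48 0.55 0.18 0.27 0.91",
    ]
    print(best_detection(sample))  # -> (2, 0.91)
```

Picking the maximum-confidence detection is only one possible policy; we could instead filter by class or by bounding-box size before forwarding a single result to the Arduino.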