Aichen’s Status Report for 04/08/2023

For the first half of this week, I worked on integration and testing of the Jetson (and camera) and the Arduino. By the demo, feeding real-time pictures to the YOLO model, parsing the classification results, and sending those results to the Arduino could all run smoothly. So far, the integration between software and hardware is largely done.
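For context, the flow is roughly the sketch below, assuming OpenCV for the USB camera and pyserial for the Arduino link; the run_yolo helper, the serial port path, and the baud rate are placeholders rather than our exact code.

```python
# Sketch of the capture -> inference -> Arduino loop.
# run_yolo(), "/dev/ttyACM0", and 9600 baud are assumptions.
import cv2
import serial

def run_yolo(frame):
    """Placeholder for the real YOLO inference call; returns a class
    label string, or None when the model reports no detection."""
    raise NotImplementedError

def main():
    cam = cv2.VideoCapture(0)                      # USB camera
    arduino = serial.Serial("/dev/ttyACM0", 9600)  # assumed port/baud
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                continue                           # skip failed captures
            label = run_yolo(frame)
            if label is None:                      # "no detection" case
                continue
            arduino.write((label + "\n").encode()) # one label per line
    finally:
        cam.release()
        arduino.close()

if __name__ == "__main__":
    main()
```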

One major issue we found is that, due to the limited variety of our current training datasets, the model can barely detect or classify any item whose shape is not close to a bottle's. Since we are getting far more "no detection" results than expected, the main next step is to train with more datasets that contain items of all the other types.

After the demo, I also integrated the "FSM" implementation so that YOLO only runs when a significant change happens under the camera. This change saves resources and avoids unexpected classifications (i.e., when the platform is not "ready"). When I tested on my local laptop, the code worked fine, but there were still some capture issues when I tried it on the Jetson on Friday, which might be due to the USB camera. Ting, Samuel, and I have thought through a number of other possibilities, and my best guess now is that the USB camera needs a longer turnaround time between two captures, so I plan to use a wait loop before each capture instead of the fixed sleep time used now. This was not an issue before because each capture was followed by an inference call, which takes much longer than the sleep times I currently use. More debugging will be done on Sunday and Monday.
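To make the two ideas above concrete, here is a minimal sketch of (1) the frame-difference check that gates YOLO and (2) the wait loop that would replace the fixed sleep; the threshold and timeout values are illustrative guesses, not tuned numbers.

```python
# Sketch of the FSM gate and the capture wait loop.
# DIFF_THRESHOLD and CAPTURE_TIMEOUT are illustrative, not tuned.
import time
import cv2
import numpy as np

DIFF_THRESHOLD = 12.0   # mean absolute pixel difference counted as "change"
CAPTURE_TIMEOUT = 2.0   # seconds to keep retrying a capture

def capture_with_wait(cam, timeout=CAPTURE_TIMEOUT):
    """Retry until the camera yields a frame, instead of a fixed sleep."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ok, frame = cam.read()
        if ok and frame is not None:
            return frame
        time.sleep(0.05)  # brief pause before retrying
    return None           # camera never became ready within the timeout

def significant_change(prev_gray, frame):
    """Return (new_gray, changed): changed is True when the scene under
    the camera differs enough from the last frame to justify inference."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is None:
        return gray, True
    diff = cv2.absdiff(prev_gray, gray)
    return gray, float(np.mean(diff)) > DIFF_THRESHOLD
```

In the main loop, YOLO would only be invoked when significant_change reports True, so the GPU stays idle while the platform is not "ready", and capture_with_wait gives the camera however long it needs (up to the timeout) instead of a one-size-fits-all sleep.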

By next week, the FSM part should be fully integrated, and I will also help Ting with training on a new dataset. That training will hopefully be done within next week too.
