William Shaw’s Status Report for 2/8/2025
This week, I spent most of my time looking into hardware components for our project. More specifically, I looked into the camera, GPS/Compass, and battery components. I also began drafting plans for mounting these components cleanly and comfortably. Currently, I plan to mount the camera to the helmet with a GoPro mount. This would let us adjust the camera angle based on testing results. Furthermore, I would like to create a case for the Jetson board and its components using laser cutting or 3D printing. If it is light enough, the board and battery can be mounted to the helmet, but otherwise, we may need to use a hip-mounted pack for comfort.
I also submitted the order form for the NVIDIA Jetson Orin Nano. I noticed a 4GB variant (non-Orin) was also available in the existing ready-to-use stock, but I am unsure whether it is sufficient to run our models.
I am on track with our Gantt chart schedule. Although I have not placed all of the orders for the other parts yet, testing of the code can begin once we receive our Jetson board. I would like to get the board before ordering some of the parts (like the camera), so we can test the interfacing options available. I also want to check how well it runs the model, since the top contender for the camera (Arducam IMX219) outputs a 3280 x 2464 pixel image. Although the FOV seems promising, the resolution might be too high to run our model at a suitable refresh rate.
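As a rough back-of-the-envelope sketch of the resolution concern: the IMX219's full frame is far larger than a typical detection-model input, so frames would almost certainly be downscaled before inference anyway. The 640 x 640 input size below is an assumed common default (e.g., for YOLO-style models), not a decision we have made:

```python
# Back-of-the-envelope comparison of the IMX219 full frame vs. an
# assumed 640x640 model input (hypothetical, not finalized).
FULL_W, FULL_H = 3280, 2464      # Arducam IMX219 full resolution
MODEL_W, MODEL_H = 640, 640      # assumed model input size

full_pixels = FULL_W * FULL_H    # ~8.1 MP per frame
model_pixels = MODEL_W * MODEL_H # ~0.41 MP per frame

ratio = full_pixels / model_pixels
print(f"Full frame:  {full_pixels:,} px")
print(f"Model input: {model_pixels:,} px")
print(f"Resizing discards ~{ratio:.1f}x of the pixel data per frame")
```

If the camera (or Jetson ISP) can deliver a lower-resolution stream directly, that would avoid paying the capture and resize cost for pixels the model never sees, which is the main thing to verify during interfacing tests.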
For next week, I hope to finalize all of the parts and place the orders! However, I want to verify that the components work together first, which may require testing on the Jetson board beforehand.
Max Tang’s Status Report for 2/8/2025
This week I presented our group’s initial proposal presentation. The presentation went well, and I received many thought-provoking questions that helped me realize there were aspects of our design we had not considered, such as intersections that have multiple sidewalks. I also began searching for suitable models to use as the basis of our walk sign image classification model. One option is an off-the-shelf YOLOv8 model that we can simply fine-tune on walk sign images. Another potential solution is to gather as many images of walk signs as possible, as a combination of existing online datasets and self-taken images, and upload them to Edge Impulse. Then I can use Edge Impulse’s image classification model, which would be great for our project since Edge Impulse has a feature that lets you create quantized models, which use smaller data types for storing parameters and reduce the total memory required.
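To make the memory argument concrete, here is a minimal NumPy sketch of symmetric int8 post-training quantization (this is an illustration of the general technique, not Edge Impulse's actual pipeline): float32 weights are mapped onto the int8 range via a single scale factor, cutting storage 4x at the cost of a bounded rounding error.

```python
import numpy as np

# Stand-in for trained float32 model weights (synthetic data).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(1000,)).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to measure the approximation error introduced.
deq = q_weights.astype(np.float32) * scale
max_err = np.abs(weights - deq).max()

print(f"float32 storage: {weights.nbytes} bytes")
print(f"int8 storage:    {q_weights.nbytes} bytes (4x smaller)")
print(f"max round-trip error: {max_err:.6f} (bounded by scale/2 = {scale/2:.6f})")
```

The 4x reduction applies to parameter storage; real toolchains also quantize activations and fuse operations, but this captures why quantized models are attractive on a memory-constrained board like the Jetson.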
Progress is still on schedule. We allocated ourselves a large chunk of time for researching and building the model, and I believe that picking a suitable model at the beginning will save tuning and testing time later. Next week I hope to start training and run initial tests against validation datasets. This will give ample time for iteration if further improvements are required, which is very likely.