Andrew Wang’s Status Report for 2/8/2025
This week, I began looking into object detection algorithms that we can use in the first iteration of our implementation. Specifically, I installed a pre-trained YOLOv8 model from the YOLO package “ultralytics” and got it working on a CMU computing cluster. Since rigorous evaluation and fine-tuning of the models will be necessary for integration, I plan to begin implementing a fine-tuning and evaluation pipeline in the next few days to measure model performance on unseen data, such as street-scene image datasets like BDD100K, EuroCity Persons, and Mapillary Vistas. Unfortunately, these datasets are too large to store on the clusters I currently have access to, so I am working on obtaining access to alternative computing resources, which should be approved in the next few days.
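As a starting point for that evaluation pipeline, detection quality is typically scored with intersection-over-union (IoU) between predicted and ground-truth boxes. The helper below is a minimal sketch of that metric; the function name and the (x1, y1, x2, y2) box format are my own illustration, not something already in our codebase:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5, and metrics like mAP are built on top of this matching, so this is likely the lowest-level piece the pipeline will need.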
With regard to progress, I believe I am about on schedule. Based on our Gantt chart, we have set aside the next two weeks to evaluate and handle the ML side of our project, and I am optimistic that we can complete this in that time, since the models can be fine-tuned incrementally to whatever degree our constraints allow.
By the end of next week, I hope to have finished downloading the image datasets and completed a preliminary evaluation of the YOLOv8 model. We may also consider other object detection models, though we will likely weigh that option more seriously once we have the first results from our YOLOv8 model.
William Shaw’s Status Report for 2/8/2025
This week, I spent most of my time looking into hardware components for our project. More specifically, I looked into the camera, GPS/Compass, and battery components. I also began drafting plans for mounting these components cleanly and comfortably. Currently, I plan to mount the camera to the helmet with a GoPro mount. This would let us adjust the camera angle based on testing results. Furthermore, I would like to create a case for the Jetson board and its components using laser cutting or 3D printing. If it is light enough, the board and battery can be mounted to the helmet, but otherwise, we may need to use a hip-mounted pack for comfort.
I also submitted the order form for the Nvidia Jetson Orin Nano. I noticed that a 4GB variant (non-Orin) was also available in the existing ready-to-use stock, but I am unsure whether it would be sufficient to run our models.
I am on track with our Gantt chart schedule. Although I have not yet placed orders for the other parts, testing of the code can begin once we receive our Jetson board. I would like to have the board in hand before ordering some of the parts (like the camera) so we can test the available interfacing. I also want to check how well it runs the model, since the top contender for the camera (Arducam IMX219) outputs a 3280 x 2464 pixel image. Although the FOV seems promising, the resolution might be too high to run our model at a suitable refresh rate.
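To get a rough sense of how much we would need to downscale, the sketch below shrinks the sensor's native resolution to a typical detector input size, snapped to a multiple of 32 as YOLO-style models commonly expect. The target size and stride here are my assumptions for illustration, not settled design choices:

```python
def model_input_size(native_w, native_h, target_long=640, stride=32):
    """Scale a frame so its longer side is ~target_long, rounded to the model stride."""
    scale = target_long / max(native_w, native_h)
    # Round each side to the nearest multiple of the stride, never below one stride.
    w = max(stride, round(native_w * scale / stride) * stride)
    h = max(stride, round(native_h * scale / stride) * stride)
    return w, h
```

For the IMX219's native 3280 x 2464 frame this gives 640 x 480, roughly a 26x reduction in pixels fed to the model per frame, which is why downscaling before inference should make the refresh-rate target much more attainable.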
For next week, I hope to finalize all of the parts and place the orders! However, I want to ensure the components work first, which may require testing on the Jetson board beforehand.
Max Tang’s Status Report for 2/8/2025
This week I presented our group’s initial proposal presentation. The presentation went well, and I received many thought-provoking questions that made me realize there are aspects of our design we had not considered, such as intersections that have multiple sidewalks. I also began searching for suitable models for our walk sign image classifier. One option is an off-the-shelf YOLOv8 model that we can simply fine-tune on walk sign images. Another potential solution is to gather as many images of walk signs as possible, combining existing online datasets with self-taken images, and upload them to Edge Impulse. I could then use Edge Impulse’s image classification model, which would suit our project well since Edge Impulse has a feature for creating quantized models, which store parameters in smaller data types and reduce the total memory required.
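To illustrate why that quantization feature matters for an embedded deployment, here is some back-of-the-envelope arithmetic; the parameter count is a made-up example for illustration, not a measurement of any specific model:

```python
def model_weight_bytes(num_params, bytes_per_param):
    """Approximate memory needed just to store the model weights."""
    return num_params * bytes_per_param

# Hypothetical 3.2M-parameter classifier (example figure, not a real model).
params = 3_200_000
fp32 = model_weight_bytes(params, 4)  # float32: 4 bytes per parameter
int8 = model_weight_bytes(params, 1)  # int8 after quantization: 1 byte per parameter
print(fp32 // 1_000_000, int8 // 1_000_000)  # weights shrink 4x: ~12 MB down to ~3 MB
```

The 4x reduction comes purely from the smaller data type; on a memory-constrained board like the Jetson, that headroom can be spent on larger input resolutions or a second model.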
Progress is still on schedule. We allocated ourselves a large chunk of time for researching and building the model, and I believe that picking a suitable model at the start will save tuning and testing time later. Next week I hope to start training and initial testing against validation datasets. This leaves ample time for iteration if further improvements are required, which is very likely.