Team Status Report for 02/19

This week we spent a lot of time deciding on the parts we want to order so that we can get started on the project.  We also made a considerable change to our MVP: instead of feeding the output to a speaker, we will feed it to HUD glasses, so the cyclist will receive visual alerts rather than audio alerts.  Our reasoning is below:

This is more effective for our use case because we are focusing on city cyclists, and cities can be very noisy.  Audio alerts can therefore easily be missed or misinterpreted amid the surrounding noise.

So far, we have ordered the Jetson Nano, a camera, a 12 m-range lidar, and HUD glasses.  We have received the Jetson Nano and will spend time in the upcoming week configuring it, along with working on the Design Presentation and Report.  We are currently still on schedule and hope to maintain this through next week.

Chad’s Status Report for 02/19

This past week, similar to last week, I spent a lot of time doing further research on object detection algorithms.  After scanning many of Nvidia's forums, I found that the YOLO algorithm, which we had previously decided was the best fit for our project, may not be after all: it runs on the Darknet framework and consumes much of the Jetson Nano's processing power.

The YOLO algorithm comes in multiple versions intended for different uses.  Looking online, people who have implemented YOLO on the Jetson Nano seem to struggle with frame rate.  For example, with YOLOv3, many users reported only 1-2 fps.  This can be seen in multiple threads on the Nvidia forums, such as this one:

https://forums.developer.nvidia.com/t/yolov3-is-very-slow/74073/3

However, there is another version built specifically to run on embedded computer vision/deep learning devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano.  This version, called tiny-YOLO, is approximately 442% faster than its full-size counterparts, achieving upwards of 244 FPS on a single GPU, as stated here:

https://pyimagesearch.com/2020/01/27/yolo-and-tiny-yolo-object-detection-on-the-raspberry-pi-and-movidius-ncs/

Many people who implemented this version were able to achieve 10-15 fps on the Nano.  We will try to implement this version ourselves.  We will also try the SSD algorithm together with the TensorRT package, as many Nvidia employees recommend this combination, and it can apparently yield even higher frame rates on the Jetson Nano.  We will test both algorithms and see which is better for our use case in terms of accuracy and speed.  A rough benchmarking sketch is shown below.
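As a first pass at checking the frame rates people report, we could time tiny-YOLO inference through OpenCV's DNN module.  This is a minimal sketch, assuming the standard yolov3-tiny.cfg and yolov3-tiny.weights files from the Darknet release and an OpenCV build with CUDA support; the camera index and frame count are placeholders for our actual setup.

```python
import time
import cv2

# Minimal FPS check for tiny-YOLO (YOLOv3-tiny) using OpenCV's DNN module.
# File names assume the standard Darknet release of the model.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # requires CUDA-enabled OpenCV
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

cap = cv2.VideoCapture(0)  # placeholder camera index; our CSI camera may differ
frames, start = 0, time.time()
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())  # run detection; boxes ignored here
    frames += 1
cap.release()
print(f"Average FPS over {frames} frames: {frames / (time.time() - start):.1f}")
```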

Below is an image showing the accuracy and speeds of different algorithms on the COCO dataset.

As you can see, tiny-YOLO has one of the fastest speeds; however, its accuracy suffers as a result.  This trade-off is something we will certainly weigh when deciding on an algorithm during testing.
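For the SSD + TensorRT route mentioned above, NVIDIA's jetson-inference library wraps TensorRT-optimized detectors behind a simple Python API.  Below is a minimal sketch, assuming the library is installed on the Nano and using its pretrained ssd-mobilenet-v2 model; the CSI camera URI is an assumption about our eventual camera setup.

```python
import jetson.inference
import jetson.utils

# TensorRT-optimized SSD-MobileNet-v2 via NVIDIA's jetson-inference library.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")  # assumed CSI camera URI

for _ in range(100):
    img = camera.Capture()
    detections = net.Detect(img)
    for det in detections:
        # Each detection carries a class ID, a confidence, and a bounding box.
        print(net.GetClassDesc(det.ClassID), f"{det.Confidence:.2f}")
    print(f"network FPS: {net.GetNetworkFPS():.1f}")
```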

Many parts have been ordered, including a camera, the Jetson Nano, the lidar, and the AR glasses.  This upcoming week, I will be working on the design review presentation and the design report as well.


Chad’s Status Report for 02/12

For this past week, I spent the first part of the week helping with the presentation slides, as we each had our own contribution to completing them.  For the rest of the week, I researched object detection algorithms and found promising results with the YOLO algorithm.  I found that YOLOv5 can be implemented on a Jetson Nano fairly easily; it has been done before as JetsonYolo, achieving 12 frames per second.  Next week, I will research other object detection algorithms to see which would best fit the components of our design, and whether 12 frames per second would be satisfactory.  Currently, we are still on schedule.  A quick sketch of how YOLOv5 can be loaded is shown below.
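As a starting point, YOLOv5 can be loaded through PyTorch Hub, which is roughly how JetsonYolo-style setups run it on the Nano.  This is a minimal sketch, assuming PyTorch is installed on the device; the test image name is a hypothetical placeholder.

```python
import torch

# Load the small pretrained YOLOv5 model from the Ultralytics repo via PyTorch Hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a placeholder test image (hypothetical file name).
results = model("street.jpg")
results.print()  # summary: detected classes, confidences, and timing
```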