Chad’s Status Report for 02/26

For this past week, I spent a lot of my time creating the presentation slides and then preparing for the presentation, as I was the presenter for the design review. As mentioned last week, we decided to experiment with both the YOLO and SSD real-time detection algorithms. Many of the parts have been ordered and have just arrived. Next week we plan to begin configuring the Jetson Nano along with the LIDAR.

Team status report for 2/26

This past week we mainly focused on finalizing the details of our design and implementation for the design review presentation. We finalized and ordered the components, such as the A1M8 LIDAR, Nvidia Jetson Nano, HUD glasses, and a camera designed specifically for the Nvidia Jetson.

 

Next week we will have the parts we ordered, so we can begin actually tinkering with and integrating the different sensors and components of our system. We will begin trying to take measurements with our LIDAR and camera sensors and continue research into our object detection algorithm. Currently, we are exploring lightweight versions of the YOLO v3 algorithm.
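Once the sensors arrive, the first processing step for LIDAR measurements will likely look something like the sketch below (the function and numbers are our own illustration, not a finalized design): the A1M8 reports each return as an angle and a distance in millimeters, which we can convert to Cartesian coordinates to reason about where a vehicle is relative to the bike.

```python
import math

# Illustrative helper for when LIDAR data starts flowing: convert one
# polar return (angle in degrees, distance in mm) into (x, y) in meters,
# with x pointing forward along the bike's direction of travel.

def polar_to_xy(angle_deg, distance_mm):
    """Convert one LIDAR return to Cartesian (x, y) in meters."""
    r = distance_mm / 1000.0
    theta = math.radians(angle_deg)
    return (r * math.cos(theta), r * math.sin(theta))

# A point 4 m directly behind the bike (angle 180 degrees):
x, y = polar_to_xy(180.0, 4000)
print(f"x={x:.2f} m, y={y:.2f} m")  # roughly x=-4.00, y=0.00
```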

Ethan’s Status Report 2/26

This past week we mainly focused on our proposal presentation, along with some more in-depth research on different image recognition algorithms and on how to connect the components. This week I mostly focused on researching how to integrate the LIDAR with our system, and next week, when we have our parts, I will start testing with the LIDAR to actually get data from it.

Fayyaz's Status Report 2/26

This past week I focused on two key things. The first was the design proposal. Over the weekend, I helped compile all the slides for the proposal and ensured everything was presentable for Monday. This mostly included detailing new and relevant additions and changes to our project as a whole, such as new calculations taking into account city driving (an average of 8 mph) and updating our block diagram to showcase the new hardware, algorithms, and frameworks that we decided to use. An updated picture is below.
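To illustrate the kind of city-driving calculation mentioned above (the 25 mph car speed and 2 s reaction time here are illustrative assumptions, not figures from our report): at an 8 mph cycling speed, the question is how fast the gap to an overtaking car closes, and how much distance that eats up during a cyclist's reaction time.

```python
# Rough sketch of a city-driving timing calculation (illustrative numbers,
# not our final parameters). A cyclist averages ~8 mph; suppose a car
# approaches from behind at 25 mph.

MPH_TO_MPS = 0.44704  # exact miles-per-hour to meters-per-second factor

def closing_speed_mps(car_mph, bike_mph):
    """Relative (closing) speed of a car overtaking a cyclist, in m/s."""
    return (car_mph - bike_mph) * MPH_TO_MPS

def warning_distance_m(car_mph, bike_mph, reaction_s):
    """Distance the gap shrinks during the cyclist's reaction time."""
    return closing_speed_mps(car_mph, bike_mph) * reaction_s

closing = closing_speed_mps(25, 8)     # ~7.6 m/s
dist = warning_distance_m(25, 8, 2.0)  # ~15.2 m
print(f"closing speed: {closing:.1f} m/s, warning distance: {dist:.1f} m")
```

Numbers in this ballpark are also a useful sanity check on the sensor choice: a ~15 m warning distance sits near the upper end of a 12 m range LIDAR.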

The presentation went well. I have already started working on the design report, which is much more detailed. I have found some papers as well as more articles that give more breadth to our project and will allow for a clearer picture. We intend to work on this through the next week, as the deadline is coming up.

I also worked with the Jetson, getting it ready for integration and testing. Using this link as a guide, https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-2gb-devkit#setup-display, I flashed the OS image and have gotten the board up and running. As the LiDAR is coming in this week, I hope to integrate the two as soon as possible so we can start testing. I hope to do that this week.

Overall, a very standard week. We are still waiting on some parts. As next week comes, we hope to have a clear design report ready and the Jetson fully tested and integrated with the LiDAR.

This week I mostly worked on researching the LIDAR components for our system. We ordered our parts on Wednesday. Last week was mostly spent researching components for the project; next week, once our parts arrive, we will start working on the implementation of our design.

Team Status Report for 02/19

This week we spent a lot of time deciding on the parts we want to order so that we can get started on the project. We also made a considerable change to our MVP: we will be feeding the output to HUD glasses instead of a speaker. Therefore, instead of using audio alerts for the cyclist, we will be using visual alerts. What we had is below:

This is more efficient for our use case, as we are focusing on city cyclists more than others, and cities can be very noisy. Audio alerts can therefore easily be missed or misinterpreted due to other noise coming from the surroundings.

So far, we have ordered the Jetson Nano, a camera, a 12 m range LIDAR, and HUD glasses. We have received the Jetson Nano and will spend some time in the upcoming week configuring it, along with working on the Design Presentation and Report. We are currently still on schedule and hope to maintain this throughout next week.

Fayyaz’s Status Report for 02/19

This past week was another stretch of researching different products to use, particularly which one we wanted to be the CPU/brains of our project. The key aspects we needed were as follows: a processor strong enough to handle LIDAR input and to run an object detection algorithm at a frame rate sufficient for our kinematics algorithm, while also being cheap enough to fit in our budget. Through some research, we found that the Nvidia Jetson Nano seemed to be a clear front runner in the realm of microprocessors that can handle multiple peripherals, like a camera and LIDAR, as well as whatever object detection algorithm we decide to include. One of the articles we looked at is right here:

https://www.tomshardware.com/news/jetson-nano-features-price,38856.html

Through the ECE department's resources and inventory, we were able to reserve an Nvidia Jetson Nano and pick it up this past Wednesday during lab time, thankfully allowing for no dollars spent on a key part of our project. After getting it, I intended to set up the board and flash it with its OS, but the developer kit did not include a microSD card, a 5 volt 3 amp adapter, or a USB to microSD converter. Because of this, I had to put in another order and am currently waiting for the parts to fully set up the Jetson.

In the meantime, as Ethan was finalizing a LIDAR to use, I was looking into how to set up the LIDAR with the Jetson. I found a video online that uses the Nano as well as the hardware Ethan was looking into:

youtube.com/watch?v=-8VDYROG55U

Through this, I have a clear direction on what to do when the Jetson is fully set up and hope to do this early next week. With the ordering of the Jetson and other parts, we are on schedule. I also looked into different power supplies this week and hope to order these soon. Some of the ideas we discussed with Professor Saviddes included a lead-acid battery, LiPo cells, and a portable charging pack, something along those lines.

 

Chad’s Status Report for 02/19

This past week, similar to last week, I spent a lot of time doing more research on object detection algorithms. After scanning a lot of Nvidia's forums, I found that the YOLO algorithm, which we had decided was the best algorithm for our project, may not be the best fit after all, as it uses Darknet and consumes a lot of the processing power of an Nvidia Jetson Nano.

The YOLO algorithm has multiple versions for different use cases. Looking online, people who have implemented the YOLO algorithm on the Jetson Nano seem to struggle with frame rate. For example, with YOLO v3 implemented, many users were only able to get 1-2 fps. This can be seen on multiple forums on the Nvidia website, such as this one:

https://forums.developer.nvidia.com/t/yolov3-is-very-slow/74073/3

However, there is another version made specifically to run on embedded computer vision/deep learning devices such as the Raspberry Pi, Google Coral, and NVIDIA Jetson Nano. This version is called tiny-YOLO, which is approximately 442% faster than its larger big brothers, achieving upwards of 244 FPS on a single GPU, as stated here:

https://pyimagesearch.com/2020/01/27/yolo-and-tiny-yolo-object-detection-on-the-raspberry-pi-and-movidius-ncs/

Many people who implemented this version were able to achieve speeds of 10-15 fps. We will try to implement this version ourselves. We will also try the SSD algorithm along with the TensorRT package, as this was recommended by many Nvidia employees and can apparently get even more fps out of the Jetson Nano. We will test both algorithms and see which would be better for our use case in terms of accuracy and speed.
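As a rough sanity check on what these frame rates mean for our use case (the 25 mph car speed and the exact fps figures below are illustrative assumptions): the real question is how far an approaching car travels between consecutive processed frames.

```python
# Back-of-envelope check of what detector frame rate means for a
# bike-mounted system: distance an approaching car covers per frame.

MPH_TO_MPS = 0.44704  # exact miles-per-hour to meters-per-second factor

def meters_per_frame(car_mph, fps):
    """Distance a vehicle moves between two consecutive frames."""
    return car_mph * MPH_TO_MPS / fps

# YOLOv3 on the Nano (~1.5 fps) vs. tiny-YOLO (~12 fps), car at 25 mph:
print(f"YOLOv3:    {meters_per_frame(25, 1.5):.2f} m/frame")
print(f"tiny-YOLO: {meters_per_frame(25, 12):.2f} m/frame")
```

At 1-2 fps a car can move several meters between detections, which is why the frame-rate gap matters to us more than raw accuracy numbers alone.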

Below is an image showing the accuracy and speeds of different algorithms on the COCO dataset.

As you can see, tiny-YOLO has one of the fastest speeds; however, its accuracy suffers because of this. This is something we will most certainly consider when deciding on an algorithm to use while testing.

Many parts have been ordered, such as a camera, the Jetson Nano, the LIDAR, and AR glasses. This upcoming week, I will be working on the design review presentation and the design report as well.

 

Team Status Report for Feb 12

As of right now, the most significant risks we face concern the components we choose to work with. These are risks because they are the core and foundation of our project: if one of them were to be faulty or not operational, it would be detrimental to our timeline and would put us behind as we look for new parts. We are managing these risks through extensive research and clear checks and balances among the three of us, ensuring that when one of us decides on a product, it is double-checked by the others. If we were to choose an unsuitable product, we intend to pivot to working on our other devices and move the schedule around.

As of now, there were no changes to our initial system as we presented our proposal this past week. You can see the presentation on our page now.

We are still in the early stages of the process right now and will have a much clearer idea of the hardware we intend to use over the next week. We intend to update our team status report next week with those details.

Chad’s Status Report for 02/12

For this past week, I spent the first part of the week helping out with the presentation slides, as we each had our own contribution to completing them. For the rest of the week, I spent some time researching object detection algorithms and found great results with the YOLO algorithm. I found that YOLOv5 can be implemented on a Jetson Nano easily; it has been done before as JetsonYolo, achieving 12 frames per second. Next week, I will be researching other object detection algorithms to see which ones would best fit the components of our design, and also whether 12 frames per second would be satisfactory. Currently, we are still on time with our schedule.