Chad’s Status Report for 04/30

For this past week, I spent most of my time helping out with the final presentation slides and the final poster.  Also, after discussing with the team, we decided on a battery to power the system so it can be taken outside.  The battery has already been ordered, and it will power the Jetson Nano through a USB-C power cable that we already have.

Next week, we plan to make some final touches to our project by tweaking the GUI so it looks more pleasing.  Essentially, instead of drawing a generic box for every detection, we want to draw a different shape with tkinter for each object class, whether it be a car or a person, so the user also knows what kind of object was detected behind them.
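To illustrate the idea, here is a minimal tkinter sketch; the class names, shapes, colors, and coordinates are placeholders, not our actual GUI code:

```python
import tkinter as tk

# Illustrative only: draw a different shape per detected class on a
# tkinter canvas instead of a generic box for everything.
def draw_detection(canvas, label, x, y, w, h):
    if label == "car":
        # Rectangle to suggest a vehicle
        canvas.create_rectangle(x, y, x + w, y + h, outline="red", width=2)
        canvas.create_text(x + w / 2, y - 10, text="car", fill="red")
    elif label == "person":
        # Oval to suggest a person
        canvas.create_oval(x, y, x + w, y + h, outline="yellow", width=2)
        canvas.create_text(x + w / 2, y - 10, text="person", fill="yellow")

root = tk.Tk()
canvas = tk.Canvas(root, width=640, height=480, bg="black")
canvas.pack()

# Example detections: (label, x, y, width, height) in canvas pixels
for det in [("car", 50, 200, 180, 90), ("person", 400, 150, 60, 160)]:
    draw_detection(canvas, *det)

root.mainloop()
```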

We are currently on the final stretch and still on track to finish before the final demo day!

Chad’s Status Report for 04/23

A lot of progress was made this past week on integrating all of the subsystems of the project (the camera with YOLO detection, and the Lidar).  Initially we were planning on using two Jetson Nanos communicating with each other, one running the YOLO algorithm and the other running ROS with the Lidar, because we thought running the YOLO algorithm alongside the other subsystem would be too much processing for a single Jetson Nano.  However, after some testing, we realized that both subsystems can run on one Jetson Nano if we choose not to display the camera feed and we run the tiny configuration of YOLOv4, which is less accurate than regular YOLOv4 since it has fewer layers in its neural network, but is inherently much faster.
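As a hypothetical sketch of the headless launch (assuming the AlexeyAB darknet build, whose demo mode accepts a -dont_show flag to skip the display window; paths are examples):

```python
import subprocess

# Launch darknet's demo mode with the yolov4-tiny configuration and no
# on-screen rendering, leaving headroom for the Lidar/ROS subsystem.
subprocess.run([
    "./darknet", "detector", "demo",
    "cfg/coco.data",         # class names / dataset config
    "cfg/yolov4-tiny.cfg",   # tiny network: fewer layers, much faster
    "yolov4-tiny.weights",   # pretrained weights
    "-dont_show",            # skip the display window
], check=True)
```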

ROS is very lightweight, and the Lidar itself doesn't require much processing at all, so we were able to run both on a single Jetson Nano without maxing out its processing power.  This simplifies things a lot.

We were able to extract the coordinates of each detection (cars and persons only) from the YOLO algorithm and write them to an external file.  From there, ROS publisher and subscriber nodes are used to write to and read from a topic: the publisher node reads the coordinates from the external file and publishes them to the topic, while the subscriber node handles the Lidar processing and reads the coordinates from that topic.  Combining the two streams of data, an algorithm we wrote processes it all and displays a GUI showing where the car/person is according to the Lidar/camera.  A video was taken and will be shown in the final presentation.
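As a rough illustration of the plumbing (the topic name, file path, and message format here are placeholders, not our exact values):

```python
#!/usr/bin/env python
# Illustrative sketch of the detection publisher node.
import rospy
from std_msgs.msg import String

def publish_detections():
    pub = rospy.Publisher("yolo_detections", String, queue_size=10)
    rospy.init_node("yolo_publisher")
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        try:
            with open("/tmp/detections.txt") as f:
                # e.g. one "label x y" entry per line
                pub.publish(f.read())
        except IOError:
            pass  # YOLO may not have written the file yet
        rate.sleep()

# On the Lidar side, the subscriber node just registers a callback:
#   rospy.Subscriber("yolo_detections", String, fuse_with_lidar)

if __name__ == "__main__":
    publish_detections()
```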

With the progress made, the schedule is almost back on track, and the only things left now are the physical design to house the components (currently in progress) and testing the system on cars.  The plan for next week is to keep testing the system, finish the physical design, and prepare for the final presentation.

Team Status Report for 04/16

This next week will require the effort of all of us to produce a product for the upcoming final presentation.  However, since the YOLO algorithm is now finally working on the camera, all that is left is interpreting the prediction and position results of the detections together with the Lidar data.  We have already found a way to extract the data from the Lidar and determine the angle from the axis (0 degrees) at which the detection was found.  Along with this, we will need to extract the position of the detected object from the camera and compare the two; this is how we will be able to determine the severity of the warning sent to the user (a sketch of this comparison follows this paragraph).  The physical design of the system is also being crafted at TechSpark to house the 2 Jetson Nanos.
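As a rough illustration of how the comparison could work (the field of view, image width, and distance thresholds below are placeholder values, not our measured ones):

```python
# Map a detection's horizontal pixel position to a bearing using the
# camera's field of view, look up the Lidar range at that bearing, and
# grade the warning. All constants here are illustrative assumptions.
HFOV_DEG = 62.0    # assumed horizontal field of view of the camera
IMG_WIDTH = 1280   # assumed frame width in pixels

def pixel_to_bearing(x_center):
    """Bearing in degrees from the optical axis (0 = straight behind)."""
    return (x_center / IMG_WIDTH - 0.5) * HFOV_DEG

def warning_severity(x_center, lidar_scan):
    """lidar_scan: dict mapping integer bearing (deg) -> range (meters)."""
    bearing = round(pixel_to_bearing(x_center))
    distance = lidar_scan.get(bearing, float("inf"))
    if distance < 2.0:
        return "HIGH"
    elif distance < 5.0:
        return "MEDIUM"
    return "LOW"

# Example: object centered at pixel 900 with a fabricated Lidar scan
print(warning_severity(900, {13: 1.5}))  # -> "HIGH"
```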

There are still no changes to this design, apart from the fact that we will need to 3D print each side of the physical build and attach them together, since the final result is too large for the 3D printers available.

Chad’s Status Report for 04/16

For the past week, significant progress was made getting the YOLO real-time detection algorithm working on the Jetson Nano.  After re-flashing the SD card on the Jetson Nano and trying again with the darknet implementation of YOLO instead of TensorRT, I was finally able to get the YOLO algorithm working on the Jetson Nano.  The CSI camera was still very unresponsive, and I was still struggling to get the OpenCV library to work with the GStreamer pipeline.  However, using a USB camera, I was able to get the YOLO detection algorithm running on the camera at an average of 2.5 fps at a resolution of 1920×1080.  Below is a picture of the YOLOv4-tiny version running on the Jetson Nano:

[Picture: YOLOv4-tiny detections running on the Jetson Nano]

As you can see from the picture, the algorithm was able to detect the people standing behind me with above 70% confidence.
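A frame rate like the 2.5 fps above can be estimated with a simple timing loop around the capture/detect cycle; a minimal sketch (device index, resolution, and frame count are illustrative):

```python
import time
import cv2

# Time how long it takes to capture and process a batch of frames.
cap = cv2.VideoCapture(0)  # USB camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

frames, start = 0, time.time()
while frames < 100:
    ok, frame = cap.read()
    if not ok:
        break
    # ... run the detector on `frame` here ...
    frames += 1

print("avg fps: %.2f" % (frames / (time.time() - start)))
cap.release()
```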

In order to test the algorithm on cars, I downloaded a 30-minute YouTube video of traffic on a highway and ran the algorithm on it.  The output of the algorithm is shown in the picture below:

The plan for next week is to extract the predictions of the algorithm along with their positions and pass them on to the other Jetson Nano, so that this data can be analyzed together with the data from the Lidar to determine the severity of the warning to the user (one possible extraction approach is sketched below).  I am still set back in terms of my schedule because of how long it took to get the YOLO algorithm to work, but a lot of progress can be made this week now that it is working!
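One concrete way to pull class labels and pixel positions out of a darknet-trained YOLO model from Python is OpenCV's DNN module; this is shown purely as an illustration of the idea, not necessarily the route we will take (paths and thresholds are examples):

```python
import cv2
import numpy as np

# Load the darknet config/weights through OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("traffic.jpg")  # example input frame
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
net.setInput(blob)

for output in net.forward(layer_names):
    for det in output:
        scores = det[5:]                 # per-class confidence scores
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:       # confidence threshold
            # det[0], det[1] are the normalized box center
            cx, cy = int(det[0] * w), int(det[1] * h)
            print(class_id, cx, cy)      # e.g. write to the shared file
```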

Chad’s Status Report for 04/10

For this past week, I spent more time trying to get the YOLO algorithm working on the Nano.  Not much work was done because of Carnival and the demo on Monday.  I re-flashed the SD card on the Nano to retry installing the necessary libraries for the YOLO algorithm, and I will continue working on this in the coming week.  The schedule is still slightly delayed because YOLO has been so difficult to install on the Jetson Nano.  There is a chance I will have to use a different algorithm if it still doesn't work after this week.

Chad’s Status Report for 04/02

This past week, I was still having many issues with installing the YOLO algorithm on the Jetson Nano.  I had installed the PyTorch and Torchvision libraries by following the steps on this website:

https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048

At first there were some issues, but I was able to solve them by downloading the AArch64 build of Anaconda for the Nano, called Archiconda3.  This version included Python 3.7, which I was able to use to complete the installation process listed above.  However, when trying to run JetsonYolo.py from the JetsonYolo GitHub (or any Python file that uses the OpenCV library), OpenCV for some reason doesn't have permission to use the CSI camera module on the Nano, and I keep receiving an "Unable to Open Camera" error.  After doing a deep dive, there was apparently an issue with the version of OpenCV I was using, so I am now installing one of the newer versions of the OpenCV package in hopes that it will fix this issue.  A quick sanity check for this kind of failure is sketched below.
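A useful first step is to confirm that the installed OpenCV build actually has GStreamer support, then try opening the Nano's CSI pipeline directly; a sketch (resolution and framerate are illustrative):

```python
import cv2

# "Unable to Open Camera" often means OpenCV was built without GStreamer;
# the build information should report GStreamer: YES.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line)

# nvarguscamerasrc pipeline for the Nano's CSI camera.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("opened:", cap.isOpened())
cap.release()
```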

It's a very long installation process, and while installing, warnings popped up saying there may not be enough storage on the Jetson, so this may pose an issue.  There have been multiple problems while trying to get YOLO installed on the Jetson, so this has set back my personal schedule a bit.  The goal for next week is to hopefully get YOLO detection fully working on the Jetson.

Team Status Report for 03/26

For this past week, we were able to order another Jetson Nano since more were in stock, and we also ordered two Edimax 2-in-1 WiFi adapters.  With this, Ethan, who is currently working on getting the Lidar set up with the Jetson, can work separately from Fayyaz and me, who are working on the camera module and the real-time detection algorithm.  We are now, more than ever, considering purchasing the fan to mitigate the risk of the Jetson Nano overheating due to the computation required by these detection algorithms.  Running a simple face detection algorithm from JetsonHacks caused the heat sink to heat up quite a bit.  Also, every so often while working on the Jetson Nano, we get a message saying "System throttled due to overcurrent".  We're not exactly sure whether this will pose an issue in the future, but it is definitely something to consider.

No changes were made to our design as we each continue to work on our respective tasks, and we are still on track with our schedule.

Chad’s Status Report for 03/26

This past week, I spent some time getting the Edimax 2-in-1 WiFi adapter set up with the Jetson so that we can work on it without the use of Ethernet cables.  With this, we can now program the board from home with much more convenience.  I followed the instructions on this website to get everything set up:

https://learn.sparkfun.com/tutorials/adding-wifi-to-the-nvidia-jetson/all

I also spent some time working on getting the real-time detection algorithm set up on the Jetson Nano.  I decided to first try implementing the YOLO detection algorithm, and I chose to go with a YOLOv5 implementation since it seemed to be the least taxing on the Jetson Nano in terms of frames per second.  This implementation, called JetsonYolo, was found online.  The repo for this implementation:

https://github.com/amirhosseinh77/JetsonYolo

I was having some issues installing the torchvision libraries on the Jetson Nano and spent some time trying to debug why I wasn't able to install them.  I will continue to work on this in the upcoming week and should eventually be able to run the YOLO detection algorithm on the Jetson Nano; a minimal smoke test for once the installation works is sketched below.
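Once PyTorch and torchvision install cleanly, something along these lines should confirm the model runs at all (this uses the ultralytics torch.hub entry point rather than the JetsonYolo wrapper, purely as an illustration; it downloads the yolov5s weights on first run):

```python
import torch

# Minimal YOLOv5 smoke test: load the small model and run it on a
# sample image, printing classes, confidences, and box coordinates.
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
results = model("https://ultralytics.com/images/zidane.jpg")
results.print()
```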

Chad’s Status Report for 03/19

I spent my time working with Fayyaz setting up the Jetson so I can move on to installing the YOLO object detection algorithm and testing it next.  Fayyaz and I registered the Jetson Nano on the CMU website so we could get network access through the Ethernet cable in the lab.  We then connected the camera to the Jetson Nano via the CSI connector on the board.  To test this camera, Fayyaz and I cloned a repo from JetsonHacks:

https://github.com/JetsonHacksNano/CSI-Camera

Following the instructions in the repo, we tested the camera with simple_camera.py.  There was initially a pinkish tint around the edges of the camera image, but we were able to remove this tint by following the instructions here:

https://jonathantse.medium.com/fix-pink-tint-on-jetson-nano-wide-angle-camera-a8ce5fbd797f

We then went on to test a simple face detection algorithm that was included in the repo.  This test can be seen in the picture below:

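For reference, the face detection demo in that repo is built on OpenCV's Haar cascades.  A minimal sketch of the same idea, assuming an OpenCV build that ships the bundled cascade files, and using a plain webcam index instead of the repo's CSI GStreamer pipeline:

```python
import cv2

# Load the bundled frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Draw a box around each detected face.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```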
Next week, I plan to begin setting up the real-time detection algorithm by cloning over the repo for the YOLO algorithm and testing detection with it.

Chad’s Status Report for 02/26

For this past week, I spent a lot of my time creating the presentation slides and preparing for the presentation, as I was the presenter for the design review.  As mentioned last week, we decided to experiment with both the YOLO and SSD real-time detection algorithms.  Many of the parts have been ordered and have just arrived.  Next week we plan to begin configuring the Jetson Nano along with the Lidar.