Team Status Report for 4/30/22

As of right now there are no clear risks that could jeopardize the project. The only potential issue is that when we try to integrate the system with a mobile battery pack, the voltage or amperage might be off, which could cause problems. We have accounted for this in a few ways. We made sure to check the specs of everything we are buying and to confirm that they match the specs of the Jetson, lidar, and camera. If worse comes to worst, we have an extra Jetson on standby with all of our code pushed to it, so that if there is an issue we can immediately move to the other device.

No changes to our system as of now, except that we hope to add the battery pack soon.

No updates on the schedule!

Fayyaz's Status Report for 4/30/22

This week was mostly focused on getting things ready for the end of the semester. I mostly worked on refining and building the final presentation that was given on Wednesday. I believe it went well, and I feel we were able to showcase everything we wanted to discuss with the professors and with the class as a whole. Besides that, I also worked on designing the poster for the demo next week. Lastly, we did some testing and bought some materials to move toward making the system mobile.

We are on schedule and hope to finish everything up as time is winding down!

This coming week, we hope to complete the final report, make the system mobile, and do testing with automobiles.

Ethan’s Status Report for 4/30

This week I mainly worked on figuring out how to power the Nvidia Jetson off a battery pack. We determined a sufficient system to use and will be using it to take our system mobile and do some outside testing with actual cars. Next week, we will be conducting more tests. Additionally, I will be working on additional functionality to detect the difference between cars and people.

Chad’s Status Report for 04/30

For this past week, I spent most of my time helping out with the final presentation slides and also the final poster. Also, after discussing with the team, we decided on a battery to use to power the system so it can be taken outside. The battery has already been ordered; it will power the Jetson Nano over a USB-C cable, which we already have.

Next week, we plan to make some final touches to our project by adjusting the GUI so it looks more pleasing. Instead of drawing a generic box for every detection, we want to draw a different shape with tkinter depending on whether the detected object is a car or a person, so the user also knows what kind of object was detected behind them.
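As a rough sketch of what that change could look like (the class names, canvas size, and `detections` format below are placeholders, not our actual GUI code), drawing a different marker per detected class might look something like this:

```python
import tkinter as tk

# Hypothetical detection format: (class_name, x_center_px, range_m)
detections = [("car", 320, 4.2), ("person", 510, 2.8)]

root = tk.Tk()
canvas = tk.Canvas(root, width=640, height=240, bg="black")
canvas.pack()

for cls, x, rng in detections:
    # Closer objects are drawn larger so urgency is visible at a glance
    size = max(10, int(60 / rng))
    if cls == "car":
        # Cars: red rectangle
        canvas.create_rectangle(x - size, 100 - size, x + size, 100 + size,
                                fill="red", outline="")
    elif cls == "person":
        # People: yellow oval so the user can tell what is behind them
        canvas.create_oval(x - size, 100 - size, x + size, 100 + size,
                           fill="yellow", outline="")
    canvas.create_text(x, 100 + size + 10, text=cls, fill="white")

root.mainloop()
```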

Currently on the final stretch, we are still on track to finish before the final day demo!

Ethan’s Status Report for 4/23

This week I finished writing the algorithm that takes both the LIDAR input from the ROS node and the car positions from the camera, and displays a warning on the GUI if an object is detected within a certain range. I got the algorithm working with real LIDAR data and hard-coded car position data standing in for the camera.

Shown above, the red bar indicates the position and range of a car detected within the camera and LIDAR’s field of view.
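As a rough illustration of the gating and placement logic described above (the field of view, warning threshold, and function names are assumptions for the sketch, not the exact implementation):

```python
WARN_RANGE_M = 5.0       # assumed warning threshold
CAMERA_FOV_DEG = 62.0    # assumed horizontal field of view
FRAME_WIDTH_PX = 1280    # assumed camera resolution
GUI_WIDTH_PX = 640

def bbox_to_angle(x_center_px):
    """Map a bounding-box center to an angle off the camera axis (0 deg = straight back)."""
    return (x_center_px / FRAME_WIDTH_PX - 0.5) * CAMERA_FOV_DEG

def warning_bar(x_center_px, lidar_range_m):
    """Return (gui_x, bar_width) for the red bar, or None if the object is out of range."""
    if lidar_range_m > WARN_RANGE_M:
        return None
    gui_x = int((bbox_to_angle(x_center_px) / CAMERA_FOV_DEG + 0.5) * GUI_WIDTH_PX)
    # Closer objects get a wider bar so range is readable at a glance
    width = int(40 * (WARN_RANGE_M - lidar_range_m) / WARN_RANGE_M) + 10
    return gui_x, width
```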

Next, I will work with the team to integrate our systems together for a full working system.

Team’s Status Report for 4/23

After working to integrate the different subsystems we've been building, we now have a complete, fully functional system. We integrated the object detection of cars with the range detection from the LIDAR, and created a GUI that displays red boxes on the screen indicating the relative direction and range of the cars approaching the user. Additionally, we tested the GUI with the AR glasses, and the system works end to end. Next, we will be testing the system to collect data on the accuracy of its predictions and the speed at which it can warn the user.

Shown above: for the sake of testing, we configured the YOLO algorithm to detect people instead of cars. The camera detects a person in the frame, the system figures out the range to the object, and the warning is displayed as a red square. The system can detect and display multiple objects at once, and it will only display a warning if an object of interest is detected by the camera. For example, if a chair is in range of the LIDAR, the system will not warn the user, because a chair is not an object of interest.

Fayyaz's Status Report for 4/23

This past week, there was clear, significant progress made within our group, bringing us near our MVP. Regarding the work that I did over the past week, there was quite a bit. First, Chad and I worked together to scour through the YOLO code and were able to successfully grab and print the coordinates of the bounding boxes being drawn each frame. From there, we moved the lidar onto the single Jetson alongside YOLO, since we found it did not add much latency. With both components on one board, we needed to send the coordinates over to the lidar script. We first tried to make a ROS publisher node inside the YOLO script itself, but the dependency paths were really messy and convoluted. So instead, I wrote a function inside YOLO that constantly writes the coordinates of the bounding boxes in the most recent frame to a file. Then, I wrote a separate ROS publisher node that reads the coordinates from that file and publishes them to the topic that Ethan's node is subscribed to. From there, everything came together!
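Roughly, the publisher side of that handoff could look like the sketch below (the file path, topic name, and line format are placeholders rather than the real ones):

```python
#!/usr/bin/env python
# Hypothetical bridge: reads the bounding-box coordinates that the YOLO
# script dumps to a file and republishes them on a ROS topic.
import rospy
from std_msgs.msg import String

COORD_FILE = "/tmp/yolo_bboxes.txt"   # placeholder path

def main():
    rospy.init_node("bbox_publisher")
    pub = rospy.Publisher("/camera/bboxes", String, queue_size=1)
    rate = rospy.Rate(10)  # publish at ~10 Hz
    while not rospy.is_shutdown():
        try:
            with open(COORD_FILE) as f:
                # One line per detection, e.g. "car 320 180 80 60"
                pub.publish(String(data=f.read()))
        except IOError:
            pass  # YOLO may be mid-write; just try again next cycle
        rate.sleep()

if __name__ == "__main__":
    main()
```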

With this done, we are basically on track with our schedule, and all that is left is testing, fleshing out the range of our program, and the physical components.

Next week, I hope to get the rest of the acrylic made for the design, test the system more with cars and people, and adjust the range of the system.

Here is the link to a video showing a basic demonstration of our system: https://drive.google.com/file/d/1uesYUynypOKnusUUs-6F8mhb_bZTSKB7/view?usp=sharing

Chad’s Status Report for 04/23

A lot of progress was made this past week in terms of integrating all the subsystems of the project (the camera with YOLO detection and the lidar). Initially we were planning on using two Jetson Nanos and communicating between them, one running the YOLO algorithm and the other running ROS with the lidar. We were going to do this because we thought the processing load of the YOLO algorithm would be too much for a single Jetson Nano to run both subsystems. However, after some testing, we realized that both subsystems can run on one Jetson Nano if we choose not to display the camera feed and we run the tiny configuration of YOLOv4, which is less accurate than normal YOLOv4 since it has fewer layers in its neural network, but is inherently much faster.

ROS is very lightweight and the lidar itself doesn't require much processing at all, so we were able to run both on a single Jetson Nano without maxing out its processing power. This simplifies things a lot.

We were able to extract the coordinates of each detection, for cars and persons only, from the YOLO algorithm, and then write these to an external file. From the external file, ROS publisher and subscriber nodes are used to write to and read from a topic. The ROS subscriber node handles the lidar processing and reads the coordinates from the topic that the ROS publisher node writes to. Combining the two data sources, the algorithm we wrote processes the data and displays a GUI showing where the car/person is according to the lidar and camera. A video was taken and will be shown in the final presentation.
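A sketch of what that subscriber side might look like (the topic names, message format, camera parameters, and `update_gui()` stub are illustrative assumptions, not our actual node):

```python
#!/usr/bin/env python
# Sketch: fuse the latest lidar scan with the bounding-box coordinates
# arriving on the topic, then hand the result to the GUI.
import math
import rospy
from std_msgs.msg import String
from sensor_msgs.msg import LaserScan

FRAME_WIDTH_PX = 1280                  # assumed camera resolution
CAMERA_FOV_RAD = math.radians(62.0)    # assumed horizontal field of view

latest_scan = None

def update_gui(cls, x_px, range_m):
    # Placeholder for the tkinter warning display
    rospy.loginfo("%s at x=%spx, %.2fm", cls, x_px, range_m)

def scan_cb(scan):
    global latest_scan
    latest_scan = scan

def bbox_cb(msg):
    if latest_scan is None:
        return
    for line in msg.data.splitlines():
        cls, x, _y, _w, _h = line.split()
        # Map the bounding-box center to an angle, then to a lidar beam index
        angle = (float(x) / FRAME_WIDTH_PX - 0.5) * CAMERA_FOV_RAD
        idx = int((angle - latest_scan.angle_min) / latest_scan.angle_increment)
        if 0 <= idx < len(latest_scan.ranges):
            update_gui(cls, float(x), latest_scan.ranges[idx])

rospy.init_node("fusion_node")
rospy.Subscriber("/scan", LaserScan, scan_cb)
rospy.Subscriber("/camera/bboxes", String, bbox_cb)
rospy.spin()
```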

With the progress made, the schedule is almost back on track, and the only things left now are the physical design, which is currently in progress to house the components, and testing the system on cars. The plan for next week is to keep testing the system, finish up the physical design, and prepare for the final presentation.

Team Status Report for 04/16

This next week will require effort from all of us to produce a product for the upcoming final presentation. However, since the YOLO algorithm is now finally working on the camera, all that is left is interpreting the prediction and position results of the detections along with the lidar data. We have already found a way to extract the data from the lidar and figure out the angle off the axis (0 degrees) at which a detection was found. Along with this, we will need to extract the position of the detected object and compare the two. This is how we will be able to determine the severity of the warning to be sent to the user. The physical design of the system is also being crafted at TechSpark to house the 2 Jetson Nanos.
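Assuming the lidar driver publishes a standard sensor_msgs/LaserScan message on /scan (the topic name and logging below are just for illustration), pulling out the angle of the closest return looks roughly like this:

```python
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_cb(scan):
    # Each range index i corresponds to angle_min + i * angle_increment
    valid = [(r, i) for i, r in enumerate(scan.ranges)
             if scan.range_min < r < scan.range_max]
    if not valid:
        return
    dist, idx = min(valid)
    angle_deg = math.degrees(scan.angle_min + idx * scan.angle_increment)
    rospy.loginfo("Closest object: %.2f m at %.1f deg off axis", dist, angle_deg)

rospy.init_node("lidar_angle_demo")
rospy.Subscriber("/scan", LaserScan, scan_cb)
rospy.spin()
```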

There are still currently no changes to this design, apart from the fact that we will need to 3D print each side of the physical build and attach them together, since the final result is too large for the 3D printers available.

Chad’s Status Report for 04/16

For the past week, significant progress was made getting the YOLO real-time detection algorithm working on the Jetson Nano. After re-flashing the SD card on the Jetson Nano and trying again with the Darknet version of YOLO instead of TensorRT, I was finally able to get the YOLO algorithm working on the Jetson Nano. The CSI camera was still very unresponsive, and I was still struggling to get the OpenCV library to work with the GStreamer pipeline. However, using a USB camera, I was able to get the YOLO detection algorithm running on the camera at an average of 2.5 fps at a resolution of 1920×1080. Below is a picture of the YOLOv4-tiny version running on the Jetson Nano:


As you can see from the picture, the algorithm was able to detect the people standing behind me with confidence scores above 70%.
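As an aside on the CSI camera trouble mentioned above: the issue usually comes down to the GStreamer pipeline string handed to OpenCV. A commonly used pipeline for the Jetson Nano's CSI camera looks roughly like the sketch below (resolution, framerate, and flip values are guesses, and OpenCV must be built with GStreamer support for this to work):

```python
import cv2

# Typical nvarguscamerasrc pipeline for the Jetson Nano CSI camera
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv flip-method=0 ! "
    "video/x-raw, width=960, height=540, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("CSI camera", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```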

In order to test the algorithm on cars, I downloaded a 30-minute YouTube video of traffic on a highway and ran the algorithm on it. The output of the algorithm is shown in the picture below:

The plan for next week is to extract the predictions of the algorithm along with their positions and pass them on to the other Jetson Nano, so that this data can be analyzed along with the data from the lidar in order to determine the severity of the warning to the user. I am still behind schedule because of how long it took to get the YOLO algorithm to work, but a lot of progress can be made this week now that it is working!