This week I mostly researched how to set up the LIDAR with ROS. We needed an additional Nvidia Jetson so that we could work on image detection and range detection at the same time, so I spent a lot of time configuring and updating the new Jetson and getting it connected to the internet so we can start downloading the libraries we need to run the LIDAR. Next week, I will install the LIDAR packages and implement a program in ROS to get LIDAR data.
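Once the ROS packages are installed, each scan arrives as an array of per-angle distances, so the first useful program is usually something small on top of that. A sketch of the kind of processing planned (argument names mirror the fields of ROS's sensor_msgs/LaserScan; the default range limits assume the A1M8's rated 0.15–12 m span, so treat them as placeholders):

```python
def nearest_obstacle(ranges, angle_min, angle_increment,
                     range_min=0.15, range_max=12.0):
    """Return (distance, angle_rad) of the closest valid return in one scan.

    Mirrors the fields of a ROS sensor_msgs/LaserScan message; returns
    None if every beam is outside the sensor's rated range.
    """
    best = None
    for i, r in enumerate(ranges):
        if not (range_min <= r <= range_max):  # also drops inf/NaN returns
            continue
        if best is None or r < best[0]:
            best = (r, angle_min + i * angle_increment)
    return best
```

In a real ROS node this function would be called from the callback of a subscriber on the `/scan` topic.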
Fayyaz Status Report for 3/26
There were two main things done on my end this week. The first was prepping for the ethics discussion by going over the assigned questions and readings. Mostly, though, I worked on setting up the second Jetson, which will be used for testing. I made sure to flash the same image and install all the same packages on this device as on the first one.
We are still on schedule and hope to move even faster now that we have two Jetsons to work on in parallel.
In the next week, I hope to have the LIDAR fully set up so we can start integrating the LIDAR and camera together.
Team Status Report for 03/26
For this past week, we were able to order another Jetson Nano, since more were in stock, as well as two Edimax 2-in-1 WiFi adapters. With this, Ethan, who is currently working on getting the LIDAR set up with the Jetson, can work separately from Fayyaz and me, who are working on the camera module and real-time detection algorithm. We are now more seriously considering purchasing a fan to mitigate the risk of the Jetson Nano overheating under the computation these detection algorithms require: running a simple face detection algorithm from JetsonHacks caused the heat sink to heat up quite a bit. Also, every so often while working on the Jetson Nano, we get a message saying “System throttled due to overcurrent”. We’re not exactly sure whether this will pose an issue in the future, but it is definitely something to keep an eye on.
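To keep watching the overheating risk, the board's `tegrastats` utility prints the relevant zone temperatures once a second. A small illustrative parser for one line of its output (the exact fields vary by JetPack version, and the sample line below is a hand-written approximation, not captured from our board):

```python
import re

def parse_temperatures(tegrastats_line):
    """Extract zone temperatures (deg C) from one line of `tegrastats` output.

    Matches tokens of the form NAME@<number>C, e.g. GPU@24C or thermal@25.25C.
    """
    return {zone: float(val)
            for zone, val in re.findall(r"(\w+)@([\d.]+)C", tegrastats_line)}

# Illustrative sample line (fields approximate a Jetson Nano's output)
sample = ("RAM 1440/3964MB (lfb 271x4MB) CPU [14%@921,10%@921,12%@921,8%@921] "
          "PLL@24C CPU@26.5C PMIC@100C GPU@24C AO@31C thermal@25.25C")
temps = parse_temperatures(sample)
```

A watchdog script could poll this and log a warning whenever any zone crosses a chosen threshold.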
No changes were made to our design as we each continue to work on our respective tasks, and we are still on track with our schedule.
Chad’s Status Report for 03/26
This past week, I spent some time getting the Edimax 2-in-1 WiFi adapter set up with the Jetson so that we can work on it without Ethernet cables. With this, we can now program the board from home much more conveniently. I followed the instructions on this website to get everything set up:
https://learn.sparkfun.com/tutorials/adding-wifi-to-the-nvidia-jetson/all
I also spent some time working on getting the real-time detection algorithm set up on the Jetson Nano. I decided to first try implementing the YOLO detection algorithm, and chose a YOLOv5 implementation since it seemed to be the least taxing on the Jetson Nano in terms of frames per second. The implementation I found online is called JetsonYolo; the repo is here:
https://github.com/amirhosseinh77/JetsonYolo
I was having some issues installing the torchvision library on the Jetson Nano, and spent some time debugging why the install failed. I will continue working on this in the upcoming week, and should then be able to run the YOLO detection algorithm on the Jetson Nano.
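While the install is being debugged, it helps to keep the downstream logic in mind: YOLO-family detectors emit overlapping candidate boxes that are filtered by non-maximum suppression, whose core operation is an intersection-over-union test. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) pixel corners:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2) corners."""
    # Corners of the overlapping rectangle, if any
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Non-maximum suppression then keeps the highest-confidence box and discards any remaining box whose IoU with it exceeds a threshold (commonly around 0.45).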
Ethan’s Status Report for 3/19
This week I mostly did research on how to interface with and get data from the LIDAR. I worked on setting up a Linux environment on my MacBook to connect to the LIDAR, but this proved difficult. Instead, I checked out another Nvidia Jetson so that I can work directly on the board we will be using, while my other group members get the camera working on the first Nvidia Jetson. Next week I will use ROS to interface with the LIDAR and start recording initial measurements on the Jetson.
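One of the first steps in recording measurements will be converting the LIDAR's polar returns into Cartesian points so they can be plotted or compared against camera detections. A small sketch of that conversion (argument names mirror ROS's LaserScan fields; the function name is illustrative):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert LaserScan-style polar ranges to (x, y) points in the sensor frame.

    Beam i is at angle angle_min + i * angle_increment (radians), so each
    return maps to (r*cos(theta), r*sin(theta)).
    """
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```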
Team Status Report for 3/19
As of right now, the biggest risk to the project is again the integration of all the different components. When we were integrating the camera with the Jetson, we could tell the temperature of the Jetson had increased, and there was noticeable lag in the OS. This could be a potential issue down the road when we try to run the LIDAR in coordination with the camera: it could lead to overheating and the whole process slowing down. We are managing these risks by looking into cooling devices such as a fan, as well as considering lower-intensity object detection. We do not yet have a clear contingency plan for the case where the two do not work together, but after seeing the camera in action this week, we agreed we need one by this time next week.
No definite changes were made to the design besides the temporary use of wired Ethernet. We have since bought a WiFi adapter and hope to use it instead of the wired connection soon. The change was not strictly necessary, and its only cost was some lab time.
No schedule changes have occurred. We are still on track.
Look at Fayyaz's and Chad's most recent reports for some pictures of what we got working!
Fayyaz Status Report for 3/19
This week of the project consisted mostly of getting the Jetson to work. After receiving the Jetson, I tried flashing the OS onto it, but it would hang on the Nvidia logo during boot. After looking at this forum thread for guidance, https://forums.developer.nvidia.com/t/jetson-nano-2gb-stuck-on-boot-screen-with-nvidia-logo/169601 , I found out that there is a separate image I had to flash (https://developer.nvidia.com/embedded/jetpack). Once that loaded, I ran into another issue: downloading the necessary packages for the camera and the LIDAR requires internet access, but the Jetson only has a wired connection, and the network did not recognize the device. After many painful hours of trying to figure out the issue, and talking to three different professors, we realized we needed to register the device's MAC address on the CMU website. After an hour or so, the device was connected.
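For anyone repeating the registration step: the MAC address the CMU page asks for can be read off the board with `ip link`, or formatted from Python's standard library. A small illustrative helper (just a sketch of the formatting, not how we found ours):

```python
import uuid

def format_mac(node):
    """Render a 48-bit integer MAC (as returned by uuid.getnode())
    as the colon-separated aa:bb:cc:dd:ee:ff form registration pages expect."""
    return ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

# Print this machine's MAC in registration-ready form
print(format_mac(uuid.getnode()))
```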
From here, we installed the camera packages on the Jetson using this guide https://www.youtube.com/watch?v=dHvb225Pw1s&ab_channel=JetsonHacks . We attached the camera and were able to run some initial tests. Below is a picture of the initial setup.
Our project is on schedule, and I intend to help Ethan with the LIDAR setup next week. By the end of next week, I hope the LIDAR is set up and we have a working environment in which we can take readings from the LIDAR and display them on the Jetson.
Chad’s Status Report for 03/19
I spent my time working with Fayyaz on setting up the Jetson so I can move on to installing and testing the YOLO object detection algorithm next. Fayyaz and I registered the Jetson Nano on the CMU website so we could get network access through the Ethernet cable in the lab. We then connected the camera to the Jetson Nano via the CSI connector on the board. To test the camera, Fayyaz and I cloned a repo from JetsonHacks:
https://github.com/JetsonHacksNano/CSI-Camera
Following the repo's instructions, we tested the camera with simple_camera.py. There was initially a pinkish tint in the camera image, but we were able to remove it by following the instructions here:
https://jonathantse.medium.com/fix-pink-tint-on-jetson-nano-wide-angle-camera-a8ce5fbd797f
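For anyone reproducing the camera test: simple_camera.py works by handing OpenCV a GStreamer pipeline string for the CSI camera. A sketch of an equivalent helper (parameter defaults here are illustrative, not necessarily the repo's exact values):

```python
def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=640, display_height=360,
                       framerate=30, flip_method=0):
    """Build the GStreamer pipeline string that simple_camera.py-style code
    passes to cv2.VideoCapture for the Jetson Nano's CSI camera."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, height={capture_height}, "
        f"format=NV12, framerate={framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )
```

On the Jetson this string would be consumed as `cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)`.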
We then went on to testing a simple face detection algorithm that was included in the repo. This test can be seen in the picture below:
Next week, I plan to begin setting up the real-time detection algorithm by first cloning the repo for the YOLO algorithm and testing detection with it.
Chad’s Status Report for 02/26
For this past week, I spent a lot of my time creating the presentation slides and preparing to present, as I was the presenter for the design review. As mentioned last week, we decided to experiment with both YOLO and SSD real-time detection algorithms. Many of the parts we ordered have just arrived. Next week we plan to begin configuring the Jetson Nano along with the LIDAR.
Team Status Report for 2/26
This past week we mainly focused on finalizing the details of our design and implementation for the design review presentation. We finalized and ordered the components: the A1M8 LIDAR, an Nvidia Jetson Nano, HUD glasses, and a camera made for the Nvidia Jetson.
Next week we will have the parts we ordered, so we can begin actually tinkering with and integrating the different sensors and components of our system. We will start taking measurements with our LIDAR and camera sensors and continue researching our object detection algorithm. Currently, we are exploring lightweight versions of the YOLOv3 algorithm.