Aditi’s Status Report 10/29

This week I successfully set up the camera and wrote the code to integrate it with YOLO. First, I tried following the instructions on the Jetson website, but the nvgstcapture-based instructions did not work. I searched online for solutions, trying to determine whether I was missing a camera driver or facing a display issue. After reading that this camera should be compatible, I looked further into display issues and realized I was getting an EGL display connection error. I had initially tried the JetsonHacks camera-caps library, and I also tried a VNC connection, but neither helped. I then tried a library specific to USB cameras and got the camera working, though not through GStreamer. Finally, I fixed an SSH forwarding issue by changing -X to -Y, which resolved the EGL display error and got GStreamer working. With that in place, I wrote a simple script to take a picture and run it through YOLO.
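The capture-then-detect script can be sketched roughly as below. This is a minimal sketch, not the exact script: it assumes a V4L2 USB camera at /dev/video0, OpenCV built with GStreamer support, and the stock YOLOv5s weights pulled via PyTorch Hub.

```python
def gst_pipeline(device="/dev/video0", width=1280, height=720, fps=30):
    """Build a GStreamer pipeline string for a V4L2 USB camera."""
    return (
        f"v4l2src device={device} ! "
        f"video/x-raw, width={width}, height={height}, framerate={fps}/1 ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )


def capture_and_detect():
    """Grab one frame over GStreamer and run it through YOLOv5."""
    # Imported here so the pipeline helper above stays dependency-free.
    import cv2
    import torch

    cap = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the camera")
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    results = model(frame[..., ::-1])  # OpenCV gives BGR; YOLOv5 expects RGB
    return results.pandas().xyxy[0]   # one row per detection
```

The -Y forwarding fix matters here because OpenCV's GStreamer backend can fail with EGL display errors when X forwarding is untrusted.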

 

Aditi’s Status Report 10/22

Before fall break, I spent most of my time working on the design report. I was in charge of writing the use-case requirements, design requirements, hardware design studies, bill of materials, summary, and testing (verification and validation) sections, as well as adding relevant implementation information. Writing these sections took over six hours as we iterated on and reworked various parts along the way.

Aditi’s Status Report 10/8

This week, I worked on setting up the Jetson Nano. I successfully flashed the microSD card, connected the Jetson to my computer, and completed the headless setup. I was quite busy this week, so I did not manage to finish the camera setup for the Nano; I plan to work on that this coming week when I have more time. I also decided on the additional purchases we will need for the Nano, such as Ethernet cables, and researched and chose a Wi-Fi dongle that is compatible with the Jetson.

 

Team Status Update 10/8

The implementation of our project did not change much this week, as we worked on our individual parts. Integration is definitely a worry for the team, so we are going to try to get some initial integration done by next week. We have switched to the Jetson Nano and were able to bring it up. After getting some questions during our presentation this week, we researched image subtraction further and discussed our motivation for using local hardware versus AWS. This was a slower week for our team as we all had other deadlines.

Aditi’s Status Report 10/1

I ordered the parts late Sunday night and was able to pick them up during the week. A lot of our energy this week went into the design presentation. I was in charge of the hardware implementation slide, the testing and validation slides, and the implementation plans. I also redefined the use-case requirements; this took a while because I needed to figure out how we were going to implement some of our testing and devise a new test case for one of our use-case requirements. I attempted to get the Jetson TX2 up and running, but I was not able to download the SDK onto the ECE machines due to user-privilege restrictions. I also realized that the TX2 was too large and would be difficult to mount. After some research, I found that the Nano can meet most of our requirements, although we would have to switch to USB cameras. I did a trade study comparing average accuracy and FPS across other options, such as a smaller YOLO framework, a Raspberry Pi, and support for multiple cameras. The Nano also has a simpler headless setup, so hopefully I can get things moving this week. Finally, I reviewed all of our feedback and gave each person on the team their relevant feedback.

Team Status Report 10/1

The implementation of our project changed slightly again this week, and more details were worked out. Instead of the NVIDIA Jetson TX2, we will use an NVIDIA Jetson Nano: the TX2's form factor was much larger than we expected, and the Nano has enough compute power for our use case. We have also decided on YOLOv5 for object detection after testing it on preliminary images. There is some risk that we won't be able to detect chairs while people are sitting on them; in that case we may ignore those objects and identify only empty chairs, as this still meets our use-case requirements. Identifying occluded chairs may also be difficult, so we may preprocess the image by filtering for the known colors of the chairs in the room. We have changed the delegation of tasks slightly: Chen (instead of Mehar) will work on counting the number of chairs and computing the center of each bounding box output by YOLOv5.
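The counting and box-center step is straightforward to sketch. The flat `(x1, y1, x2, y2, class_name)` tuple layout below is an assumption for illustration; YOLOv5's real output carries more fields (confidence, class index, etc.).

```python
def box_centers(detections, target_class="chair"):
    """Given detections as (x1, y1, x2, y2, class_name) tuples,
    return the center point of each box matching target_class."""
    centers = []
    for x1, y1, x2, y2, name in detections:
        if name == target_class:
            centers.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return centers

# Two hypothetical detections: one chair, one person.
dets = [(0, 0, 100, 50, "chair"), (10, 10, 30, 30, "person")]
print(box_centers(dets))       # [(50.0, 25.0)]
print(len(box_centers(dets)))  # chair count: 1
```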

 

Here is a picture of YOLOv5 working on an image we took of a study room we hope to use for our MVP:

Aditi’s Status Report 9/24

I am leading the hardware component of our project. Since last week, our hardware implementation has changed: after discussion with the professors, we concluded it would be out of scope to implement the project with a tool I'm unfamiliar with. I spent many hours this week researching alternatives and settled on the NVIDIA Jetson TX2. I also decided on the cameras we will use for the project after researching the various communication protocols the Jetson supports. Additionally, I helped make and revise the slide deck before our proposal presentation. Progress is on schedule, but the schedule needs to be revised since we are using a GPU instead of an FPGA. By next week, I hope to have reserved and received the necessary hardware components and to be able to capture and display a live video stream.

Team Status Report for 9/24

The implementation of our project has changed since the proposal presentation. After reading the feedback and discussing with the professors, we realized that an FPGA was out of scope and might take too much time to realistically do real-time video processing, so we have settled on an NVIDIA Jetson TX2 instead. We also sat down and refined our solution for our use case: we decided on dynamic seat mapping, but we will only reflect an updated position if a seat moves more than 1 m. We discussed risk mitigation for seat tracking and occupancy, possibly using QR codes to uniquely identify chairs and whether they are occupied. We are in the process of redoing our task assignments and Gantt chart to allow for more time and earlier integration of all parts of the project.
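The 1 m update rule above amounts to a simple displacement threshold. A minimal sketch, assuming seat positions are (x, y) coordinates in metres (the function name and tuple format are hypothetical, not from our codebase):

```python
import math

def maybe_update(prev, new, threshold_m=1.0):
    """Return the new seat position only if it moved more than
    threshold_m metres from the previously mapped position;
    otherwise keep the old position to avoid jitter in the map."""
    if prev is None:          # first sighting of this seat
        return new
    dx, dy = new[0] - prev[0], new[1] - prev[1]
    return new if math.hypot(dx, dy) > threshold_m else prev

print(maybe_update((0.0, 0.0), (0.5, 0.5)))  # (0.0, 0.0): moved ~0.71 m, below threshold
print(maybe_update((0.0, 0.0), (1.5, 0.0)))  # (1.5, 0.0): moved 1.5 m, update
```

Keeping small movements suppressed means minor detection noise from frame to frame does not constantly redraw the seat map.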