Month: February 2023

Nii-Okaitey’s Status Report for 2/25/23

This week I received the Jetson Nano 2GB and the Sainsmart IMX219 camera module that I requested from the course list. However, upon opening the box, I realized it did not contain a power cord or an SD card to boot up the OS, so 

Likhitha Chintareddy’s Status Report for 02/25/2023

This week, I set up the Django project and the GitHub repo for version control: https://github.com/santacalculus/No-Time-to-Dine-capstone/. While we had initially thought of using MongoDB Atlas to store video data, we have come up with a simpler backend solution that involves storing the 

Chi Nguyen’s Status Report for 02/25/2023

Half of the week was spent on the Design Review. There were issues related to my part specifically that were addressed in the feedback we received.

First, there was a concern about how we would customize the YOLOv7 model to account for human objects and tune out non-human objects. The current YOLOv7 model outputs the coordinates of bounding boxes for every object in the frame and maps each set of coordinates to a specific object. The first column of the output text files specifies what the object is, so all I’ll have to do is filter the output data by selecting only the rows that have “person” in their first column.

Another concern is related to our wait time algorithm. The current algorithm cannot guarantee a margin of error of 1-2 minutes: if the wait time for one person varies within 10-30 seconds, the margin of error becomes much larger than 1-2 minutes once we multiply that wait time by the number of people in the line. The variation in wait time for one person depends on what they order (e.g., latte, hot chocolate, sandwich, etc.). Since our goal for the MVP is to have our system work well in one dining location, we can collect data on the wait time for each order over a certain time frame and then perform a linear regression on the collected data to find the average wait time for each kind of order. However, estimating the wait time of the line this way would require each person in the line to enter their order on the web application, which is not realistic because we don’t want to force users to use the application. Therefore, we’ll need the cashier to provide the order information on our application. To collect data about each kind of order, we can take turns collecting data at La Prima over a couple of days. To test our margin of error, we can add a feature, specifically for testing, that records when we place and receive an order. Then we can easily compare the calculated wait time against the actual wait time and see how far off our estimate is.
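To make the filtering step concrete, here is a minimal sketch. It assumes YOLOv7 is run with --save-txt so each detection becomes a line of the form “class x_center y_center width height” (normalized), and that class index 0 is “person” (COCO ordering); the exact format may differ depending on how we run the model.

```python
# Filter YOLOv7 text output down to "person" detections only.
# Assumes label lines look like "class x_center y_center width height"
# and that class 0 is "person" (COCO class ordering).
PERSON_CLASS = 0

def filter_person_boxes(label_lines):
    """Keep only bounding boxes whose class column is 'person'."""
    boxes = []
    for line in label_lines:
        parts = line.split()
        if not parts:
            continue
        if int(parts[0]) == PERSON_CLASS:
            # (x_center, y_center, width, height), normalized to [0, 1]
            boxes.append(tuple(float(v) for v in parts[1:5]))
    return boxes

# Example frame: two people and one cup (class 41)
lines = [
    "0 0.51 0.42 0.10 0.35",
    "41 0.20 0.80 0.05 0.07",
    "0 0.75 0.45 0.09 0.33",
]
people = filter_person_boxes(lines)
print(len(people))  # 2 — our queue-length estimate for this frame
```

The number of surviving boxes per frame is then the head count we feed into the wait time algorithm.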

Again, you can see my updated code here. I will post more on the design and test results there after our meeting tomorrow (02/26/2023); you’ll see a lot of progress by then, as I’m currently fixing some bugs. My progress is slightly behind because there are adjustments to make to the design, but I should have enough time to catch up before spring break since I have more free time slots next week. I hope to finish the customized object detection code by next week and have the code for the linear regression model and wait time calculation mostly done.
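One simple way to realize the regression idea sketched above: treat each timed batch of orders as an observation (counts of each order type, total seconds) and solve a least-squares fit for a per-order-type service time. The menu, numbers, and formulation here are illustrative assumptions, not our collected La Prima data.

```python
# Least-squares estimate of per-order service times, then a line-wait estimate.
# ORDER_TYPES, the synthetic data, and the batching assumption are hypothetical.
import numpy as np

ORDER_TYPES = ["latte", "hot chocolate", "sandwich"]  # hypothetical menu

def fit_service_times(count_rows, total_seconds):
    """Fit one service time (seconds) per order type via least squares."""
    X = np.array(count_rows, dtype=float)   # each row: counts of each order type
    y = np.array(total_seconds, dtype=float)  # observed total seconds per batch
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(ORDER_TYPES, coef))

def estimate_wait(line_orders, service_times):
    """Estimated wait for the whole line, in seconds."""
    return sum(service_times[o] for o in line_orders)

# Synthetic timings consistent with latte=60s, hot chocolate=45s, sandwich=120s
rows = [[2, 0, 0], [0, 2, 0], [0, 0, 1], [1, 1, 1]]
totals = [120, 90, 120, 225]
times = fit_service_times(rows, totals)
print(round(estimate_wait(["latte", "sandwich"], times)))  # 180
```

Comparing `estimate_wait` against the recorded actual wait is exactly the margin-of-error test described in the report.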

Team Status Report for 02/25/2023

The camera we decided on previously – the Sainsmart IMX219 – is not a USB camera, as was pointed out in the feedback, even though we had indicated otherwise. However, since it is compatible with the Jetson Nano board on which our image processing algorithm 

Likhitha Chintareddy’s Status Report for 02/18/2023

This week I researched the optimal database for us to use. We considered DocumentDB by Amazon AWS, as it is a popular choice for a cloud database; however, MongoDB Atlas emerged as the better choice. As discussed in our presentation last week, 

Team Status Report for 02/18/23

Referring back to our team status report from last week, we mentioned that not having our design flow fleshed out before the design review was a significant risk that could jeopardize the success of the project. This issue has been resolved as we spent a major portion of last week going into details about the resources we would be using for different hardware and software components and how the system would be integrated together.
As of now, the new significant risk in our design is that we swapped out components some team members were familiar with for components that we believe are more practical for the project. As you can see from our new block diagram, we replaced the Ultra96 FPGA with the Jetson Nano, because the Jetson Nano contains hardware acceleration for AI applications and supports Python. We intend to use the Sainsmart IMX219 USB camera because it is fully compatible with the Jetson Nano. In case we are unable to connect the hardware system to Wi-Fi, we can use an Ethernet cable instead. We also intend to use MongoDB Atlas to deploy MongoDB in the cloud, as it has full support for the MongoDB API compared to other database services (e.g., Amazon DocumentDB).

Our team members (Paanii – hardware, Likhitha – frontend engineering) have never worked with these components before and will have to spend time learning how to use them, which can potentially introduce more bugs into our system. However, the Jetson Nano and MongoDB Atlas have been commonly used in previous Capstone projects and student-led projects, so we believe there are plenty of helpful resources to draw on if we run into trouble. In terms of cost, MongoDB Atlas is free, so we will save a lot of money compared to Amazon DocumentDB (minimum of $200/month), which is expensive to use. The Jetson Nano is also free for us because it’s in the inventory.
You can also find our updated schedule here.

Chi Nguyen’s status report for 02/18/23

As mentioned in my previous status report and in our schedule, I have been working on implementing the YOLO algorithm for the object detection process. I have created a GitHub repository that includes all of my code and notes on the design and implementation process (you can find 

Nii-Okaitey’s Status Report for 2/18/23

This week I worked on revising our hardware plan for the project. After speaking with course staff, it seems that, given our application, an FPGA may not be the best solution for us. Previous projects that performed similar functions to ours utilized an actual embedded 

Chi Nguyen’s status report for 02/11/23

For this week, I’ve been spending time on the design aspect of my algorithm before actually implementing it next week. The goal for the following week is to implement the YOLO algorithm for object detection and have it work on images containing one or more objects. I’ve downloaded YOLOv7 and have been looking at a lot of different resources to get familiar with it before writing actual code.

Here are some resources that I’ve found about YOLOv7 that will be useful for the project (and particularly the future design presentation):
1. https://github.com/WongKinYiu/yolov7

2. https://learnopencv.com/yolov7-object-detection-paper-explanation-and-inference/

3. https://viso.ai/deep-learning/yolov7-guide/

4. https://blog.paperspace.com/yolov7/

5. https://www.kaggle.com/code/taranmarley/yolo-v7-object-detection

6. https://blog.roboflow.com/yolov7-breakdown/ 

These resources also explain the performance tradeoffs between YOLO and other algorithms (especially its improved performance compared to previous versions) and solidify my choice of this algorithm for object detection.

My progress is currently on schedule. Our group is meeting tomorrow (Sunday 02/12/23), and I’m expecting to put in a few more hours of work on finalizing the design and getting started on the code. By the end of next week, I expect to finish code that can detect objects in an image.

Likhitha Chintareddy’s Status Report for 2/11

This week, I started researching the different UIs used by apps, as well as published papers and other work in the UI space, on how to design for users who are checking wait times. Some of the work I have found useful and