Shubhi’s Status Report for 2/24/2024

Accomplishments

I worked on the software implementation for the architecture, making sure the directory is well organized and that the software design covers all the functions we need. Every module's purpose within the overall goal has been fleshed out, as well as how the modules will communicate with each other. I also worked on integrating OpenCV into the directory, researching how to use OpenCV with C++ (I had only used it with Python before), and am working on integrating YOLO into the project. There are some articles on using YOLO with C++, so I read them and feel more confident about using it in the project. I also placed an order for a camera so that I can do live testing with the detection implementation.
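To make the YOLO integration concrete: whether inference ends up running through OpenCV in Python or C++, the raw detections will need post-processing such as non-maximum suppression. Below is a minimal pure-Python sketch of that step (the `(score, [x, y, w, h])` detection format and the 0.5 threshold are assumptions for illustration, not our final design):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x, y, w, h]."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    iy = max(0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def non_max_suppression(detections, iou_thresh=0.5):
    """Keep the highest-scoring box from each cluster of overlapping boxes.

    detections: list of (score, [x, y, w, h]) pairs.
    """
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, kb) < iou_thresh for _, kb in kept):
            kept.append((score, box))
    return kept
```

In the real pipeline this would run on the boxes YOLO emits each frame; the logic is the same in C++ if we port it later.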

Progress

Currently I am still building up the software, but hopefully by next week I will have some sort of item-counting detection running to test. I underestimated how much time it would take to research and learn how to implement YOLO with OpenCV, and I have also realized that I will need edge-detection-related algorithms specifically for the item counting, which will require a bit more research. I am not fully confident the item detection will work, but I have a good idea of how to implement it, so we will see by the end of the week.

Simon’s Status Report for 02/24/2024

Accomplishments:

I made little progress this week. I didn't manage to finish the HPS guide I mentioned last week because I don't have an Ethernet router or an Ethernet port on my laptop, both of which are needed. As such, I am unable to use the UART-to-USB communication I was originally planning on until I get the Ethernet components, and I have not yet figured out an alternative way to communicate that doesn't use the HPS. I also didn't get to experiment with OpenCL this week because I didn't have the required microSD card and adapter, which are due to be delivered this coming Monday (note to self: in the future, read all the required materials first thing). As a result, much of my time this week was spent planning the OpenCL implementation, since I was unable to make progress on the other deliverables.

Progress:

I am significantly behind schedule, since I am supposed to start implementing the CV algorithm using OpenCL next week. To resolve this, I plan to spend some extra time over spring break (which originally had no work scheduled) to catch back up, since I doubt I will be back on schedule by the end of this week.

Team Status Report for 2/24

Risk Mitigation

While we don't have much of our budget allocated yet, it is imperative that we use it wisely when ordering cameras. Our design uses two cameras per checkout line, which gets very costly as we order more cameras for full system integration later in the semester. Therefore, we are ordering just one camera for testing now; if it doesn't meet our hardware requirements, we can switch to a different model without having spent too much of the budget.


Design 

No design changes have been made to our system as of now. However, we are discussing how to choose the right frames from the real-time video footage we receive so that we don't process redundant data, and how to handle the line decreasing in size.
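One straightforward way to skip redundant frames, purely as an illustration of the idea we are discussing (the threshold and class interface are placeholders, not a committed design), is to process a frame only when its mean absolute pixel difference from the last processed frame exceeds a threshold:

```python
import numpy as np

class FrameSelector:
    """Forward a frame for processing only if it differs enough from the
    last processed frame (mean absolute pixel difference)."""

    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.last = None

    def should_process(self, frame):
        if self.last is None:          # always process the first frame
            self.last = frame.copy()
            return True
        # Cast to a signed type so the subtraction doesn't wrap around.
        diff = np.mean(np.abs(frame.astype(np.int16) - self.last.astype(np.int16)))
        if diff > self.threshold:
            self.last = frame.copy()
            return True
        return False
```

In the real system, the frames would come from the camera feed, and a skipped frame simply wouldn't be sent through detection.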


Schedule

Our Gantt chart has not changed overall as of this week.



Brian’s Status Report for 2/24

Accomplishments

This week, I mostly worked on preparing for the design presentation that happened this Wednesday, since I was the presenter this time around. After that, I got some rudimentary work done on the software side of our project, setting up a branch on our team's GitHub repository. I am now attempting to detect when a hand is in contact with an item based on webcam footage, but I haven't found a working solution yet. I am currently using YOLO for object detection because it seems more promising than trying to detect when a hand is grabbing an item with a custom algorithm. I've also read a little of the HPS OpenCL guide to familiarize myself with the framework, since I will be working with Simon on the FPGA side of the project after break, but I am mainly focusing on software for now.


Progress

I am still a bit behind schedule, as I have not fully implemented a working algorithm for detecting when someone picks up and puts down an item. If I am still behind next week, I will very likely work on the software implementation during spring break, since I want to make sure we can integrate our system afterward with no delays. To catch up this coming week, I will read more of the YOLO/OpenCV documentation for approaches to recognizing hand contact with items and attempt to implement them. I've already searched a bit for candidate algorithms but haven't found an idea that seems truly feasible and efficient yet.



Shubhi’s Status Report for 2/17

This week I worked on setting up the environment and the end-to-end software architecture for the system. We have a GitHub repository where I have created mock modules for each function and step of our system. I also worked on designing what our database will look like and what data the system will need to communicate between all the modules. After working on this, I am slightly concerned about the camera inputs to the system: we need several cameras (currently two per checkout line), and I am not sure how many data streams the system can support. In the next week, I hope to work on the computer vision component that identifies how many people/items are in a line in a given camera feed.
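For the line-counting component I plan to start on, the core step after detection is just counting how many detected centroids fall inside a line's region of interest. A minimal sketch (assuming, for now, that each line's ROI is an axis-aligned rectangle and detections are `(x1, y1, x2, y2)` boxes):

```python
def count_in_line(boxes, roi):
    """Count detections whose centroid lies inside one line's ROI.

    boxes: list of (x1, y1, x2, y2) detection boxes.
    roi:   (x1, y1, x2, y2) rectangle covering one checkout line.
    """
    count = 0
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # box centroid
        if roi[0] <= cx <= roi[2] and roi[1] <= cy <= roi[3]:
            count += 1
    return count
```

Running this once per line ROI per processed frame would give the per-line people/item counts the rest of the system consumes.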

Team Status Report for 2/17

We spent part of this week discussing whether to change the system to include a different type of sensor for observing the number of items in every cart. After discussing with our faculty advisor and among ourselves, we decided against it, since integrating it into the system would have taken additional effort we didn't think was worth it. As a team, we spent the rest of the week fleshing out and implementing the architecture of our system while also working on the design review slides.


After doing some more research into how OpenCL works, we realized that OpenCL converts kernel code (C/C++) into RTL and synthesizes it, which means our Gantt chart needed to be modified. Our schedule now has a dependency: we first need a working algorithm in OpenCV, then find the portion of the code that needs to be sped up, and then port that portion from Python to C/C++ so we can accelerate it using OpenCL. Therefore, the allocation of tasks has changed and a few tasks have been shuffled or removed. Below is our revised Gantt chart:

A was written by Simon, B was written by Shubhi, C was written by Brian

Part A: Our system expedites the grocery shopping process, which hopefully alleviates some of the frustration of having to wait in lines. With regard to public health, a more pleasant shopping experience could encourage people to shop for their own groceries more frequently, which tends to be healthier than eating at restaurants. This also benefits public welfare, since our system aims to make the basic need of buying groceries more convenient. Where safety is concerned, the use of cameras in our system could be concerning for customers. However, the only data we aim to collect about people is how many of them (not who) are in a line at a given time, and we will not be storing any significant amount of camera footage. Furthermore, people are already monitored in grocery stores through security cameras, so our additional cameras will not capture any information that wouldn't already be known for security purposes.


Part B: Individuals are bound to have a better shopping experience when they can save time at the grocery store. People are more likely to return and make more visits if they have a positive shopping experience, which can foster a sense of community at the store. Many demographics, such as the elderly, don't get a lot of social interaction, and a positive grocery store experience provides an opportunity to build that sense of community and increase social interaction for such groups. Integrating a system that decreases the time it takes to get to the checkout counter also decreases the overall time a customer spends at the store, leaving more time to spend elsewhere on bettering society.


Part C: Our system increases the speed at which customers can check out of the store, which could make stores more popular due to customer satisfaction with checkout times. With increased popularity, a store using our system could increase revenue and, by extension, profit. Also, with faster checkout times, a store might not need as many cashiers to staff the counters, which would further increase profits through cost cutting.



Brian’s Status Report for 2/17

Accomplishments

This week, I mostly worked on the design presentation slides with Simon, since I am the presenter for next week. Together we created block diagrams for our system's algorithmic design and a rough diagram of the physical layout. I also picked up the DE10-Standard FPGA that we will be using for hardware acceleration and looked into videos and documentation on how to work with the board. In addition, I modified our Gantt chart after discussion with my teammates, because our schedule now contains a dependency. I am currently working through some open-source OpenCL code to better understand the framework, so that I can help Simon convert OpenCV Python code to C/C++ in later weeks for use with OpenCL.
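Before any code gets ported to C/C++ for OpenCL, we will need to identify which stage of the Python pipeline is the hotspot. A simple timing harness like the following is one way to do that (the stage functions here are placeholders standing in for our real pipeline steps, not actual project code):

```python
import time

def time_stage(fn, *args, repeats=10):
    """Return the average wall-clock seconds per call for one pipeline stage."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

# Placeholder stages; in practice these would be e.g. detection, tracking, counting.
def stage_a(n):
    return sum(range(n))

def stage_b(n):
    return [i * i for i in range(n)]

timings = {name: time_stage(fn, 10_000)
           for name, fn in [("stage_a", stage_a), ("stage_b", stage_b)]}
hotspot = max(timings, key=timings.get)  # candidate to port to C/C++ + OpenCL
```

Whichever stage dominates the timing dictionary becomes the acceleration target in our revised schedule.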


Progress 

The DE10-Standard has a lot of features, and because of this, I am still quite unfamiliar with how to use it fully. In terms of CV algorithm implementation, I am also slightly behind schedule because of our redefinition of scope/design after our weekly meeting. To combat this, I hope to get through the open-source OpenCL code within the next few days, communicate with Shubhi for more clarification on the software side of the project, and make progress on a CV algorithm for tracking a cashier checking out items.



Simon’s Status Report for 02/17/2024

Accomplishments:

I started off this week by looking into sensors we could use instead of cameras, after receiving feedback from our proposal presentation. However, we decided that sensors on shopping carts would be economically unfeasible and easy to break or lose, and that sensors at checkout lines would be less accurate than cameras.

I also worked on the design presentation slides with Brian. We refined our use case requirements, decided on which components we would use, and created a block diagram for our system.

Lastly, I familiarized myself with the DE10-Standard FPGA by reading through the manual and system CD, downloading the necessary software, and working through the getting started guide and the tutorials for the FPGA. I am currently working through the HPS guide and I plan to finish that and the OpenCL guide in the next few days.

Progress:

I underestimated the amount of work needed to familiarize myself with the DE10-Standard, so I am a little behind schedule. Also, I am still unsure of the details of the CV algorithm, so Brian and I have not gotten around to designing the RTL datapath. However, the schedule for this week only entails getting communication between the FPGA and my laptop working, so hopefully I can still complete this task on time and catch back up. For next week, I hope to have an initial version of the datapath done and be able to send and receive data between my laptop and the FPGA.

Shubhi’s Status Report for 2/10

This week, I spent some time researching computer vision algorithms to figure out our options for programming the part of the system that detects the number of objects in a cart. From my research, I found that OpenCV has multiple built-in algorithms for object tracking, which seem promising for our use case given their ease of use and integration with the rest of the system. We also presented our proposal this week, which I spent some time working on as well. Given the feedback we received, I am considering options other than a CV algorithm to analyze the carts, but that is something I plan to discuss with my team early next week so we can switch gears if we decide to.
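OpenCV's built-in trackers operate on one object at a time; the frame-to-frame association step a multi-object setup relies on can be sketched with a simple nearest-centroid matcher. This is a toy illustration of the idea, not an algorithm we have committed to (the 50-pixel distance cap is an arbitrary assumption):

```python
import math

def match_centroids(prev, curr, max_dist=50.0):
    """Greedily match previous-frame centroids to current-frame centroids.

    prev, curr: lists of (x, y) points. Returns a dict mapping an index in
    prev to an index in curr; unmatched objects are treated as new or lost.
    """
    matches = {}
    used = set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            if j in used:
                continue
            d = math.dist(p, c)  # Euclidean distance between centroids
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches
```

Tracking object identities across frames this way would let us tell whether the same items stay in a cart between processed frames.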



Simon’s Status Report for 02/10/2024

This week, I worked on the proposal presentation slides, mostly contributing to the use-case requirements, technical challenges, and scope. I also researched FPGA-based hardware acceleration to gain a general understanding of how it applies to machine learning and computer vision, so that Brian and I can design our RTL implementation next week. From our readings, Brian and I decided on a DE10-Standard FPGA, with a DE10-Lite as backup if the DE10-Standard is not available in the inventory. We are also planning to use the OpenCL framework for our FPGA development. (See Brian's status report for more details.)

In terms of progress, I feel like I am slightly behind schedule because of uncertainty as to which component(s) of the CV algorithm to speed up using an FPGA. Consequently, I haven't figured out how to start designing the datapath, which Brian and I are scheduled to complete this upcoming week. However, we plan to have a team meeting on Sunday or Monday to discuss potential CV algorithms and how an FPGA can be incorporated. This should give Brian and me a good idea of what needs to be included in the datapath design.

For next week, I hope to have a completed plan for our datapath and to begin looking into how to send/receive data between the FPGA and the CPU.