Brian’s Status Report for 2/24

Accomplishments

This week, I mostly worked on preparing for the design presentation that happened this Wednesday, since I was the presenter this time around. Afterwards, I got some rudimentary work done on the software side of our project, setting up a branch on our team's GitHub repository. I am now attempting to detect when a hand is in contact with an item based on webcam footage, but I haven't found a working solution yet. I am currently using YOLO for object detection, since it seemed more promising than writing a custom algorithm from scratch to figure out when a hand is grabbing an item. I've also read a little of the HPS OpenCL guide to familiarize myself with the framework, as I will be working with Simon on the FPGA side of the project after break, though I am mainly focusing on software for now.
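One simple heuristic I am considering for hand-item contact is bounding-box overlap between a detected hand and detected items. The sketch below is a minimal version of that idea, assuming the boxes come out of a YOLO detector in (x1, y1, x2, y2) pixel format; the box format and the overlap threshold are my assumptions, not settled parts of our design.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def hand_touching_item(hand_box, item_boxes, threshold=0.1):
    """Flag contact when any item box overlaps the hand box enough.
    The 0.1 threshold is a placeholder to be tuned on real footage."""
    return any(iou(hand_box, b) >= threshold for b in item_boxes)
```

A low IoU threshold is intentional: a hand gripping an item usually only partially overlaps its box, so requiring a high IoU would miss real grabs.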

 

Progress

I am still a bit behind schedule, as I have not yet implemented a working algorithm for detecting when someone picks up or puts down an item. I will very likely work on the software implementation during spring break if I am still behind next week, since I want to make sure we can integrate our system afterwards without delays. To catch up next week, I will read more of the YOLO/OpenCV documentation to see what approaches might get hand contact with items recognized by the algorithm, and attempt to implement them. I have already searched for candidate algorithms but haven't found one that seems truly feasible and efficient yet.



Shubhi’s Status Report for 2/17

This week I worked on setting up the environment and the end-to-end software architecture for the system. We have a GitHub repository where I have created mock modules for each function and step of our system. I also worked on designing what our database would look like and what data our system would need to communicate between all the modules. After working on this, I am slightly concerned about the camera inputs to the system: we need several cameras, and I am not sure how many data streams the system can support, especially since our current design needs two cameras per checkout line. Next week, I hope to work on the computer vision component that identifies how many people and items are in a line in a given camera feed.
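One way to keep multiple camera streams from blocking each other is to give each camera its own reader thread and a small bounded queue, so a slow consumer drops stale frames instead of stalling capture. This is only a sketch of that pattern using stand-in frame sources; a real version would pull frames from something like `cv2.VideoCapture`, which is an assumption about the eventual implementation.

```python
import queue
import threading

def camera_reader(camera_id, frame_source, out_queue):
    """Read frames from one camera and push them into a bounded queue.
    frame_source is a stand-in iterable for a real capture device."""
    for frame in frame_source:
        try:
            out_queue.put_nowait((camera_id, frame))
        except queue.Full:
            # Drop the oldest frame so a slow consumer never blocks capture.
            out_queue.get_nowait()
            out_queue.put_nowait((camera_id, frame))

def start_cameras(sources):
    """One bounded queue and reader thread per camera feed."""
    queues, threads = [], []
    for cam_id, src in enumerate(sources):
        q = queue.Queue(maxsize=4)
        t = threading.Thread(target=camera_reader, args=(cam_id, src, q))
        t.start()
        queues.append(q)
        threads.append(t)
    for t in threads:
        t.join()
    return queues
```

With two cameras per checkout line, the number of threads grows linearly with lines, which gives a concrete way to measure how many streams one machine can actually sustain.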

Team Status Report for 2/17

We spent part of this week discussing whether to change the system to use a different type of sensor to count the items in each cart. After meeting with our faculty advisor and discussing among ourselves, we decided against it, since integrating another sensor into our system would have taken additional effort we didn't think was worthwhile. As a team, we spent the rest of the week fleshing out and implementing the architecture of our system while also working on the design review slides.

 

After doing some more research into how OpenCL works, we realized that OpenCL converts kernel code (C/C++) into RTL and synthesizes it, which means our Gantt chart needs to be modified. Our schedule now has a dependency: we first need a working algorithm in OpenCV, then we must find the portion of the code that needs to be sped up and port it from Python to C/C++ so that we can accelerate it with OpenCL. The allocation of tasks has therefore changed, and a few tasks have been shuffled or removed. Below is our revised Gantt chart:
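The kind of code worth porting down this pipeline is a tight, data-parallel per-pixel loop. As a hypothetical example (not a chosen part of our algorithm), the grayscale conversion below is the shape of hotspot we would look for in the Python prototype: each loop iteration is independent, so its C/C++ port maps naturally onto one OpenCL work-item per pixel.

```python
def rgb_to_gray(rgb_flat):
    """Per-pixel luma conversion over a flat [r, g, b, r, g, b, ...] buffer.
    Each iteration is independent, so a C port of this loop body is exactly
    what OpenCL would express as a single work-item and synthesize to RTL.
    77/150/29 is the common integer approximation of the luma weights."""
    gray = []
    for i in range(0, len(rgb_flat), 3):
        r, g, b = rgb_flat[i], rgb_flat[i + 1], rgb_flat[i + 2]
        gray.append((77 * r + 150 * g + 29 * b) >> 8)
    return gray
```

Profiling the OpenCV prototype to find loops with this structure is the first step of the dependency described above; only after that does the C/C++ port and OpenCL acceleration make sense.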

A was written by Simon, B was written by Shubhi, C was written by Brian

Part A: Our system expedites the grocery shopping process, which hopefully alleviates some of the frustrations that arise from having to wait in lines. With regard to public health, a more pleasant shopping experience could encourage people to shop for their own groceries more frequently, which tends to be healthier than eating at restaurants. This also benefits public welfare, since our system aims to make the basic need of buying groceries more convenient. Where safety is concerned, the use of cameras in our system could worry customers. However, the only data we aim to collect about people is how many of them (not who) are in a line at a given time, and we will not be storing any significant amount of camera footage. Furthermore, people are already monitored in grocery stores through security cameras, so our additional cameras will not be capturing any information that wouldn't already be known for security purposes.

 

Part B: Individuals are bound to have a better shopping experience when they can save time at the grocery store. People are more likely to return and make more visits if they have a positive shopping experience, which can foster a sense of community at the store. Many groups, such as the elderly, don't get much social interaction, and a positive grocery store experience provides an opportunity to build that sense of community and increase social interaction. A system that shortens the wait to reach the checkout counter also decreases the overall time a customer spends at the store, leaving more time that can be directed to bettering society elsewhere.

 

Part C: Our system increases the speed at which customers can check out of the store, which could make stores more popular thanks to customer satisfaction with checkout times. With increased popularity, a store using our system could increase revenue and, by extension, profit. With faster checkout times, a store might also not need as many cashiers to staff the counters, further increasing profits through cost cutting.



Brian’s Status Report for 2/17

Accomplishments

This week, I mostly worked on the design presentation slides with Simon, since I am the presenter for next week. Together we created block diagrams for our system's algorithmic design and a rough diagram of the physical layout. I also picked up the DE10-Standard FPGA that we will be using for hardware acceleration and looked into videos and documentation on how to work with the board. In addition, I modified our Gantt chart after discussion with my teammates, because there is now a dependency in our schedule. I am currently working through some open-source OpenCL code to further understand the framework, so that I can help Simon convert our OpenCV Python code to C/C++ in later weeks for use with OpenCL.

 

Progress 

The DE10-Standard has a lot of features, so I am still quite unfamiliar with using it fully. In terms of CV algorithm implementation, I am also slightly behind schedule because of our redefinition of scope and design after our weekly meeting. To catch up, I hope to get through the open-source OpenCL code within the next few days, get clarification from Shubhi on the software side of the project, and make progress on implementing a CV algorithm for tracking a cashier checking out items.



Simon’s Status Report for 02/17/2024

Accomplishments:

I started off this week by looking into sensors we could use instead of cameras, after receiving feedback on our proposal presentation. However, we decided that sensors on shopping carts would be economically unfeasible and easy to break or lose, and that sensors at checkout lines would be less accurate than cameras.

I also worked on the design presentation slides with Brian. We refined our use case requirements, decided on which components we would use, and created a block diagram for our system.

Lastly, I familiarized myself with the DE10-Standard FPGA by reading through the manual and system CD, downloading the necessary software, and working through the getting started guide and the tutorials for the FPGA. I am currently working through the HPS guide and I plan to finish that and the OpenCL guide in the next few days.

Progress:

I underestimated the amount of work needed to familiarize myself with the DE10-Standard, so I am a little behind schedule. I am also still unsure of the details of the CV algorithm, so Brian and I have not gotten around to designing the RTL datapath. However, the schedule for this week only entails getting communication between the FPGA and my laptop working, so hopefully I can still complete that task on time and catch back up. For next week, I hope to have an initial version of the datapath done and to be able to send and receive data between my laptop and the FPGA.

Shubhi’s Status Report for 2/10

This week, I spent some time researching computer vision algorithms to figure out our options for the part of the system that detects the number of objects in a cart. From my research, I found that OpenCV has multiple built-in object tracking algorithms that look promising for our use case, given their ease of use and how readily they would integrate with the rest of the system. We also gave our proposal presentation this week, which I spent some time preparing as well. Given the feedback we received, I am considering options other than a CV algorithm for analyzing the carts, but that is something I plan to discuss with my team early next week so we can switch gears if we decide to.
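To get a feel for what object tracking buys us before committing to OpenCV's built-in trackers, a toy greedy nearest-centroid matcher is enough to keep a stable count of objects across frames. This is a simplified stand-in for real tracking, not OpenCV's API; the 50-pixel matching distance is an arbitrary placeholder.

```python
import math

def update_tracks(tracks, detections, max_dist=50.0):
    """Greedy nearest-centroid matching across two frames.
    tracks: {track_id: (x, y)} from the previous frame.
    detections: list of (x, y) centroids in the current frame.
    Returns the updated {track_id: (x, y)} mapping; unmatched
    detections get fresh IDs so len(result) is the object count."""
    next_id = max(tracks, default=-1) + 1
    updated = {}
    unmatched = list(detections)
    for tid, center in tracks.items():
        if not unmatched:
            break
        # Pair each existing track with its closest current detection.
        best = min(unmatched, key=lambda d: math.dist(d, center))
        if math.dist(best, center) <= max_dist:
            updated[tid] = best
            unmatched.remove(best)
    for d in unmatched:  # New objects get fresh IDs.
        updated[next_id] = d
        next_id += 1
    return updated
```

Even this crude version shows why tracking matters for counting: without persistent IDs, the same item re-detected in consecutive frames would be counted repeatedly.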



Simon’s Status Report for 02/10/2024

This week, I worked on the proposal presentation slides, mostly contributing to the use-case requirements, technical challenges, and scope. I also focused on researching FPGA-based hardware acceleration to gain a general understanding of how it applies to machine learning and computer vision, so that Brian and I can design our RTL implementation next week. From our readings, Brian and I decided on a DE10-Standard FPGA, with a DE10-Lite as backup if the DE10-Standard is not available in the inventory. We are also planning to use the OpenCL framework for our FPGA development. (See Brian's status report for more details.)

In terms of progress, I feel I am slightly behind schedule because of uncertainty about which component(s) of the CV algorithm to speed up using an FPGA. Consequently, I haven't figured out how to start designing the datapath, which Brian and I are scheduled to complete this upcoming week. However, we plan to have a team meeting on Sunday or Monday to discuss potential CV algorithms and how an FPGA can be incorporated. This should give Brian and me a good idea of what needs to be included in the datapath design.

For next week, I hope to have a completed plan for our datapath and begin looking into how to send/receive data between the FPGA to the CPU.

Team Status Report for 2/10

Risk Identification and Mitigation Strategies

Primary Risk Concern: A major risk for this project is choosing an acceptable FPGA to support our hardware acceleration constraints. The FPGA we choose matters: we want one with enough logic elements to support our RTL implementation.

 

Mitigation Strategies: 

To address this risk before any issues arise, we are planning to use an FPGA from the project inventory (currently the DE10-Standard, since Cyclone V has much more support than Cyclone IV boards). Since there is only one of these boards in the project inventory at the moment, we can pivot to ordering another Cyclone V board online if the DE10-Standard is claimed by another team (we are considering this less costly FPGA: https://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=218&No=1021&PartNo=2#contents). That would consume a significant portion of our budget, but our other required purchases are not as expensive as the FPGA, so we would still be well within budget even if we need to buy one.

 

Design Changes: 

Although we have not made changes to our design yet and will continue to discuss this on Monday, we are working to incorporate feedback from our proposal. We are considering changing our design from being entirely wired to including wireless communication where necessary, to improve scalability. We are still exploring our options, but we are primarily looking at Bluetooth.

 

Schedule Change: 

After the proposal presentations, we realized that our Gantt chart didn't account for spring break, so we changed it to include time for the break. Note that the RTL implementation task will likely not be worked on during spring break, but we thought splitting that one task into two separate tasks would be less clear.



Brian’s Status Report for 2/10

Accomplishments

Simon and I researched FPGA hardware acceleration and briefly discussed our findings to better understand how it works, and we deliberated on which FPGA would suit our project. While we saw multiple articles online where Xilinx FPGAs are used to accelerate ML algorithms, we concluded that the exact brand of FPGA doesn't really matter, so we decided we could use an Intel Altera board (currently planning on the DE10-Standard; we can order the DE10-Lite if the DE10-Standard is taken by another team). We also found that a Cyclone V FPGA would be better than a Cyclone IV FPGA like the DE0-Nano, because Cyclone V boards support OpenCL, a framework we are heavily considering for hardware acceleration.

 

Furthermore, I came across an article that seems very relevant to our project (https://www.researchgate.net/publication/338481306_Hardware_Acceleration_of_Computer_Vision_and_Deep_Learning_Algorithms_on_the_Edge_using_OpenCL). The authors implement hardware acceleration of computer vision algorithms using OpenCL, a high-level synthesis framework that converts kernel code to RTL. Even more relevant, they did this on a Cyclone V Altera board, which we plan to use by requesting the DE10-Standard from the project inventory. I also edited the Gantt chart to better reflect our workflow during spring break.

 

Project Schedule and Progress

Fortunately, my progress is on schedule. During Monday's mandatory lab meeting, Simon and I will discuss our findings in more detail and, if time allows, start working on the design of our RTL/kernel code implementation. This portion of the project is extremely important, because our approach to hardware acceleration is vital for our final product.

 

Goals for the Next Week

In the next week, Simon and I hope to finish designing our RTL/kernel implementation and obtain an FPGA. I also hope to complete a significant portion of the cashier time detection algorithm in that time, and we hope to have cameras delivered so we can test our CV algorithms in the future.