Team’s Status Report for 12/5/20

Our main priority for this week was the demo. Since we are all back home in different time zones, it was fairly hard for us to meet, but we were still able to plan and practice effectively. We put the final touches on integrating the project and successfully demonstrated our full progress during the demo. For the upcoming week, our main priorities are the final video and the final presentation.

There were no major risks identified this week, and our schedule and the existing design of the project have not changed.

Venkata’s Status Report for 12/5/20

This past week was focused on getting the UI ready for the demo. I first added features such as navigation buttons that grab the appropriate database entries so users can move through the database, and I extended the history trends page so that it combines workouts from the same day and standardizes the y-axis. I then cleaned up the UI by creating new icons and highlighting buttons/icons when the user hovers over the various options.

After cleaning up the Workout History Summary and Workout History Trends pages, I worked on integrating Spotify into our project. After learning about Spotipy, I created a Developer app that connects to a user's Spotify account, allowing our app to modify the user's playback state by pausing/playing/skipping the current song and to identify the name of the current song. We had considered cleaning up the authentication flow, but after our demo we decided to focus on other aspects, so I worked with Vishal to create a simple UI for the Spotify section and began working on the other deliverables.
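The Spotify controls described above could be sketched roughly as follows with Spotipy. This is an illustrative sketch, not our exact code: the credentials, redirect URI, and scope strings are placeholders, and the `spotipy` import is kept inside the builder so the rest of the module works without the package installed.

```python
def make_client(client_id, client_secret, redirect_uri):
    """Build an authenticated Spotipy client (requires the spotipy package)."""
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth
    scope = "user-read-playback-state user-modify-playback-state"
    return spotipy.Spotify(auth_manager=SpotifyOAuth(
        client_id=client_id, client_secret=client_secret,
        redirect_uri=redirect_uri, scope=scope))

def current_song_name(sp):
    """Return the name of the currently playing track, or None."""
    playback = sp.current_playback()
    if playback and playback.get("item"):
        return playback["item"]["name"]
    return None

def toggle_playback(sp):
    """Pause if something is playing, resume otherwise."""
    playback = sp.current_playback()
    if playback and playback.get("is_playing"):
        sp.pause_playback()
    else:
        sp.start_playback()

def skip_song(sp):
    """Skip to the next track in the user's queue."""
    sp.next_track()
```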

I am on track with the schedule. For the upcoming week, I plan on focusing on the remaining deliverables: working on the final presentation and providing Albert with the video/audio files he needs to integrate our sections into the final video. I will then work on the final report the week after.


Venkata’s Status Report for 11/21/20

This week was predominantly spent on tasks not focused on the hardware. Since I had already met our performance requirement for the FPGA, I decided to help out with the software side, specifically by working on parts of the UI that didn't directly interfere with Vishal's work. I focused on the pages that provide workout history to the users. I first came up with a couple of mockups, and after discussing them with the rest of the team, I started coding up the various pages. I had to learn the required Python libraries (Pygame and Matplotlib) and the existing codebase for the Python application. I was then able to create the following three pages, which pull the appropriate information from our database and display it to the user.

  • Workout History Options – Users will be able to navigate the database and pick a workout to analyze

  • Workout History Summary – Users will receive a detailed summary of a selected former workout


  • Workout History Trends – Users will receive a graph analyzing their performance over the past 5 workouts
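The core of the trends page is combining same-day workouts before plotting the last five sessions. A minimal sketch of that logic is below; the record format (date string, rep count), the y-axis limit, and the output filename are assumptions for illustration, and Matplotlib is only imported inside the plotting function.

```python
from collections import OrderedDict

def combine_by_day(workouts):
    """Sum rep counts for workouts that share a date, preserving order."""
    totals = OrderedDict()
    for date, reps in workouts:
        totals[date] = totals.get(date, 0) + reps
    return list(totals.items())

def plot_trends(workouts, max_points=5, out_path="trends.png"):
    """Plot the most recent sessions with a standardized y-axis (needs matplotlib)."""
    import matplotlib
    matplotlib.use("Agg")               # render off-screen, no display needed
    import matplotlib.pyplot as plt
    points = combine_by_day(workouts)[-max_points:]
    dates, reps = zip(*points)
    fig, ax = plt.subplots()
    ax.plot(dates, reps, marker="o")
    ax.set_ylim(0, 50)                  # standardized y-axis; limit is assumed
    ax.set_ylabel("Reps")
    fig.savefig(out_path)
```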


I am on track with the schedule. For the upcoming week(s), I plan on working with Vishal to fully integrate my changes and on refining the various pages to ensure that they are visually appealing. I will also do more testing with the FPGA and integrate the new changes that Albert made to the image processing code.

Team’s Status Report for 11/14/20

This week we completed our MVP, which entailed basic integration, by the time of our demo. We presented a product that could read in a random image, stream it to the FPGA, grab the feedback, and display it in the terminal, fully synced with the user interface. We also spent time collecting more images to help train the posture analysis portion and ensure that we can provide appropriate feedback, while also testing the image thresholds for the various joints under various lighting conditions.

This week, no changes were made to the overall design of the project, and no major risks were identified.

Venkata’s Status Report for 11/14/20

I was able to work on quite a few different things this week. Between the last status report and the demo, I worked with Vishal on the interface and the communication between the FPGA and the computer, and we had it completely integrated in time to present during our demo.

Last week, I mentioned that the arrival time of the various pixel locations was higher than our target requirement of 1.5 s. I raised this issue with the other team members, and we determined that we could still provide appropriate feedback for a particular exercise if we passed an additional byte at the start that indicates the type of exercise being performed. This byte is located at the bottom of the stack, after the erosion output depicted in last week's status report.

This allows the FPGA to process only the locations of the joints used for the selected exercise, skip the processing of the other joints, and simply output 0s for those joints, as in the example below.
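A small sketch of this scheme: an exercise-type byte is prepended to the image stream, and only the joints relevant to that exercise get real coordinates while the rest come back as zeros. The exercise codes, joint names, and lists here are illustrative assumptions, not our actual protocol values.

```python
# Hypothetical exercise codes mapped to the joints the FPGA should process.
EXERCISE_JOINTS = {
    0x01: ["left_elbow", "right_elbow"],            # e.g. bicep curls
    0x02: ["left_knee", "right_knee", "hips"],      # e.g. squats
}
ALL_JOINTS = ["left_elbow", "right_elbow", "left_knee", "right_knee", "hips"]

def build_payload(exercise_code, hsv_bytes):
    """Prepend the exercise-type byte so the FPGA knows which joints matter."""
    return bytes([exercise_code]) + hsv_bytes

def joint_coordinates(exercise_code, detected):
    """Return an (x, y) pair per joint; (0, 0) for joints the FPGA skipped."""
    active = set(EXERCISE_JOINTS.get(exercise_code, []))
    return [detected[j] if j in active else (0, 0) for j in ALL_JOINTS]
```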

By doing this, we were able to reduce the arrival time to less than 1.5 s, meeting our target requirement for receiving feedback after taking an image.

Note: The timer begins before opening the image and ends after it receives the appropriate feedback from the posture analysis functions.

I also began testing the various portions of the code. I verified that the time it takes to send the image via UART and receive the response is less than our target of 1 s.
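The timing check described above can be expressed as a small harness: start the timer before opening the image, run one full request/response cycle, and compare against the budget. The `round_trip` callable is a placeholder for the real open-image/UART exchange, and the default budget matches our 1.5 s requirement.

```python
import time

def measure_round_trip(round_trip, budget_s=1.5):
    """Time one full cycle (open image -> send -> receive feedback)
    and report whether it meets the latency budget."""
    start = time.perf_counter()
    round_trip()                      # placeholder for the real UART exchange
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= budget_s
```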

In terms of the schedule, I am on track. Since I have no further optimizations to make to the code and the other portions of the project are not yet fully refined, I plan on spending a large portion of the remaining time on data collection by serving as the model, ensuring that the various portions have enough data to be fully refined.

Venkata’s Status Report for 11/7/20

I was able to make significant progress this week. Firstly, I realized that I was incorrectly using the BRAM in the MicroBlaze and had to use a Block Memory Generator IP block, so I updated the block diagram to the following.

After learning how to use the IP block, I was able to effectively store the entire input image and allocate different portions of the BRAM for the various intermediate results that are generated as follows:
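The BRAM partitioning described above amounts to packing the input frame and each intermediate buffer back to back at fixed byte offsets. The sketch below illustrates the idea; the image dimensions and stage names are assumptions, and the actual allocation followed the figure referenced above.

```python
# Assumed downscaled HSV frame size; the real dimensions may differ.
IMG_W, IMG_H, CHANNELS = 320, 240, 3

def bram_layout():
    """Assign each buffer a (byte offset, size), packed back to back."""
    sizes = {
        "input_hsv": IMG_W * IMG_H * CHANNELS,  # full HSV input image
        "threshold": IMG_W * IMG_H,             # 1-byte-per-pixel binary mask
        "erosion":   IMG_W * IMG_H,             # eroded mask, same footprint
    }
    layout, offset = {}, 0
    for name, size in sizes.items():
        layout[name] = (offset, size)
        offset += size
    return layout, offset                        # layout plus total bytes used
```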

I then worked with Albert to fully convert the image processing Python code into an implementation that I could place on the MicroBlaze. I was able to synthesize an implementation that met our target baud rate and met timing by experimenting with different configurations of the block design. As a result, the FPGA can take in an image such as the following

and return the coordinates as follows.

The UART.py file loads an image, downscales it, converts it to HSV, writes it to the FPGA serially, and then waits until it receives all of the coordinates. It mirrors the overall workflow and is ready to be integrated.
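A hedged reconstruction of that UART.py flow is shown below. The port name, baud rate, image size, joint count, and reply encoding (little-endian 16-bit coordinate pairs) are all placeholders, and the OpenCV/pyserial imports are kept inside the sending function so the decoder is usable on its own.

```python
N_JOINTS = 5  # assumed number of (x, y) pairs the FPGA returns

def send_image_and_wait(image_path, port="/dev/ttyUSB0", baud=115200):
    """Load, downscale, convert to HSV, stream to the FPGA, and block for
    the coordinates (requires opencv-python and pyserial)."""
    import cv2
    import serial
    img = cv2.imread(image_path)
    img = cv2.resize(img, (320, 240))            # downscale; size is assumed
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    with serial.Serial(port, baud) as ser:
        ser.write(hsv.tobytes())
        raw = ser.read(N_JOINTS * 2 * 2)         # 2 coords x 2 bytes each
    return parse_coordinates(raw)

def parse_coordinates(raw):
    """Decode little-endian 16-bit (x, y) pairs from the FPGA reply."""
    coords = []
    for i in range(0, len(raw), 4):
        x = int.from_bytes(raw[i:i + 2], "little")
        y = int.from_bytes(raw[i + 2:i + 4], "little")
        coords.append((x, y))
    return coords
```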

As mentioned last week, I am slightly behind schedule: the arrival time of the locations is slightly higher than our target requirement. I plan on optimizing this during the weeks allocated for slack, but I will also investigate ways to reduce the latency to our target this upcoming week. I also plan on working with Vishal to have the interface and communication between the FPGA and the UI fully completed by next week.

Venkata’s Status Update for 10/31/20

Last week, I programmed the FPGA to stream information from the FPGA to the CPU via UART. I had to make a couple of small changes to stream information in the other direction, from the CPU to the FPGA, which worked well on small sets of values but had issues when I tried to stream large sets such as an image. This is because the input buffer for the UARTLite IP block only holds 16 bytes, so the device has to read from the buffer at an appropriate rate to ensure that we don't lose information. I looked into different ways of reading the information, such as an interrupt handler and frequent polling, and was eventually able to get an implementation that stores all of the information appropriately. Attached is an image where I echoed 55,000 digits of Pi, confirming that I was able to use UART in both directions and store the information.

In terms of the block diagram, I realized that the local memory of the MicroBlaze is configurable and is implemented with BRAMs. So, I simplified the design to the following and tried to store all of the information in the MicroBlaze.

However, I kept running into issues where the binaries would not be created if I used too much memory. I am checking whether I am using the memory incorrectly or whether we need to downscale the image slightly more (which I have already discussed with my teammates).

Finally, another issue that arose was related to the baud rate, since different baud rates require different clock frequencies. As I created different block diagrams, the design would sometimes not meet the target frequency and would violate timing. In the image above with the digits of Pi, I was able to use our target baud rate.
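One way to see why baud rate and clock frequency interact: a UART derives its bit clock by dividing the system clock by an integer, so some clock/baud pairings have noticeable rate error. The function below is generic divisor arithmetic, and the 100 MHz clock in the test is an assumed example, not our actual design frequency.

```python
def baud_error(clock_hz, target_baud):
    """Relative error of the best integer clock divisor for a target baud."""
    divisor = max(1, round(clock_hz / target_baud))
    actual = clock_hz / divisor
    return abs(actual - target_baud) / target_baud
```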

In terms of the schedule, I was hoping to have most of the design done but ran into quite a few issues 🙁 I have discussed this with my teammates, and by next week I plan to be able to stream an image and receive the information back (at a potentially smaller image size). I will also finish the implementation of the image processing portion with the Vitis Vision library. I will then try to optimize the design to use a higher baud rate and the full image size during the weeks that were allocated for slack.

Venkata’s Status Update for 10/24/20

The past week and the upcoming week are allocated for working on the HLS implementation. Unfortunately, I was roadblocked at the start of the week since I was having trouble with HLS and wasn't sure how to proceed. I then met Zhipeng (whom the Professor introduced me to), who addressed my questions and pointed me toward using the MicroBlaze soft-core CPU, which would be responsible for controlling the interactions that take place on the FPGA. I also updated the block diagram on the FPGA to the following.

Note: It is not complete, as it still requires the IP core responsible for the image processing. This core will connect to the appropriate AXI ports that have been left unconnected.

I then started learning how to use the MicroBlaze core, which required adding a new tool (Vitis), since the current version of Vivado does not include an SDK for programming the MicroBlaze core. I was able to program the core and stream the appropriate information via UART. I am now looking into the other components in the block diagram and how to control them.

I am on track with the schedule. This week, I hope to be able to learn how to control the various components specified in the block diagram and also have the image processing IP core done.

Team Status Update for 10/24/20

This week we took sample images with the webcam that recently arrived. To appropriately fine-tune the parameters for our posture analysis algorithm, we took multiple images at different positions of the various exercises that we plan on supporting. A major risk we identified is that we may not be able to fully identify all of the colors; this risk is elaborated on in Albert's status report for this week. The contingency plans involve finding more distinct colors that would be easier to track and experimenting with different lighting and background conditions to find the optimal background for our project.

No major changes have been made to the overall design or schedule.

Venkata’s Status Report for 10/17/20

This week was predominantly focused on learning HLS. I started reading this tutorial by Xilinx and now have a rough idea of the HLS workflow. I also talked to a couple of friends who are currently taking 18-643: Reconfigurable Logic, and they pointed me to further resources for learning HLS. After referring to a combination of these resources and some of the sample HLS projects that Vivado provides, I have an idea of how to implement the design.

I am currently on track with the schedule. According to the schedule, this past week was for learning HLS and the upcoming week is for implementing the design. I have already begun next week's task by working with Albert to convert the existing Python image processing code into C that can be passed into HLS.