Albert’s Status Report for 11/14/20

At the start of the week, my laptop’s screen broke and I had to send it in for repairs, so I couldn’t make as much progress as I would have liked. I plan to do more live fine tuning of the posture analysis and HSV bounds in the coming weeks. This week I worked on handling invalid joint positions. When we were integrating the FPGA with the application, my code crashed on a divide by zero while computing a slope. This happened because the FPGA could not detect the joints and output the origin instead. I therefore added checks to ensure that the joints I receive are valid positions; if they are not, I output an invalid signal to the application. In the image below, the output of the application would be “Invalid: [‘Invalid Joints Detected: Shoulder!’]” as the shoulder cannot be detected.
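The validation and the guarded slope computation can be sketched as follows. The function names and the joint dictionary are illustrative, not the project's actual API; the only conventions taken from the report are that an undetected joint arrives as the origin and that the error message has the form shown above.

```python
# Sketch of the joint-validation idea described above. A joint reported at
# (0, 0) is treated as undetected, since that is what the FPGA outputs when
# it cannot find the joint.

def validate_joints(joints):
    """Return (ok, messages) for a dict of joint name -> (x, y)."""
    messages = []
    for name, (x, y) in joints.items():
        if x == 0 and y == 0:
            messages.append(f"Invalid Joints Detected: {name}!")
    return (len(messages) == 0, messages)

def safe_slope(p1, p2):
    """Slope between two joints, guarding the divide by zero that crashed
    the integration run when two joints shared an x coordinate."""
    dx = p2[0] - p1[0]
    if dx == 0:
        return None  # caller treats a vertical segment specially
    return (p2[1] - p1[1]) / dx

joints = {"Shoulder": (0, 0), "Hip": (120, 200), "Knee": (150, 310)}
ok, msgs = validate_joints(joints)
# With the shoulder at the origin, the application would report:
# Invalid: ['Invalid Joints Detected: Shoulder!']
```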

Shoulder cannot be detected.

Since we wanted to satisfy our requirement of providing feedback every 1.5 seconds, Venkata measured the time from capturing the image to outputting the posture analysis. He concluded that each workout should be limited to 5 joints, which lets us return feedback in 1.42 seconds. Leg raises and lunges do not need more than 5 joints. For a pushup, we can either relax the requirement so that we can provide more feedback on the lower body, find a way to not send portions of the data to the FPGA, downscale the image even further, or drop one of the lower-body joints.

Next week, I will conduct more live testing with the application, webcam, and FPGA integrated. I will also create ways to test the feedback and add checks on the hardware side as well. I am on schedule in terms of overall progress.

Venkata’s Status Report for 11/14/20

I was able to work on quite a few different things this week. Between the last status report and the demo, I worked with Vishal on the interface and the communication between the FPGA and the computer, and we had it completely integrated in time to present it during our demo.

Last week, I mentioned that the arrival time of the various pixel locations was higher than our target requirement of 1.5 s. I discussed this issue with the other team members, and we determined that we could still provide appropriate feedback for a particular exercise if we passed an additional byte at the start that indicated the type of exercise being performed. This byte sits at the bottom of the stack, after the erosion output depicted in last week’s status report.

This allows the FPGA to process only the joints used for the selected exercise, skip the rest, and simply output 0s for the skipped joints, as in the example below.
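The framing can be sketched in Python as follows. The exercise codes, joint names, and joint lists here are made up for illustration; the only things taken from the report are the single leading exercise-type byte and the convention that skipped joints come back as zeros.

```python
# Hedged sketch of the framing described above: one leading byte selects
# the exercise, and joints not used by that exercise return as (0, 0).

EXERCISE_CODES = {"leg_raise": 0, "lunge": 1, "pushup": 2}   # assumed codes
JOINTS_PER_EXERCISE = {
    "leg_raise": ["Shoulder", "Hip", "Knee", "Ankle", "Foot"],  # assumed
}
ALL_JOINTS = ["Head", "Shoulder", "Elbow", "Hand",
              "Hip", "Knee", "Ankle", "Foot"]                # assumed order

def frame_image(exercise, hsv_bytes):
    """Prepend the exercise-type byte to the HSV image payload."""
    return bytes([EXERCISE_CODES[exercise]]) + hsv_bytes

def decode_joints(exercise, coords):
    """coords: one (x, y) per joint in ALL_JOINTS order; skipped joints
    arrive as (0, 0), matching the FPGA behaviour in the report."""
    used = set(JOINTS_PER_EXERCISE[exercise])
    return {name: xy for name, xy in zip(ALL_JOINTS, coords) if name in used}
```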

By doing this, we reduced the arrival time to less than 1.5 s, meeting our target for receiving feedback after taking an image.

Note: The timer begins before opening the image and ends after it receives the appropriate feedback from the posture analysis functions.

I also began testing the various portions of the code. I was able to verify that the time it takes to send the image via UART and receive some information is less than our target of 1 s.

In terms of the schedule, I am on track. Since I do not have any further optimizations to make to the code, and the other portions of the project are not fully refined, I plan on spending a large portion of the remaining time on data collection by serving as the model, ensuring that the various portions have enough data to be fully refined.

Vishal’s Status Report 11/07/20

This week I was able to make a decent amount of progress on the user interface application, as well as a bit on integration. I implemented the calorie estimator as well as the heart rate estimator. I used my previous calorie formulas and data to keep track of calories burned both at rest and during different workouts. I had to do a bit of new research for heart rate but found a proper value using the following formula and table.

Heart Rate = (Heart Rate Reserve)*(MET Reserve Percentage)
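A literal sketch of that formula in Python follows. The helper definitions (age-predicted maximum heart rate as 220 minus age, and MET reserve percentage as (MET - 1)/(MET_max - 1)) are common conventions I am assuming, not necessarily the exact table the report refers to.

```python
# Sketch of Heart Rate = (Heart Rate Reserve) * (MET Reserve Percentage),
# as stated above. Helper definitions are assumed common conventions.

def heart_rate_reserve(age, resting_hr):
    max_hr = 220 - age          # common age-predicted maximum (assumption)
    return max_hr - resting_hr

def met_reserve_pct(met, met_max):
    return (met - 1.0) / (met_max - 1.0)   # 1 MET corresponds to rest

def estimated_heart_rate(age, resting_hr, met, met_max):
    return heart_rate_reserve(age, resting_hr) * met_reserve_pct(met, met_max)

# e.g. a 20-year-old with resting HR 60 doing a 5-MET workout (MET_max 11):
# HRR = 140, %MET reserve = 0.4, so the formula yields 140 * 0.4 = 56
```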

I also fixed a few bugs in the workout page, such as its pausing and timing, and added transitions between different workouts that update the heart rate/calorie estimation.

I also worked a bit on integration with both Albert and Venkata. On the hardware side, I worked with the pyserial library to send and receive data from the captured images. I also integrated the posture analysis in a naive, basic manner so that feedback is shown in the workout summary at the end. In terms of schedule I am still on track to complete on time, but I will continue prioritizing integration over the database.

Team Status Report 11/07/20

This week we took more pictures of various workouts together to add to our collection for training thresholds and the posture analysis. From these pictures we were able to fine tune the different types of analysis we can give for leg raises, push ups, and lunges; the details are in Albert’s status report. We also finalized what our demo is going to look like and how much we want integrated for it.

Albert’s Status Report 11/07/20

This week we took even more pictures of bad posture for the posture analysis portion. I classified the images by the different checks I perform. For example, for a leg raise, there is a picture where the legs are well over the perpendicular line, and I made sure the feedback would be “Don’t Overextend”. Here is a breakdown of the images/feedback that I classified.

Leg Raise:

  • Perfect: (No Feedback)
  • Leg Over: “Don’t Overextend”
  • Leg Under: “Raise your legs Higher”
  • Knee Bent: “Don’t bend your knees”

Pushup:

  • Perfect: (No Feedback)
  • Hand Forward: “Position your hands slightly backwards”
  • High: “Go Lower”

Lunges (Forward + Backward):

  • Perfect: (No Feedback)
  • Leg Forward: “Front leg is over extending”
  • Leg Backward: “Back Leg too Far Back”
  • High: “Go Lower”

Feedback: [“Knees are Bent”]
Feedback: [“Over-Extending”, “Knees are Bent”]
Feedback: [“Raise your Legs Higher”]
Feedback: [] (Perfect Posture)

I have finished implementing and doing the basic fine tuning of all the workout positions. The code is formatted so that the application can call it directly through the functions I provide, and I spent time teaching Vishal how to use my classes and methods. I also did a slight touch-up on the HSV bounds, and the code has been pinpointing the joints very accurately on 10-15 other images.

This week I also spent a lot of time debugging the HLS code and ensuring the FPGA finds the correct joints. I first wrote the binary mask to a txt file to compare it with the hardware version. We ran into a lot of problems while debugging. First, we mixed up the row and column accesses because the PIL library doesn’t follow row x col matrix indexing, so I had to make many changes to my Python code to match the way Venkata stores the bitstream. There were also multiple bugs in erosion and dilation that we pinpointed using the software test bench. Finally, after all the unit tests passed, we ran the FPGA on all the joints, but there was a slight difference in the joint locations returned. After spending 1-2 hours checking the main portion of the code, we realized that the bug was in how we created the byte array sent to the FPGA: I had Pillow 6.1.0 while Venkata had Pillow 8.0.3 (a newer version), and the two versions resize and convert to HSV differently. Since my HSV bounds had already been fine tuned, Venkata spent a while reinstalling Python 3.6 (because Pillow 6.1.0 is only compatible with that version). My portion can soon be integrated, either before or after the demo.
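The row/column mix-up comes from PIL addressing pixels as (x, y), i.e. (column, row), while a matrix and the FPGA's row-major bitstream are indexed [row][col]. The conversion is just an index swap; this sketch uses a plain accessor so the mapping is explicit (the real code reads PIL pixels, and the grid size is a placeholder).

```python
# Demonstrates the PIL (x, y) vs. matrix [row][col] indexing difference
# that caused the debugging pain described above.

WIDTH, HEIGHT = 4, 3   # placeholder dimensions

def pil_to_row_major(getpixel):
    """Walk a PIL-style getpixel((x, y)) accessor in the row-major order
    the FPGA bitstream expects: row by row, column within each row."""
    return [[getpixel((x, y)) for x in range(WIDTH)] for y in range(HEIGHT)]

# Fake "image": each pixel value encodes its own (x, y) so the swap shows.
fake_pixels = {(x, y): (x, y) for x in range(WIDTH) for y in range(HEIGHT)}
grid = pil_to_row_major(fake_pixels.get)
# grid[row][col] now holds the pixel that PIL addresses as (col, row)
```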

I am currently on schedule and slightly ahead, so I might help Vishal with the database or application if I have extra time. I will do more testing once the FPGA and my posture analysis portion are integrated into the application; fine tuning will be much easier and more efficient with real-time debugging rather than through pictures. Hopefully in the next week or two, I can fine tune the bounds and thresholds for the posture analysis even further. Also, since our time to process the image on the FPGA is slightly high, Venkata and I will work on optimizing the algorithm more.

Venkata’s Status Report for 11/7/20

I was able to make significant progress this week. Firstly, I realized that I was incorrectly using the BRAM in the MicroBlaze and had to use a Block Memory Generator IP block, so I updated the block diagram to the following.

After learning how to use the IP block, I was able to effectively store the entire input image and allocate different portions of the BRAM for the various intermediate results that are generated as follows:

I then worked with Albert to fully convert the image processing Python code to an implementation that I could place on the MicroBlaze. I was able to synthesize an implementation that met our target baud rate and was able to meet timing by playing around with different configurations of the block design. As a result, I was able to have the FPGA take in an image such as the following

and return the coordinates as follows.

The UART.py file loads an image, downscales it, converts it to HSV, writes it to the FPGA serially, and then waits until it receives all of the coordinates. This matches the overall workflow and is ready to be integrated.
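The receive side of that flow can be sketched as below. The wire format (fixed-width little-endian coordinate pairs) is an assumption for illustration, not the project's actual protocol; the sending side is shown only in comments since it needs the board.

```python
# Hedged sketch of decoding the coordinates UART.py waits for. The 2-byte
# little-endian x/y layout is an assumption, not the real wire format.

BYTES_PER_COORD = 4  # assumed: 2 bytes x, then 2 bytes y

def decode_coordinates(raw):
    """Turn the byte stream read from the serial port into (x, y) pairs."""
    coords = []
    for i in range(0, len(raw), BYTES_PER_COORD):
        x = int.from_bytes(raw[i:i + 2], "little")
        y = int.from_bytes(raw[i + 2:i + 4], "little")
        coords.append((x, y))
    return coords

# On the sending side (not runnable without the board), UART.py would do
# roughly the following, using PIL and pyserial:
#   img = Image.open(path).resize(TARGET_SIZE).convert("HSV")
#   ser.write(img.tobytes())
#   coords = decode_coordinates(ser.read(NUM_JOINTS * BYTES_PER_COORD))
```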

As mentioned last week, I am slightly behind schedule. The arrival time of the locations is slightly higher than our target requirement. I plan on optimizing this during the weeks allocated for the slack but also plan on investigating and trying to reduce the latency to our target the upcoming week. I also plan on working with Vishal and having the interface and communication between the FPGA and UI be fully completed by next week.

Team Status Report 10/31/20

This week we discussed what qualifies as good posture for the posture analysis portion, and Albert adjusted the thresholds accordingly. We also gave feedback on Vishal’s user interface to improve the user experience. On the hardware side, we may have to downscale the image further due to memory constraints on the FPGA; we will explore other options before resorting to that, because further downscaling may hurt the image processing side. Finally, we discussed what we want to include in next week’s checkpoint demo and polished those portions.

Venkata’s Status Update for 10/31/20

Last week, I programmed the FPGA to stream information from the FPGA to the CPU via UART. I had to make a couple of small changes to stream information in the other direction, from the CPU to the FPGA, which worked well on small sets of values but had issues when I tried to stream large sets such as an image. This is because the input buffer of the UARTLite IP block only holds 16 bytes, so the device has to read from the buffer at an appropriate rate to ensure that we don’t lose information. I looked into different ways of reading, such as an interrupt handler and frequent polling, and eventually got an implementation that stores all of the information appropriately. Attached is an image where I echo 55000 digits of Pi, confirming that I can use UART in both directions and store the information.
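A quick back-of-the-envelope shows why the 16-byte buffer forces frequent reads. This assumes standard 8N1 framing (10 bits on the wire per byte) and a 115200 baud rate as an illustrative figure, not necessarily the project's setting.

```python
# Why the 16-byte UARTLite RX buffer overflows without frequent polling.
BAUD = 115200            # assumed illustrative baud rate
BITS_PER_BYTE = 10       # start bit + 8 data bits + stop bit (8N1)
FIFO_DEPTH = 16

byte_time = BITS_PER_BYTE / BAUD     # seconds between received bytes
fill_time = FIFO_DEPTH * byte_time   # time until the buffer is full

# At 115200 baud a byte arrives roughly every 87 microseconds, so the
# MicroBlaze must drain the buffer at least every ~1.4 ms or data is lost.
```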

In terms of the block diagram, I realized that the local memory of the MicroBlaze is configurable and invokes BRAMs. So, I simplified the design to the following and tried to store all of the information in the MicroBlaze.

However, I kept running into issues where the binaries would not be created if I used too much memory. I am checking whether I am using the memory inappropriately or whether we need to downscale the image slightly more (which I have already discussed with my teammates).

Finally, another issue was the baud rate: different baud rates require different clock frequencies. As I created different block diagrams, the design would sometimes miss the target frequency and violate timing. In the image above with the digits of Pi, I was able to use our target baud rate.

In terms of the schedule, I was hoping to have most of the design done but ran into quite a few issues 🙁 I have discussed this with my teammates, and by next week I plan on being able to stream an image and receive the information (at a potentially smaller image size). I will finish the implementation of the image processing portion with the Vitis Vision library, and then try to optimize the design to use a high baud rate and the entire image during the weeks allocated for slack.

Vishal’s Status Update for 10/31/20

This week I was able to make significant progress on my tasks. Most of my work went into the workout screen, making it better and more useful for the user. I received a lot of feedback from Albert and Venkata, which I incorporated as seen below. I also added a pause menu, which wasn’t part of that feedback but is something a user would want while performing a workout.

Since I was pretty much caught up with the UI, apart from storing workout data for future reference in the database, I decided it would be more worthwhile to spend time on integration, since the database storage is essential but the project will still work without it. I realized my code could be better optimized and learned that I will need to implement threading in order to send and receive data through the UART protocol. This took a good portion of my week as well, but it now definitely seems feasible, and I have pseudocode ready to plug into Venkata’s UART protocol.
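The threading approach can be sketched as below: a background thread drains the serial port into a queue so the UI loop never blocks on a read. The function names and the 64-byte chunk size are placeholders; the real code plugs into Venkata's UART interface via pyserial.

```python
# Hedged sketch of reading serial data on a background thread and handing
# it to the UI through a thread-safe queue.
import queue
import threading

def reader_thread(ser, out_q, stop):
    """Continuously read from the serial port until asked to stop."""
    while not stop.is_set():
        data = ser.read(64)   # pyserial returns b"" when a timeout is set
        if data:
            out_q.put(data)

def start_reader(ser):
    """Spawn the reader; returns (queue, stop event, thread handle)."""
    out_q, stop = queue.Queue(), threading.Event()
    t = threading.Thread(target=reader_thread, args=(ser, out_q, stop),
                         daemon=True)
    t.start()
    return out_q, stop, t
```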

In terms of the work left, I am basically caught up but a bit out of order. I have the database set up, but I don’t yet store anything from workouts or have a menu to view the history. However, I have made decent headway on integration and have set up the serial library so that I can interface properly. To test that further, I will have to talk to Venkata and see whether the code works out. Next week I will most likely store data in the database and build a simple UI to view it. I would also like to add support for multiple profiles, as I think it would make the project more robust overall.

Albert’s Status Report for 10/31/20

This week I worked on multiple tasks, and I am ready for integration and verification on the image processing side. On the hardware side, Venkata gave me the task of reading and understanding the Vitis Vision library, because we may use this OpenCV-like library to implement the image processing portion, as our current clock period does not meet the baud rate requirements. I converted maxAreaRectangle, maxAreaHistogram, and getCenter to C code, since that was the only portion left to be converted to RTL. On the image processing end, I made good progress and finished the leg raise and lunge posture analysis. I completely restructured the previous code to support our new approach of only giving feedback on the positions needed for the workout. To make integration easier, I also implemented polymorphism for the posture analysis: instead of calling a specific function for a leg raise or lunge, I grouped and refactored the analyses so they are easier to integrate later on. I was also able to fine tune the HSV bounds for the leg raise and lunges (as shown below).

Successfully tracked important points on the Leg Raise (Bottom)
Successfully tracked important points on the Leg Raise (Top)
Successfully tracked important points on the Lunge (Backward)
Successfully tracked important points on the Lunge (Forward)
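The polymorphic structure described above can be sketched as follows: the application calls one analyze() entry point, and each exercise subclass supplies its own checks. The class names, method names, and the toy height rule are mine for illustration, not the project's actual code.

```python
# Illustrative sketch of polymorphic posture analysis: one entry point,
# per-exercise checks supplied by subclasses.

class PostureAnalyzer:
    def analyze(self, joints):
        """Run every check for this exercise and collect feedback strings."""
        return [msg for check in self.checks() for msg in check(joints)]

class LegRaiseAnalyzer(PostureAnalyzer):
    def checks(self):
        return [self.check_height]

    def check_height(self, joints):
        # Toy rule: the ankle should rise at least to hip level.
        # Note image y grows downward, so a larger y means lower.
        if joints["Ankle"][1] > joints["Hip"][1]:
            return ["Raise your Legs Higher"]
        return []

analyzer = LegRaiseAnalyzer()
# The application only ever calls analyzer.analyze(joints), regardless of
# which exercise's subclass it holds.
```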

As we take more pictures, I can fine tune the HSV bounds to be more precise and accurate. For the posture analysis portion, I also fine tuned the feedback for the leg raises and lunges; it outputs “Raise your Legs Higher” and “Knees Bent” on the appropriate postures. Since the trackers might slide off the middle of a joint, and because of the natural position of the joints, we are setting a requirement that the measured angles must be within 10 degrees of the expected angle. We decided on this because a proper leg raise doesn’t require the hip to be at exactly 90 degrees for good posture. I will wait until integration to test my models on wrong postures and fine tune the thresholds further. Last but not least, I created a way to test the FPGA’s results on the software side.
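The 10-degree tolerance check can be sketched as below. The function names are illustrative; using atan2 rather than a raw slope is a deliberate choice here, since it also sidesteps the divide-by-zero case for vertical segments.

```python
# Sketch of the 10-degree tolerance band: instead of requiring an exact
# 90-degree hip angle, the check passes anywhere inside the band.
import math

TOLERANCE_DEG = 10.0

def angle_deg(p1, p2):
    """Angle of the segment p1 -> p2 relative to horizontal, in degrees.
    atan2 handles vertical segments that a slope computation cannot."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def within_tolerance(measured, expected, tol=TOLERANCE_DEG):
    return abs(measured - expected) <= tol

# A hip-to-ankle segment measured at 84 degrees still counts as a good
# 90-degree leg raise under the 10-degree band.
```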

I am on track in terms of schedule, but I need to help Venkata on the hardware side because we ran into unexpected challenges. As more pictures come in, I will continue fine tuning. Next week I will implement, or help Venkata implement, the Vitis Vision library functions and see whether that improves timing on the FPGA side.