Venkata’s Status Update for 10/31/20

Last week, I was able to program the FPGA to stream information from the FPGA to the CPU via UART. I had to make a couple of small changes to stream information in the other direction, from the CPU to the FPGA, which worked well on small sets of values but had issues when I tried to stream large sets of values such as an image. This is because the receive buffer of the UARTLite IP block only holds 16 bytes, so the device has to read from the buffer at an appropriate rate to ensure that we don’t lose information. I looked into different ways of reading the data, such as an interrupt handler and frequent polling, and was eventually able to get an implementation that stores all of the information correctly. Attached is an image where I echo 55,000 digits of Pi, confirming that I can use UART in both directions and store the information.
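Since the UARTLite receive FIFO is only 16 bytes deep, the host side has to pace its writes as well. Below is a rough host-side sketch (not the code running on the MicroBlaze), assuming pyserial and placeholder port/baud values, that sends data in small chunks and reads back the echo before sending more:

```python
import serial  # pyserial

CHUNK = 16  # stay within the UARTLite 16-byte receive FIFO

def echo_stream(data: bytes, port="/dev/ttyUSB0", baud=115200) -> bytes:
    """Send `data` in small chunks and collect the echoed bytes."""
    received = bytearray()
    with serial.Serial(port, baud, timeout=1) as ser:
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            ser.write(chunk)
            # Wait for the FPGA to echo the chunk back before sending more,
            # so the receive FIFO on the FPGA side never overflows.
            received += ser.read(len(chunk))
    return bytes(received)
```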

In terms of the block diagram, I realized that the local memory of the MicroBlaze is configurable and is implemented with BRAMs. So, I simplified the design to the following and tried to store all of the information in the MicroBlaze’s local memory.

However, I kept running into issues where the binaries would not be created if I used too much memory. I am checking whether I am using the memory inappropriately or whether we need to downscale the image slightly more (which I have already discussed with my teammates).
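As a rough back-of-the-envelope check (the image dimensions and local memory size below are placeholders, not our final numbers), the image footprint can be compared against the local memory budget:

```python
def image_fits(width, height, bytes_per_pixel, local_mem_kb):
    """Rough check of whether a raw image fits in MicroBlaze local memory,
    ignoring the space needed for code, stack, and heap."""
    image_bytes = width * height * bytes_per_pixel
    return image_bytes, image_bytes <= local_mem_kb * 1024

# Example with placeholder numbers: a 320x240 grayscale image vs. 128 KB of BRAM.
print(image_fits(320, 240, 1, 128))  # (76800, True)
```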

Finally, another issue that arose was related to the baud rate. Different baud rates place different requirements on the clock frequency, and as I was creating different block diagrams, the design would sometimes not meet the target frequency and would violate timing. In the image above with the digits of Pi, I was able to use our target baud rate.
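For reference, a quick way to sanity-check a clock/baud pairing, assuming the usual 16x-oversampled UART scheme (the exact divider logic inside the UARTLite core may differ):

```python
def baud_error(clk_hz: float, baud: int, oversample: int = 16):
    """Estimate the achievable baud rate and relative error for a given clock,
    assuming a simple integer divider with 16x oversampling."""
    divisor = max(1, round(clk_hz / (baud * oversample)))
    actual = clk_hz / (divisor * oversample)
    return actual, abs(actual - baud) / baud

# Example: a 100 MHz AXI clock targeting 115200 baud.
print(baud_error(100e6, 115200))  # ~115740 baud, ~0.5% error
```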

In terms of the schedule, I was hoping to have most of the design done but ran into quite a few issues 🙁 I have addressed this with my teammates and by next week, I plan on having the ability to stream an image and receive the information back (at a potentially smaller image size). I will finish the implementation of the image processing portion with the Vitis Vision library. I will then try to optimize the design to use a higher baud rate and the full-size image during the weeks that were allocated for slack.

Vishal’s Status Update for 10/31/20

This week I was able to make significant progress on my tasks. Most of my work was on the workout screen, making it better and more useful for the user. I received a lot of feedback from Albert and Venkata, which I took in and implemented as seen below. I also added a pause menu, which wasn’t discussed before, as it is something a user would want while performing a workout.

Since I was pretty much caught up with the UI, outside of storing workout data for future reference within the database, I decided it would be more worthwhile to spend time looking into integration, as the database storage is essential but the project will definitely still work without it. I realized my code could be a bit more optimized and learned that I will need to implement threading in order to receive and send data through the UART protocol. This took a good portion of my week as well, but it now definitely seems feasible and I have pseudocode that will be ready to plug into Venkata’s UART protocol.
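The pseudocode roughly follows the pattern below; this is only a sketch of the threading approach, assuming pyserial and placeholder port/baud settings, not the final integration code:

```python
import serial          # pyserial
import threading
import queue

incoming = queue.Queue()

def reader(ser: serial.Serial):
    """Background thread: continuously pull bytes off the UART so the
    UI thread never blocks while waiting for results from the FPGA."""
    while True:
        data = ser.read(ser.in_waiting or 1)
        if data:
            incoming.put(data)

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # placeholder port/baud
threading.Thread(target=reader, args=(ser,), daemon=True).start()

# The UI thread can send data and poll the queue without freezing.
ser.write(b"example image bytes")
try:
    result = incoming.get(timeout=5)
except queue.Empty:
    result = None
```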

In terms of the work left, I am basically caught up but a bit out of order. I have the database set up, but I don’t have anything being stored from workouts or a menu to access the history yet. However, I have made decent headway on the integration and have set up the serial library so that I can interface properly. In order to test that out more I will have to talk to Venkata and see if the code ends up working out. Next week I will most likely try to actually store data in the database and have a simple UI to view the data. I also would like to add the additional feature of multiple profiles, as I think it would help make the project more robust overall.

Albert’s Status Report for 10/31/20

This week I worked on multiple tasks and I am ready to start the integration and verification on the image processing side. On the hardware side, Venkata gave me the task of reading and understanding the Vitis Vision Library, because we may potentially use this OpenCV-like library to implement the image processing portion, as our current clock period does not meet the baud rate requirements. I converted the maxAreaRectangle, maxAreaHistogram, and getCenter functions to C code because that was the only portion left to be converted to RTL. On the image processing end, I made good progress and finished the leg raise and lunge posture analysis. I completely restructured the previous code to support our new change where we only give feedback on the necessary position of the workout. Also, to make things easier for integration, I implemented polymorphism for the posture analysis: instead of calling a specific function for a leg raise or lunge, I grouped them and refactored them to be easier to integrate later on. I was also able to fine-tune the HSV bounds for the leg raises and lunges (as shown below).

Successfully tracked important points on the Leg Raise (Bottom)
Successfully tracked important points on the Leg Raise (Top)
Successfully tracked important points on the Lunge (Backward)
Successfully tracked important points on the Lunge (Forward)
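A minimal sketch of what the polymorphic posture-analysis interface described above could look like (the class and method names here are illustrative, not the actual names in my code):

```python
from abc import ABC, abstractmethod

class PostureAnalyzer(ABC):
    """Common interface so the application can run any exercise's analysis
    without knowing which workout it is."""

    @abstractmethod
    def feedback(self, joints: dict) -> list:
        """Return a list of feedback strings for the tracked joint positions."""

class LegRaiseAnalyzer(PostureAnalyzer):
    def feedback(self, joints):
        msgs = []
        if joints["ankle"][1] > joints["hip"][1]:  # image y grows downward
            msgs.append("Raise your Legs Higher")
        return msgs

class LungeAnalyzer(PostureAnalyzer):
    def feedback(self, joints):
        return []  # knee/hip angle checks would go here

# Integration code only needs the base type:
def analyze(analyzer: PostureAnalyzer, joints: dict) -> list:
    return analyzer.feedback(joints)
```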

As we take more pictures, I can fine-tune the HSV bounds to be more precise and accurate. For the posture analysis portion, I was also fine-tuning the feedback for the leg raises and lunges. It was able to output “Raise your Legs Higher” and “Knees Bent” for the appropriate postures. I realized that, since the trackers might slide off the middle of the joint and since joints have a natural resting position, we should set a requirement that the measured angles have to be within 10 degrees of the expected values. We decided on this because a proper leg raise doesn’t require the hip to be at an exact 90 degrees for good posture. I will wait until we get to integration to test my models on incorrect postures and fine-tune the thresholds better. Last but not least, I created a way to test the FPGA’s results on the software side.
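As an illustration of the 10-degree tolerance check described above (the joint names and reference angle are placeholders):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. shoulder-hip-ankle."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                       - math.atan2(a[1] - b[1], a[0] - b[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

def within_tolerance(measured, expected, tol=10.0):
    """Posture passes if the measured angle is within `tol` degrees of expected."""
    return abs(measured - expected) <= tol

# Example: a leg raise where the hip angle should be roughly 90 degrees.
hip = joint_angle((0, 0), (0, 1), (1, 1))   # shoulder, hip, ankle (toy coordinates)
print(within_tolerance(hip, 90))            # True
```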

I am on track in terms of schedule, but I need to help Venkata on the hardware side because we ran into unexpected challenges. Therefore, as more pictures come in, I will continue to fine-tune the bounds. Next week I will try to implement, or help Venkata implement, the Vitis Vision Library functions and see if that will be better for timing on the FPGA side.

Vishal’s Status Update 10/24/20

The past two weeks I have been working on creating the custom timed workouts as well as wrapping up the calculations for calories burned + average heart rate.

For calories burned, I did a bit of research in terms of the MET (metabolic equivalent) values for various workouts. From my research I found that push-ups have a MET value of 7.55, leg raises have a MET value of 2.23, and lunges have a MET value of 2.80. I used the following formula to calculate the average calories burned as well as heart rate in the code:

I was able to complete this section and it will accurately reflect how many calories are burned. I also take into account that during rest sections the MET value will decrement at a rate of 20% per minute.
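For reference, a minimal sketch assuming the standard MET-based estimate (kcal/min = MET × 3.5 × weight in kg / 200) and a multiplicative interpretation of the 20%-per-minute rest decrement:

```python
def calories_per_minute(met: float, weight_kg: float) -> float:
    """Standard MET-based estimate: kcal/min = MET * 3.5 * weight(kg) / 200."""
    return met * 3.5 * weight_kg / 200

def resting_met(active_met: float, minutes_resting: float) -> float:
    """Decay the MET value by 20% per minute during rest sections
    (one possible interpretation of the decrement)."""
    return active_met * (0.8 ** minutes_resting)

# Example: a 70 kg user doing push-ups (MET 7.55), then resting for 1 minute.
print(calories_per_minute(7.55, 70))                  # ~9.25 kcal/min
print(calories_per_minute(resting_met(7.55, 1), 70))  # ~7.4 kcal/min
```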

In terms of the timed workouts, I was not able to fully finish them as I had anticipated because I had a very rough schedule this week (midterms, etc.) and not too much time to finish. I have all the necessary media, models, and templates at this point to finish, but still have a few more buttons and pages to add. I will be finishing this by Monday (10/26) at the latest and will be almost ready for integration.

I have also begun researching the way that the data will be stored in more depth so that it can be tied into the application. SQLite will be adequate and a safe way to store the data in a relational manner. I have the function headers for storing the workout data set up, but will be adding the implementation this upcoming week.
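The storage functions will likely look something like the sketch below, using Python’s built-in sqlite3 module; the table name and columns are placeholders until the schema is finalized:

```python
import sqlite3

def init_db(path="workouts.db"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS workout_history (
                        id INTEGER PRIMARY KEY AUTOINCREMENT,
                        date TEXT,
                        exercise TEXT,
                        reps INTEGER,
                        calories REAL)""")
    conn.commit()
    return conn

def store_workout(conn, date, exercise, reps, calories):
    conn.execute("INSERT INTO workout_history (date, exercise, reps, calories) "
                 "VALUES (?, ?, ?, ?)", (date, exercise, reps, calories))
    conn.commit()

# Example usage:
conn = init_db()
store_workout(conn, "2020-10-24", "pushup", 15, 34.2)
```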

As I had a tough week, I am about half a week behind my schedule but will definitely be able to catch up, as my other classes had midterms and I will now be able to put my full effort/time into the application and UI. I anticipate some time will be required to integrate with the UART protocol as well as the posture analysis, so I will also begin prepping for that.

Venkata’s Status Update for 10/24/20

The past week and the upcoming week are allocated for working on the HLS implementation. Unfortunately, I was roadblocked at the start of the week since I was having trouble with HLS and wasn’t sure how to proceed. I then met Zhipeng (whom the Professor introduced me to), and he was able to address my questions and pointed me towards the MicroBlaze soft-core CPU, which would be responsible for controlling the interactions that take place on the FPGA. I also updated the block diagram for the FPGA design to the following.

Note: It is not complete as it still requires the IP core that is responsible for the image processing. This core will connect to the appropriate AXI ports that have been left unconnected.

I then started learning how to use the MicroBlaze core, which required adding a new piece of technology (Vitis) as the current version of Vivado does not have an SDK to program the MicroBlaze core. I was able to program the core and stream appropriate information via UART. I am looking into the other components in the block diagram and how to control them.

I am on track with the schedule. This week, I hope to be able to learn how to control the various components specified in the block diagram and also have the image processing IP core done.

Albert’s Status Report for 10/24/20

We were finally able to take some demo pictures with the webcam, since we ordered the item slightly later than the rest. Until now, I had been working with iPhone images. A noticeable problem when I first got the images is that the lighting is a completely different shade of color from the iPhone images. We hope that the change of saturation is due to the different cameras; however, it can also be due to the lighting of the room. I ran the newly captured images through my previous algorithm and it was not pinpointing the joints. I will do some further tests next week by capturing images at night with the webcam and seeing whether my newly acquired bounds or the old ones are able to detect the trackers. Also, the webcam image comes in with a different aspect ratio than a normal picture, so there are black bars on the sides which are unnecessary for the joint tracking. Thus, I have made edits to my algorithm to crop out the black bars.

To aid me in finding the bounds, I have to manually find the center pixel of each tracker and capture the HSV bounds by finding the minimum and maximum within that area. I have to do this for every joint on a reference image. Then, I run the same bounds on a different image to ensure that the joint locations I return are in a similar area. I have to redo my fine-tuning because of the different saturation of the images. I started with the pushups this week. I had to modify the morphological transform portion because an erosion followed by a dilation would remove a lot of the important pixels I track. Thus, I changed it to 2 dilations followed by 2 erosions to better preserve the pixels. The picture below shows the output. The peach tracker on the elbow will have to be changed because it is too similar to the background; thus, it is not trackable. I hardcoded that position in for the posture analysis portion.

Purple Trackers represent the Joint Location. The image is the one I tested the bounds on.
Purple Trackers represent the joint locations I find. This is the reference image.
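A rough sketch of that masking and morphology step, assuming OpenCV; the HSV bounds and kernel size shown here are placeholders, not the tuned values:

```python
import cv2
import numpy as np

def tracker_mask(img_bgr, hsv_lo, hsv_hi, kernel_size=3):
    """Threshold one tracker color in HSV, then apply 2 dilations followed by
    2 erosions so small gaps are filled without losing the tracked pixels."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=2)
    mask = cv2.erode(mask, kernel, iterations=2)
    return mask

# Example with placeholder purple-ish bounds:
# mask = tracker_mask(frame, (130, 80, 80), (160, 255, 255))
```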

For the posture analysis portion, I had to make edits to the code because, due to latency issues, we have decided that we will only be tracking the second or more important position of the workout; for a pushup, that is the downward motion. Since I finally have the joint positions from the image processing portion, I could finally do some threshold fine-tuning. I adjusted the values of the slopes and angle comparisons to fit our model. The current model gives me the correct feedback when I feed in the up position for the pushup analysis. It outputs “Butt is too high” and “Go Lower” because it detects that the hip joint and elbow joint are not at the same slope and angle according to the pushup model. To make this better, we will have to capture more images of faulty pushups in order for me to fine-tune it even better.

I am on schedule in terms of the image processing portion, but am slightly behind on the posture analysis portion. Since the posture analysis portion is easier to implement and fine-tune fully once the application and system are set up, I will work on helping my teammates with their portions. I will get to the lunge and leg raise posture analysis fine-tuning when I have spare time from helping Venkata with the RTL portion. We seem to have found a library in HLS to help us do the image processing portion; however, the bounds that I find will still be useful for the HLS code. Since the joint tracking algorithm and posture analysis are very sensitive to real-life noise, I envision that the integration portion will involve constant updates, so I will keep updating my algorithm on a weekly basis.

Team Status Update for 10/24/20

This week we took sample images with the webcam that recently arrived. In order to appropriately fine-tune the parameters for our posture analysis algorithm, we took multiple images at different positions of the various exercises that we plan on supporting. A major risk that we identified was that we would not be able to fully identify all of the colors. This risk is fully elaborated in Albert’s status report for this week. The contingency plans involve finding more distinct colors that would be easier to track and experimenting with different lighting and background conditions to find the optimal background for our project.

No major changes have been made to the overall design and schedule.

Team Status Report for 10/17/20

In terms of teamwork, this week mostly consisted of working on the design review and design report. We took the feedback from the presentation and updated our report accordingly. We also made a few updates to our schedule to make it more reasonable and up to date.

We also worked on designing and creating the tracker suit. We realized that a few colors weren’t bright enough through the webcam, so we had to change them and modify the locations where the trackers were placed.

Vishal’s Status Report for 10/17/20

This week I worked with Venkata/Albert to create the tracker suit that we will be using for our project. More details on the work done for this can be seen in the team status update for this week.

In regards to my own personal work, I spent a lot of this week working on the design review report/presentation, which ended up taking a major portion of my time. I was also able to make progress on creating the timed workouts. I now have each set timed and it will move on to the next set once the first set is finished. Pictures are being taken periodically as each rep finishes. The user input menu is completed now, and the weight and height can be adjusted. In terms of the schedule, I have updated it so that I have another week to work on the timed workouts, as they ended up being a lot more work than I had anticipated. The timing for the schedule is still fine as I already had a slack week accounted for.

Albert’s Weekly Status Report for 10/17/20

This week I started working on fine-tuning the HSV bounds to do the color tracking. This was supposed to be done earlier, but we weren’t able to make the suit as there was some miscommunication in the ordering process. I had been fine-tuning the algorithm on sample images. However, fine-tuning for the dark suit that we currently have took longer than expected because I have to be even more precise to handle the noise of a real-life downscaled image. Instead of using a relatively broad range, I have to manually pinpoint the joints first; then, I find the HSV values around those points and create an extremely precise bound to get better accuracy. I also finished implementing the posture analysis for pushups and lunges in Python to provide feedback to the user. I plan to fine-tune this portion after the image processing is set up on the FPGA. Also, since I was presenting the design review presentation this week, I spent a lot of time fine-tuning the presentation after everyone was done and practiced for more than 3 hours because I am not a good presenter.
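A small sketch of that bound-extraction step, assuming OpenCV; the patch size is a placeholder:

```python
import cv2
import numpy as np

def hsv_bounds_around(img_bgr, cx, cy, half=5):
    """Given a manually pinpointed joint pixel (cx, cy), return the min/max HSV
    values in a small window around it to use as a tight inRange() bound."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    patch = hsv[cy - half:cy + half + 1, cx - half:cx + half + 1].reshape(-1, 3)
    return patch.min(axis=0), patch.max(axis=0)

# Example: lo, hi = hsv_bounds_around(reference_image, 412, 233)
```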

In terms of schedule, I am more or less on track because I have started things that are due later but haven’t completed things that are already due. I have been doing a lot of verification of individual parts along the way, so integrating may take less time than we have planned. For next week, I will work with Venkata on converting the joint tracking algorithm to RTL using HLS.