Wonho’s Status Report for 4/30

This week I worked on preparing for our final presentation and delivering it. I also worked on reducing the runtime of OpenPose. Originally, our testing showed that it took around 21 seconds on average to run OpenPose on the photo we wanted. By adjusting the settings and changing the command used to run OpenPose, I reduced its runtime all the way down to 4 seconds, which brought our total runtime from an average of 40 seconds down to 15 seconds. The key change was reducing the net resolution OpenPose runs at: originally we were running it at the full 1080p capture resolution, and lowering the network input height to 320 reduced the runtime drastically.
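A minimal sketch of how the faster invocation can be built, assuming a standard OpenPose CLI install (the binary path and directories here are placeholders, not our exact setup):

```python
import subprocess

# Placeholder path to the OpenPose binary on the Jetson.
OPENPOSE_BIN = "./build/examples/openpose/openpose.bin"

def build_openpose_cmd(image_dir, json_dir, net_height=320):
    """Build the OpenPose command line with a reduced net resolution.

    "-1x<h>" keeps the aspect ratio while capping the network input
    height, which is what cut the per-image runtime from ~21s to ~4s.
    """
    return [
        OPENPOSE_BIN,
        "--image_dir", image_dir,
        "--write_json", json_dir,
        "--net_resolution", "-1x{}".format(net_height),
        "--display", "0",      # no GUI on the headless Jetson
        "--render_pose", "0",  # skip rendering; we only need the JSON
    ]

# Example usage (runs the real binary, so it is commented out here):
# subprocess.run(build_openpose_cmd("captures/", "keypoints/"), check=True)
```

Lowering only the network input (rather than resizing the photo itself) keeps the keypoint coordinates in the original image's pixel space, which matters for the cropping step later.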

Other than that, I have been helping my teammates with integration and testing to make sure everything works in the Jetson environment. Our mirror should be good to demo as long as we get the Jetson <–> app and UI <–> Jetson connections working. Other than those components, we are on schedule for our final demo this week.

Wonho’s Status Report for 4/23

This week I continued integrating the separate parts of our project. The automation script I created last week was having problems running the ColorThief algorithm because node and npm were being used simultaneously by the weather API. Thus, I made a major change: I dropped the ColorThief API and used OpenCV and pandas instead to create an algorithm that easily extracts the dominant color of an image. It returns a tuple of the RGB values of the dominant color and works well with the automation script. Ramzi also finished the recommendation algorithm, so I integrated that function with the script as well, adjusting it to take the output.txt from the color detection step. I was also able to make some adjustments to the automation script to reduce the runtime by changing the run settings for OpenPose, bringing the total runtime of the script down to 20 seconds.
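A dominant-color detector of this kind boils down to quantizing pixels and taking the biggest bucket. A sketch of that idea, assuming the frame has already been loaded into an H x W x 3 array (in the real script it would come from cv2.imread, which returns BGR rather than RGB; the bucket size here is an illustrative value):

```python
import numpy as np

def dominant_color(image, bucket=32):
    """Return the dominant color of an image as a tuple of channel values.

    Pixels are quantized into coarse buckets so near-identical shades
    count as one color; the mean color of the largest bucket wins.
    """
    pixels = image.reshape(-1, 3).astype(np.int64)
    buckets, inverse, counts = np.unique(
        pixels // bucket, axis=0, return_inverse=True, return_counts=True)
    mask = inverse.reshape(-1) == counts.argmax()
    return tuple(int(c) for c in pixels[mask].mean(axis=0))
```

The quantization step is what makes this robust on real camera frames: without it, sensor noise spreads one shirt color across thousands of slightly different RGB values and no single value dominates.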

I also worked on the final presentation slides, as I will be presenting next week, and will continue working on them until Sunday. We still need some testing metrics for the slides, so Jeremy and I will extract those numbers on Sunday and add them to the slides.

We definitely need some more time to polish everything together, and we still need to connect the app, the Jetson components, and the UI, but most of the integration that needs to be done is almost there.

Wonho’s Status Report for 4/16

This week I focused on creating a cropping script for the image as well as a shell automation script that lets the code run automatically. The cropping script takes the information from the JSON output file produced by OpenPose and crops the image taken by the arducam based on the body key points labeled in that file. The cropping focuses the image on the upper body, making it easier for ColorThief to detect the upper torso color. The script then automatically saves the cropped image in an easily accessible location on the Jetson Xavier.
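The core of the cropping script is a bounding box around OpenPose's torso keypoints. A sketch of that logic, assuming BODY_25 output in OpenPose's standard JSON layout and a NumPy image array (the margin and confidence threshold below are illustrative values, not our tuned ones):

```python
import json
import numpy as np

# BODY_25 indices 1-8 cover the neck, shoulders, elbows, wrists, and
# mid-hip, which is enough to bound the upper torso.
TORSO_POINTS = range(1, 9)

def torso_crop(image, keypoint_json_path, margin=20):
    """Crop `image` (an H x W x 3 array) to the upper-torso bounding box."""
    with open(keypoint_json_path) as f:
        data = json.load(f)
    # OpenPose writes flat (x, y, confidence) triples per person.
    flat = data["people"][0]["pose_keypoints_2d"]
    points = [(flat[3 * i], flat[3 * i + 1]) for i in TORSO_POINTS
              if flat[3 * i + 2] > 0.1]  # skip undetected points
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    h, w = image.shape[:2]
    x0 = max(int(min(xs)) - margin, 0)
    x1 = min(int(max(xs)) + margin, w)
    y0 = max(int(min(ys)) - margin, 0)
    y1 = min(int(max(ys)) + margin, h)
    return image[y0:y1, x0:x1]
```

Filtering on the confidence value keeps a partially occluded arm from dragging the box off the torso, and clamping to the image bounds keeps the slice valid when a keypoint sits near the frame edge.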

The shell script ties together the components of the first part of our recommendation process. It automatically takes 20 pictures through the arducam, selects the best-lit one (the 20th picture taken), and runs OpenPose only on that one, which reduces our runtime since OpenPose takes a while. It then runs that image through the cropping script I wrote this week and saves the result. Finally, the script runs the cropped image through the ColorThief API that Yun organized and produces an output.txt file with the dominant color of the image.
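The same capture → pose → crop → color flow can be sketched in Python; every command, helper-script name, and path below is a placeholder standing in for the actual shell script, not our real file layout:

```python
import subprocess
from pathlib import Path

CAPTURE_DIR = Path("captures")   # placeholder locations
CROPPED = Path("cropped.png")
OUTPUT = Path("output.txt")

def select_capture(capture_dir):
    """Pick the last frame of the burst (the 20th), which our testing
    showed to be the best lit."""
    frames = sorted(capture_dir.glob("frame_*.png"))
    return frames[-1]

def run_pipeline():
    # 1. capture a burst of 20 frames from the arducam
    subprocess.run(["python3", "capture.py", "--count", "20",
                    "--out", str(CAPTURE_DIR)], check=True)
    best = select_capture(CAPTURE_DIR)
    # 2. run OpenPose on just that frame, writing keypoint JSON
    subprocess.run(["./openpose.bin", "--image_dir", str(CAPTURE_DIR),
                    "--write_json", "keypoints/"], check=True)
    # 3. crop to the torso, then 4. write the dominant color to output.txt
    subprocess.run(["python3", "crop.py", str(best), str(CROPPED)], check=True)
    subprocess.run(["python3", "color.py", str(CROPPED), str(OUTPUT)], check=True)
```

Running OpenPose on a single selected frame instead of all 20 is where the bulk of the runtime saving comes from.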

Our individual chunks are pretty much done, and we started working on integrating those parts today, so if we work hard this week, our team should be able to finish on schedule. Next week I will work on connecting this week's work with Ramzi's recommendation algorithm.

cropped image example

Wonho’s Status Report for 4/10

This week I was able to make a large breakthrough with the Jetson and camera module by setting up torso detection with OpenPose. Initially, Jeremy and I had set up a Plan B that used trt-pose and a different camera (IMX 219), but it was not going smoothly, so we had ordered yet another camera that would hopefully work with the software and were going to test it. However, working at night on Monday, I was able to get OpenPose working with the Jetson and the original Arducam, and found a way to extract the data from the output.

OpenPose is run on the individual screenshots we take of the person, and I was able to get the output as a JSON file from which we can identify each key body point as a coordinate. OpenPose identifies a total of 25 key points across the body, and we only need points 1-8 to detect the upper torso. What I plan on doing in the upcoming week is to use these key points to automatically crop the image we are using and connect it with the ColorThief algorithm to detect the main piece of clothing the user is wearing.
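In the JSON, each detected person carries a flat list of (x, y, confidence) triples, one per keypoint. A small sketch of pulling out points 1-8, assuming OpenPose's BODY_25 model and its standard output layout:

```python
import json

# Names of the first nine BODY_25 keypoints; indices 1-8 are the ones
# that bound the upper torso.
BODY_25_NAMES = ["Nose", "Neck", "RShoulder", "RElbow", "RWrist",
                 "LShoulder", "LElbow", "LWrist", "MidHip"]

def upper_torso_keypoints(json_path):
    """Map keypoint name -> (x, y, confidence) for indices 1-8 of the
    first detected person in an OpenPose output file."""
    with open(json_path) as f:
        flat = json.load(f)["people"][0]["pose_keypoints_2d"]
    return {BODY_25_NAMES[i]: (flat[3 * i], flat[3 * i + 1], flat[3 * i + 2])
            for i in range(1, 9)}
```

A keypoint OpenPose could not detect comes back as a zero-confidence triple, so downstream code can check the third value before trusting the coordinates.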

OpenPose body key points
Running OpenPose on my body
Output JSON file from OpenPose

Wonho’s Status Report for 4/2

This week was all about getting the torso recognition software to work. I decided to go with trt-pose due to its compatibility with Nvidia Jetsons as well as its simplicity compared to OpenPose, and we ordered a new camera to work with the software. However, the camera ended up not being compatible: the SainSmart IMX219 does not work with trt-pose, and we have another camera on order that should be compatible. At the beginning of the week, Jeremy and I fully assembled the mirror and frame and tested that the UI is visible through the mirror. This upcoming week, when the camera arrives, we will test it and make sure that trt-pose works no matter what. As a backup plan, we will also reach out to groups that are using OpenPose to figure out what is wrong with our system and get torso recognition up and running for the interim demo. This obstacle has set us back a week in terms of schedule, but I am confident that we will be able to get it up and running before we go on our break for Carnival.

Wonho’s Status Report for 3/26

This week we decided to move on to our Plan B of using trt-pose after struggling for a week to get OpenPose working on our Jetson Xavier. We spent endless hours trying to debug it and reinstall drivers, but it still did not seem to work. I've wiped our SD card and installed trt-pose on the Xavier; I just need to go into the lab and test it with the camera to see if it works. Other than working on this crucial part of our project, we decided to get the hardware components (the frame of the mirror) out of the way by setting an internal goal to finish the mirror this week.

As a team, we all went to Home Depot to purchase the wood necessary for the project and were able to finish building the frame for the mirror. In terms of the hardware, we seem to be on track, but we are lagging slightly on the software side. This will be the week where I really start to make progress with our CV code, as trt-pose should be up and running soon.

Wonho’s Status Report for 3/19

Prior to spring break, I received the micro SD card from our order and configured it to run Ubuntu for our Jetson Xavier NX. I wrote the proper image, downloaded from the Nvidia website, onto the SD card to set up the Jetson Xavier. Once we got the camera set up and connected to the Xavier, I spent about a week getting the live camera feed working. At first I could only get it to take pictures, but I then realized the display connection cable was the problem; after switching to an HDMI cable, we were able to get the live video feed showing.

Once we got the live video feed working, the next step was to get OpenPose working. We downloaded the correct files and tried to follow the instructions for setting up OpenPose. I spent time after class on both Monday and Wednesday to set it up, but it seemed like the micro SD card we bought did not have enough storage. So I bought a 64GB micro SD card (what we had originally was 16GB) and configured it to work on our Xavier, but this still didn't fix the issue of compiling the OpenPose files. Jeremy and I spent 3 hours on Thursday trying to install the correct prerequisites and drivers, but somehow the system cannot find/recognize the correct library. We hope to try again on Monday, but in case OpenPose does not work, we have a secondary plan in place. We found a program similar to OpenPose called trt-pose, which is capable of detecting limbs and gestures in real time just like OpenPose, but only for one person. This should be sufficient for what we need, since only one person will be standing in front of the mirror.

Jeremy and I were also able to find scrap wood in the TechSpark woodshop that we can use for our project. The pieces are the appropriate sizes for the frame we will be building for the mirror.

In terms of progress, I would say we are on track. The next couple of weeks will really be the time where we need to grind out both the software and hardware components of our project, but I'm confident we can get it done.

Live video feed from Arducam

Wonho’s Status Report for 2/26

This week I mainly worked on the slides for the design presentation as well as ordering everything we need for our project. As of now, everything is ordered, and we just need to wait for the parts to arrive to start assembling the mirror. I also requested space in TechSpark to assemble and build our smart mirror. The majority of our budget has gone to the display for our mirror and the two-way mirror itself, but the remaining parts are cheap enough that budget should not be an issue. Continuing from last week, I have configured the Jetson Xavier so it is ready to be set up as soon as our micro SD card arrives, at which point we will be able to extract information from the camera feed. This should happen in the upcoming week, along with finishing my part of the design report. Next week, I hope to finish that part before spring break as well as start assembling parts of our smart mirror so we can test it out.

Wonho’s Status Report for 2/19

This week I mainly worked on figuring out the camera module and the Jetson Xavier board setup. I looked online at the SDK required to set up the SD card for the Xavier board, as well as how to install the necessary drivers for the camera module. This part needs more research to make it work, so I'll continue doing that when our micro SD card is delivered next week.

As for work with the team, I also helped fill out the design review presentation slides and the bill of materials. I put in the order for major parts such as the mirror and the monitor, as well as some other small parts we need for the project.

Next week, I’ll hopefully get the camera up and running so we can start figuring out how to collect data from the camera input.

Wonho’s Status Report for 2/12

This week I worked on the proposal presentation slides with my team. The tasks were divided into more specific, modular pieces so that we could plan ahead accordingly. This also helped us create a more detailed schedule that allows multiple parts of the project to be developed simultaneously while the team works together.

We also had internal discussions to determine what direction to take regarding the software language, and we decided that C++ would be more useful based on feedback from our presentation.

In terms of the hardware aspect of the project, I was able to secure the Jetson Xavier NX as well as a 12MP Arducam that is compatible with the Xavier. Next week, I'll be researching the specific materials we need for the smart mirror and plan on putting in the order forms for the one-way mirror and a monitor for our smart mirror display.