Christina’s Weekly Status Report for May 1

This week I’ve been focusing on writing a testing script so that we can have quantitative metrics for our final presentation next week. We are testing by running the matching algorithm on different models with the same clothing item and manually selecting the superimposed images that look passable. Marios suggested this approach during our weekly meeting, and we all agreed that it made the most sense for testing usability and precision. So far I have run the testing script with one shirt on 2032 test images of models from the warping model. We will select the visually passable images and include the results and observations in our final presentation. Note that this covers only superimposition, not warping, since warping has yet to be integrated. I’ve included a sketch of the testing script and some example outputs below.
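The script itself just batch-runs the superimposition over the test set and saves every composite for manual review. Here is a minimal sketch; the superimpose_on_model helper and the directory layout are hypothetical stand-ins for our actual matching module:

```python
from pathlib import Path

# Hypothetical helper from our matching module that pastes one clothing
# image onto a model image and returns the composite.
from matching import superimpose_on_model

SHIRT = Path("inputs/shirt.png")
MODELS_DIR = Path("inputs/models")   # the warping-model test images
OUT_DIR = Path("outputs")
OUT_DIR.mkdir(exist_ok=True)

# Run the same shirt against every model image and save the composites
# so we can flag the visually passable ones by hand afterwards.
for model_path in sorted(MODELS_DIR.glob("*.png")):
    composite = superimpose_on_model(model_path, SHIRT)
    composite.save(OUT_DIR / model_path.name)
```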

Unpassable example:

Passable examples:

Christina’s Weekly Status Report for April 24

Last week I was preparing for the interim demo by displaying the preliminary matching algorithm through tkinter. This isn’t the final matching algorithm, since we hope the ML model Devon is training and working on integrating will be what we use in the end. The preliminary algorithm takes in the two keypoint JSONs that OpenPose generates for the user image and the clothing image. Each keypoint is normalized to a value between 0 and 1 based on its x and y position relative to the full image dimensions. The center points of the two images are matched, and the clothing image is rendered on top of the user image with PIL so that PNG transparency is respected. I’ve attached a picture of the results below. As you can see, it’s not perfect at all. So this week I’ve been working on finding what’s wrong with my normalization process, since visually comparing the JSONs, we’re getting a pretty accurate center point position. I’m still working on this, but I suspect the issue is that the ratios are calculated from images of different sizes. I’m not sure what the right way to correct this is, but it probably involves some more math to normalize the size ratios.
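For reference, here’s a minimal sketch of the normalize-and-paste flow described above. It assumes OpenPose’s standard JSON output (a flat pose_keypoints_2d list of x, y, confidence triples per detected person); the function names are hypothetical:

```python
import json
from PIL import Image

def load_keypoints(json_path):
    """Load the (x, y) body keypoints from an OpenPose output JSON.
    Assumes the standard layout: a flat [x0, y0, c0, x1, y1, c1, ...]
    list under 'pose_keypoints_2d' for the first detected person."""
    with open(json_path) as f:
        data = json.load(f)
    flat = data["people"][0]["pose_keypoints_2d"]
    return [(flat[i], flat[i + 1]) for i in range(0, len(flat), 3)]

def normalize(points, width, height):
    """Scale pixel coordinates into the 0-1 range relative to image size."""
    return [(x / width, y / height) for x, y in points]

def superimpose(user_path, clothing_path, user_center, clothing_center):
    """Paste the clothing PNG onto the user image so the two normalized
    center points line up."""
    user = Image.open(user_path).convert("RGBA")
    clothing = Image.open(clothing_path).convert("RGBA")
    # Convert normalized centers back into each image's own pixel space.
    ux, uy = user_center[0] * user.width, user_center[1] * user.height
    cx, cy = clothing_center[0] * clothing.width, clothing_center[1] * clothing.height
    # Pass the clothing image as its own mask to keep PNG transparency.
    user.paste(clothing, (int(ux - cx), int(uy - cy)), clothing)
    return user

# Usage sketch: each center could be, e.g., the midpoint of the two
# normalized shoulder keypoints from load_keypoints + normalize.
```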

Christina’s Weekly Status Report for April 10

This week, I worked on reworking the matching algorithm to accommodate the DeepFashion_Try_On ML model, which trains on a set of OpenPose JSON outputs to match keypoints on uploaded clothing images to a user image. This needs to be integrated with the model that Devon is currently training, so we hope to accomplish that before the demo. This should give us exactly the try-on effect we desire. The result will be as follows:

At this point, we will only allow users to upload pre-background-subtracted images.

Construction of the mirror is going at a really good pace; the hardware subsystem itself should be fully complete soon.

References:

https://github.com/switchablenorms/DeepFashion_Try_On

https://arxiv.org/abs/2003.05863

Christina’s Status Report for April 3

This week I have been working on the matching algorithm. I’ll be able to get the fixed-point output JSONs from the two images (clothing and user) and use the distance formula to compute the relative distance between corresponding points. Using the arctangent, θ = tan⁻¹(y/x), I can isolate the angle that the line through two points makes with the horizontal axis. With this information, I’ll be able to figure out where to place the clothing image over the user by aligning the shoulder points, which gives pretty accurate centering for the clothing, even though it doesn’t look perfect with superimposition.
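A minimal sketch of that computation (the shoulder coordinates in the example are made up; atan2 is used in place of a bare arctan so the dx = 0 case and quadrants are handled):

```python
import math

def distance_and_angle(p1, p2):
    """Return the Euclidean distance between two (x, y) keypoints and
    the angle (in degrees) that the line through them makes with the
    horizontal axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Example with normalized shoulder keypoints:
dist, theta = distance_and_angle((0.35, 0.30), (0.65, 0.32))
```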

On the hardware side, I worked with Judy and Devon to get OpenPose set up on the Jetson. On my end, we’ll need to figure out how to hook up the camera for a video feed and then get the resulting fixed points into the format I expect for the matching module.
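A rough sketch of what grabbing the feed with OpenCV could look like, assuming the camera enumerates as device 0 on the Jetson:

```python
import cv2

# Open the camera; device index 0 is an assumption about how the
# Arducam will enumerate on the Jetson.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Eventually each frame would be handed to OpenPose for fixed points;
    # for now, just display the feed.
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```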

Christina’s Status Report for March 27

Last week, I worked on the Design Requirements, Software Architecture, Software Trade Studies, Software System Description, and Risk Management sections of the Design Report. I also created a more generalized block diagram of our entire system to add to the Design Report for the overall system structure. This is attached below.

This past week has largely been dedicated to figuring out how OpenPose works and playing with its configurations on my laptop before deploying to the Jetson. Contrary to what we previously thought, we probably won’t need to subtract functionality from the 17-fixed-point setting, since it already doesn’t include hands. Following Marios’ suggestion to use GPU mode instead of CPU_ONLY mode in OpenPose’s CMake configuration, it now runs much faster on my laptop. The image below shows that OpenPose works with baggy clothing. I am also working on figuring out a background subtraction algorithm that we can use to isolate the clothing image; a rough sketch of one candidate is included after the images below.

Block Diagram:

OpenPose Testing:
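One candidate for the background subtraction is OpenCV’s GrabCut. This is just a sketch under the assumption that we can supply a rough bounding box around the clothing item; it is not our final approach:

```python
import cv2
import numpy as np

def isolate_clothing(image_path, rect):
    """Rough background-subtraction sketch using OpenCV's GrabCut.
    `rect` is a hypothetical (x, y, w, h) box roughly enclosing the
    clothing item; everything outside starts as definite background."""
    img = cv2.imread(image_path)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    # Keep pixels GrabCut marks as definite or probable foreground.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype("uint8")
    # Attach the mask as an alpha channel so the cutout saves as a transparent PNG.
    b, g, r = cv2.split(img)
    return cv2.merge([b, g, r, fg])

# e.g. cv2.imwrite("shirt_cutout.png", isolate_clothing("shirt.jpg", (50, 40, 300, 400)))
```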

Christina’s Status Report for March 13

This week I focused on preparing for the Design Review Presentation, as I was the designated presenter this time around. I think it went pretty well. Reviewing the peer feedback, it seems most think we have a reasonable scope and design approach, which is reassuring. We also got some very good feedback from the faculty on specific aspects of the project. Knowing that the Xavier will be treated as an item we’re borrowing opens up our budget a lot and gives us flexibility to buy doubles if needed. While we wait for our parts to come in, I’ve started setting up OpenCV and OpenPose, following the installation guide on the GitHub repo for macOS. There are still some setup bugs, but I hope to begin playing around with torso recognition soon. I talked to the Work It team briefly after hearing their Design Review Presentation to get some insight on subtracting unnecessary functionality such as finger and lower-body recognition, since they are also using OpenPose. I’ll be using that advice as I continue to experiment with OpenPose; I want to have a good understanding of it before the hardware comes in.

Christina’s Status Report for March 6

I’ll be the Design Review presenter. This week, I worked on the software block diagram, testing verification and metrics, and risk factor slides. I researched the various parts of our design to figure out how the software components will work together. I found that OpenCV will be needed to handle the live video feed from the Arducam on the Jetson. The live video feed and the selected clothing image will be run through a clothing detection module. We were originally planning on using DeepMark, but its deprecation may lead us to another option: https://github.com/lzhbrian/deepfashion2-kps-agg-finetune. After our meeting with the TA on Wednesday, we added scope discussion to the solution approach slide and indicated in the block diagrams how the software and hardware components will work together. Most of my time this week has been spent on research and preparing for the presentation.

Christina’s Status Report for Feb 27

This week I finished up the problem, solution, requirements, and software technical challenges slides of the proposal presentation. Devon and I listened to and gave feedback on Judy’s presentation. We got good feedback from our peers and professor afterwards, and we realized we need more clearly defined metrics for testing. We are discussing more specific success metrics, such as 50% precision on matching fixed points within 2 cm with superimposition and 80% with warping.
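For what a metric like that could look like in code, here’s a minimal sketch; the point lists and the pixel-to-centimeter calibration factor are hypothetical:

```python
import math

def keypoint_precision(predicted, truth, tolerance_cm=2.0, cm_per_pixel=0.1):
    """Fraction of predicted fixed points that land within `tolerance_cm`
    of their ground-truth counterparts. `predicted` and `truth` are
    parallel lists of (x, y) pixel coordinates; `cm_per_pixel` is an
    assumed calibration factor for our camera setup."""
    hits = sum(
        1
        for (px, py), (tx, ty) in zip(predicted, truth)
        if math.hypot(px - tx, py - ty) * cm_per_pixel <= tolerance_cm
    )
    return hits / len(predicted)
```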

Christina’s Status Report for Feb 20

I did some general, preliminary research on the suggestions Marios gave us about various aspects of the project, including Raspberry Pi vs. Jetson Xavier NX and hardware costs. I also worked on the Use Case, Requirements, and Software Technical Challenges sections of the proposal presentation. I think we’re all starting to think more about attainable design, which is great.