Ramzi’s Status Report for 3/19

This week I worked on learning the fundamentals of our programming language and the systems we are using to set up our database and analysis pipeline. With assistance from Yun and Jeremy, I managed to complete the basic setup and access our git repository to begin working on our application framework; I have included a code snippet in Figure 1.

Figure 1: small code snippet

We have begun to make progress and get ourselves back on track with our schedule. We have been attempting to contact TechSpark about using their woodshop, but are facing delays and communication issues. Our short-term goals are to extract the information we want from the Arducam OpenPose live-feed torso detection and from our ColorThief color detection, while setting up the framework our application needs to interact with the mirror and the user.

Yun’s Status Report for 3/19

This week, I worked on constructing the database and modifying the ColorThief API to suit our project. For the database, I found a sample color coordination chart (Figure 1) and converted it into CSV format with Microsoft Excel. As Figure 2 shows, colors are denoted by their RGB values in an array in each cell, and each color has 5-6 matching colors. I expect there is a more efficient data structure for storing the color data for future computation and search, and I am planning to figure that out in the following week.

Figure 1: Sample color coordination chart
Figure 2: Color database in CSV format
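A more efficient structure for the lookup described above might be a hash map keyed by base color. The following is a minimal sketch, assuming a hypothetical CSV row format in which each cell is a bracketed RGB array and the first cell is the base color:

```javascript
// Sketch of loading the color-coordination CSV into a lookup table.
// Assumed (hypothetical) row format: "[r,g,b]","[r,g,b]",... where the
// first cell is the base color and the rest are its matching colors.
function parseColorRow(row) {
  // pull out each bracketed RGB cell, ignoring commas between cells
  const cells = row.match(/\[\s*\d+\s*,\s*\d+\s*,\s*\d+\s*\]/g) || [];
  return cells.map(cell =>
    cell.replace(/[\[\]\s]/g, '').split(',').map(Number));
}

function buildColorDatabase(csvText) {
  const db = new Map();
  for (const line of csvText.trim().split('\n')) {
    const colors = parseColorRow(line);
    if (colors.length < 2) continue;            // need a base plus matches
    const [base, ...matches] = colors;
    db.set(base.join(','), matches);            // key base color as "r,g,b"
  }
  return db;
}
```

A `Map` keyed by the base color string gives constant-time lookup when the detected top color needs to be matched against the chart.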

Another task I completed was modifying the ColorThief API to be more suitable for our project. The original ColorThief API is built around a web service, which is unnecessarily complicated for us since our project runs entirely on a local machine. I therefore modified pieces of code within the original API so that I can get the dominant color’s RGB value from an image file on a local machine, with everything running locally. In addition, the original API is written in Node.js and prints its results to stdout, but we want to use the results in C++, so I added features such as logging and storing the results in a text file for each run. On top of that, the original API was unable to detect white as a dominant color, so I fixed that error across several files within the API. Figure 3 is a small code snippet for loading a local image and getting its RGB color data, although additional modifications in other files were also required to make this work.

Figure 3: short code snippet
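To illustrate the white-detection issue described above: ColorThief itself uses median-cut quantization, but a simplified frequency-count sketch is enough to show how filtering out near-white pixels prevents white from ever winning. The `includeWhite` flag below models the fix; the function is illustrative, not the actual library code:

```javascript
// Simplified sketch of dominant-color extraction over raw RGBA pixel data.
// The real ColorThief uses median-cut quantization; this frequency count
// only demonstrates the white-filtering behavior. The original library
// skipped near-white pixels, which is why white was never detected.
function dominantColor(pixels, includeWhite = true) {
  const counts = new Map();
  for (let i = 0; i < pixels.length; i += 4) {
    const [r, g, b, a] = pixels.slice(i, i + 4);
    if (a < 125) continue;                                 // skip transparent pixels
    if (!includeWhite && r > 250 && g > 250 && b > 250) {
      continue;                                            // original (buggy) behavior
    }
    // coarse 4-bit quantization so near-identical shades pool together
    const key = `${r >> 4},${g >> 4},${b >> 4}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  let best = null, bestCount = -1;
  for (const [key, count] of counts) {
    if (count > bestCount) { bestCount = count; best = key; }
  }
  // return the center of the winning bucket, or null if no opaque pixels
  return best ? best.split(',').map(v => (Number(v) << 4) + 8) : null;
}
```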

I think we are on track with the schedule we proposed and with our short-term goal of getting the basics working by the end of next week, as we have finished the modular tasks except for constructing the mirror frame. Next week, I plan to connect the Arducam OpenPose output to ColorThief so that we can get torso and top-color detection working. On top of that, it would be ideal if I could also get a very simple color-based outfit recommendation working by writing brief code that finds an appropriate color from the color coordination chart database.
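The planned recommendation step could be sketched as follows: snap the detected top color to the nearest base color in the coordination database (Euclidean distance in RGB), then return that entry’s matching colors. This is a hypothetical sketch of the approach, not the final algorithm:

```javascript
// Snap a detected RGB color to the nearest database entry and return
// that entry's matching colors. Assumes db is a Map from "r,g,b" base
// color keys to arrays of matching RGB triples (hypothetical format).
function squaredDistance(a, b) {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0);
}

function recommendColors(detectedRgb, db) {
  let bestKey = null, bestDist = Infinity;
  for (const key of db.keys()) {
    const base = key.split(',').map(Number);
    const d = squaredDistance(detectedRgb, base);
    if (d < bestDist) { bestDist = d; bestKey = key; }
  }
  return bestKey ? db.get(bestKey) : [];
}
```

Squared Euclidean distance in RGB is a crude perceptual match, but it avoids a square root and is likely good enough for snapping camera noise onto chart entries.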

Team Status Report for 3/19

Our team is planning to get the basic structure of the product working by the end of next week. Thus, this week’s short-term goals were to complete the modular tasks: constructing the mirror frame, getting the Arducam working with OpenPose and streaming video, developing the basic structure of the app, constructing the color coordination database, and getting ColorThief working on local machines. Everything except the mirror frame construction has been completed. The delay in the mirror frame was caused by slow communication with the TechSpark faculty, which we solved by reaching out to students working at TechSpark. By the end of this week, we had prepared wood that we can use for the frame and had communicated with the students at TechSpark, so we are very optimistic about completing the mirror frame by the end of next week and combining it with the other components.

There is no change to the design of the system. However, through meetings with the faculty, we are planning to add special features that will distinguish our smart mirror from others, such as scoring the user’s outfit and helping the user pack for trips. These will take some time; however, the modular tasks required for these additions are already handled in our basic structures, so it will only be a matter of linking them slightly differently.

Some pictures of our progress are shown in the individual reports. We have a picture of the Arducam working (Wonho’s), screenshots of the basics of the app (Jeremy’s), and the RGB value results from ColorThief run on sample images stored on a local computer (Yun’s).

Wonho’s Status Report for 3/19

Prior to spring break, I received the micro SD card from our order and configured it to run Ubuntu for our Jetson Xavier NX. I wrote the proper image, downloaded from the Nvidia website, onto the SD card to set up the Jetson Xavier. Once we got the camera set up and connected to the Xavier, I spent about a week getting the live camera feed working. At first I could only get it to take pictures, but then realized the display connection cable was the problem; after switching to an HDMI cable we were able to get the live video feed showing.

Once we got the live video feed working, the next step was to get OpenPose working. We downloaded the correct files and tried to follow the instructions for setting up OpenPose. I spent time after class on both Monday and Wednesday this week setting it up, but it seemed the micro SD card we bought did not have enough storage. So I bought a 64GB micro SD card (what we had originally was 16GB) and configured it to work on our Xavier, but this still did not fix the issue of compiling OpenPose. Jeremy and I spent 3 hours on Thursday trying to install the correct prerequisites and drivers, but somehow the system cannot find or recognize the correct library. We hope to try again on Monday, but in case OpenPose does not work, we have a secondary plan in place. We found a program similar to OpenPose called trt-pose, which is capable of detecting limbs and gestures in real time just like OpenPose, but for only one person. This should be sufficient for our needs since only one person will be standing in front of the mirror.

Jeremy and I were also able to find scrap wood in the TechSpark woodshop that we can use for our project. The pieces are appropriate for the frame we will be building for the mirror.

In terms of progress, I would say we are on track. The next couple of weeks will really be the time when we need to grind out both the software and hardware components of our project, but I’m confident we can get it done.

Live video feed from Arducam

Jeremy’s Status Report for 3/19

Post-spring break, I dove straight into implementing the mobile app that the user will use to interact with our smart mirror and to input his/her wardrobe into the database. I am building the app with React Native and testing it with Expo Go. I have attached a few photos below which show my progress. As of now, I have a very plain and simple home screen with a ‘+’ button in the top-right corner that navigates the user to the page where he/she can add clothes to the database. I’m still working on having the app handle image and file uploads. After that is done, I plan to make the app a little prettier and then connect it with the database implementation that Yun is working on.

Besides working on the app, I also helped Wonho set up OpenPose on the NVIDIA Xavier. We spent 3 hours on Thursday night trying to install all the correct dependencies and builds, but there seems to be an issue with either caffe or cmake not being able to recognize one of the libraries needed to run OpenPose. Wonho and I will try again on Monday; if it doesn’t work, we discussed a plan B, which is to use a different software tool capable of real-time gesture recognition, called trt-pose, developed by NVIDIA. I also went to the wood shop with Wonho and picked out some scrap wood planks we will use to build the frame of our smart mirror.

Next week, I will finish the app and build the frame of our smart mirror. I would say that our team is slightly behind schedule because there have been so many issues with setting up OpenPose on the Xavier, but if that is resolved then we may even be ahead of schedule.

Running the server
Home page
Add clothing
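Once uploads work, the app will need to send clothing entries to the database in a consistent shape. Below is a hypothetical sketch of validating such an entry before submission; the field names (`name`, `type`, `color`) and allowed types are illustrative assumptions, not the final schema:

```javascript
// Hypothetical shape of a clothing item the app might send to the
// database once uploads work. Field names and allowed types are
// placeholders, not the project's final schema.
function validateClothingItem(item) {
  const errors = [];
  if (typeof item.name !== 'string' || item.name.trim() === '') {
    errors.push('name is required');
  }
  const types = ['top', 'bottom', 'outerwear', 'shoes'];
  if (!types.includes(item.type)) {
    errors.push(`type must be one of: ${types.join(', ')}`);
  }
  if (!Array.isArray(item.color) || item.color.length !== 3 ||
      item.color.some(v => !Number.isInteger(v) || v < 0 || v > 255)) {
    errors.push('color must be an [r, g, b] array of 0-255 integers');
  }
  return { valid: errors.length === 0, errors };
}
```

Validating on the client before the upload keeps malformed records out of the database and gives the user immediate feedback on the add-clothing page.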

Yun’s Status Report for 2/26

This week, I mainly worked on revising the design presentation slides and details. We have finalized most details of our design plan. I have also started to look into open-source APIs such as OpenPose and ColorThief in more detail in order to brainstorm how to integrate them with my team’s software.

We are a bit behind schedule, but as most design details are now set, I am hoping that my team can finish most modular tasks, such as the recommendation algorithms and assembling the mirror, before spring break.

For the upcoming week, we will write a draft of our design document and finalize it. On top of that, we will start implementation for both software and hardware, with a slightly greater focus on software.

Ramzi’s Status Report for 2/26

This week, my time was mainly spent designing the slides for our design presentation and preparing our design report. We now have a system figured out for how we are going to complete the design report as a team, and we gave our design presentation earlier this week. We requested space in TechSpark to assemble and test our smart mirror, and our group ordered the parts, so we are waiting on the parts to arrive before we begin putting our project together. We are a little behind our planned schedule, but hopefully we will be able to catch up when the parts arrive. For the next week we are focusing our efforts on the design report until our parts arrive; then we can begin setting up the hardware and software components of our project.

Wonho’s Status Report for 2/26

This week I mainly worked on the slides for the design presentation as well as ordering everything we need for our project. As of now, everything is ordered and we just need to wait for the parts to arrive to start assembling the mirror. I also requested space in TechSpark to assemble and build our smart mirror. The majority of our budget has gone to ordering the display and the two-way mirror itself, but the remaining parts are cheap enough that budget should not be an issue. Continuing from last week, I have configured the Jetson Xavier so it will be ready to set up as soon as our micro SD card arrives, letting us extract information from the camera feed. Next week, before spring break, I hope to finish this setup and my part of the design report, and start assembling parts of our smart mirror so we can test it out.

Team Status Report for 2/26

The most important thing our team did this week was review the feedback from the design presentation and decide what to do going forward for submitting the design report. Overall, our design presentation went well, as we were able to address questions and uncertainties from our last presentation. We discussed certain parts of the presentation before Monday and added more specifications to the system and hardware diagrams.

In the presentation, we specified what standard we will use to recommend outfits and explained in detail how our system will interact with the various information given by the user. Our team also decided on certain libraries to use.

The most significant risk we faced this week was delivering a cohesive presentation, and I think we addressed the faculty’s and classmates’ concerns and questions well. The question of whether to demo the weather-incorporation feature in the final presentation is something we still need to discuss as a team, but we are leaning toward showing outfit recommendations for different locations in the final demo. Other than that, we have started to explore the various libraries we will be using for our project. The changes we made to our project last week still stand.

Next week, we will start to build the mirror, as all of our parts will have arrived, and start writing basic code for the application and the torso detection algorithm. We also hope to finish the major parts of our design report this weekend so the team can meet to discuss it and make sure we are on the same page about everything we discussed this week.

Jeremy’s Status Report for 2/26

This week I mainly worked on brushing up the details of the design review presentation. I also wrote down responses to questions that might come up during the presentation. In doing so, our group has now pretty much solidified all implementation details and system specifications. I have also started to come up with specific algorithm implementations for our recommendation scheme, which has to account for the weight of the clothing, the color, and the weather. I’ve been working on an equation that could use all three of these parameters and produce a satisfying recommendation output.
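One way such an equation could look is a weighted sum of three normalized terms. The sketch below is purely illustrative: the weights, field names, and individual scoring terms are placeholder assumptions, not the equation being developed:

```javascript
// Hypothetical sketch of a recommendation score: a weighted sum of three
// normalized (0..1) terms for warmth, color, and weather. All weights and
// scoring rules here are placeholders.
function outfitScore(item, context) {
  // warmth fit: 1 when the clothing weight matches the temperature-implied
  // target, falling off linearly (both scaled to 0..1)
  const warmthFit = 1 - Math.abs(item.weight - context.targetWeight);
  // color fit: assumed precomputed 0..1 match against the color database
  const colorFit = item.colorMatch;
  // weather fit: e.g., penalize very light fabrics when rain is likely
  const weatherFit = (context.rainChance > 0.5 && item.weight < 0.3) ? 0 : 1;
  const w = { warmth: 0.5, color: 0.3, weather: 0.2 };  // placeholder weights
  return w.warmth * warmthFit + w.color * colorFit + w.weather * weatherFit;
}
```

Ranking the wardrobe by this score and returning the top items would give a single tunable knob per factor, which makes it easy to adjust once real user feedback comes in.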

We are currently slightly behind schedule, but we hope to nail down a lot of things right before spring break so that we can come back from break and get working at a relatively fast pace.

In the coming week, we plan to finish our design report and get it reviewed by a TA, then start building the mirror frame when the mirror arrives. We plan to do this at the woodshop or makerspace. I also hope to get a beta version of the app working.