Jeremy’s Status Report for 4/30

This week I worked on creating the server that will communicate with the mirror UI. This integration between the app and the mirror UI will make it easier to display certain screens on the monitor when the user presses “Start outfit recommendation” in the app. It will also allow the mirror UI code to act as the Controller class, in that it will call the OpenPose shell script when the app signals to the mirror UI that the outfit recommendation has started. This coming week will be about wrapping everything up: making the final poster, writing the final report, and adding some finishing touches for the final demo.

Parts of the server I am creating right now are a little difficult to handle because the mirror UI runs on a separate framework, Electron. The approach I am using right now is to send a JSON string to localhost port 3000; the mirror UI JS file has an app.listen function listening on the same port and, upon receiving the message, uses the React router to re-route the screen to the loading screen. If this is achieved within the coming week, then our mirror is good to go.

Team Status Report for 4/23

Our team made a lot of progress this week. We are getting very close to finishing the integration of the mirror UI, the Jetson, the app, and the outfit recommendation algorithm. One risk being dealt with at the moment is a set of incompatibility issues between the mirror UI code and the Jetson, mainly due to OS differences: packages and code that compiled and ran on macOS are suddenly not working on the Jetson’s Linux OS. Our plan B, in case this doesn’t work out, is to set up a virtual machine that would run the mirror UI on macOS. This is our last resort because running on a VM may result in slower response times and potentially more bugs. Another risk we face at the moment is being able to run the OpenPose shell script through a click of a button in our app. Considering the various roadblocks the Jetson has given us, it is quite possible (hopefully not, though) that the Jetson will have trouble opening and running the shell script when the app sends a request. Our contingency plan is to run the script manually, but we will do our best to avoid this at all costs because it significantly hinders the convenience of our smart mirror.

Some changes were made to the outfit recommendation schema. Originally we were planning to recommend one outfit, but we changed the output to five outfits. We also added the option for users to choose whether they want a formal or an informal outfit, because it enhances the purpose of our smart mirror and makes it more tailored to personal preference. This did not incur any costs, but it may pose the challenge of having to send data about the formality to the Jetson.
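One way the formality choice could travel with the request to the Jetson is as an extra field in the JSON payload. The field names and the five-outfit count below are assumptions for illustration, not the finalized schema:

```javascript
// Hypothetical request builder for the recommendation trigger.
// 'formality' and 'count' are assumed field names, not finalized.
function buildRecommendationRequest(formality) {
  if (formality !== 'formal' && formality !== 'informal') {
    throw new Error('formality must be "formal" or "informal"');
  }
  return JSON.stringify({
    action: 'recommend',
    formality,   // the user's formal/informal choice from the app
    count: 5,    // the new five-outfit output
  });
}
```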

Photos and videos of our progress will be reflected on the final presentation slides as well as our individual status reports.

As we come close to the final demo, our schedule has become a lot busier. Integration is a lot more difficult than expected, but most of us are staying in lab late into the night to get things sorted as fast as possible. As a result, we are doing testing and integration at the same time instead of separately. Other than that, I feel like our team has made a ton of progress this week, and I expect it to stay that way.

Jeremy’s Status Report for 4/23

This week I figured out how to upload images to the database spreadsheet file and I made some changes to the app. Some of these changes are:

  • Replaced the free-text input field for tags/labels with a dropdown menu
  • On “upload,” the photo is sent to Cloudinary and its image URL is written into the database; the user can then either go back to add more clothes or finish uploading
  • Added a screen with two buttons where the user chooses whether he/she wants the outfit recommendation to be formal or informal clothing
  • Added a screen with a single “Start outfit recommendation” button
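The Cloudinary upload step could look roughly like the sketch below, assuming an unsigned upload preset. The cloud name and preset are placeholders, and the returned secure_url is what would be written into the database row:

```javascript
// Sketch of an unsigned Cloudinary upload from the app.
// CLOUD_NAME and UPLOAD_PRESET are placeholders for the real account.
const CLOUD_NAME = 'demo-cloud';    // placeholder
const UPLOAD_PRESET = 'wardrobe';   // placeholder

function cloudinaryUploadUrl(cloudName) {
  return `https://api.cloudinary.com/v1_1/${cloudName}/image/upload`;
}

// Uploads one clothing photo; resolves to the hosted image URL that
// gets stored in the database.
async function uploadClothingPhoto(fileUri) {
  const form = new FormData();
  form.append('file', fileUri);
  form.append('upload_preset', UPLOAD_PRESET);
  const res = await fetch(cloudinaryUploadUrl(CLOUD_NAME), {
    method: 'POST',
    body: form,
  });
  const json = await res.json();
  return json.secure_url;
}
```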

After being able to upload and save images to Cloudinary and the database, the only thing left to do with the app is to run the shell script that lives locally on the Jetson. I believe there is an easy go-to way to run local scripts from JavaScript code. I have also begun some unit testing. The most important tests I have conducted measure the response time for uploading the tags to the database and the response time for uploading the image to Cloudinary and inserting the image display URL into the database. I have attached screenshots of the response times shown in the console log and plan to include some explanation in the final presentation slides as well.

This coming week I will finish up the connection between my app and the Jetson and also work on the final presentation slides and poster.

Video of app features

Another video of app features

Response time for HTTP request sent (tags)

Jeremy’s Status Report for 4/16

This week I mainly focused on finishing the app. As of now, my app has a home screen where the user is able to attach certain tags to a piece of clothing, which are then uploaded to a Google Spreadsheet that resides in our group’s capstone folder on Google Drive. I achieved this using an API called sheet.best, which allows GET and POST requests to a Google Spreadsheet through a specific URL.

The only issue I am facing is finding an effective way to upload the photo of the piece of clothing to the database. I’ve tried sending a POST request with the photo, but it would only push the file path of the photo as a string, which is useless. I’ve also looked into storing the photos on a cloud server (e.g., Google Cloud), but that requires a lot of OAuth hassle which I believe is unnecessary. It may be easier to upload the photo directly from the app to the Jetson if possible, because then I could simply store the unique name/ID of the photo file in the ‘photo’ column of the database, and the Jetson could pull up the correct photo when displaying it on the monitor screen.

This is something I will be working on over the weekend and into next week. I don’t want to spend too much time on it because I still need to integrate the app with the recommendation system and the Jetson, so I hope to get this done before next Wednesday. I would say that we are a little behind schedule because we only have around a week before our final reports are due, but I’m still positive about the progress our group has been making toward getting everything integrated.
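The sheet.best upload could be sketched as below. The connection URL and the column names (`name`, `color`, `formality`, `photo`) are placeholders for whatever the actual spreadsheet uses:

```javascript
// Sketch of posting one tag row to a sheet.best connection URL.
// The URL and column names are placeholders, not the real sheet.
const SHEET_BEST_URL = 'https://sheet.best/api/sheets/EXAMPLE-ID'; // placeholder

// Build one spreadsheet row from the tags the user entered.
function buildRow(tags) {
  return {
    name: tags.name,
    color: tags.color,
    formality: tags.formality,
    photo: tags.photoUrl || '', // filled in once the image is hosted
  };
}

async function uploadTags(tags) {
  const res = await fetch(SHEET_BEST_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildRow(tags)),
  });
  return res.json();
}
```

Keeping the empty `photo` column in every row is one way to leave room for the image URL once the photo-hosting question is resolved.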

Tags
User inputs info
Database with inputted info

Jeremy’s Status Report for 4/10

This week I worked on finishing up the important features of our app. As of now, I am able to open the app through Expo Go, show the splash screen with our logo, and choose and upload photos, which the server responds to. The server runs on localhost:3000, and the user is also able to cancel choosing/uploading photos, in which case the server responds with a “cancelled”: true key-value pair in the returned JSON object. I also helped demonstrate all of this during the interim demo that happened on Wednesday.

For the coming week, as long as I stay on schedule, I will have a working version of the app up and running with the ability to add tags and labels when uploading a photo, which will also be sent to the server in a JSON object. I also want to work with Yun to use the database she has been setting up instead of localhost:3000 simply responding with an OK. After that, I plan to integrate everything into the smart mirror and possibly add functionality for turning the LED lights on/off once we order them and they are delivered.

Splash screen
Choosing a photo
Crop once selected
Image preview
Server response body

Jeremy’s Status Report for 4/2

This week Wonho and I attached the mirror to our mirror frame. We ended up simply taping the four corners of the mirror onto the four corners of the frame because the mirror, being made of acrylic, bends a lot. In a well-lit environment it is possible to see through the front of the mirror, which is not what we want, so we have decided (as a temporary solution) to use the cardboard box the mirror came in to cover the back of our smart mirror and block any light from coming in through the back. I tried putting my phone behind the mirror at full brightness to test whether the smart mirror would still be able to show the monitor display, and it is visible as expected.

We are running into some trouble with the Xavier not being able to recognize that we have a camera input. A week ago we had no issues with this, but they appeared out of nowhere. We tried switching out the CSI cables and reading through troubleshooting threads online, but it still would not work. This is the only obstacle preventing us from testing out trtpose, and I am planning to go in on either Sunday or Monday to get this working again.

I would say that we are slightly behind schedule, but as long as we get trtpose working before the interim demo, then we should be in a very good spot to finish up during the last two weeks of this semester.

Team Status Report for 3/26

The most significant risk that poses a threat to the overall completion of our project is the set of issues we have been facing with OpenPose. There are a bunch of GitHub threads from others who have had problems setting up OpenPose on their NVIDIA Jetsons, yet the documentation is either outdated or incorrect depending on who had the issue. Our team faced the same challenges and therefore created a contingency plan: we will be using trtpose now. We plan to wipe the SD card in our NVIDIA Xavier and start from scratch by installing trtpose instead of OpenPose.

In terms of the mirror frame that we built, we made a few changes to our initial blueprint of what the frame would look like. Our most recent diagram shows the exact dimensions of each side of our frame and this is what we referred to when constructing the frame in the woodshop downstairs.

Diagram of mirror frame

None of the scrap wood in the woodshop matched the dimensions of our mirror frame, so we had to go to Home Depot to buy wood. This incurred an extra cost of around $50, but it is still well under our budget limit. Another component of our mirror design that we thought of after constructing the frame was how we would hold the acrylic two-way mirror in place. The mirror itself is quite bendy and malleable, so we decided to attach triangular pieces to each corner of the frame which the mirror can rest against without falling forwards. A picture of our finalized frame is shown below.

Completed mirror frame

During the construction of the mirror, we got assistance from an instructor in Techspark who was generous enough to give us a few tips about woodworking as well. One obstacle was that Home Depot claimed the wood plank had dimensions of 2 in x 10 in x 8 ft, but its thickness was actually less than 2 in, which resulted in the unforeseen consequence of having to cut the wood again to match the design in our original blueprint.

A generous soul helping out engineers who don’t know anything about woodworking.

We moved the frame back to our bench, so now that the frame is done, all we need to do is get the software working and running. We are on schedule in terms of what we have built so far, but we are behind schedule with testing the software because it hasn’t been set up properly yet. That will be our group’s main focus for the coming week.

Complete

Jeremy’s Status Report for 3/26

This week I worked on figuring out what was wrong with setting up OpenPose on our Xavier. After hours of endless debugging and reinstalling, Wonho and I decided that it may be easier to switch to our plan B of using trtpose. I’ve been working on the app a little more, but over the past few days I have been working with my team at the woodshop to create the frame for our mirror. We went to Home Depot to get two 2 in x 10 in x 8 ft wood planks, which we cut at the woodshop down in Techspark with the assistance of an instructor there. As a result, I don’t have any photos to post in my individual status report, but I will post the mirror frame that we built as a team in the team status report. For the next week, I hope to finish the app and get trtpose working so that we have something to show for the interim demo in the first week of April. The app dev process should be a little easier now that I have learned more about frontend UI and GUI development in a class I am taking right now, which also uses React and Node.js to create a backend and frontend that communicate via APIs. This is essentially the route we are taking: the frontend has a button that sends an API request to a backend which stores the user’s wardrobe.

Jeremy’s Status Report for 3/19

Post-spring break, I dove straight into implementing the mobile app that the user will use to interact with our smart mirror and to input his/her wardrobe into the database. I am building the app with React Native and testing it with Expo Go. I have attached a few photos below which show my progress. As of now, I have a very bland and simple home screen for our app and a ‘+’ button in the top-right corner which navigates the user to the page where he/she can add clothes to the database. I’m still working on having the app handle image and file uploads. After that is done, I plan to make the app a little prettier and then connect it with the database implementation that Yun is working on.

Besides working on the app, I also helped Wonho set up OpenPose on the NVIDIA Xavier. We spent three hours on Thursday night trying to install all the correct dependencies and builds, but there seems to be an issue with either Caffe or CMake not being able to recognize one of the libraries needed to run OpenPose. Wonho and I will try again on Monday, but if it doesn’t work, we have discussed a plan B: using a different software tool capable of real-time gesture recognition, called trtpose, which was developed by NVIDIA.

I also went to the woodshop with Wonho and picked out some scrap wood planks we will use to build the frame of our smart mirror. Next week, I will finish the app and build the frame. I would say that our team is slightly behind schedule because there have been so many issues setting up OpenPose on the Xavier, but if that is resolved then we may even be ahead of schedule.

Running the server
Home page
Add clothing

Jeremy’s Status Report for 2/26

This week I was mainly working on brushing up on the details of the design review presentation. I also wrote down some responses to frequently asked questions that might have come up in our presentation. In doing so, our group now has pretty much solidified all implementation details and system specifications. I have also started to come up with specific algorithm implementations for our recommendation scheme. It has to include the weight of the clothing, the color, and the weather. I’ve been working on an equation that could use all three of these parameters and provide a correct/satisfying recommendation output.
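One form the equation described above could take is a weighted sum of per-attribute match scores. The coefficient values and the idea of normalizing each match to [0, 1] are assumptions for illustration, not the finalized algorithm:

```javascript
// Hypothetical scoring equation combining the three parameters
// mentioned above. The coefficients are assumed values, not the
// team's finalized numbers.
const COEFFS = { clothingWeight: 0.5, color: 0.3, weather: 0.2 };

// Each field of `matches` is a score in [0, 1] describing how well an
// outfit fits that criterion for today's conditions; the result is the
// outfit's overall recommendation score, also in [0, 1].
function scoreOutfit(matches) {
  return (
    COEFFS.clothingWeight * matches.clothingWeight +
    COEFFS.color * matches.color +
    COEFFS.weather * matches.weather
  );
}
```

Ranking all candidate outfits by this score and returning the top result would then give the "correct/satisfying recommendation output" described above.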

We are currently slightly behind schedule, but we hope to nail down a lot of things right before spring break so that we can come back from break and get working at a relatively fast speed.

In the coming week, we plan to finish our design report and get it reviewed by a TA, and then start working on building the mirror frame when the mirror arrives. We plan to do this at the woodshop or makerspace. I also hope to get a beta version of the app working.