Jeremy’s Status Report for 4/30

This week I worked on creating the server that will communicate with the mirror UI. This integration between the app and the mirror UI will make it easier to display certain screens on the monitor when the user presses “Start outfit recommendation” on the app. It will also let the mirror UI code act as the Controller: it will invoke the OpenPose shell script when the app signals that the outfit recommendation has started. This coming week will be about wrapping everything up, making the final poster, writing the final report, and adding some finishing touches for the final demo.

Parts of the server I am creating are a little difficult to handle because the mirror UI runs on a separate framework called Electron. The approach I am using right now is to send a JSON string to localhost port 3000; the mirror UI’s JS file has an app.listen call listening on the same port, and on receiving the request it re-routes the screen to the loading screen through the UI’s React router. If this is achieved within the coming week, then our mirror is good to go.
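To make this concrete, here is a minimal sketch of the shape I have in mind; it is not our final code, and the endpoint, event name, and handler are placeholders:

    // Mirror UI side (inside the Electron process): an Express server
    // listening on the same port the app sends to.
    const express = require('express');
    const server = express();
    server.use(express.json());

    // The app POSTs a JSON body like { event: 'startRecommendation' }.
    server.post('/', (req, res) => {
      if (req.body.event === 'startRecommendation') {
        onStartRecommendation();
      }
      res.json({ received: true });
    });

    // Placeholder for the real handler: re-route the UI to the loading
    // screen and kick off the OpenPose shell script.
    function onStartRecommendation() {
      console.log('switching to loading screen...');
    }

    server.listen(3000);

    // App side: send the signal as JSON to the same port, e.g.
    // fetch('http://localhost:3000/', {
    //   method: 'POST',
    //   headers: { 'Content-Type': 'application/json' },
    //   body: JSON.stringify({ event: 'startRecommendation' }),
    // });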

Jeremy’s Status Report for 4/23

This week I figured out how to upload images to the database spreadsheet file and I made some changes to the app. Some of these changes are:

  • Using a dropdown menu for the tags/labels instead of a user-input text field
  • Uploading the photo to Cloudinary and inserting the image URL into the database when the user presses upload, after which the user either (1) goes back to add more clothes or (2) finishes uploading all clothes (see the sketch after this list)
  • Adding a screen with two buttons where the user chooses whether the outfit recommendation should be formal or informal
  • Adding a screen with one button that says “Start outfit recommendation”
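For reference, the Cloudinary upload in the second bullet is roughly shaped like this sketch, assuming Cloudinary’s unsigned-upload REST endpoint; the cloud name and upload preset are placeholders:

    const CLOUD_NAME = '<our-cloud-name>';
    const UPLOAD_PRESET = '<our-unsigned-preset>';

    async function uploadClothingPhoto(localUri) {
      const form = new FormData();
      // React Native's FormData accepts a { uri, type, name } file descriptor.
      form.append('file', { uri: localUri, type: 'image/jpeg', name: 'clothing.jpg' });
      form.append('upload_preset', UPLOAD_PRESET);

      const res = await fetch(
        `https://api.cloudinary.com/v1_1/${CLOUD_NAME}/image/upload`,
        { method: 'POST', body: form }
      );
      const data = await res.json();

      // data.secure_url is the hosted image URL that gets written into
      // the 'photo' column of the database row.
      return data.secure_url;
    }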

After being able to upload and save images onto Cloudinary and into the database, the only thing left to do with the app is to run the shell script that exists locally on the Jetson. I believe Node has a go-to way to run local scripts from JavaScript code (sketched below). I have also begun some unit testing. The most important tests I have conducted measure the response time for uploading the tags to the database, and the response time for uploading the image to Cloudinary and inserting the image’s display URL into the database. I have attached screenshots of the response times shown in the console log and plan to put some explanation in the final presentation slides as well.
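That go-to mechanism is most likely Node’s child_process module; a minimal sketch (the script path here is hypothetical) would look like:

    const { exec } = require('child_process');

    // Run the OpenPose shell script that lives locally on the Jetson.
    exec('bash /home/jetson/openpose/run_pose.sh', (err, stdout, stderr) => {
      if (err) {
        console.error('Shell script failed:', stderr);
        return;
      }
      console.log('Shell script output:', stdout);
    });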

This coming week I will finish up the connection between my app and the Jetson and also work on the final presentation slides and poster.

Video of app features

Another video of app features

Response time for HTTP request sent (tags)

Jeremy’s Status Report for 4/16

This week I mainly focused on finishing the app. As of now, my app has a home screen where the user is able to input certain tags for a piece of clothing, which are then uploaded to a Google Spreadsheet that resides in our group’s capstone folder on Google Drive. I achieved this using an API called sheet.best, which allows GET and POST requests to a Google Spreadsheet through a specific URI (a rough sketch of this is below).

The only issue I am facing is finding an effective way to upload the photo of the piece of clothing to the database. I’ve tried sending a POST request with the photo, but it would only push the photo’s local file path as a string, which is useless. I’ve also looked into storing the photos on a cloud server (e.g. Google Cloud), but that requires a lot of OAuth hassle which I believe is unnecessary. It may be easier to upload the photo directly from the app to the Jetson if possible, because then I could simply store the unique name/ID of the photo file in the ‘photo’ column of the database and the Jetson could output the correct photo when displaying it on the monitor screen. This is something I will be working on over the weekend and into next week. I don’t want to spend too much time on this because I still need to integrate the app with the recommendation system and the Jetson, so I hope to get this done before next Wednesday.

I would say that we are a little behind schedule because we only have around a week before our final reports are due, but I’m still positive about the progress our group has been making towards getting everything integrated.
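For reference, the tag upload through sheet.best is roughly shaped like the sketch below; the connection URL and column names are placeholders for our actual sheet:

    // sheet.best generates a connection URL for the spreadsheet:
    const SHEET_URL = 'https://sheet.best/api/sheets/<our-connection-id>';

    // POSTing a JSON object appends a row; each key maps to a column.
    async function addClothingRow(tags) {
      const res = await fetch(SHEET_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(tags),
      });
      return res.json();
    }

    // e.g. addClothingRow({ type: 'shirt', color: 'blue', formality: 'casual' });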

Tags
User inputs info
Database with inputted info

Jeremy’s Status Report for 4/10

This week I worked on finishing up the important features of our app. As of now, I am able to open the app through Expo Go, show the splash screen with our logo, and choose and upload photos, which the server responds to. The server runs on localhost:3000, and the user can also cancel choosing/uploading a photo, in which case the server responds with a “cancelled = true” key-value pair in the returned JSON object. I also helped demonstrate all of this during the interim demo on Wednesday.
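The client-side flow is roughly shaped like this sketch, assuming expo-image-picker; the /upload route name is a placeholder:

    import * as ImagePicker from 'expo-image-picker';

    // Choose (and crop) a photo, then tell the server whether the user
    // cancelled or picked an image.
    async function chooseAndUpload() {
      const result = await ImagePicker.launchImageLibraryAsync({ allowsEditing: true });

      const res = await fetch('http://localhost:3000/upload', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(
          result.cancelled ? { cancelled: true } : { cancelled: false, uri: result.uri }
        ),
      });
      console.log(await res.json()); // e.g. { cancelled: true }
    }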

For the coming week, as long as I stay on schedule, I will have a working version of the app up and running with the ability to add tags and labels when uploading a photo, which will also be sent to the server in a JSON object. I also want to work with Yun to use the database she has been setting up, instead of localhost:3000 simply responding with an OK. After that, I plan to integrate everything into the smart mirror and possibly add functionality for turning the LED lights on/off once they are ordered and delivered.

Splash screen
Choosing a photo
Crop once selected
Image preview
Server response body

Jeremy’s Status Report for 4/2

This week Wonho and I attached the mirror to our mirror frame. We ended up simply taping the four corners of the mirror onto the four corners of the frame because the mirror, being made of acrylic, bends a lot. In a well-lit environment it is possible to see through the front of the mirror, which is not what we want, so as a temporary solution we have decided to use the cardboard box that the mirror came in to cover the back of our smart mirror and block any light from coming in through the back. I tried putting my phone behind the mirror at full brightness to test whether the smart mirror would still show the monitor display, and it is visible as expected.

We are running into some trouble with the Xavier not recognizing that we have a camera input. A week ago we had no issues with this, but the problem appeared out of nowhere. We tried switching out the CSI cables and reading through troubleshooting threads online, and it still would not work. This is the only obstacle preventing us from testing out trt-pose, and I am planning to go in on Sunday or Monday to get this working again.

I would say that we are slightly behind schedule, but as long as we get trt-pose working before the interim demo, we should be in a very good spot to finish up during the last two weeks of the semester.

Jeremy’s Status Report for 3/26

This week I worked on figuring out what was wrong with setting up OpenPose on our Xavier. After hours of endless debugging and reinstalling, Wonho and I decided that it may be easier to switch to our plan B of using trt-pose. I’ve also been working on the app a little more, but over the past few days I have been working with my team to build the frame for our mirror. We went to Home Depot to get two 2 x 10 x 8 ft wood planks, which we cut at the woodshop down in TechSpark with the assistance of an instructor there. As a result, I don’t have any photos to post in my individual status report, but the mirror frame that we built as a team will be posted in the team status report.

For the next week, I hope to finish the app and get trt-pose working so that we have something to show for the interim demo in the first week of April. The app dev process should be a little easier now that I have learned more about frontend UI and GUI development in a class I am taking right now, which also uses React and Node.js to create a backend and frontend that communicate via APIs. This is essentially the route we are taking: the frontend has a button that sends an API request to a backend that stores the user’s wardrobe (a rough sketch is below).
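A rough sketch of that backend route; the endpoint name and in-memory array are placeholders for the real wardrobe database:

    const express = require('express');
    const app = express();
    app.use(express.json());

    const wardrobe = []; // stand-in for the real database

    // The frontend's upload button POSTs a clothing item here.
    app.post('/wardrobe', (req, res) => {
      wardrobe.push(req.body); // e.g. { type, color, formality }
      res.json({ ok: true, count: wardrobe.length });
    });

    app.listen(3000, () => console.log('wardrobe API on port 3000'));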

Jeremy’s Status Report for 3/19

Post-spring break, I dove straight into implementing the mobile app that the user will use to interact with our smart mirror and input his/her wardrobe into the database. I am building the app with React Native and testing it with Expo Go; I have attached a few photos below which show my progress. As of now, I have a very bland and simple home screen for our app and a ‘+’ button in the top-right corner which navigates the user to the page where he/she can add clothes to the database (a rough sketch of this navigation is below). I’m still working on having the app handle image and file uploads. After that is done, I plan to make the app a little prettier and then connect it with the database implementation that Yun is working on.

Besides working on the app, I also helped Wonho set up OpenPose on the NVIDIA Xavier. We spent three hours on Thursday night trying to install all the correct dependencies and builds, but there seems to be an issue with either Caffe or CMake not being able to recognize one of the libraries needed to run OpenPose. Wonho and I will try again on Monday; if it doesn’t work, we discussed a plan B of using a different software tool, capable of real-time gesture recognition, called trt-pose, which was developed by NVIDIA. I also went to the wood shop with Wonho and picked out some scrap wood planks we will use to build the frame of our smart mirror.

Next week, I will finish the app and build the frame of our smart mirror. I would say that our team is slightly behind schedule because there have been so many issues with setting up OpenPose on the Xavier, but if that is resolved then we may even be ahead of schedule.
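The ‘+’-button navigation is roughly shaped like the sketch below, assuming React Navigation’s native stack (which I have not committed to yet); the screen components are placeholders:

    import * as React from 'react';
    import { Button } from 'react-native';
    import { NavigationContainer } from '@react-navigation/native';
    import { createNativeStackNavigator } from '@react-navigation/native-stack';

    function HomeScreen() { return null; }        // placeholder screens
    function AddClothingScreen() { return null; }

    const Stack = createNativeStackNavigator();

    export default function App() {
      return (
        <NavigationContainer>
          <Stack.Navigator>
            <Stack.Screen
              name="Home"
              component={HomeScreen}
              options={({ navigation }) => ({
                // The '+' button in the top-right corner jumps to Add Clothing.
                headerRight: () => (
                  <Button title="+" onPress={() => navigation.navigate('AddClothing')} />
                ),
              })}
            />
            <Stack.Screen name="AddClothing" component={AddClothingScreen} />
          </Stack.Navigator>
        </NavigationContainer>
      );
    }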

Running the server
Home page
Add clothing

Jeremy’s Status Report for 2/26

This week I was mainly working on brushing up the details of the design review presentation. I also wrote down responses to frequently asked questions that might come up about our presentation. In doing so, our group has pretty much solidified all implementation details and system specifications. I have also started to come up with specific algorithm implementations for our recommendation scheme. It has to take into account the weight of the clothing, the color, and the weather, and I’ve been working on an equation that could combine all three of these parameters and provide a satisfying recommendation output (a rough sketch is below).
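One candidate shape for that equation is a weighted sum of three normalized terms; this sketch is purely illustrative, and every coefficient and helper in it is made up:

    // score = w1*warmth + w2*color + w3*weather, each term in [0, 1].
    const W1 = 0.4, W2 = 0.35, W3 = 0.25;

    // Placeholder helpers; the real ones would be tuned against testing.
    const warmthScore = (item, weather) =>
      (item.weight === 'heavy') === (weather.tempF < 50) ? 1 : 0;
    const colorScore = (item, outfit) =>
      outfit.some((other) => other.color === item.color) ? 0.5 : 1; // avoid clashes
    const weatherScore = (item, weather) =>
      weather.raining && item.type !== 'jacket' ? 0.25 : 1;

    // Recommend the item with the highest combined score.
    const score = (item, outfit, weather) =>
      W1 * warmthScore(item, weather) +
      W2 * colorScore(item, outfit) +
      W3 * weatherScore(item, weather);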

We are currently slightly behind schedule, but we hope to nail down a lot of things right before spring break so that we can come back from break and get working at a relatively fast pace.

In the coming week, we plan to finish our design report and get it reviewed by a TA, and then start building the mirror frame when the mirror arrives. We plan to do this at the woodshop or makerspace. I also hope to get a beta version of the app working.

Jeremy’s Status Report for 2/19

This week I mainly worked on the design review presentation slides and reviewing our system specification. I added more detail to our block diagram, which shows how each component of the project interacts with the others. I also modified some of our use case requirements after considering the expectations of our users. We are currently on schedule, but in the next week I hope we actually try and test out some of the hardware and software that we decided on. Along with that, I hope to receive helpful feedback on my presentation that will help steer us in the right direction.

Jeremy’s Status Report for 2/12

This week I was mostly working on our team’s proposal presentation slides. Our project is still in the early stages, so I also went ahead and polished some of our preliminary ideas so that we could design our project in a much more modular fashion.

This includes:

  • attaching white LED lights around our mirror to control the lighting and minimize shadows for each of our users
  • limiting color analysis to solid colors
  • having the user fill out a tag for each piece of clothing to make it easier to grab the correct matching clothes from the database

In doing so, we were able to set an MVP for our project. I believe our project is on schedule, but we should be aware of the possible complications that could come from a new tech stack. Next week, I hope to test out some of the software tools that our team is planning to use for color detection and to come up with an overall design strategy by deciding which tools we plan to utilize.