Yun’s Status Report for 4/30

This week I worked on implementing the mirror UI and integrating it with the rest of the project. Last week, the mirror UI did not work on the Jetson, so I created a virtual machine on my own laptop with the exact same environment as the Jetson and worked on it there. This way, I was able to build a distributable Linux version of the mirror UI Electron app. However, although the app ran completely well on the virtual machine, it did not run on the Jetson because some of the app's requirements conflicted with what the rest of the project requires.

Considering this, and considering that the mirror UI needs to communicate with the mobile app, I decided to build the mirror UI as a web app instead. We originally chose a desktop app over a web app because OpenPose and color recognition already placed too much load on the Jetson, which slowed down connectivity. However, when we tested with a web browser, the OpenPose and color recognition we have now seemed compatible with a web app. This also makes it easier to connect the UI with the rest of our system. I have built the basic structure of a web app that is compatible with the mobile app, and will shortly test it with Jeremy's mobile app as well.

Although we are behind the original schedule, I think we are very close to the end of the project as we wrap up. I think we are at a good pace considering the deadlines for the poster and the final demo.

For the upcoming week, I will briefly work on the remaining integration and prepare for the final demo.

Jeremy’s Status Report for 4/30

This week I worked on creating the server that communicates with the mirror UI. This integration between the app and the mirror UI will make it easier to display certain screens on the monitor when the user presses “Start outfit recommendation” on the app. It will also allow the mirror UI code to act as the Controller class, in that it will call the OpenPose shell script when the app signals to the mirror UI that the outfit recommendation has started. This coming week will be about wrapping everything up, making the final poster, writing the final report, and adding some finishing touches for the final demo.

Parts of the server I am creating right now are a little difficult to handle because the mirror UI runs on a separate framework, Electron. The approach I am using right now is to send a JSON string to localhost port 3000; the mirror UI's JS file has an app.listen function listening on the same port, and on receiving the message it uses React Native's Router class to re-route the screen to the loading screen. If this is achieved within the coming week, then our mirror is good to go.
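
To make this concrete, the shape I am going for looks roughly like the sketch below (Express-style; the route name, payload fields, and script path are placeholders rather than the final code):

```javascript
// Minimal sketch of the app-to-mirror bridge; route and paths are placeholders.
const express = require('express');
const { execFile } = require('child_process');

const app = express();
app.use(express.json()); // parses the JSON string the mobile app sends

function showLoadingScreen() {
  // Placeholder: the real UI re-routes to the loading screen here.
  console.log('Re-routing mirror UI to loading screen');
}

// The mobile app POSTs here when the user presses "Start outfit recommendation".
app.post('/start-recommendation', (req, res) => {
  showLoadingScreen();
  // Act as the Controller: kick off the OpenPose shell script on the Jetson.
  execFile('bash', ['./run_openpose.sh'], (err) => {
    if (err) console.error('OpenPose script failed:', err);
  });
  res.json({ status: 'started' });
});

app.listen(3000, () => console.log('Mirror UI listening on port 3000'));
```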

Yun’s Status Report for 4/23

This week I worked on 1) running a bash script from a button click in the Electron desktop app, 2) deploying the Electron app on the Jetson, and 3) deploying ColorThief on the Jetson.

I started working on 1), running a bash script from a button click, in my macOS environment; however, it did not work even after multiple attempts. Thus, I decided to deploy the Electron app on the Jetson and see how it went. In terms of 2), deploying the app on the Jetson, I was unable to install Electron on the Jetson itself. Thus, I decided to build a distributable Electron desktop app on my macOS laptop using electron-builder and run the app from the Jetson. Although I was able to successfully build and run a distributable desktop app for macOS and Windows, I was not able to get the Linux build to run; I have not figured out this part yet. In terms of 3), deploying ColorThief on the Jetson, Wonho and I encountered “cannot find module” errors, so Wonho fixed it by using OpenCV for the color extraction instead of the Node.js ColorThief module.
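
For reference, the wiring I was attempting looks roughly like this sketch (the IPC channel name and script path are placeholders):

```javascript
// main.js — sketch of running a bash script from a button click in Electron.
const { app, BrowserWindow, ipcMain } = require('electron');
const { execFile } = require('child_process');

// The renderer sends 'run-script' when the button is clicked, e.g.:
//   require('electron').ipcRenderer.send('run-script');
ipcMain.on('run-script', (event) => {
  execFile('bash', ['./pipeline.sh'], (err, stdout) => {
    event.reply('script-done', err ? err.message : stdout);
  });
});

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: { nodeIntegration: true, contextIsolation: false },
  });
  win.loadFile('index.html'); // index.html contains the button
});
```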

Overall, this week has been a lot of trial and error. We are behind schedule, as we have not yet been able to integrate the mirror UI desktop app with the Jetson. Since it is an important part of our project, if it does not work within another day or so, I plan to build the mirror UI on a platform other than Electron.

For the upcoming week, I plan to finish integrating the mirror UI with the Jetson and to improve the mirror UI so that it connects better with the overall system.

Team Status Report for 4/23

Our team made a lot of progress this week. We are getting very close to finishing the integration of the mirror UI, the Jetson, the app, and the outfit recommendation algorithm. One risk being dealt with at the moment is the set of incompatibilities between the mirror UI code and the Jetson. The main reason for these is the OS difference: packages and code that compiled and ran on macOS are suddenly not working on the Jetson's Linux OS. Our plan B, in case this doesn't work out, is to set up a virtual machine that would run the mirror UI on macOS. This is our last resort, because running on VMs may result in slower response times and potentially more bugs.

Another risk we face at the moment is running the OpenPose shell script with the click of a button in our app. Considering the various roadblocks the Jetson has given us, it is quite possible (hopefully not, though) that the Jetson will have trouble opening and running the shell script when the app sends a request. Our contingency plan is to run the script manually, but we will do our best to avoid this at all costs because it would significantly hinder the convenience of our smart mirror.

Some changes were made to the outfit recommendation schema. Originally we planned to recommend one outfit, but we changed the output to 5 outfits. We also added an option for users to input whether they want a formal or an informal outfit, since this enhances the purpose of our smart mirror and tailors the recommendation more to personal preference. This did not add any cost, but it may pose the challenge of having to send the formality data to the Jetson.

Photos and videos of our progress will be shown in the final presentation slides as well as in our individual status reports.

As we come close to the final demo, our schedule has become a lot busier. Integration is a lot more difficult than expected, but most of us are staying in the lab late into the night to get things sorted as fast as possible. As a result, we are doing testing and integration at the same time instead of separately. Other than that, I feel our team has made a ton of progress this week, and I expect it to stay that way.

Jeremy’s Status Report for 4/23

This week I figured out how to upload images to the database spreadsheet file and I made some changes to the app. Some of these changes are:

  • Using a dropdown menu for the tags/labels instead of a user input text field
  • Uploads the photo to Cloudinary, inserting the image URL into the database, when the user presses upload and either 1. goes back to add more clothes or 2. finishes uploading all clothes (see the sketch after this list)
  • Added a screen with two buttons where the user chooses whether they want the outfit recommendation to be formal or informal clothing
  • Added a screen with one button that says “Start outfit recommendation”
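
The upload flow boils down to the sketch below, assuming Cloudinary's unsigned upload endpoint; the cloud name, upload preset, and database URL are placeholders:

```javascript
// Sketch of the upload flow; cloud name, preset, and sheet URL are placeholders.
async function uploadClothing(photoUri, tags) {
  // 1. Upload the photo to Cloudinary using an unsigned upload preset.
  const form = new FormData();
  form.append('file', { uri: photoUri, type: 'image/jpeg', name: 'clothing.jpg' });
  form.append('upload_preset', 'smart_mirror_unsigned'); // hypothetical preset name
  const cloudRes = await fetch(
    'https://api.cloudinary.com/v1_1/<cloud-name>/image/upload',
    { method: 'POST', body: form }
  );
  const { secure_url } = await cloudRes.json();

  // 2. Save the tags plus the hosted image URL as a new row in the database.
  await fetch('https://api.sheetbest.com/sheets/<connection-id>', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...tags, photo: secure_url }),
  });
}
```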

After being able to upload and save images to Cloudinary and the database, the only thing left to do with the app is to run the shell script that exists locally on the Jetson. I believe there is an easy go-to way to run scripts locally from JavaScript code (likely Node's child_process module). I have also begun some unit testing. The most important tests I have conducted are the response time for uploading the tags to the database and the response time for uploading the image to Cloudinary and inserting the image display URL into the database. I have attached screenshots of the response times shown in the console log and plan to put some explanation in the final presentation slides as well.
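
For the response-time tests, the idea is simply to log the elapsed time around each request, roughly like this sketch (uploadClothing is the hypothetical helper from the sketch above):

```javascript
// Small sketch of logging a response time around any async request.
async function timed(label, fn) {
  console.time(label);
  const result = await fn();
  console.timeEnd(label); // prints e.g. "upload tags: 312ms" in the console log
  return result;
}

// Usage with the hypothetical uploadClothing helper sketched above:
// await timed('upload tags', () => uploadClothing(photoUri, tags));
```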

This coming week I will finish up the connection between my app and the Jetson and also work on the final presentation slides and poster.

Video of app features

Another video of app features

Response time for HTTP request sent (tags)

Wonho’s Status Report for 4/23

This week I continued integrating the separate parts of our project. The automation script I created last week was having problems running the ColorThief algorithm because node and npm were being used simultaneously by the weather API. Thus, I made a major change: I dropped the ColorThief API and used OpenCV with pandas instead to create an algorithm that extracts the dominant color of an image. It returns a tuple of the RGB values of the dominant color and works well with the automation script. Ramzi also finished the recommendation algorithm, so I integrated that function with the script and adjusted it to take the output.txt from the color detection step. I was also able to make some adjustments to the automation script, changing the run settings for OpenPose, which reduced the total runtime of the script to 20 seconds.
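
The actual implementation is the OpenCV-based script described above; purely to illustrate the dominant-color idea, here is a rough Node sketch using the sharp library as a stand-in:

```javascript
// Illustrative only — the real pipeline uses OpenCV on the Jetson.
// Dominant color via coarse RGB quantization, using the sharp library.
const sharp = require('sharp');

async function dominantColor(imagePath) {
  // Downscale and read raw RGB pixels.
  const { data } = await sharp(imagePath)
    .resize(64, 64, { fit: 'inside' })
    .removeAlpha()
    .raw()
    .toBuffer({ resolveWithObject: true });

  // Bucket each pixel into 8x8x8 coarse RGB bins and count occurrences.
  const counts = new Map();
  for (let i = 0; i < data.length; i += 3) {
    const key = `${data[i] >> 5},${data[i + 1] >> 5},${data[i + 2] >> 5}`;
    counts.set(key, (counts.get(key) || 0) + 1);
  }

  // Return the center of the most common bin as an (r, g, b) tuple.
  const [bin] = [...counts.entries()].sort((a, b) => b[1] - a[1])[0];
  return bin.split(',').map((v) => (Number(v) << 5) + 16);
}
```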

I also worked on the final presentation slides, as I will be presenting next week, and will continue working on them until Sunday. We still need some testing metrics for the slides, so Jeremy and I will extract those numbers on Sunday and add them to the slides.

We definitely need some more time to polish everything together and to connect the app, the Jetson components, and the UI, but most of the integration that needs to be done is almost there.

Wonho’s Status Report for 4/16

This week I focused on creating a cropping script for the image as well as a shell automation script that lets the code run automatically. The cropping script takes the info from the JSON output file produced by OpenPose and crops the image taken by the Arducam based on the body keypoints labeled in the JSON file. The cropping focuses the image on the upper body to make it easier for ColorThief to detect the upper torso color. It then automatically saves that image in an easily accessible location on the Jetson Xavier.
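
As an illustration of the keypoint-to-crop idea, here is a rough Node sketch (using the sharp library as a stand-in for the actual script, and assuming OpenPose's BODY_25 keypoint layout; the margins are arbitrary):

```javascript
// Illustrative sketch of the crop step (the real script lives on the Jetson).
// Assumes OpenPose's BODY_25 layout: 1 = neck, 2/5 = shoulders, 8 = mid-hip.
const fs = require('fs');
const sharp = require('sharp');

async function cropTorso(imagePath, keypointJsonPath, outPath) {
  const pose = JSON.parse(fs.readFileSync(keypointJsonPath, 'utf8'));
  const k = pose.people[0].pose_keypoints_2d; // flat array: [x, y, confidence, ...]
  const pt = (i) => ({ x: k[3 * i], y: k[3 * i + 1] });

  const neck = pt(1), rSho = pt(2), lSho = pt(5), hip = pt(8);
  const margin = 20; // arbitrary padding around the shoulders

  // Crop the box spanning the shoulders horizontally, neck to mid-hip vertically.
  await sharp(imagePath)
    .extract({
      left: Math.max(0, Math.round(Math.min(rSho.x, lSho.x) - margin)),
      top: Math.round(neck.y),
      width: Math.round(Math.abs(lSho.x - rSho.x) + 2 * margin),
      height: Math.round(hip.y - neck.y),
    })
    .toFile(outPath);
}
```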

The shell script ties together the components of the first part of our recommendation process. It automatically takes 20 pictures through the Arducam, selects the best-lit one (the 20th picture taken), and runs OpenPose only on that one; this reduces our runtime since OpenPose takes a while. The script then runs that image through the cropping script I wrote this week, saves the result, and feeds it to the ColorThief API that Yun organized, producing an output.txt file with the dominant color of the image.

Our individual chunks are pretty much done, and we started working on integrating those parts today, so if we work hard this week, our team should be able to finish on schedule. Next week I will work on connecting the parts I finished this week with Ramzi's recommendation algorithm.

cropped image example

Team Status Report for 4/16

The most significant risk/task we are facing is the integration between the app and the Jetson, and between the Jetson and the mirror UI. We expect both integrations to require server-client socket connections, and there is no easy way around it. However, Jeremy and Yun have some experience in this area, so we hope to get this done if we push through.

In terms of the system design, we initially planned to connect the app directly with the mirror UI as well; however, we decided to remove that interaction and use the Jetson as the intermediary for the communication, since we need to build the Jetson-mirror UI and Jetson-app connections anyway.

We have no changes to the schedule since last week. We set a small goal last week to finish the modular tasks by this Thursday and to work on integration from Saturday, and we are on track with this plan.

To explain in a bit more detail what we did this week: we worked on completing all the remaining modular tasks. These were 1) automating torso and top detection, 2) the user wardrobe input form in the app, 3) color + weather recommendation, and 4) the mirror UI.

  • Wonho wrote the automation script for the Jetson, which covers everything from spinning up the Jetson to taking a picture with the Arducam, running OpenPose, and cropping the picture.
  • Jeremy finished the user wardrobe input form, which asks for apparel type, length, and season. After the user submits the form, the data is saved to a Google spreadsheet, which serves as the user wardrobe database.
  • Ramzi finished the color- and weather-based recommendation in Python. The color recommendation is based on a survey he created about which colors go well together, and the weather recommendation is based on the real-time feels-like temperature (see the sketch after this list).
  • Yun finished creating the mirror UI using Electron. It has a main page, a recommendation loading page, a recommendation result page, and a thank you/feedback page.
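
To give a feel for how such a recommendation can work, here is a small sketch; the color pairings and temperature thresholds are invented placeholders, not Ramzi's actual survey data:

```javascript
// Illustrative sketch only: the pairings and thresholds are invented placeholders.
const colorPairs = {
  navy: ['white', 'beige', 'gray'],
  black: ['white', 'red', 'gray'],
  white: ['navy', 'black', 'olive'],
};

// wardrobe: array of { color, season, ... } rows from the wardrobe database.
function recommend(detectedTopColor, feelsLikeC, wardrobe) {
  // Weather side: pick a season bucket from the feels-like temperature.
  const season = feelsLikeC < 10 ? 'winter' : feelsLikeC < 20 ? 'spring' : 'summer';
  // Color side: keep colors the survey says pair well with the detected top.
  const goodColors = colorPairs[detectedTopColor] || [];
  return wardrobe.filter(
    (item) => item.season === season && goodColors.includes(item.color)
  );
}
```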

More details can be found in each of our individual status reports. The four tasks above were all of the remaining modular tasks, which means all that is actually left is integrating these parts.

Yun’s Status Report for 4/16

This week, I worked on the mirror UI using Electron, HTML, and JavaScript. The mirror is supposed to interact with the Jetson and the app. The diagram below (Figure 1) illustrates the general idea of how the mirror UI will look.

Figure 1: Mirror UI diagram

As illustrated in the diagram, the mirror UI (desktop app) is supposed to transition from one screen to another based on requests/responses from the app and the Jetson. However, as we have not finished integrating this part yet, I made the app act on button presses such as “Get recommendation” and “Done”. This will be updated later along with the integration. Here is a screen recording of how the desktop app looks so far.

(raw link: https://drive.google.com/file/d/1_uZXitU8aZ5DF-yaDiu9o0ppms_jQQFj/view?usp=sharing)

In terms of the schedule, I think we are on track: we planned to start integration today, after finishing the individual parts, and that is where we are. Although the modular parts need to be more polished, they are all working.

Next week, I will work on connecting the Jetson and the mirror UI so that the UI can act according to the user's input.

Jeremy’s Status Report for 4/16

This week I mainly focused on finishing the app. As of now, my app has a home screen where the user can input certain tags for a piece of clothing, which are then uploaded to a Google Spreadsheet that resides in our group's capstone folder on Google Drive. I achieved this using an API called sheet.best, which allows GET and POST requests to a Google Spreadsheet through a specific URI.

The only issue I am facing is finding an effective way to upload the photo of the piece of clothing to the database. I tried sending a POST request with the photo, but it would only push the photo's file path in String format, which is useless. I have also looked into storing the photos on a cloud server (e.g. Google Cloud), but that requires a lot of OAuth hassle, which I believe is unnecessary. It may be easier to upload the photo directly from the app to the Jetson if possible, because then I could simply store the unique name/id of the photo file in the ‘photo’ column of the database, and the Jetson could output the correct photo when displaying it on the monitor screen. This is something I will be working on over the weekend and into next week. I don't want to spend too much time on it, because I still need to integrate the app with the recommendation system and the Jetson, so I hope to get this done before next Wednesday.

I would say that we are a little behind schedule, because we only have around a week before our final reports are due, but I'm still positive about the progress our group has been making towards getting everything integrated.
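
For reference, the sheet.best usage is roughly the following (the connection URL is a placeholder for our sheet's specific URI):

```javascript
// Rough sketch of the sheet.best calls; the connection URL is a placeholder.
const SHEET_URL = 'https://api.sheetbest.com/sheets/<connection-id>';

// POST a new row of clothing tags to the spreadsheet.
async function addClothing(tags) {
  await fetch(SHEET_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(tags), // e.g. { type: 'shirt', length: 'long', season: 'winter' }
  });
}

// GET all rows currently stored in the wardrobe database.
async function getWardrobe() {
  const res = await fetch(SHEET_URL);
  return res.json();
}
```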

Tags
User inputs info
Database with inputted info