This week I worked on preparing for our final presentation and presenting it as well. I also worked on reducing the runtime for OpenPose. Originally, our testing showed that it took around 21 seconds on average to run OpenPose on the photo we wanted. By adjusting the settings and changing the commands for running OpenPose, I reduced the runtime all the way down to 4 seconds, which brought our total runtime from an average of 40 seconds down to 15 seconds. The key change was reducing the net resolution of the image OpenPose processes: originally we were running it at the full 1080p resolution, and lowering the net resolution to 320 cut the runtime drastically.
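The speedup comes down to the flags passed on the OpenPose command line. Below is a minimal sketch of how our automation script could assemble that command; the binary and directory paths are placeholders, while `--net_resolution`, `--write_json`, `--display`, and `--render_pose` are standard OpenPose flags.

```python
# Sketch of the faster OpenPose invocation (paths are placeholders).
# Lowering --net_resolution is what cut per-image runtime from ~21 s to ~4 s.

def build_openpose_cmd(image_dir, output_dir, net_height=320):
    """Build the OpenPose command line with a reduced net resolution.

    "-1x<height>" keeps the aspect ratio while capping the network input
    height, so 320 processes far fewer pixels than a full 1080p frame.
    """
    return [
        "./build/examples/openpose/openpose.bin",
        "--image_dir", image_dir,
        "--write_json", output_dir,
        "--net_resolution", f"-1x{net_height}",
        "--display", "0",        # no GUI window on the headless Jetson
        "--render_pose", "0",    # skip drawing the skeleton overlay
    ]

cmd = build_openpose_cmd("photos/", "keypoints/")
# The automation script would then run it with subprocess.run(cmd, check=True).
```

Skipping the rendering and display work also helps, since the mirror only needs the JSON keypoints, not an annotated image.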
Other than that, I have been helping my teammates with integration and testing to make sure everything works in the Jetson environment. Our mirror should be good to demo as long as we get the Jetson <–> App and UI <–> Jetson connections working. Other than those components, we are on schedule for our final demo this week.
This week I worked on implementing the mirror UI and its integration with the rest of the project. Last week, the mirror UI did not work on the Jetson, so I created a virtual machine on my own laptop with the same environment as the Jetson and worked on it there. This way, I was able to build a distributable Linux version of the mirror UI Electron app. However, although the app ran completely well on the virtual machine, it did not run on the Jetson because some of its requirements conflicted with what the rest of the project requires. Considering this, and considering that the mirror UI needs to communicate with the mobile app, I decided to build the mirror UI as a web app instead. We originally chose a desktop app over a web app because OpenPose and color recognition already put too much load on the Jetson, which slows down connectivity. However, after testing in a web browser, our current OpenPose and color recognition setup seemed compatible with a web app, and this also makes it easier to connect the UI with our app. I have built a basic web app structure that is compatible with the mobile app, and will shortly test it with Jeremy’s mobile app as well.
Although we are behind the original schedule, I think we are very close to the end of the project as we wrap up. I think we are at a good pace considering the deadlines for the poster and final demo.
For the upcoming week, I will work on finishing the integration and preparing for the final demo.
This week, our team presented our final presentation in class. Following a good presentation, we continued to focus our efforts on bringing together the various parts of our project. After struggling to integrate certain pieces of the mirror together, we changed focus, and decided to create a server that allows us to access parts of our project through the internet. This allows both the mirror and the application we are creating to access the server and relay information accordingly. We also worked on integrating the mirror UI to be able to activate the mirror and connect to the other parts of the project.
As we are coming closer to the end of the project, we find ourselves more in need of connecting the final pieces of the project. In the next week we will focus on getting our mirror fully prepared and functional, and also finishing our final poster and final report. Overall we are on track as we enter the final stretch to get the various parts of the mirror together and test them for success.
This week I worked on connecting the outfit recommendation algorithms with the Google Spreadsheets API that we are using as a database for the user’s wardrobe. This allows our application and device to connect to the same server for the wardrobe. I also tested the full outfit recommendation time, and found that we were well over our target recommendation time. Otherwise I think we are on schedule, and this next week we will focus on connecting all the parts together and wrapping everything up in preparation for our final report and demo.
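To give a sense of the wiring, here is a hedged sketch of how the recommender could read wardrobe rows from the spreadsheet, assuming the `gspread` client library; the spreadsheet name, credential file, and column headers are placeholders, not our real ones.

```python
# Sketch of reading the wardrobe sheet, assuming the gspread library.
# Sheet name, credential filename, and column names are illustrative only.

def rows_to_items(rows):
    """Normalize spreadsheet rows into the dicts the recommender consumes."""
    return [
        {
            "name": r["name"],
            "type": r["type"],                      # e.g. "shirt", "pants"
            "formality": r.get("formality", "informal"),
            "image_url": r.get("image_url", ""),    # Cloudinary URL column
        }
        for r in rows
    ]

def fetch_wardrobe():
    import gspread  # deferred import: only needed when hitting the sheet
    gc = gspread.service_account(filename="credentials.json")
    sheet = gc.open("wardrobe").sheet1
    return rows_to_items(sheet.get_all_records())

sample = rows_to_items([{"name": "blue oxford", "type": "shirt"}])
```

Keeping the normalization step separate from the network call means the recommendation code never touches the Sheets API directly, which made timing the recommendation step on its own much easier.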
This week I worked on creating the server that would communicate with the mirror UI. This integration with the app and the mirror UI will make it easier to display certain screens on the monitor when the user presses “Start outfit recommendation” on the app. It will also allow the mirror UI code to act as the Controller class in that it will call upon the OpenPose shell script when the app signals to the mirror UI that the outfit recommendation has started. This coming week will be about wrapping everything up, making the final poster, writing up the final report, and some finishing touches for the final demo.
Parts of the server I am creating right now are a little difficult to handle because the mirror UI runs on a separate framework, Electron. The approach I am using is to send a JSON string to localhost port 3000; the mirror UI JS file has an app.listen function listening on the same port, and it updates the screen by using React Native’s Router class to re-route to the loading screen. If this is achieved within the coming week, then our mirror is good to go.
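The actual listener lives in the Electron JS file, but the handshake itself is simple enough to sketch in Python: the app posts a small JSON payload to localhost:3000, and the listener parses it and decides which screen to route to. The `"action"` field name and route strings here are assumptions for illustration, not our exact protocol.

```python
import json

# Sketch of the app -> mirror UI handshake. The real listener is the
# Electron JS app.listen on port 3000; this just shows the payload shape
# and the routing decision. Field and route names are illustrative.

PORT = 3000

def make_signal():
    """The JSON string the mobile app POSTs to localhost:3000."""
    return json.dumps({"action": "start_outfit_recommendation"})

def route_for(body):
    """What the listener does with the payload: pick the screen to show."""
    msg = json.loads(body)
    if msg.get("action") == "start_outfit_recommendation":
        return "/loading"   # re-route to the loading screen, then run OpenPose
    return "/home"

screen = route_for(make_signal())
```

Sending a self-describing JSON message (rather than a bare string) leaves room to add fields later, such as the formal/informal choice the user makes in the app.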
This week, I improved the weather-tag recommendation algorithm and put together the outfit recommendation algorithm. I altered the weather algorithm and its tags to accommodate a wider variety of clothing, mainly formal clothing such as suits and dresses, and synchronized it with the format of the database and the user’s input form. We now have an outfit recommendation algorithm that takes in various inputs and provides the user with the 5 best options to pick and choose from, based on a weighted point system. It also has an internal system for deciding which accessories would be appropriate depending on weather conditions.
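The weighted point system can be sketched as follows; the weights, tag names, and criteria below are assumed for illustration, while the real recommender has its own tag vocabulary and tuning.

```python
# Minimal sketch of the weighted point system. Weights, tag names, and the
# three criteria are assumptions for illustration, not our real tuning.

WEIGHTS = {"weather_match": 3, "formality_match": 5, "color_match": 1}

def score(outfit, context):
    """Sum weighted points for each criterion the outfit satisfies."""
    pts = 0
    if context["weather_tag"] in outfit["weather_tags"]:
        pts += WEIGHTS["weather_match"]
    if outfit["formality"] == context["formality"]:
        pts += WEIGHTS["formality_match"]
    if context["preferred_color"] in outfit["colors"]:
        pts += WEIGHTS["color_match"]
    return pts

def top_five(outfits, context):
    """Return the 5 best-scoring outfits for the user to pick from."""
    return sorted(outfits, key=lambda o: score(o, context), reverse=True)[:5]
```

Returning the five highest scorers rather than a single winner is what lets the mirror present choices instead of a verdict.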
As we are coming closer to the final presentation and demo, I think we are on track with what we need to be getting done. Next week I will be focusing on putting together our final report, and integrating what remaining parts of our prototype are still waiting to be put together.
This week I worked on 1) running a bash script from a button click in the Electron desktop app, 2) deploying the Electron app on the Jetson, and 3) deploying ColorThief on the Jetson.
I started working on 1) running a bash script from a button click in my macOS environment; however, it did not work even after multiple trials, so I decided to deploy the Electron app on the Jetson and see how that went. For 2) deploying the app on the Jetson, I was unable to install Electron there, so I decided to build a distributable Electron desktop app from my macOS laptop using electron-builder and run it from the Jetson. Although I was able to successfully build and run a distributable desktop app for macOS and Windows, I was not able to get the Linux build running, and I have not figured out this part yet. For 3) deploying ColorThief on the Jetson, Wonho and I encountered “cannot find module” errors, so Wonho fixed it by using OpenCV for the color extraction instead of the Node.js ColorThief module.
Overall, this week has involved a lot of trial and error. We are behind schedule, as we have not been able to successfully integrate the mirrorUI desktop app with the Jetson. Since it is an important part of our project, if it does not work within another day or so, I am planning to develop the mirrorUI on a platform other than Electron.
For the upcoming week, I plan to finish integrating the mirrorUI with the Jetson and to improve the mirrorUI so that it connects better with the overall system.
Our team made a lot of progress this week. We are getting very close to finishing the integration of the mirror UI, the Jetson, the app, and the outfit recommendation algorithm. One risk we are dealing with at the moment is incompatibility between the mirror UI code and the Jetson, mainly because of OS differences: packages and code that compiled and ran on macOS are suddenly not working on the Jetson’s Linux OS. Our plan B, in case this doesn’t work out, is to set up a virtual machine that would run the mirror UI on macOS. This is our last resort because running on VMs may result in slower response times and potentially more bugs. Another risk we face is being able to run the OpenPose shell script with a click of a button in our app. Considering the various roadblocks we have faced with the Jetson, it is quite possible (hopefully not) that the Jetson will have trouble opening and running the shell script when the app sends a request. Our contingency plan for this is to run the script manually, but we will do our best to avoid that because it significantly hinders the convenience of our smart mirror.
Some changes were made to the outfit recommendation schema. Originally, we were planning to recommend one outfit, but we changed the output to 5 outfits. We also added the option for users to specify whether they want a formal or an informal outfit, because it enhances the purpose of our smart mirror and makes the recommendations more tailored to personal preference. This did not add any cost, but it may pose the challenge of having to send the formality data to the Jetson.
Photos and videos of our progress will be reflected on the final presentation slides as well as our individual status reports.
As we come closer to the final demo, our schedule has become a lot busier. Integration is a lot more difficult than expected, but most of us are staying in the lab until late at night to get things sorted as fast as possible. As a result, we are doing testing and integration at the same time instead of separately. Other than that, I feel our team has made a ton of progress this week, and I expect it to stay that way.
This week I figured out how to upload images to the database spreadsheet file and I made some changes to the app. Some of these changes are:
Using a dropdown menu for the tags/labels instead of a user-input text field
Uploading the photo to Cloudinary and inserting the image URL into the database when the user presses upload; the user then either 1) goes back to add more clothes or 2) finishes uploading all clothes
Adding a screen with two buttons where the user chooses whether he/she wants the outfit recommendation to be formal or informal clothing
Adding a screen with one button that says “Start outfit recommendation”
This coming week I will finish up the connection between my app and the Jetson and also work on the final presentation slides and poster.
This week I continued integrating the separate parts of our project. The automation script I created last week was having problems running the ColorThief algorithm because node and npm were being used simultaneously by the weather API. So I made a major change: I dropped the ColorThief API and used OpenCV with pandas instead to create an algorithm that extracts the dominant color of an image. It returns a tuple of the RGB values of the dominant color, and it works well with the automation script. Ramzi also finished the recommendation algorithm, so I integrated that function into the script as well, adjusting it to take the output.txt from the color detection step. I was also able to adjust the automation script to reduce the runtime by changing the run settings for OpenPose, bringing the total runtime of the script down to 20 seconds.
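The core of the dominant-color replacement is just a frequency count over pixels. The sketch below keeps the logic self-contained by taking a list of (r, g, b) tuples; in the real script the pixels would come from an image loaded with cv2.imread (which returns them in BGR order), and the quantization step size is an assumed tuning value.

```python
from collections import Counter

# Sketch of the dominant-color extraction that replaced ColorThief.
# In the real script the pixels come from cv2.imread (note: OpenCV loads
# BGR, so channels must be reordered); here the input is a plain list of
# (r, g, b) tuples so the logic stands alone. step=32 is an assumed value.

def quantize(pixels, step=32):
    """Bucket similar shades together so near-identical pixels count as one
    color; without this, sensor noise splits the dominant color's votes."""
    return [tuple((c // step) * step for c in px) for px in pixels]

def dominant_color(pixels):
    """Return the most frequent (r, g, b) tuple among the pixels."""
    return Counter(pixels).most_common(1)[0][0]

# Two noisy blues and one near-white pixel: the blues should win together.
pixels = [(12, 40, 200), (14, 42, 198), (250, 250, 250)]
rgb = dominant_color(quantize(pixels))
```

Writing the result tuple to output.txt is what lets the recommendation step consume the color without any dependency on node or npm.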
I also worked on the final presentation slides, as I will be presenting next week, and will continue working on them until Sunday. We still need some testing metrics for the slides, so Jeremy and I will extract those numbers on Sunday and add them to the slides.
We definitely need some more time to polish everything together, and we still need to connect the app, the Jetson components, and the UI, but most of the integration that needs to be done is almost there.