Aaron’s Status Report for 5/8

This week I worked on preparing for the final presentation and on integrating the item recognition component into the web app. I also finalized the distance pruning algorithm and started testing the distances at which items are detected.

For the final presentation, I created an updated hardware block diagram to illustrate the changes that have been made since the start of our project.

The main differences are that we are no longer using the built-in Bluetooth adapter, and are instead using three external USB Bluetooth adapters to measure RSSI values. Additionally, the task of reporting the item list is now delegated to a separate Python GATT server that runs independently of the distance pruning algorithm.
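As an illustration of how RSSI can be read through a specific adapter, here is a minimal sketch using the bluepy library; the adapter indices, tag MAC address, and timeout are assumptions for illustration, not our exact code:

```python
from bluepy.btle import Scanner

TAG_ADDR = "aa:bb:cc:dd:ee:ff"  # hypothetical MAC address of one item's BLE tag

def scan_rssi():
    """Return the tag's RSSI as seen by each of the three USB adapters."""
    readings = {}
    for iface in (0, 1, 2):  # hci0, hci1, hci2 -- the external USB adapters
        scanner = Scanner(iface)  # bind the scanner to one HCI interface
        for dev in scanner.scan(2.0):  # scan for 2 seconds (requires root)
            if dev.addr == TAG_ADDR:
                readings[iface] = dev.rssi  # signal strength in dBm
    return readings

print(scan_rssi())  # e.g. {0: -58, 1: -61, 2: -60}
```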

As part of integrating the item recognition system, I ran into several issues deploying to AWS. Although our previous commits deployed successfully, the deployment failed as soon as we added TensorFlow to the list of required packages. Initially, I tried to rewrite the Django migration command, since the error logs I was reading seemed to point there. As it turned out, however, I was reading stale logs from a previous failed deployment. After finding the correct error log, I realized that the AWS server was running into a memory error while installing TensorFlow. After installing TensorFlow manually with the --no-cache-dir flag set, the AWS deployment succeeded.
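For reference, this kind of fix can also be baked into the deployment as an EB configuration command. The file name and command below are a hedged sketch rather than our literal configuration (on some EB platform versions the app's virtualenv must be activated first):

```yaml
# .ebextensions/tensorflow.config -- hypothetical sketch
container_commands:
  01_install_tensorflow:
    # --no-cache-dir stops pip from caching the large TensorFlow wheel,
    # which was exhausting the instance's memory during installation.
    command: "pip3 install --no-cache-dir tensorflow"
```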

For this final week, I will be working on editing the final video and preparing for the final demo. I have picked specific items to use for the final demo, and I will verify that they are detected in a practice run. Additionally, I am going to create a visualization of the distance data to use in the final report.
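A simple starting point for that visualization is plotting measured RSSI against distance with matplotlib; the numbers below are made-up placeholders rather than our measured data:

```python
import matplotlib.pyplot as plt

# Placeholder measurements -- to be replaced with the real distance-test data.
distance_cm = [10, 20, 30, 40, 50, 75, 100]
rssi_dbm = [-45, -52, -57, -60, -63, -68, -72]

plt.plot(distance_cm, rssi_dbm, marker="o")
plt.xlabel("Distance from adapter (cm)")
plt.ylabel("RSSI (dBm)")
plt.title("RSSI vs. distance")
plt.grid(True)
plt.savefig("rssi_vs_distance.png")
```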


Joon’s Status Report for 5/8

This week, which was the last week of classes, we gave our final presentation, covering what we have done during the semester and our plans leading up to the final demo. We also worked on the final poster and final video for next week’s deliverables.

For the item recognition part, last week I decided to train for as many epochs as possible (50 epochs). However, training for that many epochs was a bad idea because the model started to overfit. In other words, the model learned the details of the outliers and the noise in the training dataset, resulting in lower classification accuracy. I went back to the range of 15-22 epochs, which is where the validation accuracy started to converge. After retesting, I was able to achieve an increased accuracy of 88.08% with 18 epochs.
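As a sketch of how this can be automated instead of hand-picking an epoch count (the model and dataset names are placeholders, not our exact training code), Keras's EarlyStopping callback halts training once validation accuracy stops improving:

```python
import tensorflow as tf

# Placeholders for our actual model and tf.data training/validation datasets.
model = tf.keras.models.load_model("item_classifier.h5")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",     # watch validation accuracy for convergence
    patience=3,                 # allow 3 stagnant epochs before stopping
    restore_best_weights=True,  # roll back to the best epoch (e.g. ~18)
)

model.fit(
    train_ds,                   # placeholder training dataset
    validation_data=val_ds,     # placeholder validation dataset
    epochs=50,                  # upper bound; EarlyStopping ends training sooner
    callbacks=[early_stop],
)
```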

I also tested the model on images of Aaron’s student items, because Aaron is the one who will be physically demonstrating the overall system, including item registration, at the final demo. The most noticeable result was that, among the 21 student item labels, the model classified the water bottle with 100.0% accuracy. The notebook was also classified correctly.


I also tested pictures of Aaron’s pencil case taken with and without his hand in the frame, since either is possible when a user takes a photo while registering an item. The picture with the hand was correctly identified as a pencil case, but the picture without the hand was incorrectly labeled as a wallet. Identifying the pencil case as a wallet is acceptable, however, because both items are “pouch-like” objects, so confusing one for the other is understandable. To account for this issue (which also applies to rectangular objects like notebooks and laptops), we previously decided to show the top 3 classification results, displayed at the top of the image as a Python list. Here the result was ['wallet', 'sports equipment', 'pencil case'], which contains the correct label, 'pencil case'. Thus, even though the image was misidentified if we only consider the top-1 result, the user can still benefit by simply choosing from the top 3 classification results.
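A minimal sketch of how the top-3 list can be produced from the model's softmax output (the label set, input size, and file names are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical subset of the 21 student item labels, in the model's class order.
LABELS = ["wallet", "sports equipment", "pencil case", "notebook", "laptop"]
model = tf.keras.models.load_model("item_classifier.h5")  # placeholder path

def top3_labels(image_path):
    # Resize the photo to the model's assumed 224x224 input and normalize.
    img = Image.open(image_path).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[np.newaxis] / 255.0
    probs = model.predict(x)[0]
    top3 = np.argsort(probs)[::-1][:3]  # indices of the 3 highest probabilities
    return [LABELS[i] for i in top3]

print(top3_labels("pencil_case_no_hand.jpg"))
# e.g. ['wallet', 'sports equipment', 'pencil case']
```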


Although I have increased the accuracy and made sure that my model works with Aaron’s student item images, my progress on integrating the item recognition part into the web application is slightly behind, because the TensorFlow and Keras modules were causing the web application deployment to fail. Thus, to have enough time to integrate and test the deployed web application, I am leaning towards the backup plan, which is to run the trained model on the local machine of the team member doing the final demo and have it communicate with the web application locally. To do so, I am working closely with Aaron, who is responsible for the deployment and the web application, and I have asked Janet to enable the image form feature, where the user can upload an image for item recognition.
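A minimal sketch of what this local fallback endpoint could look like (the view name, form field, labels, and model path are hypothetical, not our actual code):

```python
# views.py -- hypothetical local-inference endpoint for the backup plan
import numpy as np
import tensorflow as tf
from PIL import Image
from django.http import JsonResponse

LABELS = ["wallet", "sports equipment", "pencil case", "notebook", "laptop"]
MODEL = tf.keras.models.load_model("item_classifier.h5")  # loaded once at startup

def classify_item(request):
    # Assumes the photo arrives under the form field "image".
    img = Image.open(request.FILES["image"]).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[np.newaxis] / 255.0
    probs = MODEL.predict(x)[0]
    top3 = [LABELS[i] for i in np.argsort(probs)[::-1][:3]]
    return JsonResponse({"suggestions": top3})
```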

For next week, I plan to integrate this model into the web application. Then, we will test the deployed web application and see how it works with the overall system. Finally, our group will prepare for the final demo and work on the final report.

Team Status Report for 5/8

Currently, the most significant risk to our project is that the integration of the image recognition portion is still not complete, as the endpoint for the model must first be finished. This will likely be completed in the following days, but perhaps not in time for the poster or video. To manage this risk, Joon is working on setting up an endpoint for the model as soon as possible, and Janet will help with integration into the web app once the endpoint is set up. However, even without the image recognition portion integrated, our product can still achieve its intended use case. The only consequence of failing to integrate it is that users will need to manually enter the name of items when registering them, rather than our model suggesting three names.

No changes were made to the existing design of our system (in our requirements, block diagram, or system specs), but we will no longer be pursuing the creation of a thin Android app client for Bluetooth persistence. This change was necessary simply because we ran out of time to implement it, and extensive research would have been required before implementation could even begin. The cost of this change is that users will spend more time connecting to the Raspberry Pi every time they open the app. As we are currently at the end of the semester, we unfortunately have no time to mitigate this cost.

Our schedule can be viewed here.


Janet’s Status Report for 5/8

This week, I completed writing and implementing the Schedule Learning feature for our web application. The schedule learning feature makes use of two new models: a TimeItemListPair and an ItemCounter.

Multiple TimeItemListPairs essentially function as a dictionary: each pair is tied to a user and a timestamp, along with a list of the items inside the backpack at that timestamp. An ItemCounter exists for each item tied to a specific event (e.g. there could be multiple ItemCounters for the same item, “Folder,” if Folder is assigned to more than one event) and keeps track of how many times this item, for this event, has been found in a TimeItemListPair.

For the assignment logic, our system runs through all of a user’s events that are marked as “Schedule Learning” events. If a TimeItemListPair exists whose timestamp falls within the window of an event (i.e. between the event’s start and end times), the counter for each item in that Pair is incremented by one. Finally, assignment happens continuously: items whose counts are higher than 85% of all counts are assigned to the corresponding event.
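A minimal sketch of how these pieces might fit together in Django (the field names, the referenced Event/Item models, and the exact interpretation of the 85% threshold are my assumptions, not the real implementation):

```python
# models.py -- hypothetical sketch of the schedule-learning models
from django.db import models

class TimeItemListPair(models.Model):
    user = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    timestamp = models.DateTimeField()
    items = models.ManyToManyField("Item")  # items in the backpack at this time

class ItemCounter(models.Model):
    item = models.ForeignKey("Item", on_delete=models.CASCADE)
    event = models.ForeignKey("Event", on_delete=models.CASCADE)
    count = models.PositiveIntegerField(default=0)

def run_schedule_learning(user):
    # Simplified single pass; the real feature would avoid re-counting
    # Pairs that have already been processed.
    for event in user.event_set.filter(schedule_learning=True):
        pairs = TimeItemListPair.objects.filter(
            user=user,
            timestamp__gte=event.start_time,
            timestamp__lte=event.end_time,
        )
        for pair in pairs:
            for item in pair.items.all():
                counter, _ = ItemCounter.objects.get_or_create(item=item, event=event)
                counter.count += 1
                counter.save()
        # Assign items whose counts exceed 85% of the event's total count.
        counters = ItemCounter.objects.filter(event=event)
        total = sum(c.count for c in counters)
        for counter in counters:
            if total and counter.count > 0.85 * total:
                event.items.add(counter.item)
```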

Although this feature is not completely fleshed out, schedule learning is not included as part of our MVP, and is simply meant to help decrease the user burden of manually assigning items to each event.

Additionally, I completed an extra feature to add a “Register” button to unnamed items inside a user’s backpack for streamlined registration. This feature can be viewed here.

This past week, I also worked with my teammates on the Final Presentation slides.

Our progress is currently on schedule.

In the next week, our remaining focus will be on completing deliverables for the end of this project course, including the final video, poster, and report.

Joon’s Status Report for 5/1

This week, we had the last mandatory lab meeting with Professor Kim and Ryan. We discussed our progress since the last meeting and our goals for the final presentation and the report.

For the item recognition part, I was able to increase the accuracy of the model by training for more epochs. Instead of 15 epochs, where the validation accuracy started to converge, I trained for 50 epochs, which increased the classification accuracy to 84.36%. I believe that increasing the number of epochs is fine because the model is ultimately trained on the AWS server; in the web application, the user simply submits an image to the server and gets the classification results back from the model. For reporting the testing of the CNN model, I plan to present not only the accuracy percentage but also a confusion matrix, which gives an overview of how the predicted labels compare to the correct labels.
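As a sketch of how the confusion matrix could be generated (the labels and predictions below are dummy stand-ins for the real test set), scikit-learn can compute and plot it directly:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Dummy stand-ins for the real test-set labels and model predictions.
y_true = [0, 1, 2, 2, 1, 0]  # ground-truth class indices
y_pred = [0, 2, 2, 2, 1, 0]  # predicted class indices (argmax of softmax)

cm = confusion_matrix(y_true, y_pred)
ConfusionMatrixDisplay(cm).plot(cmap="Blues")
plt.title("Item recognition confusion matrix")
plt.savefig("confusion_matrix.png")
```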

In order to integrate the image recognition component into Janet’s web application, my main goal for the remainder of the semester is to provide an API endpoint for the model. I have been setting up the model (along with the datasets and the trained weights) on the AWS server using Amazon SageMaker.
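Once the endpoint is live, the web application could query it with boto3's SageMaker runtime client. The endpoint name and the TensorFlow Serving-style JSON payload below are assumptions for illustration:

```python
import json
import boto3
import numpy as np

runtime = boto3.client("sagemaker-runtime")

# Placeholder preprocessed photo: a 1x224x224x3 array serialized as JSON.
image = np.zeros((1, 224, 224, 3)).tolist()

response = runtime.invoke_endpoint(
    EndpointName="item-recognition-endpoint",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": image}),
)
probs = json.loads(response["Body"].read())["predictions"][0]
print(probs)  # per-class probabilities from the deployed model
```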

Although the development and testing of the item recognition part are done, my progress on integration is slightly behind because I still need to provide the API endpoint between the web application and the item recognition module. Since the web app and the ML item recognition run on two different servers, and getting the servers to communicate with each other may be difficult, my backup plan is to run the trained model on the local machine of the team member doing the final demo and have it communicate with the web application locally. The top 3 recommendations can then be transferred to and displayed in the web application. To catch up, I have to work concurrently on providing an API endpoint between the two servers (the ML server and the web application server) and on the backup plan.

For next week, I plan to integrate this model into Janet’s web application and provide an API endpoint for the server. Finally, I have to check that the recommendations are displayed correctly in the web application. Our group will also work on the deliverables for the finals week of this course.

Team Status Report for 5/1

This week we successfully managed to deploy the web app to AWS, as well as test that our app works on an Android phone. Getting the app to work on a smartphone rather than just a computer was one of our risks, so this risk has now been eliminated.

The most significant risk we still face is getting persistent Bluetooth to work, as this would reduce the time the user spends needing to connect to the Raspberry Pi every time they open the app. Currently, we are working on creating an Android applet, which will maintain the Bluetooth connection while the web app is closed. In addition to this risk, our other risks include integrating the item recognition into our app, as currently we are running the web app and the ML item recognition on two different servers, and getting the servers to communicate may be difficult.

As for design changes, we have decided not to implement the sleeping protocol for the Raspberry Pi, as we have determined that even under the most demanding conditions the battery life is sufficient to run the system for at least two 8-hour work days, or one 18-hour day. Additionally, even if there are no events scheduled, the user may want to check their items arbitrarily, and they would not be able to do so quickly if the system is asleep given the long bootup time of the Raspberry Pi.

The only major change to our schedule is that we are allocating more time to the persistent Bluetooth problem. As always, our schedule can be viewed here.

Aaron’s Status Report for 5/1

This week I worked on deploying the Django web app to AWS, as well as making some changes to the app to streamline transferring item information between the JavaScript and Python portions of our app. After examining the various available AWS services, including Lightsail and Lambda, we decided to use AWS Elastic Beanstalk (EB) to host the web app. However, getting EB to work with our web app took a lot of experimenting, as well as searching online for solutions to our problems.

The first problem I encountered was that EB did not have access to some of the Python packages our Django app was using. In fact, EB does not support the most recent release of Python (3.9), so I had to set up a Python 3.8 environment to test the web app on. I slowly determined which packages were problematic by looking at the EB environment initialization logs after each failed deployment, then downgraded or removed those packages until the EB environment deployed successfully. One tip I wish I had known earlier is that the AWS command-line interface only deploys your most recent git commit, not any uncommitted changes to your local files. Thus, all changes have to be committed before the AWS CLI will upload them.

The second problem with our server was that the static files were not being served for the webpage. After reading various blogs about deploying Django apps on EB, I learned that the Nginx server running on the EB instance needs to be configured to serve the static files from a directory, and that the Django app needs to be told to prepare the static files with the collectstatic command. Using the option-settings feature of EB, I created a configuration file that directed EB to run the necessary commands to prepare and serve the static files with Django and Nginx, respectively.
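The configuration looked something like the following sketch; the file name, static paths, and option namespace here are illustrative assumptions, and the correct namespace varies by EB platform version:

```yaml
# .ebextensions/static.config -- hypothetical sketch
option_settings:
  # Tell the Nginx proxy to serve /static from the app's static directory.
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static

container_commands:
  # Have Django gather all static files into that directory at deploy time.
  01_collectstatic:
    command: "python manage.py collectstatic --noinput"
```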

We also learned that the Web Bluetooth API requires HTTPS in order to function, for security reasons. To make our site secure, I had to create a self-signed development SSL certificate and upload it to the AWS Certificate Manager. My first few attempts failed because the certificate was invalid, but after following this blog:

https://medium.com/@chamilad/adding-a-self-signed-ssl-certificate-to-aws-acm-88a123a04301

I managed to successfully upload a certificate. I then added an HTTPS listener to our AWS EB instance using the certificate, and I also installed the certificate locally on my computer. However, HTTPS still did not work, as modern browsers such as Chrome require SSL certificates to specify the Subject Alternative Name (SAN) field in order to be accepted. After following this Stack Exchange post:

https://security.stackexchange.com/questions/74345/provide-subjectaltname-to-openssl-directly-on-the-command-line

I created a new certificate with the SAN field specified and the certificate was accepted.

After installing the certificate on my Android phone, I was also able to test and verify that our system works on a smartphone. Previously, we were unable to test this functionality because the Web Bluetooth API does not run unless https is enabled.

Schedule-wise, we are falling a bit behind on getting persistent Bluetooth to work; however, we should be able to complete it with the extra slack time we have.

For next week, I will be working with Janet on making the Bluetooth connection persistent, so the user does not need to reconnect to check their items. I will also be performing distance testing to determine how well our system gates items inside versus outside the backpack.