Janet’s Status Report for 5/1

This week, I completed the features for missing item notifications (i.e., an event is coming up but at least one of the required items for the event is not currently in the backpack) as well as lost item notifications (i.e., the item list at the beginning of an event is not consistent with the item list at the end of an event). A demo of these features can be found here [with no missing items] and [with missing items]. Two new functions were written for these features, which can be viewed below:
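In simplified form, the logic of these two functions is roughly the following (the names here are placeholders rather than the exact ones in our codebase, and notify() stands in for the web push notification call):

    def notify(user, message):
        # Placeholder: in the web app this is where a web push notification is sent.
        print(f"[notify {user}] {message}")

    def check_missing_items(user, event_name, required_items, backpack_items):
        # Missing item check: an event is coming up, but at least one of its
        # required items is not currently in the backpack.
        missing = set(required_items) - set(backpack_items)
        if missing:
            notify(user, f"Missing before {event_name}: {', '.join(sorted(missing))}")
        return missing

    def check_lost_items(user, event_name, items_at_start, items_at_end):
        # Lost item check: the item list at the end of an event is not consistent
        # with the item list at the beginning of the event.
        lost = set(items_at_start) - set(items_at_end)
        if lost:
            notify(user, f"Possibly left behind at {event_name}: {', '.join(sorted(lost))}")
        return lost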

Additionally, since our team members are all working to finish the remainder of our project, I picked up the schedule learning feature task for our web application. So far, I've collected sample data for what our web application may need for this feature: large dictionaries with timestamps as keys and, as values, a snapshot of the items inside the user's backpack at that timestamp. This data would then be continually compared against events stored in the database, and items would be automatically assigned to the user's events based on it. However, this feature is not a requirement of our MVP, and a fully fleshed-out version of it would require persistence (thin native client + webview) to be completed first. I've also started working with my teammates on our final presentation.
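A rough sketch of what this sample data looks like, and how it might be compared against stored events, is below (the snapshot values, event tuples, and one-hour lookback window are all illustrative; the real matching logic has not been designed yet):

    from datetime import datetime, timedelta

    # Sample data shape: timestamps mapped to a snapshot of the items that were
    # inside the backpack at that time (values here are made up).
    snapshots = {
        datetime(2021, 4, 26, 9, 55): ["laptop", "notebook", "calculator"],
        datetime(2021, 4, 26, 13, 5): ["laptop", "wallet"],
    }

    # Hypothetical events pulled from the database: (name, start, end).
    events = [
        ("Lecture", datetime(2021, 4, 26, 10, 0), datetime(2021, 4, 26, 11, 20)),
        ("Study Session", datetime(2021, 4, 26, 13, 0), datetime(2021, 4, 26, 15, 0)),
    ]

    def suggest_items_for_events(snapshots, events, lookback=timedelta(hours=1)):
        # Items seen in the backpack shortly before or during an event become
        # candidates for automatic assignment to that event.
        suggestions = {}
        for name, start, end in events:
            seen = set()
            for ts, items in snapshots.items():
                if start - lookback <= ts <= end:
                    seen.update(items)
            suggestions[name] = sorted(seen)
        return suggestions

    print(suggest_items_for_events(snapshots, events))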

Currently, our schedule is a little bit behind. The components left involve:

  1. Persistence (implementation of thin native client in Android + WebView)
  2. Schedule Learning (not part of our MVP, but dependent on persistence)
  3. Integration with image recognition component (currently depending on Joon to set up an API endpoint for the model)

Save for the integration with the image recognition component, our MVP is already complete. However, implementing features like persistence and schedule learning would greatly broaden the situations in which our product is useful.

To catch up to the project schedule, we’ll be using this final week of development (week of 5/2) to complete the remaining components. Aaron and I will work on Persistence, I will work on Schedule Learning, and Joon will work on setting up an API endpoint for the model. These are the deliverables we hope to complete in the next week.

Joon’s Status Report for 4/24

This week, we had an additional meeting with Professor Kim because there weren’t any mandatory lab meetings due to the ethics discussion, and we also wanted to inform him about our progress after the interim demo.

For the item recognition part, I worked on increasing the item classification accuracy. The previous model I implemented for the interim demo had 58% item classification accuracy. After the discussion during the interim demo, however, I realized that I needed to increase the recognition accuracy by a significant amount. While I initially thought that 60% accuracy was acceptable (and 50% for the MVP), since the user can manually type the item information whenever a wrong item suggestion is given, I agreed that higher accuracy is desirable to decrease the user burden. I ended up replacing the model entirely; the better model I found was the VGG16 CNN (for more information on the VGG16 model: VGG16 Paper and Blog post). This model is provided through Python's TensorFlow and Keras libraries, so I coded and trained the newly implemented VGG16 model. I also had to change the dimensions of the input images from 256 x 256 to 224 x 224, because the initial convolution layer takes in a 224 x 224 image and that dimension is widely used in image classification models. Another step I took to increase accuracy was to hold out a validation set. With the validation set, I could find the best hyperparameter, which was the number of epochs to train the model: I stopped training once the validation accuracy and validation loss converged. Out of 50 planned epochs, which took a very long time, I found that the accuracy converged at around 15 epochs, so I stopped training there.
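A minimal sketch of how such a VGG16-based classifier can be set up and trained in Keras is shown below (the dataset paths, the use of pretrained ImageNet weights, the classification head, and the early-stopping settings are illustrative assumptions, not the exact configuration I used):

    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 21          # number of student item labels
    IMG_SIZE = (224, 224)     # VGG16's first convolution layer expects 224 x 224 inputs

    # Load training and validation sets from class-labeled folders (paths are placeholders).
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32)
    val_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "data/val", image_size=IMG_SIZE, batch_size=32)

    # VGG16 convolutional base with a small classification head on top.
    base = tf.keras.applications.VGG16(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Train with the validation set and stop once validation loss stops improving
    # (in practice this converged at roughly 15 of the 50 planned epochs).
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=3, restore_best_weights=True)
    model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])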

After implementing the model, I was able to increase the accuracy to 80.21%, which is much better than the planned accuracy. For instance, whenever a user inputs an image of a laptop, the model correctly identifies that the image is a laptop, with 99.52% confidence. For visualization and demonstration purposes, I printed out all 21 labels and the classification likelihood for each label as a percentage. I also took a picture of my wallet, and the model correctly identified the item as a wallet, with 78.09% confidence. This was good to see because it shows my model works well for images taken from a user's smartphone. Also, among the 21 labels there are many rectangular objects, such as laptop, notebook, tablet, and textbook, yet it can still correctly classify the image as a wallet. Moreover, the classifier returns its top 3 suggestions so that the user can simply choose the correct item from among them. These suggestions are returned as a Python list so that they can be passed easily to the Django web application.
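The top 3 suggestions are derived from the per-label likelihoods roughly as follows (a simplified sketch; the label subset and probabilities below are illustrative):

    import numpy as np

    # Illustrative subset of labels; the real model distinguishes 21 student items.
    LABELS = ["laptop", "notebook", "tablet", "textbook", "wallet", "calculator"]

    def top3_suggestions(probabilities, labels=LABELS):
        # Return the top 3 (label, percentage) pairs as a Python list, which can
        # then be handed to the Django web application.
        probabilities = np.asarray(probabilities)
        top = np.argsort(probabilities)[::-1][:3]
        return [(labels[i], round(float(probabilities[i]) * 100, 2)) for i in top]

    # Example: softmax output heavily favoring "wallet".
    print(top3_suggestions([0.01, 0.05, 0.03, 0.08, 0.78, 0.05]))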

My progress on the item recognition part is slightly behind because I still need to integrate this item recognition module into the web application. To catch up, since Janet is done with her individual work on the web application, I will work closely with her during the lab sessions to fully integrate the item recognition module and test that the integration runs smoothly.

For next week, I plan to integrate this model into Janet's web application. To do so, I have to train my model on the AWS server and look into methods for integrating the CNN model into the Django framework. In the later part of next week, our group will also work on the final presentation slides.

Team Status Report for 4/24

The most significant risk that could jeopardize the success of our project is the Bluetooth persistence issue. Currently, our web application utilizes the Web Bluetooth API to connect with our RPi Zero and receive the list of items inside the backpack. However, we first need to deploy our web app (we are using AWS Elastic Beanstalk) before testing on a phone, as the Web Bluetooth API only works in secure contexts. Overall, this risk is being managed by our current contingency plan, which is to create a thin client on Android which handles the Bluetooth functionality, while including our Web App in a <WebView> in the Android application to handle the interfacing. However, we expect to run into many issues, particularly in the communication between the thin client itself and the web application. We’ve dedicated the upcoming week to implementing this transition and debugging any issues we run into.

No changes have been made to the existing design of the system. As always, our schedule can be viewed here.

Janet’s Status Report for 4/24

This week, I completed the removal functionality for items and events, as well as web push notifications that get sent once the notification deadline for an event has arrived. I used the Django webpush package to send web push notifications to the user and had to register a service worker to enable this functionality. Additionally, I had to add another attribute to the Event model, "notified," so that our checkNotifs function doesn't send notifications for the same event more than once, and created a Notification model to use for debugging and logging. This feature can be viewed here, and the logic for the checkNotifs function can be seen below:
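In outline, checkNotifs works roughly like this (a simplified sketch; the exact field names, query, and payload contents are illustrative rather than copied from our code):

    from django.utils import timezone
    from webpush import send_user_notification

    from .models import Event, Notification

    def checkNotifs(user):
        # Send a web push for each event whose notification deadline has arrived,
        # using the "notified" flag so the same event is never notified twice.
        now = timezone.now()
        due_events = Event.objects.filter(user=user, notify_time__lte=now, notified=False)
        for event in due_events:
            payload = {"head": "Upcoming event", "body": f"{event.name} is coming up soon."}
            send_user_notification(user=user, payload=payload, ttl=1000)
            Notification.objects.create(user=user, event=event, sent_at=now)  # for logging/debugging
            event.notified = True
            event.save()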

I also worked with Aaron to test the delay between an action (adding or removing an item to/from the backpack) and the RPi registering the item, as well as the delay between the RPi registering the change and the web app interface updating. We found that the time between the action and the RPi fluctuates quite a bit, but once the RPi registers an item, the interface update is typically under a second.

We have had to push back integrating Joon's image recognition module with the web app until Joon finishes migrating the model to AWS, so this aspect of our progress is behind schedule. We are also a little behind schedule on the schedule learning feature as well as Bluetooth persistence, though we've planned the bulk of these tasks for the upcoming week rather than this previous week. To catch up to our project schedule, we'll likely focus solely on these two tasks for the upcoming week. Thankfully, we've also planned for a little slack time in our final planned week of work (week of 4/25), so we can use that time to wrap up any leftover tasks. At this point, all my individual tasks for our project have been completed, and the remaining tasks all involve some form of integration or collaboration to complete.

In the next week, I hope to complete usability testing of our web app, the schedule learning algorithm, and Bluetooth persistence with Aaron. I also need to integrate Joon’s image recognition module into our web app’s registration process.

Aaron’s Status Report for 4/24

This week I worked on improving the distance pruning algorithm, physically mounting the system to the backpack, and testing the system.

After discussing with Professor Sullivan, I decided to test an even longer averaging period of 10 samples to see if that would improve the stability of our system. While it did result in less “wobble” (where the system is uncertain about whether a tag is in the backpack and constantly switches it into and out of the item list), it also increased the time it takes for the item list to update after a tag is removed to about 5 seconds. This means we likely won’t meet our original requirement of 1 second for tag update time; however, we feel this is a worthwhile tradeoff for the better stability.
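The averaging itself is simple; a minimal sketch of the idea is below (the 10-sample window matches what I tested, but the RSSI threshold value is only illustrative, and the real pruning algorithm combines readings from multiple adapters):

    from collections import deque

    WINDOW = 10           # number of RSSI samples averaged per tag
    RSSI_THRESHOLD = -70  # dBm cutoff for "inside the backpack" (illustrative value)

    class TagAverager:
        def __init__(self, window=WINDOW):
            self.samples = deque(maxlen=window)

        def update(self, rssi):
            # Add the newest reading and return the smoothed RSSI.
            self.samples.append(rssi)
            return sum(self.samples) / len(self.samples)

        def in_backpack(self):
            # A tag is listed only when its averaged RSSI is above the threshold.
            return bool(self.samples) and \
                sum(self.samples) / len(self.samples) >= RSSI_THRESHOLD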

In addition to testing a longer averaging period, I also tried using a Kalman filter, based on Wouter Bulten’s webpage at https://www.wouterbulten.nl/blog/tech/kalman-filters-explained-removing-noise-from-rssi-signals/. I tested different values for the process noise and gain; however, I found that the Kalman filter was not as stable as the direct averaging. Thus, I decided to stick with averaging instead of a Kalman filter.
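For reference, the filter I tried follows the simple one-dimensional form described on that page, roughly as sketched below (the noise values shown are illustrative, not the exact ones I tested):

    class RssiKalmanFilter:
        # Simple 1-D Kalman filter for smoothing RSSI readings. The signal is
        # modeled as constant, so the predict step only inflates the covariance.
        def __init__(self, process_noise=0.008, measurement_noise=4.0):
            self.q = process_noise
            self.r = measurement_noise
            self.x = None             # current RSSI estimate
            self.p = 1.0              # estimate covariance

        def filter(self, rssi):
            if self.x is None:
                self.x = float(rssi)
                return self.x
            self.p += self.q                   # predict step
            k = self.p / (self.p + self.r)     # Kalman gain
            self.x += k * (rssi - self.x)      # update with the new measurement
            self.p *= (1 - k)
            return self.x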

For the mounting, I used the backpack’s existing pockets to hold the battery and the RPi Zero, and sewed the Bluetooth adapters to different points in the backpack, spaced 20cm apart. I then put on the backpack and walked around the neighborhood to test whether the system could withstand movement, which it did. I also tried shaking the backpack to see if any of the tags inside would go undetected, but the system still detected all 10 tags inside of the backpack.

In addition to testing the system physically, I also ran battery tests throughout the week. I ran the system at its maximum power draw level, by having it track 10 tags while also reporting the tag list with the Bluetooth GATT server. Over 5 tests, the battery lasted for an average of 19.5 hours, with no test going under 18 hours. Thus, the battery life meets our requirement of 18 hours of continuous use.

 

Joon’s Status Report for 4/10

This week, for the item recognition part, I worked on finalizing the CNN model and improving the classification accuracy after testing with many more images (more than the 21 images from last week). However, after updating the algorithm, I found that while the current accuracy is fine for the interim demo, it is still lower than the accuracy requirement we set in the Design Proposal/Presentation. My current model only presents the top 1 item prediction for a given student item image; making it present the top 3 predictions will help increase the effective accuracy. For this weekend before the interim demo, I will devote most of my time to improving the image recognition model and algorithm.

My progress on the item recognition part is slightly behind because I was focused on getting the item recognition algorithm above our accuracy requirement, so I was unable to integrate it into Janet’s web application. Thus, for the demo, we will demonstrate the item recognition functionality separately from the web application that tracks the tagged items inside/outside of the backpack. Our schedule has been modified to account for this delay in the item recognition part.

For next week, I hope to do more extensive testing, not only with the scraped images but also with “real” images taken from a user’s smartphone camera. I will take some student item images on my phone to test the model and ask my teammates to take some student item images to test the item recognition module as well. I will also make the system present the top 3 predictions instead of the current top 1. Then, I hope to integrate item recognition with Janet’s web application.

Janet’s Status Report for 4/10

This week, I worked with Aaron to complete the integration of the RPi Zero with our web app, and also continued to work on onboarding and item addition in our web app. Since Joon’s timeline on the image recognition module will likely not allow for integration before the interim demo, I decided to move ahead with manual naming for items in our item registration process. I also spent time researching Bluetooth persistence for our web app, and currently, we may need to implement our mitigation plan of a bare native Android application with our web application included as a WebView. This is troublesome for some features we may want to implement, such as tracking lost items (e.g. “You left your notebook at Study Session”), or may simply add too much to the user burden (if the user has to consistently open the app), so I’ve allotted some time to begin working on this during the coming weeks.

A bare-bones demo of our item registration process currently looks like this, and a demo of the interface update with our BLE tags can be viewed here.

Our progress is currently on schedule.

In the next week, deliverables I hope to complete include integration with Joon’s image recognition module (after the interim demo). I also plan to begin working on the mobile web push notifications for reminders/missing items as well as deletion of items on the web app, and Aaron and I will work together on automatic registration of tag MAC addresses.

Team Status Report for 4/10

Currently, the most significant risk to our project is our ability to accurately determine which items are inside versus near the backpack. To mitigate this risk, we are exploring a new idea of light-based switching of the beacon tags. The concept is that tags inside of the backpack will not be exposed to light (assuming the backpack zipper is closed), whereas a tag next to the backpack would still be exposed to the ambient lighting within the room. Another mitigation we are using is a notification to the user to check that there are no items next to or near the backpack when they are about to leave for an event. By having the user check for such items, the system will not need to distinguish between items close to the backpack and items inside it.

Another risk we have discovered is that using 4 Bluetooth adapters for the scanning process has caused the Raspberry Pi Zero to slow down due to processor overload. To mitigate this risk, we have decided to use only 3 of the 4 adapters for our scanning process, which will lighten the load on the RPi Zero. Additionally, we are increasing the timeout update intervals from 500ms to 1s to further reduce the load. Although this increases the time between when a beacon’s signal is lost and when it is removed from the item list, this tradeoff is necessary to ensure that the RPi Zero can still operate without slowing down.

The only changes we’ve made to the existing design of the system involve using 3 rather than 4 adapters for our scanning process, for reasons discussed above.

Our updated schedule can be found here.

A video demo of our interface update from BLE tags can be viewed here.

Aaron’s Status Report for 4/10

This week I worked on adding support for multiple Bluetooth adapter sensors to the Raspberry Pi Zero’s script. To accomplish this, I had to modify the script to support multi-threading, with each thread registering a Bluetooth scanner on one of the Bluetooth USB adapters. In addition to adding these threads, I had to modify the distance pruning script to record and utilize multiple RSSI readings from each of the Bluetooth adapters, as opposed to using just a single RSSI value. To do so, I created a new Python class for handling items, which stores the RSSI values and automatically adds itself to the item list according to the distance pruning algorithm. The class also handles dropout in case the beacon signal is lost.
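Structurally, the per-adapter threads and the item class fit together roughly like the sketch below (scan_once() is a placeholder for the actual BLE scanning call, the window and threshold values are illustrative, and the real class implements the full pruning and dropout logic):

    import threading
    import time
    from collections import defaultdict, deque

    class TrackedItem:
        # One tag's recent RSSI readings from each adapter, plus a simplified
        # version of the "is it in the backpack?" decision.
        def __init__(self, mac, window=10, threshold=-70):
            self.mac = mac
            self.threshold = threshold
            self.readings = defaultdict(lambda: deque(maxlen=window))  # adapter -> RSSI samples

        def add_reading(self, adapter_id, rssi):
            self.readings[adapter_id].append(rssi)

        def in_backpack(self):
            averages = [sum(r) / len(r) for r in self.readings.values() if r]
            return bool(averages) and max(averages) >= self.threshold

    items = {}                     # MAC address -> TrackedItem
    items_lock = threading.Lock()

    def scan_once(adapter_id):
        # Placeholder: the real script registers a BLE scanner with this USB
        # adapter and yields (mac, rssi) pairs from each scan pass.
        return []

    def scan_loop(adapter_id):
        while True:
            for mac, rssi in scan_once(adapter_id):
                with items_lock:
                    items.setdefault(mac, TrackedItem(mac)).add_reading(adapter_id, rssi)
            time.sleep(1.0)

    # One scanning thread per Bluetooth USB adapter.
    for adapter_id in range(3):
        threading.Thread(target=scan_loop, args=(adapter_id,), daemon=True).start()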

As part of the distance pruning algorithm, we tested a smoothing coefficient to average out the reported RSSI values over time for better accuracy. However, when testing at the borderline distance where the beacons would be considered outside of the backpack (50cm), the smoothing did not appear to prevent the beacon from “stuttering” (jumping in and out of the list of items considered to be inside the backpack).

A new idea we have for helping with accurately identifying which objects are in the backpack is to use a light-level based power switching circuit on each of the beacon tags, so that the beacons will only be active under dark lighting conditions. This would make it so that only the tags inside of the closed backpack would begin advertising, while items outside of the backpack would be exposed to the ambient room lighting and therefore remain dormant. To test the feasibility of such a solution, I soldered wires to the power traces of the beacon tags, and found that the tags automatically began advertising again after receiving power. Thus, it would be possible to control the beacon tags with an additional light-sensing circuit on the outside. However, a new enclosure would have to be produced for the tags, as such a light-sensing circuit would not fit inside the original casings of the tags.

In addition to working on the RPi-Tag interface, I also worked on the RPi-Web app interface. Although we were able to send data to the web app last week, we had not been able to send a full item list. This week, I modified the GATT server to report the item list in string (character-by-character) format, with each item’s MAC address added to the string, separated by semicolons. This also required changing the GATT server’s characteristic type from integer to string. On the web app side, I modified the Javascript script to continuously handle update events from the RPi, as opposed to the one-off data transfer the original script from last week did. This also required modifying the script to request notifications from the RPi GATT server, and modifying the GATT server to periodically report the item list to the web app upon receiving the notifications request.
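The encoding itself is simple; a sketch of what goes into (and comes back out of) the string characteristic is below (the MAC addresses are made up and the helper names are illustrative):

    def encode_item_list(mac_addresses):
        # Join the MAC addresses with semicolons and encode the result as the
        # byte value of the string characteristic.
        return ";".join(mac_addresses).encode("utf-8")

    def decode_item_list(value):
        # The inverse operation, which the web app's Javascript effectively performs.
        return [mac for mac in bytes(value).decode("utf-8").split(";") if mac]

    payload = encode_item_list(["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"])
    assert decode_item_list(payload) == ["AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"]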

For next week I plan on exploring the light-sensing switch circuit concept further, by ordering photoresistors and transistors and testing how well the tags can be switched on and off. I also plan on mounting the RPi system inside a backpack, and testing whether the system can withstand the physical movement associated with normal use of a backpack.

 

Joon’s Status Report for 4/3

This week, we met with Professor Kim and Ryan during the regular lab meetings. As usual, we again discussed our current progress on the project and goals for the interim demo.

For the item recognition part, I completed the implementation of the CNN model using Python and PyTorch. First, training is done using the 1000 images per item that were obtained by web scraping and by applying an image processing algorithm to single images to augment the image dataset. The training is done on my machine so that I can fix any bugs and errors in my CNN model implementation. For the implementation and training, much guidance came from this blog post.
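A minimal sketch of the corresponding PyTorch data pipeline is shown below (the folder layout, image size, and choice of augmentation transforms are assumptions for illustration; the CNN model definition itself is omitted):

    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    # Augment each scraped image on the fly to effectively enlarge the dataset.
    train_transforms = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(15),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    # One folder per student item class (21 classes, ~1000 images each).
    train_set = datasets.ImageFolder("data/train", transform=train_transforms)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)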

Additionally, I worked on testing the CNN model using a few sample images from the test dataset. I found that the CNN model does a good job identifying the student items. I tested it with 21 images, one image (not from the training dataset) for each of the 21 student item categories, and it showed 100% accuracy. However, I hope to test extensively with a much larger test dataset.

My progress is on schedule according to the Gantt chart. I have also made changes to the schedule to account for the time spent training and finalizing the CNN model implementation, and have delayed the testing of the CNN model. This testing should be done in conjunction with the integration with the web application prior to the interim demo.

Next week, I plan on extensively testing the CNN model. I will also work with Janet to integrate this feature into the web application. As a group, we will be working to fully prepare for the interim demo.