Janet’s Status Report for 4/24

This week, I completed the removal functionality for items and events, as well as web push notifications that are sent once an event’s notification deadline arrives. I used the django-webpush package to send web push notifications to the user and had to register a service worker to enable this functionality. Additionally, I added a “notified” attribute to the Event model so that our checkNotifs function doesn’t send notifications for the same event more than once, and I created a Notification model to use for debugging and logging. This feature can be viewed here, and the logic for the checkNotifs function can be seen below:
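A minimal sketch of that logic, with our Django Event model simplified to a dataclass and the django-webpush call stubbed out as a `send` callback (names here are illustrative, not our exact code):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    name: str
    notify_at: datetime    # the event's notification deadline
    notified: bool = False # prevents duplicate notifications

def check_notifs(events, now, send):
    """Send a push for each event whose deadline has arrived, at most once."""
    for event in events:
        if not event.notified and event.notify_at <= now:
            send(f"Reminder: {event.name} is coming up!")
            event.notified = True  # mark so later runs skip this event
```

In the real app this runs against Event model queries and calls django-webpush’s send function; the “notified” flag plays the same role as shown here.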

I also worked with Aaron to test the delay between an action (adding or removing an item to/from the backpack) and the RPi registering the item, as well as the delay between the RPi registering the item and the web app interface updating. We found that the time between the action and the RPi fluctuates quite a bit, but once the RPi receives an item, the interface update is typically under a second.

We have had to push back integrating Joon’s image recognition module with the web app until Joon finishes migrating the model to AWS, so this aspect of our progress is behind schedule. We are also a little behind schedule on the schedule learning feature as well as Bluetooth persistence, though we’ve planned the bulk of these tasks for the upcoming week rather than this previous week. To catch up to our project schedule, we’ll likely focus solely on these two tasks for the upcoming week. Thankfully, we’ve also planned for a little slack time in our final planned week of work (week of 4/25), so we can use that time to wrap up any leftover tasks. At this point, all my individual tasks for our project have been completed, and the remaining tasks all involve some form of integration or collaboration to complete.

In the next week, I hope to complete usability testing of our web app, the schedule learning algorithm, and Bluetooth persistence with Aaron. I also need to integrate Joon’s image recognition module into our web app’s registration process.

Aaron’s Status Report for 4/24

This week I worked on improving the distance pruning algorithm, physically mounting the system to the backpack, and testing the system.

After discussing with Professor Sullivan, I decided to test an even longer averaging period of 10 samples to see if that would improve the stability of our system. While it did result in less “wobble” (where the system is uncertain about whether a tag is in the backpack and constantly switches it into and out of the item list), it increased how long it takes for the item list to be updated after a tag is removed to about 5 seconds. This means that we likely won’t meet our original requirement of a 1-second tag update time; however, we feel this is a worthwhile tradeoff for the improved stability.
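The idea behind the longer averaging period can be sketched as follows (the window size matches what I tested; the RSSI threshold is illustrative, not our calibrated value):

```python
from collections import deque

class TagAverager:
    """Smooths a tag's RSSI readings over a fixed window before
    deciding whether the tag is inside the backpack."""
    def __init__(self, window=10, threshold=-70):
        self.readings = deque(maxlen=window)  # keeps only the last N samples
        self.threshold = threshold            # RSSI cutoff in dBm (illustrative)

    def update(self, rssi):
        self.readings.append(rssi)
        avg = sum(self.readings) / len(self.readings)
        return avg >= self.threshold  # True -> considered inside the backpack
```

Because a single outlier only shifts the 10-sample average slightly, a tag near the threshold no longer flips in and out of the list on every reading, at the cost of roughly a full window’s worth of delay when a tag is actually removed.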

In addition to testing a longer averaging period, I also tried using a Kalman filter, based on Wouter Bulten’s webpage at https://www.wouterbulten.nl/blog/tech/kalman-filters-explained-removing-noise-from-rssi-signals/. I tested different values for the process noise and gain; however, I found that the Kalman filter was not as stable as direct averaging, so I decided to stick with averaging instead.
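For reference, the 1-D Kalman filter from that post reduces to a few lines when the signal is modeled as roughly constant; the noise values below are illustrative, not the ones I tuned:

```python
class KalmanRSSI:
    """1-D Kalman filter for a roughly constant RSSI signal, following
    the simplification in Wouter Bulten's post. R (measurement noise)
    and Q (process noise) values here are illustrative."""
    def __init__(self, R=4.0, Q=0.01):
        self.R, self.Q = R, Q
        self.x = None  # filtered RSSI estimate
        self.p = 1.0   # estimate covariance

    def filter(self, z):
        if self.x is None:  # initialize on the first measurement
            self.x = z
            return self.x
        self.p += self.Q                  # predict: covariance grows by Q
        k = self.p / (self.p + self.R)    # Kalman gain
        self.x += k * (z - self.x)        # pull estimate toward measurement
        self.p *= (1 - k)                 # shrink covariance after update
        return self.x
```

The process noise Q and measurement noise R together set how aggressively the gain k tracks new readings, which is effectively what I was tuning.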

For the mounting, I used the backpack’s existing pockets to hold the battery and the RPi Zero, and sewed the Bluetooth adapters to different points in the backpack spaced 20cm apart. I then carried the device on my backpack and walked around the neighborhood to test if the system could withstand movement, which it did. I also tested shaking the backpack to see if any of the tags inside would be undetected, but the system still detected all 10 tags inside of the backpack.

In addition to testing the system physically, I also ran battery tests throughout the week. I ran the system at its maximum power draw level by having it track 10 tags while also reporting the tag list with the Bluetooth GATT server. Over 5 tests, the battery lasted for an average of 19.5 hours, with no test going under 18 hours. Thus, the battery life meets our requirement of 18 hours of continuous use.

 

Joon’s Status Report for 4/10

This week, for the item recognition part, I worked on finalizing the CNN model and improving the classification accuracy by testing with many more images than the 21 I used last week. However, after updating the algorithm, I found that while the current accuracy is fine for the interim demo, it is still lower than the accuracy requirement we set in the Design Proposal/Presentation. My current model presents only the top-1 prediction for a given student item image; having it present the top 3 predictions should help increase accuracy. This weekend, before the interim demo, I will devote most of my time to improving the image recognition model and algorithm.
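The top-3 idea is just a top-k selection over the model’s output scores; in the real pipeline this would be torch.topk on the CNN’s softmax output, but the selection itself can be sketched in plain Python (the class names and scores below are made up):

```python
import heapq

def top_k_predictions(scores, k=3):
    """Return the k highest-scoring (label, score) pairs.
    In the real pipeline, `scores` would come from the CNN's softmax output."""
    return heapq.nlargest(k, scores.items(), key=lambda item: item[1])
```

Accepting any of the top 3 labels as correct can only raise measured accuracy relative to top-1, since the top-1 label is always among them.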

My progress on the item recognition part is slightly behind because I was trying to get the item recognition algorithm above our accuracy requirement, so I was unable to integrate it into Janet’s web application. Thus, for the demo, we will demonstrate the item recognition functionality separately from the web application, which tracks the tagged items inside/outside of the backpack. Our schedule has been modified to account for this delay in the item recognition part.

For next week, I hope to do more extensive testing, not only with the scraped images but also with “real” images taken from a user’s smartphone camera. I will take some student item images on my phone to test the model and ask my teammates to take some as well. I will also make the system present the top 3 predictions instead of the current top 1. Then, I hope to integrate item recognition with Janet’s web application.

Janet’s Status Report for 4/10

This week, I worked with Aaron to complete the integration of the RPi Zero with our web app and continued working on onboarding and item addition in our web app. Since Joon’s timeline for the image recognition module will likely not allow for integration before the interim demo, I have decided to move ahead with manual naming of items in our item registration process. I also spent time researching Bluetooth persistence for our web app; currently, it looks like we may need to implement our mitigation plan of a bare native Android application with our web application included as a WebView. This complicates some features we may want to implement, such as tracking lost items (e.g. “You left your notebook at Study Session”), or simply adds too much user burden (if the user has to consistently open the app), so I’ve allotted some time to begin working on this in the coming weeks.

A bare-bones demo of our item registration process currently looks as so, and a demo of the interface update with our BLE tags can be viewed here.

Our progress is currently on schedule.

In the next week, deliverables I hope to complete include integration with Joon’s image recognition module (after the interim demo). I also plan to begin working on the mobile web push notifications for reminders/missing items as well as deletion of items on the web app, and Aaron and I will work together on automatic registration of tag MAC addresses.

Team Status Report for 4/10

Currently, the most significant risk to our project is our ability to accurately determine which items are inside versus near the backpack. To mitigate this risk, we are exploring a new idea of light-based switching of the beacon tags. The concept is that tags inside of the backpack will not be exposed to light (assuming the backpack zipper is closed), whereas a tag next to the backpack would still be exposed to the ambient lighting within the room. Another mitigation we are using is a notification to the user to check that there are no items next to or near the backpack when they are about to leave for an event. By having the user check for such items, the system will not need to be able to distinguish between close to versus inside the backpack.

Another risk we have discovered is that using 4 Bluetooth adapters for the scanning process has caused the Raspberry Pi Zero to slow down due to processor overload. To mitigate this risk, we have decided to use only 3 of the 4 adapters for our scanning process, which will lighten the burden on the RPi Zero. Additionally, we are increasing the timeout update intervals from 500ms to 1s to further reduce the load. Although this increases the time between when a beacon’s signal is lost and when it is removed from the item list, this tradeoff is necessary to ensure that the RPi Zero can still operate without slowing down.

The only changes we’ve made to the existing design of the system involve using 3 rather than 4 adapters for our scanning process, for reasons discussed above.

Our updated schedule can be found here.

A video demo of our interface update from BLE tags can be viewed here.

Aaron’s Status Report for 4/10

This week I worked on adding support for multiple Bluetooth adapter sensors in the Raspberry Pi Zero’s script. To accomplish this, I had to modify the script to support multi-threading, with each thread registering a Bluetooth scanner with one of the Bluetooth USB adapters. In addition to adding these threads, I had to modify the distance pruning script to record and utilize multiple RSSI readings, one from each of the Bluetooth adapters, as opposed to using just a single RSSI value. To do so, I created a new Python class for handling items, which stores the RSSI values and automatically adds itself to the item list according to the distance pruning algorithm. The class also handles dropout, in case the beacon signal is lost.
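A simplified sketch of that per-item class is below; each scanner thread would call `report()` for its adapter, and the pruning logic reads `in_backpack()`. The threshold and timeout values are illustrative, not the ones in our actual script:

```python
import time

class TrackedItem:
    """Stores the latest RSSI reading from each Bluetooth adapter for one
    tag and decides membership in the item list. Threshold and timeout
    values are illustrative."""
    def __init__(self, mac, threshold=-70, timeout=2.0):
        self.mac = mac
        self.readings = {}        # adapter index -> (rssi, timestamp)
        self.threshold = threshold
        self.timeout = timeout    # seconds before a reading counts as dropout

    def report(self, adapter, rssi, now=None):
        """Called from the scanner thread that owns `adapter`."""
        self.readings[adapter] = (rssi, now if now is not None else time.time())

    def in_backpack(self, now=None):
        now = now if now is not None else time.time()
        # Drop stale readings from adapters that lost the beacon (dropout).
        fresh = [r for r, t in self.readings.values() if now - t < self.timeout]
        if not fresh:
            return False
        return sum(fresh) / len(fresh) >= self.threshold
```

In the real script each adapter thread writes concurrently, so the readings dictionary would need appropriate locking; that is omitted here for brevity.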

As part of the distance pruning algorithm, we tested a smoothing coefficient to average out the reported RSSI values over time for better accuracy. However, from testing on the borderline distance where the beacons would be considered outside of the backpack (50cm), the smoothing did not appear to help prevent the beacon from “stuttering” (jumping in and out of the list of items considered to be inside the backpack).
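The smoothing we tested was an exponentially weighted average of successive RSSI readings; a one-line sketch (the coefficient value is illustrative):

```python
def smooth(prev, new, alpha=0.75):
    """Exponentially weighted smoothing of RSSI readings.
    alpha is the smoothing coefficient; 0.75 is illustrative,
    not the value we tested."""
    return alpha * prev + (1 - alpha) * new
```

With a tag sitting right at the 50cm borderline, the smoothed value still hovers around the threshold, which is why this alone did not stop the stuttering.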

A new idea we have for helping with accurately identifying which objects are in the backpack is to use a light-level based power switching circuit on each of the beacon tags, so that the beacons will only be active under dark lighting conditions. This would make it so that only the tags inside of the closed backpack would begin advertising, while items outside of the backpack would be exposed to the ambient room lighting and therefore remain dormant. To test the feasibility of such a solution, I soldered wires to the power traces of the beacon tags, and found that the tags automatically began advertising again after receiving power. Thus, it would be possible to control the beacon tags with an additional light-sensing circuit on the outside. However, a new enclosure would have to be produced for the tags, as such a light-sensing circuit would not fit inside the original casings of the tags.

In addition to working on the RPi-tag interface, I also worked on the RPi-web app interface. Although we were able to send data to the web app last week, we had not been able to send a full item list. This week, I modified the GATT server to report the item list in string (character-by-character) format, with each item’s MAC address added to the string, separated by a semicolon. This also required changing the GATT server’s characteristic type from integer to string. On the web app side, I modified the JavaScript code to continuously handle update events from the RPi, as opposed to the one-off data transfer of last week’s script. This also required modifying the script to request notifications from the RPi GATT server, and modifying the GATT server to periodically report the item list to the web app upon receiving the notifications request.
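The serialization described above is straightforward; a sketch of both directions (the decoding side is JavaScript in our app, shown here in Python for brevity):

```python
def encode_item_list(macs):
    """Serialize the item list for the GATT string characteristic:
    MAC addresses joined by semicolons, as described above."""
    return ";".join(macs)

def decode_item_list(payload):
    """Inverse operation on the receiving side (ours is JavaScript;
    the logic is the same: split on semicolons)."""
    return payload.split(";") if payload else []
```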

For next week I plan on exploring the light-sensing switch circuit concept further, by ordering photoresistors and transistors and testing how well the tags can be switched on and off. I also plan on mounting the RPi system inside a backpack, and testing whether the system can withstand the physical movement associated with normal use of a backpack.

 

Joon’s Status Report for 4/3

This week, we met with Professor Kim and Ryan during the regular lab meetings. As usual, we again discussed our current progress on the project and goals for the interim demo.

For the item recognition part, I completed the implementation of the CNN model using Python and PyTorch. First, the model is trained on the 1000 images per item that were obtained from web scraping, with an image processing algorithm applied to each single image to augment the image dataset. The training is done on my machine so that I can fix any bugs and errors in my CNN model implementation. For the implementation and training, much guidance was received from this blog post.

Additionally, I worked on testing the CNN model using a few sample images from the test dataset. I have found that the CNN model does a good job identifying the student items. I tested it with 21 images, one image (not from the training dataset) for each of the 21 student item classes, and it showed 100% accuracy. However, I hope to test extensively with a much larger test dataset.

My progress is on schedule according to the Gantt chart. I have also made changes to the schedule to account for the time spent training and finalizing the CNN model implementation, and have delayed the testing of the CNN model. This testing should be done in conjunction with the integration with the web application prior to the interim demo.

Next week, I plan on extensively testing the CNN model. I will also work with Janet to integrate this feature into the web application, and as a group we will prepare fully for the interim demo.

Team Status Report for 4/3

Currently, the most significant risk that could jeopardize the success of the project is the integration between the web application that visualizes the list of user items inside the backpack and the Bluetooth reader that determines whether a tagged item is inside the backpack. Because the interim demo is coming in two weeks, we need to demonstrate that this functionality is fully working to ensure our success for the project. To manage this risk, we are doing extensive research on the Web Bluetooth API and on communicating with Bluetooth devices so that we can quickly resolve problems that arise during integration, drawing both on solutions published online by other users and on alternative implementations that are available, such as having an additional Bluetooth reader. In preparation for integration, each of us has been testing each component incrementally and individually. For the web application, Janet has been testing with dummy items and information so that communication with the Bluetooth scanner is not needed. For the Bluetooth scanner part, Aaron has been testing the scanner independently to check the robustness of tagged-item detection. For the item recognition part, Joon has been testing the CNN model on his machine so that, once testing is complete, it can be integrated with the item registration part of the web application.

One change has been made to the existing design of the system. As stated in Aaron’s Status Report for 4/3, we have found that the onboard Bluetooth controller is disabled if the OS detects a separate USB adapter. Therefore, we have decided not to use the built-in adapter for the distance pruning of the tagged items. Instead, we will use an array of 4 external USB Bluetooth adapters to determine the tag location. While this will require more of our budget and more power consumption, we believe it will enhance the accuracy of our distance pruning algorithm and is worth the tradeoff.

The schedule for the item recognition part has been changed to account for the training of the CNN model, and the testing plan for the CNN model has been pushed back (but still before the interim demo) to account for the integration with the web application and the interim demo. Additionally, we have added more time for Aaron to incorporate the additional Bluetooth adapters, and have pushed back implementing the power-saving sleep feature. The updated schedule can be found here.

Aaron’s Status Report for 4/3

This week I worked on interfacing the Raspberry Pi with the web app over Bluetooth. To accomplish this, I configured the Raspberry Pi to run a Bluetooth GATT server to which the web app can connect via Bluetooth pairing. After connecting, the GATT service on the Raspberry Pi presents the list of item IDs in string format for the web app to read.

The code for the GATT server is written in Python, and it uses the BlueZ package’s BLE advertisement example code to run. I used the CPU temperature reporting example written by Douglas Otwell as inspiration for the code. The GATT server works by presenting a reporting “service,” which the web app can connect to via the service’s UUID. The service can then send data to the web app by exposing “characteristics,” which can also be read via the characteristic UUID. We have tested this functionality and successfully transferred data between the Raspberry Pi and the web app.

In addition to working on the app-RPi Bluetooth interface, I also continued to work on improving the RPi-tag interface. With the arrival of a dedicated USB Bluetooth dongle, I began additional testing to determine its sensitivity and to compare it with the built-in Bluetooth antenna. Although the reported RSSI values were lower, the USB Bluetooth adapter was still able to detect the iBeacon tags within the necessary range for our purposes.

One issue I encountered with the USB Bluetooth adapter is that the onboard Bluetooth controller is disabled if the OS detects a separate USB adapter. Despite various attempts at changing this behavior, including modifying configuration files and disabling certain services, I was unable to stop the built-in adapter from disabling itself. As a result, we have decided not to use the built-in adapter to determine tag distance. Instead, we are opting to use a total of 4 external USB Bluetooth adapters spaced apart in the backpack to determine the tag location. Having 4 RSSI readings will help improve the accuracy of the distance pruning algorithm, and make it less likely that an item outside of the backpack will be detected as being inside.

For next week I plan on finalizing the RSSI calibration values for the new USB Bluetooth adapter, as the previous values I had for the built-in adapter no longer apply to our system. I also plan on adding time-out functionality so that if a tag is not detected for long enough, it is removed from the item list, even if its distance is unknown.
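For context on what those calibration values feed into: RSSI-to-distance estimates typically use the log-distance path-loss model, where the calibrated quantities are the measured power at 1 m and the environmental attenuation factor. The numbers below are illustrative defaults, not our calibrated values:

```python
def rssi_to_distance(rssi, measured_power=-59, n=2.0):
    """Estimate distance in meters from an RSSI reading via the
    log-distance path-loss model: rssi = measured_power - 10*n*log10(d).
    measured_power is the calibrated RSSI at 1 m and n is the
    environmental attenuation factor (both illustrative here)."""
    return 10 ** ((measured_power - rssi) / (10 * n))
```

Since each adapter reports systematically different RSSI values, measured_power (and possibly n) has to be re-measured per adapter, which is the recalibration work planned above.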

Janet’s Status Report for 4/3

This week, I completed the scheduling feature for our web application. Events can now be manually created via a Django Form, stored as an Event model, and displayed in a calendar view. Events can also be edited retrospectively. Additionally, the home page contains a view of upcoming events from these models. Much guidance was received from this blog post. A short demo of the scheduling feature on our app can be viewed here (video is too long to be directly included in post). Additionally, the login and registration functionality is complete (barring the integration with our item recognition module).

Additionally, I began development on the onboarding feature of the web app (i.e. when a new user first registers, they must tag all their items and register these tags). However, since this step cannot be completed until Joon completes the item recognition module, dummy items and information are currently being used. I also began development on the integration of the RPi Zero with the web app using the Web Bluetooth API.

Progress is currently exactly on schedule, as written in our Gantt Chart.

In the next week, per our Gantt Chart, deliverables I hope to complete include the final integration between the RPi Zero and our web app, as well as the integration between item recognition and onboarding on our web app. Additionally, we will all be preparing for the interim demo on the week of 4/11.