Joon’s Status Report for 3/27

This week, we met with Professor Kim and Ryan during the regular lab meetings. We discussed our goals for the interim demo and our current progress on the project. We also briefly talked about the mid-semester grades and the feedback that accompanied them.

For the item recognition part, in addition to the 250 images per item collected earlier for the training dataset, I collected another 150 images per item, all distinct from the training images, to serve as a test set. The IDT library was useful for this purpose as well, because it supports separating training data from test data, which prevents the model from training on test images and potentially skewing the test results. I also augmented the training images by applying transformations, reflections, and distortions that are label-preserving and feasible in the real world, generating many images from a single one. Finally, I wrote Python and PyTorch code to build the CNN model.

My progress is slightly behind because training the CNN model took longer than I expected, due to the large number of images. However, I am confident that I can integrate the item recognition component into the web application that Janet is working on so that our group can demonstrate the feature at the interim demo.

Next week, I plan to fully train the CNN model on the dataset and evaluate it on the test dataset I collected this week. I will also look into integrating the CNN model with the web application built on the Django framework and deploying the machine learning models to the AWS server.

Team Status Report for 3/27

Currently, the most significant risk that could jeopardize the success of the project is losing native functionality we had planned for the phone app. Originally, we planned to build our user-facing app interface as a native Android app in Android Studio with Kotlin. However, after running into many difficulties with collaboration and build errors, we agreed to switch to developing a web application with the same planned functionality. The largest risk is losing persistent functionality, such as receiving notifications or receiving Bluetooth data while the web app is not active. To manage these risks, we are doing extensive research on the Web Bluetooth API, which allows us to communicate over Bluetooth using JavaScript. Additionally, Android mobile devices do support mobile web push notifications. Should our team still run into issues, we plan to create a bare native Android application (Android Studio + Kotlin) that handles any persistent native functionality, such as notifications and Bluetooth communication, and embed our web application in it as a WebView so that users can still interact with the interface we've built.

Changes were made to the existing design of the system, as discussed above: we've changed our user-facing application from a native Android app to a web application. This change was necessary due to persistent build (Gradle) errors and collaboration issues in the Android Studio IDE. The cost of this change is a possible loss of native functionality (as discussed above), which we will mitigate by handling bare native functions like notifications and Bluetooth either through the Web Bluetooth API plus mobile web push notifications, or through a bare Kotlin app that handles these functions but displays our web application interface in a WebView.

No changes have been made to our schedule, but it can be viewed here.

 

Janet’s Status Report for 3/27

This week, I worked on setting up our web application in Django, building layouts for the pages, creating database models for our user Profiles, Items, and Events, and setting up the infrastructure for connecting to the RPi Zero. All of the routing and navigation is completed, and the two main areas of focus left on the web app are scheduling and the connection with the RPi Zero via the Web Bluetooth API. Our new repository is now set up here.

We are currently using native Django models for our web application to meet our MVP goal for the interim demo, but plan to use databases on AWS before the final demo for scalability.

Since the last status update, our team has agreed on developing a web app rather than a native Android app, for the advantages stated before:

  1. Ease of development between the three team members
  2. Product no longer excludes users without Android phones
  3. Accessibility via other platforms, such as laptops and tablets

Since our current focus is preparing the MVP of our project for the interim demo (week of 4/11), we wanted the route that makes the best use of our development time. Additionally, any concern about loss of functionality in a web app vs. a native app can be mitigated. Our current plan is to build the web app with full functionality; however, if we run into issues with persistence or Bluetooth tasks, one mitigation we've planned is to create a bare native Android app (in Android Studio with Kotlin, as previously planned) to handle native functionality while presenting our web app's interface through a WebView, which allows web applications to be delivered as part of a client application.

While developing, I also ran into the question of how our project would work in actual use cases, specifically with the RPi Zero. Currently, multiple users can create accounts on the web application, which requires us to assume that each user has their own backpack device (i.e., their own RPi Zero). However, our team is testing with only one RPi Zero (and this will be reflected in the demos as well). I decided to include the UUID of an RPi Zero as a required attribute of each user Profile on our web application, as this would be required if Backpack Buddy were a commercial product.
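Framework-agnostically, the relationship between Profiles, devices, and the other models might be sketched like this (field names are hypothetical; the real app defines these as Django models):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Profile:
    username: str
    rpi_uuid: uuid.UUID  # required: ties each user to exactly one RPi Zero

@dataclass
class Item:
    name: str
    owner: Profile

@dataclass
class Event:
    title: str
    items: list = field(default_factory=list)  # many-to-many with Item

# A user, their tagged item, and an event that requires it.
alice = Profile("alice", uuid.uuid4())
laptop = Item("Laptop", alice)
lecture = Event("Lecture", items=[laptop])
```

Making the device UUID a required Profile field keeps the one-user-one-backpack assumption explicit in the schema.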

My progress is currently on schedule.

In the next week, I hope to complete the interface for schedule creation and implement the onboarding (registration) process, in preparation for integration with Joon's component (item recognition). I'll also work with Aaron on integrating the RPi Zero with the web app's interface.

Aaron’s Status Report for 3/27

This week I worked on setting up and configuring the H1 iBeacon tags, connecting to them using the Bluez stack, and doing testing to determine the calibration values needed for our distance pruning algorithm. The manufacturer of the beacons, Mokosmart, provided a smartphone app to connect to the iBeacons and configure their transmission power and interval.

Although our initial plan was to set the transmission power to the lowest setting available, that setting made the Bluetooth signal too weak to be detected by the Raspberry Pi. Additionally, the lower transmission power levels had higher variance in the RSSI values reported at 50 cm. After testing different power levels, we found that level 4 (-12 dBm) was a suitable balance between power consumption and detection reliability.

For testing, I used the aioblescan Python package created by François Wautier. We initially tried the bluepy package created by Ian Harvey, but discovered that it could not achieve the 1-second update interval required for our project. We switched to aioblescan because it is specifically designed for reading BLE advertisement packets and therefore has a sufficiently fast update interval for our needs. Using the package, I wrote a simple script to read 1,000 advertisement packets and calculate the mean and standard deviation of the reported RSSI values. I then placed a beacon exactly 50 cm from the Raspberry Pi and ran the script. Below is a sample screenshot of the testing loop results.
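The statistics step of that script is straightforward; here is a minimal sketch, with hypothetical readings standing in for the RSSI values that aioblescan's advertisement callback reports:

```python
from statistics import mean, stdev

def summarize_rssi(samples):
    """Return the mean and standard deviation of RSSI readings (dBm)."""
    return mean(samples), stdev(samples)

# Made-up readings for a beacon at 50 cm; the real script collects 1,000
# of these from BLE advertisement packets before summarizing.
readings = [-55, -57, -54, -56, -58, -55, -56]
avg, spread = summarize_rssi(readings)
```

The mean feeds the distance calibration, while the standard deviation tells us how noisy a given transmission power level is.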

I also tested 5 different beacons to measure the variation across beacons. All of the mean RSSI values were within +/- 3 dBm of each other, indicating that the beacons' transmission powers were within tolerance of one another.

To increase the robustness of our system, we decided to order an additional Bluetooth receiver for the Raspberry Pi. This would allow us to obtain two distinct RSSI values, which would further increase the accuracy of our distance pruning algorithm. By placing the Raspberry Pi and additional receiver at opposite ends of the backpack, we can reduce the effective volume where a tag could be and still be within 50cm of both receivers. This helps mitigate the issue of items near the backpack being mistaken for being inside the backpack.
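To get a feel for how much the second receiver helps, one can estimate by Monte Carlo the fraction of a backpack-sized box that lies within 50 cm of every receiver (the box dimensions and receiver positions below are assumptions for illustration):

```python
import random

def inside_fraction(receivers, radius=0.5, box=(0.8, 0.4, 0.5),
                    n=50_000, seed=1):
    """Fraction of uniformly random points inside a box (metres) that lie
    within `radius` of every receiver position."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        p = tuple(rng.uniform(0, side) for side in box)
        if all(sum((a - b) ** 2 for a, b in zip(p, r)) <= radius ** 2
               for r in receivers):
            hits += 1
    return hits / n

# One receiver at the bottom centre vs. two at opposite ends of the bag.
one = inside_fraction([(0.4, 0.2, 0.0)])
two = inside_fraction([(0.0, 0.2, 0.25), (0.8, 0.2, 0.25)])
```

With two receivers, a tag must sit in the intersection of two 50 cm spheres, so the accepted volume shrinks and fewer nearby-but-outside items are misclassified as packed.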

For next week, I plan to test the system's ability to work in a backpack by attaching the tags to items and placing the Raspberry Pi in an actual backpack. I will also work on an improved algorithm that uses two RSSI values instead of one, in preparation for the arrival of the additional Bluetooth receiver.

Joon’s Status Report for 3/13

This week, I worked with my teammates on the Design Report. Aside from the regular lab meetings, where we spent the time listening to the section's design review presentations, we held our usual meetings outside of lab to discuss our current progress on the project and the design report. Based on the feedback received from the faculty, TAs, and peers in Section C, we look forward to improving our project.

For the item recognition part, I collected the data (a collection of images) for the items that students carry in their backpacks. For 21 items, including textbooks, notebooks, and pencil cases, I collected 250 images per item. I initially planned to write a Python scraper to pull images from Google. Scraping Google Images requires ChromeDriver, a web driver for Google Chrome, but I wasn't able to download it. However, I found a library called IDT, an Image Dataset Tool, which enabled me to scrape 250 images for each student item. I decided to use the IDT library for a few main reasons. (For more information about the library, refer to https://pypi.org/project/idt/, https://github.com/deliton/idt )

  • It is intended for building image datasets for machine learning, which suits my task of using a CNN for image classification.
  • It can optimize the image dataset by downscaling and compressing the images for optimal file size.
  • It allows multiple searches with different keywords. For example, when looking for images of a reusable water bottle, a user might search “water bottle”, “reusable water bottle”, or “plastic bottle” and expect the same kind of results. IDT lets the user supply multiple keywords so the collected images cover more variation and better train the machine learning model.

Below is the collection of “reusable water bottle” images gathered by the IDT library and saved to my personal directory in my workspace.

 

My progress is currently on schedule. I have made changes to the schedule for the item recognition part, because I was able to refine it after the research and design process.

Next week, I plan to augment the collected images and finalize the dataset for training the CNN image classifier. I will also develop the CNN for the image classification/recognition task.

Janet’s Status Report for 3/13

This week, I gave the Design Review presentation and also worked with my team on our Design Report. I also created an entity-relationship diagram for the database of our phone app.

In the diagram, we have an intersection table, EventItems, that keeps track of two foreign keys: EventID and ItemID. Each event has a subset of all the tagged items in our database, and intersection tables are useful for modeling many-to-many relationships like this one. I also created the basic layouts and navigation logic for our phone app in Kotlin this week. However, I've been running into constant issues with Git integration in Android Studio, as well as common build errors, and these issues may present a significant risk to the project (discussed further in the team status report). I'll be preparing to create a mobile-accessible web app version of our phone app, should we run into large integration issues in the upcoming weeks. Three reasons why switching to a web app may be advantageous for us include:

  1. Ease of development between the three team members
  2. Product no longer excludes users without Android phones
  3. Accessibility via other platforms, such as laptops and tablets
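The EventItems intersection table from the entity-relationship diagram above can be sketched directly in SQL (table layout follows the diagram; the item and event names are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Events(EventID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Items(ItemID INTEGER PRIMARY KEY, name TEXT);
-- Intersection table: one row per (event, item) pairing,
-- modeling the many-to-many relationship.
CREATE TABLE EventItems(
    EventID INTEGER REFERENCES Events(EventID),
    ItemID  INTEGER REFERENCES Items(ItemID),
    PRIMARY KEY (EventID, ItemID)
);
""")
cur.executemany("INSERT INTO Events VALUES (?, ?)",
                [(1, "Lecture"), (2, "Lab")])
cur.executemany("INSERT INTO Items VALUES (?, ?)",
                [(1, "Laptop"), (2, "Umbrella")])
cur.executemany("INSERT INTO EventItems VALUES (?, ?)",
                [(1, 1), (2, 1), (2, 2)])  # laptop needed for both events

# All items needed for the Lab event (EventID = 2).
items_for_lab = [r[0] for r in cur.execute(
    "SELECT Items.name FROM Items JOIN EventItems USING (ItemID) "
    "WHERE EventID = 2 ORDER BY Items.name")]
```

The composite primary key on (EventID, ItemID) prevents the same item from being attached to an event twice.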

Additionally, we could use the Web Bluetooth API to communicate with other Bluetooth devices from JavaScript. I did some research and found that this API fits our needs well, as it allows you to request and connect to nearby BLE devices, read/write Bluetooth characteristics and descriptors, and receive GATT notifications and device-disconnection events.

Currently, our progress is on schedule, but I'll be running some basic tests to see how accessible a web app would be on phones for our project. I'll also be working with my team members mostly on the design report due this upcoming week.

In the next week, I hope to have completed the design report with my team members and have done some research on using a web app instead of a phone app for our needs. Luckily, there’s quite a bit of slack time built into our schedule for the phone app before further development is needed (after integration with the BLE tags + RPi), so I’ll be able to use this time to set up risk mitigations.

Team Status Report for 3/13

One of the current risks to this project is the ability to accurately determine the distance of the BLE tags from the Raspberry Pi. Different tags have different power emission levels, and the scanner on the Raspberry Pi may also be weaker than the scanners in the iPhones for which the iBeacon protocol was designed. To mitigate this risk, we are creating a calibration protocol for the tags, whereby the user places a tag at a fixed distance from the Raspberry Pi to let the system correlate signal power level with distance.
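Under a log-distance path-loss model, the calibration step amounts to solving for the 1 m reference RSSI from a reading taken at the known distance; a sketch of the idea (the exponent and sample reading are assumed values, not calibrated ones):

```python
import math

PATH_LOSS_EXPONENT = 2.0  # free-space assumption; indoors it varies

def calibrate(rssi_at_known, known_distance_m, n=PATH_LOSS_EXPONENT):
    """Infer the 1 m reference RSSI from a reading at a known distance,
    using RSSI(d) = ref_1m - 10 * n * log10(d)."""
    return rssi_at_known + 10 * n * math.log10(known_distance_m)

def estimate_distance(rssi, ref_1m, n=PATH_LOSS_EXPONENT):
    """Invert the model to turn an RSSI reading into metres."""
    return 10 ** ((ref_1m - rssi) / (10 * n))

ref = calibrate(-53, 0.5)        # hypothetical -53 dBm measured at 50 cm
d = estimate_distance(-53, ref)  # round-trips back to the known distance
```

Calibrating per tag absorbs the per-beacon differences in emission power into the reference value.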

Another significant risk to this project is the difficulty of Android Studio, particularly its Git integration and common build errors. This may make integrating the modules of our project much more difficult. One way we are managing this risk is by consistently testing version-control changes between members in Android Studio. Another possible mitigation is defaulting to a mobile-accessible web app, which may prove more useful to our project and its users.

No major changes have currently been made to the existing design of the system.

Our current schedule can be found here. Joon has made changes to his part of the schedule for item recognition, reflecting the more refined series of tasks developed during the design presentation and design report; our team members are currently focusing mostly on the upcoming design report.

Aaron’s Status Report for 3/13

This week I worked on the design review presentation and design report. I completed both an overall system diagram and a software-specific block diagram for our project.

I also began creating a bill of materials from our parts list. This budget list includes everything we currently need for the project. We also included a Tile tracker in order to compare our system against existing products.

Following feedback given after the design review presentation, I have also begun rethinking our distance-based gating requirement. Rather than having a fixed algorithm, I am working on a calibration protocol which would allow the system to more accurately determine tag distance by having the user place the tags at a specific distance from the Raspberry Pi and measuring the signal strength.

For next week I plan on getting the parts ordered and finishing the design report. I plan on adding variable names to the software specification diagram as well, to clarify what data is being sent between the different components of the phone app.

Janet’s Status Report for 3/6

During this week, I began basic development of our phone app and started following Google Developers' courses on creating Android apps with Kotlin. I set up AWS Amplify to manage deployment, hosting, and the backend of our app, and set up our GitHub repository for the phone app. I also met with my team to redesign the user flow diagram and wireframe of the phone app, which is now updated to look something like this:

The main changes made to the user flow were:

  • Always ask for user’s home location
  • Each event should have a “check now” function to see if all needed items are already packed
  • App will notify user if one (or more) of their items have been left behind at an event location
  • App will notify user if there is a high chance of precipitation in their area and they’ve registered a related item (e.g. umbrella, raincoat, snow boots, etc.)

Additionally, I worked with my team to complete our Design Review Presentation slides and write our Design Review Report.

My progress for the phone app is currently on schedule.

In the next week, I hope to set up layouts and navigation across different screens for our phone app. I will also continue working on the Design Review Report with my team members.

Aaron’s Status Report for 3/6

This week I worked on the system specification and implementation plans for the design review proposal. I created a new, more detailed version of the overall system block diagram, and I also created an independent software block diagram that goes into more detail about our app.

For the communication between the Raspberry Pi and the phone, we are planning on using the BlueZ Linux Bluetooth stack to handle Bluetooth pairing and data transmission.

I also did more research into determining beacon distance using Bluetooth. I found this webpage, which details the math and some code behind Bluetooth distance calculations. Although Bluetooth distance measurement is not very accurate, this is not a problem because we do not need precise distances. Since we only need to determine whether a tag is inside the backpack, we can tolerate the distances being off by up to approximately 10 cm.
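The math behind such calculations is typically the log-distance path-loss model. A small sketch (the reference power and exponent are assumed values) shows that, near 50 cm, a 1 dBm reading error shifts the estimate by only a few centimetres, comfortably inside that tolerance:

```python
def distance_from_rssi(rssi, measured_power=-59, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((P_1m - RSSI) / (10 * n)).
    measured_power is the assumed RSSI at 1 m; n is the path-loss exponent."""
    return 10 ** ((measured_power - rssi) / (10 * n))

# Sensitivity around the 50 cm decision point for +/- 1 dBm of noise.
nominal = distance_from_rssi(-53)
low, high = distance_from_rssi(-52), distance_from_rssi(-54)
```

Because the model is exponential in RSSI, errors grow with distance, but they stay small at the short ranges that matter for the in-backpack check.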

Next week I plan on working on the design review document and creating more diagrams to detail our system even further.