Joon’s Status Report for 2/27

This week, I presented our team’s project proposal to Section C and Professor Kim. Thanks to my group members’ support, the presentation went really well. From the feedback I received from classmates in Section C, I was delighted to hear that the presentation was clear and easy to understand. I am also looking forward to hearing feedback on the technical content of the presentation and discussing it with my teammates.

I did more research on computer vision and image recognition modules and datasets within the scope of our project. I read the OpenCV documentation to get familiar with the module. Aside from OpenCV, which I found last week, I also found that TensorFlow is another module for working on image classification. The TensorFlow website also provides a list of machine learning models (MobileNet, Inception, and NASNet) for image recognition in smartphone applications, along with their accuracy and processing times. (Link: https://www.tensorflow.org/lite/guide/hosted_models). I read some papers related to MobileNet, and hopefully next week I can determine which models to train and build for our project. Based on Professor Kim’s comments this week, I developed a list of items that a student would carry in their backpack. Having this comprehensive list will help me determine which items to identify and classify while the student registers an item with the smartphone camera.
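To make the item-list idea concrete, here is a minimal sketch of how a classifier’s top predictions might be filtered against our item catalog. The catalog entries, confidence threshold, and function names are my own placeholders, not final design decisions, and the predictions are simulated rather than real MobileNet output.

```python
# Hypothetical sketch: mapping raw classifier labels to our backpack item
# catalog. All names and thresholds here are illustrative assumptions.

# A few items a student might carry (placeholder entries for the real list).
ITEM_CATALOG = {
    "laptop", "notebook", "water bottle", "umbrella", "wallet", "keys",
    "phone charger", "calculator", "pencil case",
}

def match_catalog_item(predictions, min_confidence=0.6):
    """Return the highest-confidence prediction that is in our catalog.

    `predictions` is a list of (label, confidence) pairs, like the
    post-processed output of a model such as MobileNet.
    """
    for label, confidence in sorted(predictions, key=lambda p: -p[1]):
        if confidence >= min_confidence and label in ITEM_CATALOG:
            return label
    return None  # no confident catalog match; fall back to manual registration

# Example: simulated classifier output for a photo of a water bottle.
preds = [("water bottle", 0.82), ("vase", 0.11), ("cup", 0.04)]
print(match_catalog_item(preds))  # → water bottle
```

Restricting recognition to a fixed catalog like this should make classification easier than open-ended recognition, since the model only needs to distinguish among the items students actually carry.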

Our team and I are currently on schedule. 

Next week, I plan on finalizing the modules and software libraries to use for camera-based item identification so that I have a finalized pipeline for image processing. Since I am in Korea right now and my teammates are completely remote, we need to set up a version control system (Git) so that we can collaborate in an online/virtual environment. I also need to work on the Design Review document with my teammates next week.

Team Status Report for 2/27

Currently, the most significant risk for this project is finding a reasonable and robust testing plan for the learning system, which learns the user’s habits (which items they bring at which times of day on which days), as we can’t afford to wait two weeks for our system to learn habits each time we’d like to test the feature. One possible solution is to begin collecting data from one of the team members as soon as the tags and device integration are done (i.e., one team member will tag and register all their items and bring them from event to event for a period of time). We can also come up with carefully constructed synthetic data for testing purposes. Contingency plans we have ready include allowing the user to see their schedule as it is being developed and learned by the system, so that the user can correct any mistakes our learning system makes.
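As a rough illustration of the synthetic-data idea, the sketch below generates fake “items carried” logs and learns per-(weekday, hour) habits by simple frequency counting. The data format, threshold, and function names are assumptions for illustration, not our decided learning approach.

```python
# Sketch: learn which items a user habitually carries in each
# (weekday, hour) slot from logged observations. Illustrative only.
from collections import Counter, defaultdict

def learn_habits(logs, min_count=2):
    """logs: list of (weekday, hour, item) observations.

    Returns {(weekday, hour): set(items)} containing items seen at
    least `min_count` times in that time slot.
    """
    counts = defaultdict(Counter)
    for weekday, hour, item in logs:
        counts[(weekday, hour)][item] += 1
    return {slot: {item for item, c in ctr.items() if c >= min_count}
            for slot, ctr in counts.items()}

# Synthetic data: two Mondays at 9am with a laptop, one with an umbrella.
logs = [("Mon", 9, "laptop"), ("Mon", 9, "laptop"), ("Mon", 9, "umbrella")]
print(learn_habits(logs))  # → {('Mon', 9): {'laptop'}}
```

Generating logs like these programmatically would let us exercise the learning feature immediately instead of waiting weeks for real observations.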

Some changes were made to the existing design of the system based on feedback we received after our proposal presentation and more team discussion. The biggest change to our project is the learning system, which learns a user’s habits over a period of time to craft their schedule and item notifications. This change was necessary because, as Professor Kim suggested, the user burden might be too large if we require users to create events or sync their Google Calendar and assign items to each event. However, this change imposes additional strain on the user, as users may have to “wait” for their habits to be learned. For example, during the two weeks in which our product learns the user’s item habits, they could otherwise have already been using our system to help track and manage their items. As a mitigation, the learning system will be optional, in case users prefer to sync with Google Calendar or create events themselves.

We’ve currently made some tentative changes to our schedule to accommodate the learning system.

User flow diagrams and wireframes have been made for our phone app, which can be seen below. Additionally, our team has also decided on the most important items for item recognition as well as the hardware we need to purchase.


Aaron’s Status Report for 2/27

This week I helped finish the proposal presentation, and I began finding specific parts to order for our project. For the proposal presentation, I created a preliminary system block diagram, which I then used to help me create the list of parts to order.

I also began documenting the reasoning behind these part selections for use in the design review. For example, we chose the Raspberry Pi Zero W for its low cost and small form factor, while still retaining Bluetooth and WiFi connectivity.

Next week, I will work on selecting a protocol for communication between the Raspberry Pi Zero W and the Android app. This protocol will need to send the item list quickly after the RPi finishes its item scan.
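Whatever transport we pick, the scan result itself could be a small structured message. The sketch below shows one possible framing, a length-prefixed JSON payload; the field names and framing are my own assumptions to explore the design space, not a decided protocol.

```python
# Tentative sketch of one candidate message format for sending a scan
# result from the RPi to the phone. Field names are placeholders.
import json
import struct
import time

def encode_scan_result(tag_ids):
    """Serialize a completed scan as a 4-byte length-prefixed JSON message."""
    payload = json.dumps({
        "type": "scan_result",
        "timestamp": int(time.time()),
        "items": sorted(tag_ids),
    }).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload  # big-endian length prefix

def decode_scan_result(message):
    """Parse a message produced by encode_scan_result."""
    length = struct.unpack(">I", message[:4])[0]
    return json.loads(message[4:4 + length].decode("utf-8"))

msg = encode_scan_result({"tag-01", "tag-07"})
print(decode_scan_result(msg)["items"])  # → ['tag-01', 'tag-07']
```

The length prefix makes the message easy to read off a stream-style connection (e.g. a Bluetooth socket), where message boundaries are not otherwise preserved.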

Janet’s Status Report for 2/27

During this week, I created basic user flow diagrams and wireframes for how the user will interact with our project’s phone app.

I also downloaded Android Studio and began working through Google’s Android Basics in Kotlin course.

My team and I are currently on schedule, according to our Gantt Chart.

In the next week, I hope to set up the database for the phone app and have some basic pages of the app done. I will also meet with my team to work on and finalize our Design Review Presentation and the Design Review Report.
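As a starting point for the database work, here is a rough schema sketch, prototyped in Python’s built-in sqlite3 just to think through the tables (the app itself would use Android’s SQLite tooling). Table and column names are placeholders, not a finalized design.

```python
# Prototype of a possible phone-app schema: registered items, events,
# and the items required for each event. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (
    id      INTEGER PRIMARY KEY,
    tag_id  TEXT UNIQUE NOT NULL,  -- BLE tag identifier
    name    TEXT NOT NULL          -- label from camera registration
);
CREATE TABLE events (
    id         INTEGER PRIMARY KEY,
    title      TEXT NOT NULL,
    start_time TEXT NOT NULL       -- ISO-8601 timestamp
);
CREATE TABLE event_items (         -- which items each event requires
    event_id INTEGER REFERENCES events(id),
    item_id  INTEGER REFERENCES items(id),
    PRIMARY KEY (event_id, item_id)
);
""")

conn.execute("INSERT INTO items (tag_id, name) VALUES (?, ?)",
             ("tag-01", "laptop"))
print(conn.execute("SELECT name FROM items").fetchall())  # → [('laptop',)]
```

A join table like `event_items` keeps the item-to-event mapping many-to-many, so one tagged item can be required by several events.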

Joon’s Status Report for 2/20

This week, I worked with my team to solidify and expand the overall idea of the Backpack Buddy developed for our Project Abstract. Based on each team member’s areas of expertise, we divided our tasks and planned our schedule for the entire semester using a Gantt chart. Since I have taken many computer science and machine learning courses, I decided to work on the camera feature of the system, where users can use their phone camera to register items into the database. I needed to get familiar with computer vision concepts and their applications, since that knowledge is instrumental to completing my task. (Links I referred to: Object Recognition with Deep Learning, OpenCV for Detecting Objects in Live Video)

I did some research on computer vision and its most frequently used modules, and decided that the OpenCV library is suitable for this project. I also learned the difference between object recognition and object detection. Although detecting an object in the camera image will not be too difficult with the OpenCV library, recognizing which object it is will be a much more challenging task. Therefore, my plan for next week is to do more research on using computer vision techniques in phone applications built in Kotlin and to start coding the object recognition algorithms to be applied to our project.

I also worked with my team members to set up our WordPress website and build the slides for the Proposal Presentation next week. We’re on schedule because we are meeting regularly outside our designated lab sections to catch up on our progress and work together on the slides. As the first presenter on my team, I will be delivering the Proposal Presentation, so I practiced it ahead of our class next week.

Aaron’s Status Report for 2/20

This week I worked on the proposal presentation, setting up the WordPress site, and creating our Gantt chart. For the proposal presentation, I focused on fleshing out implementation details and determining what our technical challenges will be. We plan on using a Raspberry Pi Zero to manage a set of BLE scanners that determine which items are in a backpack. This list of items is then used by our phone app to notify the user if they are missing any items.
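The notification check at the end of that pipeline is essentially a set difference. Here is a minimal sketch, with placeholder tag names, of comparing the RPi’s scanned tag list against the items expected for an upcoming event:

```python
# Sketch of the missing-item check: expected items minus scanned items.
# Tag names below are placeholders for illustration.
def missing_items(expected_tags, scanned_tags):
    """Return the expected items that the scan did not find, sorted."""
    return sorted(set(expected_tags) - set(scanned_tags))

expected = {"tag-laptop", "tag-charger", "tag-notebook"}
scanned = {"tag-laptop", "tag-notebook"}
print(missing_items(expected, scanned))  # → ['tag-charger']
```

If the returned list is non-empty, the phone app would raise a notification naming the missing items.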

For the WordPress website, I set up the menu bar, and I helped write the introduction and summary for our project. For our Gantt chart, I helped list out the tasks which we need to accomplish to make our system functional, from building the scanner array to registering new items in the phone app.

Next week, I will begin designing the system and looking at which specific tags and readers we will purchase. I will also finalize the block diagram for our scanning system.

Team Status Report for 2/20

Currently, the most significant risk that could jeopardize the success of the project is the selection of BLE tags to purchase. Certain tags require more work to program, and we need to keep in mind both our budget and the scope and requirements of our project when purchasing tags. We’re currently managing this risk by creating a pro/con list of possible tag purchases and keeping room in our budget for multiple tag purchases, should some of them not meet our needs.

One change we made to the existing design of the system was choosing to include the phone’s camera to streamline easier item registration using computer vision. This change was necessary because the user burden of initial item registration was brought up as a concern by our project advisor, Professor Kim, and we agreed that having to manually register each tagged item might prove too big a burden for most users (particularly if they have many items). The costs that this change would incur include placing a larger performance burden on the user’s phone and adding much more complexity and risk to our project. We hope to mitigate these costs by finding relevant work related to item recognition and are prepared to default to manual item registration, should this aspect of our project not succeed.

Our current schedule can be found here (slack time and tasks are currently not finalized).


Janet’s Status Report for 2/20

This week, I worked with my team to do more research on the details of our project. I also set up the images, categories, and tags for our WordPress site and fleshed out the details of our project requirements and tasks for our proposal presentation. Since the majority of my coursework has been in Software Systems, I will be working on the bulk of our team’s phone app. I met with my team members to create our team’s Gantt chart and began adding each member’s tasks. Additionally, I drew up a diagram of how the parts of our project will interact with each other.

Our project is currently on schedule, as we are just wrapping up research and our proposal presentation.

In the next week, I hope to finish some basic wireframes for our phone app and a user flow diagram for how the user will interact with the app, and to begin basic development of the app in Kotlin.