March 2: Team Status Update

This week the team focused on finalizing the design presentation and the design document. We spent a significant amount of time narrowing down our requirements and then finding solutions that satisfy them. We initially struggled to choose values for the false positive and false negative rates for the cat door opening: we could not find statistics on raccoon behavior or estimates of how much damage raccoons cause. We then realized that our goal simply needs to be better than the current design of a regular cat door. A regular cat door has no locking mechanism, so it will always let a raccoon in; any false positive rate below 100% would therefore be an improvement. Still, we decided to challenge ourselves to achieve a 5% false positive rate, since this rate is achieved by competent facial recognition algorithms. We also chose 5% as our false negative rate: if we assume a cat uses the door four times a day, the user would be falsely alerted that their cat may be stuck outside only about once every five days, which is reasonable.
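The alert-frequency reasoning above can be checked with a quick calculation. This is only a sketch of the arithmetic; the four-uses-per-day figure and the 5% rate are the assumptions stated in the text:

```python
# Expected spacing of false "cat stuck outside" alerts, assuming the
# cat uses the door 4 times per day and a 5% false negative rate
# per use (both figures are the assumptions from the text above).
uses_per_day = 4
false_negative_rate = 0.05

missed_detections_per_day = uses_per_day * false_negative_rate  # 0.2 per day
days_between_alerts = 1 / missed_detections_per_day

print(days_between_alerts)  # → 5.0
```

So under these assumptions the user sees roughly one spurious alert every five days.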

On the project management side, we decided that in addition to our two meetings a week during class time and our Saturday meeting, we should meet most days for a “stand up.” These meetings will be held over Zoom and will let us share what we accomplished over the past 24 hours and what we plan to accomplish in the next 24 hours. We believe this will help us work better as a team, since we will be in touch daily. This is especially important as the semester goes on and we start implementing our designs.

Our team is currently on track!

[Philip] Final Design Decisions

This week my focus has been on finalizing the design decisions for the design presentation and report. I wrote the system description for the iPhone application, which included a more detailed wireframe. In this report, I specified the requirements for the app and how we would accomplish them.

Note: Cities will be replaced with “Today” “Yesterday” “Past Week” “Past Month” “Past Year”

I also wrote the system description for the system hub. Last week, I explained how we chose the Jetson TX2 for the CV and ML acceleration. I explained how the developer kit will suit all our needs for the system hub: it will need to communicate over Wi-Fi to a phone, receive camera footage, apply our computer vision and ML algorithms, control the servos for the door, turn an LED on and off, and receive PIR data. It replaces the original plan to use a Raspberry Pi for the communication in conjunction with the Jetson GPU. In addition, I discussed our camera choice.
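The hub responsibilities listed above can be sketched as one pass of an event loop. This is a minimal illustration only; every function here is a hypothetical stub standing in for real PIR, camera, LED, servo, and Wi-Fi code, not our actual implementation:

```python
# Hypothetical placeholder stubs; a real implementation would talk to
# the PIR sensor, camera, LED, door servos, and Wi-Fi stack.
def pir_motion_detected():       return True
def capture_camera_frames():     return ["frame1", "frame2"]
def set_led(on):                 pass
def classify_is_cat(frames):     return True   # CV/ML decision on the GPU
def open_door_servos():          pass
def notify_phone_over_wifi(f):   pass

def hub_cycle():
    """One pass of the system hub's main loop."""
    if not pir_motion_detected():
        return "idle"
    set_led(on=True)                  # light the doorway for the camera
    frames = capture_camera_frames()  # grab footage of the visitor
    is_cat = classify_is_cat(frames)  # run the CV/ML algorithms
    if is_cat:
        open_door_servos()            # unlock the door via servos
    notify_phone_over_wifi(frames)    # push an update to the iPhone app
    set_led(on=False)
    return "opened" if is_cat else "locked"

print(hub_cycle())  # → opened
```

The point of the sketch is that every I/O path named in the system description (PIR, camera, LED, servos, Wi-Fi) flows through the single Jetson board, which is why the developer kit can replace the earlier Raspberry Pi + Jetson split.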

Finally, I researched related cat doors for the research paper, discussing the benefits and downsides of the traditional cat door as well as an RFID-activated cat door.

My progress is on schedule.

In the upcoming week, I will get started on writing code for the iPhone app.

[Philip] Choosing Hardware

This week I focused on finalizing what hardware we want to use. To do this, I came up with several requirements for our system. I determined that, based on the average speed of a cat, our computer vision and machine learning algorithms will have approximately 1.2 seconds of cat visuals. I believe we want to maximize the number of images we can process during this time. For example, based on my research, a Raspberry Pi can compute at a rate of about 1 frame per second. In most cases we would capture only one image of the cat, which is a great risk: the cat could be looking away, or there could be a light glare, in just that one image. I also looked into an Odroid, which is essentially a more powerful Raspberry Pi; even it would yield only 2-3 frames per second, so we would still be banking on receiving a stable image in those few frames. Based on this research, our team decided that a GPU was the best course of action.
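The frame-budget comparison can be made concrete with a short calculation. The 1.2-second window and the Raspberry Pi and Odroid rates are the estimates quoted above; the Jetson rate is an assumed camera-limited figure, not a measured one:

```python
# Frames captured during the ~1.2 s a cat is visible at the door,
# at each platform's estimated processing rate.
visible_seconds = 1.2
platform_fps = {
    "Raspberry Pi": 1.0,  # ~1 frame/s (estimate from the text)
    "Odroid": 2.5,        # midpoint of the 2-3 frames/s estimate
    "Jetson TX2": 30.0,   # assumed camera-limited rate, not from the text
}

for name, fps in platform_fps.items():
    frames = int(visible_seconds * fps)
    print(f"{name}: ~{frames} usable frame(s)")
```

One or three frames leaves little margin for a glare or a turned head, while a GPU-class rate gives the classifier dozens of chances at a clean image.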

I focused my GPU research on Nvidia GPUs, as I have experience writing parallel code in CUDA on Nvidia GPUs. Nvidia has a family of GPUs called Jetson whose applications target embedded systems. The TX2 has 256 CUDA cores, and the development kit includes a quad-core Arm CPU, Wi-Fi capabilities, and many I/O ports. The Jetson TX2 was therefore a solution not only for our image processing but also for our system communication. I added this information, along with more details, to our design paper.

I also made progress with the app design, starting with a simple wireframe in Xcode:

My progress is on schedule.

In the upcoming week I will work on finishing the design presentation, figuring out more details about the app, and working out the camera-to-Jetson communication mechanism, which I will report in the design paper.