[Philip] Finalizing code, wiring, and motor

This week I was very busy creating a functional system. I finalized the code, which takes input from the two PIR sensors and the camera and opens or closes the door accordingly. This required a lot of wiring for all the power and GPIO pins. I also helped Jing get the solenoid working (the transistor we were using needed a 5V signal, while the GPIO pins only output 3.3V).

My major accomplishment this week had to do with the motor and door. Previously, Irene had not been able to get a motor working, which prevented us from having an automated door. Earlier this week, Jing and I decided to fix this, as we believed it was a necessary aspect of our project, especially for the demo on Monday. We found a new DC motor that we believed rotated fast enough and with enough torque, and we got it working within a couple of hours. On Saturday, I successfully mounted the motor onto the door, then found a way to use fishing wire to pull the door up! It takes approximately 8 seconds for the door to completely open and 3 seconds for it to close. Although this was not the part of the project I was assigned to, I was willing to work on it to create a more complete project that better showcased my efforts this semester.
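In the control script, opening and closing boil down to energizing the motor for a fixed duration. A minimal sketch of that idea, assuming a `set_pin(pin, value)` GPIO write helper and placeholder pin names (the 8 s / 3 s figures are the measured travel times; the pin names are not our actual wiring):

```python
import time

# Door travel times measured on the finished mount.
OPEN_TIME_S = 8.0
CLOSE_TIME_S = 3.0

def _run_motor(set_pin, pin: str, seconds: float, sleep=time.sleep) -> None:
    """Energize one motor direction pin for a fixed duration.

    set_pin(pin, value) stands in for the real GPIO write call;
    sleep is injectable so the timing can be tested without waiting.
    """
    set_pin(pin, 1)
    sleep(seconds)
    set_pin(pin, 0)

def open_door(set_pin, sleep=time.sleep) -> None:
    """Wind the fishing wire to pull the door up (~8 s)."""
    _run_motor(set_pin, "motor_up", OPEN_TIME_S, sleep)

def close_door(set_pin, sleep=time.sleep) -> None:
    """Let the door back down (~3 s)."""
    _run_motor(set_pin, "motor_down", CLOSE_TIME_S, sleep)
```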

(If you look closely there is clear fishing wire attached between the motor and door)

[Philip] More Integration

This past week I have been doing a lot of integration. First, I greatly modified my Python script to include the inputs from both PIR sensors, call the open and close door functions, and fully communicate with the app. Having all the individual components ready made this part easier. The logic for this script is carefully thought out, and a main goal is to never close the door on the cat!
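The core of that logic can be sketched as a pure decision function; the names and exact conditions here are my illustration, not the script itself:

```python
def decide_action(outdoor_pir: bool, indoor_pir: bool,
                  cat_confirmed: bool, door_open: bool) -> str:
    """Return 'open', 'close', or 'wait' from the current sensor state.

    outdoor_pir / indoor_pir: motion on either side of the door
    cat_confirmed: the camera classifier says the visitor is our cat
    door_open: current door state
    """
    if not door_open:
        # Only open when motion is confirmed to be the cat.
        if (outdoor_pir or indoor_pir) and cat_confirmed:
            return "open"
        return "wait"
    # Door is open: never close while either PIR still sees motion,
    # so we cannot close the door on the cat mid-passage.
    if outdoor_pir or indoor_pir:
        return "wait"
    return "close"
```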

I have also been trying to run Jing's newest algorithm on the Jetson, but have run into difficulties. It is not a drop-in replacement for the older one: Jing added a new layer to his model that the TensorFlow installation on the Jetson does not support, for reasons we have not yet pinned down. This will be my priority this upcoming week because the older algorithm is not good enough.

In addition, I also worked on the presentation slides which are due Sunday night.

This upcoming week is crunch time! I will get Jing's newest algorithm working on the Jetson and finish fully integrating the system.

[Philip] Jetson Integration + App

In the first part of the week I focused on integrating the TensorFlow algorithm, the MQTT library, and the GPIO library into one Python script. Integrating the GPIO was fairly straightforward: when the camera detects a cat or a PIR sensor detects movement, I simply turn an LED on for 5 seconds. The MQTT library was a little trickier because I had to define what information needs to be sent between the Jetson and the iPhone app. The iPhone app tells the Jetson when the user wishes the door to be enabled or disabled, and the Jetson interprets and applies this command. In addition, when movement is detected, the Jetson informs the app of the time of the movement as well as the direction (inbound or outbound). When the app receives this information, it interprets it and updates the screen. Finally, when the app first opens, it automatically requests the most recent movement from the Jetson, which sends the corresponding info back to the app.
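A rough sketch of that message protocol, assuming JSON payloads and made-up topic names (the exact strings in our script may differ):

```python
import json

# Hypothetical topic names -- placeholders for illustration.
TOPIC_CONTROL  = "catdoor/control"   # app -> Jetson: enable/disable the door
TOPIC_MOVEMENT = "catdoor/movement"  # Jetson -> app: movement reports
TOPIC_UPDATE   = "catdoor/update"    # app -> Jetson: request the latest event

def movement_payload(direction: str, timestamp: float) -> str:
    """Encode a movement event ('inbound' or 'outbound') with its time."""
    return json.dumps({"direction": direction, "time": timestamp})

def parse_control(payload: str) -> bool:
    """Decode the app's enable/disable command."""
    return bool(json.loads(payload)["enabled"])
```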

In addition to changing a lot on the backend of the iPhone app, I also spent a lot of time with the design of the app. Here is the final version:

My progress is on track.

This week I will need to test the entire integrated system. This will require working extensively with Jing and Irene.

[Philip] Jetson working with Tensorflow

I was originally hoping that Jing's ML algorithm and Irene's CV algorithm would run smoothly on the Jetson. I had prepared for this by installing all the necessary libraries for TensorFlow and OpenCV, in addition to the TensorRT libraries. However, after attempting to run the Python script on the Jetson, there was a library mismatch: the GPU drivers were too old to work with the script. Unfortunately, there is no easy way to update the drivers. Originally, I was using the OS the board shipped with; instead, I had to download the most recent OS and re-flash the whole board.

After re-flashing the board, I had to reinstall all of the libraries. This process was a lot quicker than before because I already knew what to do, but it was rather tedious. Finally, Jing and I started trying to run the Python script. It ran perfectly well on his PC laptop; now we just had to get it running on the Jetson. After updating to the newest version of TensorFlow and modifying bits and pieces of the CV and ML portions of the code, we were finally able to get it to work! It ran very fast, although that judgment is just by eye; we have yet to run tests.

My progress is behind due to the complications that arose in running the Python script on the Jetson. To catch up, I will not be spending time optimizing the algorithm for the Jetson, as this is not a necessary aspect of the project, simply a desired one.

In the next week I will be working on fully integrating the entire system. After this is accomplished I hope to be running a variety of tests on the system.

[Philip] OpenCV and GPIO

OpenCV is the standard for computer vision. However, it was not made to run on an Nvidia Jetson board. Nvidia is trying to fix this and has created an easy-to-use SDK installer, though the installer is still in beta. It made it easy to pick and choose which libraries to install on my board, until the SDK would no longer properly connect to the board. Unfortunately, I had to revert to a much more complicated approach, which I pieced together from multiple advice pages on the internet. In the end, I confirmed OpenCV was working by getting a simple Python script to display the video feed from a USB camera plugged into the Jetson.
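That sanity-check script was essentially the standard OpenCV capture loop, something like this (device index 0 is an assumption about where the USB camera enumerates; the import sits inside the function so the sketch reads cleanly even without OpenCV installed):

```python
def show_camera(device: int = 0) -> None:
    """Display the USB camera feed in a window until 'q' is pressed."""
    import cv2  # opencv-python

    cap = cv2.VideoCapture(device)  # open the USB camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # camera disconnected or no frame available
            cv2.imshow("USB camera", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    show_camera()
```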

Originally, I planned on continuing to work on the app. However, I realized that the GPIO work I had previously done was useless because it was written in C++. Previously, I had used a GPIO library I found for the Jetson TX1 and edited it for the TX2. However, I was not able to find a GPIO library in Python, so I had to write one from scratch, including functions such as Export, Unexport, setDirection, SetValue, GetValue, and ActiveLow.
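Those functions wrap Linux's sysfs GPIO interface under `/sys/class/gpio`. A minimal sketch of the core calls (pin numbers are the SoC's GPIO numbers, writing these files needs appropriate permissions on the Jetson, and the `root` parameter exists only so the sketch can be exercised against a fake directory):

```python
GPIO_ROOT = "/sys/class/gpio"

def Export(pin: int, root: str = GPIO_ROOT) -> None:
    """Ask the kernel to expose the pin as <root>/gpio<pin>/."""
    with open(f"{root}/export", "w") as f:
        f.write(str(pin))

def Unexport(pin: int, root: str = GPIO_ROOT) -> None:
    """Release a previously exported pin."""
    with open(f"{root}/unexport", "w") as f:
        f.write(str(pin))

def setDirection(pin: int, direction: str, root: str = GPIO_ROOT) -> None:
    """Configure the pin as "in" or "out"."""
    with open(f"{root}/gpio{pin}/direction", "w") as f:
        f.write(direction)

def SetValue(pin: int, value: int, root: str = GPIO_ROOT) -> None:
    """Drive an output pin high (1) or low (0)."""
    with open(f"{root}/gpio{pin}/value", "w") as f:
        f.write("1" if value else "0")

def GetValue(pin: int, root: str = GPIO_ROOT) -> int:
    """Read the current level of the pin (e.g. the PIR sensor output)."""
    with open(f"{root}/gpio{pin}/value") as f:
        return int(f.read().strip())

def ActiveLow(pin: int, invert: bool, root: str = GPIO_ROOT) -> None:
    """Invert the pin's logic level if desired."""
    with open(f"{root}/gpio{pin}/active_low", "w") as f:
        f.write("1" if invert else "0")
```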

My progress is slightly behind because of having to write the GPIO library in Python.

My goal for this week is to get Jing's graph integrated with TensorRT. After some preliminary research, this does not seem as straightforward as I had hoped.

[Philip] Jetson Breakthroughs

This week I worked a lot with the Jetson. First, I wanted to be able to read and write from the GPIO pins. I looked through the datasheet of the Jetson TX2 development board to figure out which pins are available to use. Then I found a Python library that lets me communicate with the GPIO pins more easily. Finally, I wrote simple code to show that I could in fact use the GPIO pins, by reading values from the PIR sensor.

Second, I was able to download the MQTT library onto the Jetson board. I created a Python script that subscribed to messages from the simulated iPhone app on my laptop. These messages were successfully received! I now have an established protocol to communicate between both devices.
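On the Jetson side, a subscriber script of this kind looks roughly like the following with the paho-mqtt library (the broker address and topic name are placeholders, and the import sits inside the function so the sketch can be read without the library installed):

```python
def make_client(broker_host: str):
    """Build an MQTT client that prints every message on a test topic."""
    import paho.mqtt.client as mqtt  # the common Python MQTT library

    def on_connect(client, userdata, flags, rc):
        # (Re)subscribe whenever we connect to the broker.
        client.subscribe("catdoor/test")  # placeholder topic

    def on_message(client, userdata, msg):
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(broker_host, 1883)  # default MQTT port
    return client

if __name__ == "__main__":
    make_client("192.168.1.10").loop_forever()  # placeholder broker address
```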

My progress is slightly behind. I was hoping to also get OpenCV downloaded and running on the Jetson board, but I needed access to an Ubuntu machine, which I did not have. Luckily, my research lab has one that I am able to use, so I expect to catch up soon.

In the next week I hope to get OpenCV running on the Jetson board. I would like Irene to be able to test her OpenCV code on the Jetson using the USB camera. I also wish to get the timer page up and running for the iPhone app.

[Philip] iPhone App then transition to Jetson

This week I focused on developing the app for our project. Previously I had only done a simple design for the app; now I am fully implementing it. Although I have created iPhone apps before, I was definitely rusty and needed to go through a fair share of tutorials to brush up. Afterwards, I was able to code the key function of the app: communicating over Wifi (using the MQTT protocol) to open or close the door. This required implementing the communication mechanism using the CocoaMQTT library.

(Note: in the picture there is a connect and disconnect button, but I was able to remove the need for them)

I started working on the Daily Reminder page, although I encountered some difficulty, as the "list" feature in Xcode is hard to understand.

On the Jetson side of things, I ordered a USB hub, as there is only one USB port on the developer kit. I also booted up the board with Ubuntu, connected it to CMU Wifi, and downloaded all the necessary tools for Jing and Irene to use (Python and pip). The benefit of this is that they will now be able to test their code on the board.

My schedule has shifted a bit: my partners requested I finish the Jetson work first so that we can integrate our parts sooner. Therefore the app development is being delayed and the Jetson deliverables are being prioritized.

That being said, this week I want to be able to use the GPIO pins on the Jetson board for input and output. In addition, I want the Jetson to be able to communicate over Wifi using the MQTT protocol to the iPhone App. This will require no further work on the iPhone side of things.

[Philip] Final Design Decisions

This week my focus has been on finalizing the design decisions for the design presentation and report. I wrote the system description for the iPhone application, which included a more detailed wireframe. In this section, I specified the requirements for the app and how we will accomplish them.

Note: Cities will be replaced with “Today” “Yesterday” “Past Week” “Past Month” “Past Year”

I also wrote the system description for the System Hub. Last week, I explained how we chose the Jetson TX2 for the CV and ML acceleration. I explained how the developer kit will suit all our needs for the system hub: it will need to communicate over Wifi to a phone, receive camera footage, apply our computer vision and ML algorithms, control the servos for the door, turn an LED on and off, and receive PIR data. It will replace the original plan to use a Raspberry Pi for communication in conjunction with the Jetson GPU. In addition, I discussed our camera choice.

Finally, I researched related cat doors for the research paper, discussing the benefits and downsides of the traditional cat door as well as an RFID-activated cat door.

My progress is on schedule.

In the upcoming week, I will get started on writing code for the iPhone app.

[Philip] Choosing Hardware

This week I focused on finalizing what hardware we want to use. To do this, I came up with several requirements for our system. Based on the average speed of a cat, I determined that our computer vision and machine learning algorithms will have approximately 1.2 seconds of cat visuals to work with, and we want to maximize the number of images we can process during this window. Based on my research, a Raspberry Pi can process about 1 frame per second, meaning in most cases we would capture only one image of the cat. That is a great risk, because the cat could be looking away, or there could be a light glare, in that single image. I also looked into an Odroid, which is essentially a more powerful Raspberry Pi, but even that would yield only 2-3 frames per second, so we would still be banking on getting a stable image. Based on this research, our team decided that a GPU was the best course of action.
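The frame-budget arithmetic behind this comparison is simple; the 1.2 s window and the per-board frame rates are the figures from my research above:

```python
VISIBLE_WINDOW_S = 1.2  # approximate time the cat stays in view

def usable_frames(fps: float, window_s: float = VISIBLE_WINDOW_S) -> int:
    """How many complete frames a board captures while the cat is visible."""
    return int(fps * window_s)

# Raspberry Pi at ~1 fps gives a single frame; an Odroid at 2-3 fps
# gives only 2-3 frames -- hence the push toward a GPU.
print(usable_frames(1.0), usable_frames(2.0), usable_frames(3.0))  # prints "1 2 3"
```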

I focused my GPU research on Nvidia GPUs, as I have experience writing parallel code in CUDA on them. Nvidia has a family of GPUs called Jetson aimed at embedded systems, with 256 CUDA cores. In addition, the development kit has a quad-core Arm CPU, Wifi capabilities, and many I/O ports. The Jetson TX2 was not only a solution for our image processing, but also for our system communication. I added this information, along with more details, to our design paper.

I also made progress with the app design, starting with a simple wireframe in Xcode:

My progress is on schedule.

In the upcoming week I will be working on finishing the design presentation, in addition to figuring out more details about the app and the camera-to-Jetson communication mechanism, which I will report in the design paper.