Team Status Report 3/9/19

One of the major risks is that our webcam might not be compatible with Python. So far we have been testing in our demo environment using our old MATLAB code, but we are now migrating all of our code to Python. This means we will have to prioritize taking data through a Python interface as a next step. If we are unable to interact with the camera using Python, we will consider purchasing a PiCam, which is well within our budget.

Suann has placed an order for an SD card for the Raspberry Pi, which we need in order to read data from our sensor. To make sure we do not have any delays in putting this system together, Suann will build a behavioral model of the tap-sensing system using an Arduino.

Currently, there are no changes to the existing design, nor is there any change in our schedule.

Isha Status Report 3/9

I have worked on porting the color detection code from Matlab to Python. We decided to move away from using Matlab in conjunction with Python to remove any possible delays in sending data between the two running processes.

Matlab has several functions that make dealing with image matrices easy and computationally fast. I am looking into similar functions I can use in Python so that searching for elements in an image matrix does not require O(n^2) nested loops; I have been working with numpy to achieve this through vectorized operations. (One caveat: numpy's `matrix` class is limited to two dimensions, so for three-channel images we work with `ndarray`s instead.) Keeping this search fast is necessary to get a reasonable real-time response within our target metric of one second. Additionally, I am installing and familiarizing myself with Python toolkits such as scikit-image, matplotlib, numpy, and scipy, since Python has no built-in functions for color space conversion. If these tools prove unsuitable for our purposes, I will consider using OpenCV for color spaces.
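As an illustration of the vectorized approach, a single boolean mask can replace the nested pixel loop entirely; the thresholds below are placeholders, not our tuned values:

```python
import numpy as np


def find_red_centroid(img, r_min=150, gb_max=100):
    """Locate the centroid of bright-red pixels in an RGB image.

    One vectorized boolean mask replaces an O(n^2) nested loop over
    pixels; numpy evaluates it in compiled code. Thresholds are
    illustrative placeholders, not calibrated values.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r >= r_min) & (g <= gb_max) & (b <= gb_max)
    coords = np.argwhere(mask)      # (row, col) pairs of matching pixels
    if coords.size == 0:
        return None                 # no red pixels found
    return coords.mean(axis=0)      # centroid as (row, col)
```

The same masking pattern extends to thresholds in other color spaces once the conversion is done.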

My progress is on schedule. I aim to complete implementing the color detection and move on to SVM in the next couple weeks.

Tanushree Status Report 3/2

This week, my team and I worked on getting more test images to measure the accuracy of the MATLAB implementations of the color detection and circle detection algorithms. However, we have decided to go with color detection, since its preliminary results were more promising than those of circle detection.

I also spent a large chunk of my time writing the design review paper due next week. My teammates and I changed our system design block diagram to indicate the changes we have made to our pipeline, the major change being that all of our code will now be implemented in Python. This was suggested to us after our design presentation. The shift will help reduce any latency that could have occurred when interfacing between MATLAB and Python, and will thus better enable us to reach our goal of a one-second response time. Accordingly, we had to change our schedule to show dependencies and revisions regarding individual tasks.

As for my task assignments, I will no longer be working on K-means clustering, since Prof. Savvides suggested using SVM instead; this is because the color of our red dot remains fairly constant. Since Isha will be working on the Python implementation of color detection, it is logical for her to implement SVM on top of that code as well. Thus, over the following two weeks I will start working on a GUI with the functionality we want for our final presentation. I will need to reinstall all my packages, including Python and PyQt, since my laptop's hard drive was replaced. I will also be helping Suann with the sensor circuit, since I am responsible for reading the analog signal in order to find a threshold that yields 100% true-positive accuracy in detecting when a tap occurred.
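As a behavioral sketch of that thresholding step (the threshold value and the sample stream below are placeholders, not calibrated sensor data):

```python
def detect_taps(samples, threshold=512):
    """Return indices where the signal first crosses the threshold.

    Consecutive above-threshold samples count as a single tap, so one
    physical tap is not reported multiple times. The threshold of 512
    is a placeholder; the real value will be calibrated from recorded
    piezo data.
    """
    taps = []
    above = False
    for i, v in enumerate(samples):
        if v >= threshold and not above:
            taps.append(i)   # rising edge: a new tap begins here
            above = True
        elif v < threshold:
            above = False    # signal dropped back below; arm for next tap
    return taps
```

For example, a stream with two bursts above the threshold should yield exactly two tap indices, one per burst.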

I am currently on schedule since the initial task of carrying out K-means clustering was an additional task and I was eventually going to start working on the GUI soon.

Isha Status Report 3/2

After presenting our design on Monday, I created a preliminary GUI, pictured below, for the Matlab-to-Python pipeline I was scheduled to start this week. This will be used for our testing. It required setting up PyQt5 on my laptop.

I also upgraded to Matlab 2018b from Matlab 2016b so that we will all be using the same software.

After our design presentation, a TA or professor recommended that we not use Matlab, so that we can eliminate any delays from sending data back and forth between a Python script and a Matlab script. We decided to alter our design to reflect this change, so my future task assignments have changed: I will be porting the color detection code from Matlab to Python next week.

We worked on taking more test data this week so that Tanu and I could have more data with which to test our detection algorithms. My team also spent many hours writing the Design Document that is due on Monday. We also worked on our schedule to reflect the feedback the professors sent out and redid our block diagram for the Design Document.

Since only my future tasks have changed, my progress is on schedule. We have decided to focus entirely on color detection for our prototype, given our successes with preliminary test results in the LAB color space. All tests passed our metric requiring the detected coordinate to be within 3/4 of the radius of the red dot.

Team Status 03/02

We did not realize that a Raspberry Pi does not have exactly the same capabilities as an Arduino: it cannot read analog input directly. We may have to switch to an Arduino to read the data from the piezo sensor to mitigate any difficulties with this part of the pipeline.

After our design presentation, the TAs and professor recommended that we implement all our code in Python so that we can avoid any delays in interfacing between Matlab and Python. We are porting all our color detection code to Python next week.

We will not use Matlab anymore for our CV implementations. This change was necessary to prevent any latency caused by transferring data between Python and Matlab. This means that Isha will need to work on the CV algorithms for another week in order to transfer the code. However, this will not delay our overall progress since this task will be replacing our initial task of creating a Matlab to Python pipeline to transfer data between the two.

Another change is that we will be implementing SVM to perform dynamic thresholding instead of K-means clustering because in our case, the color of the red dot remains fairly constant.
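A minimal sketch of the SVM idea, assuming scikit-learn's `SVC` on made-up color features (the feature values and channels below are illustrative stand-ins, not our measured data):

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic training data standing in for pixel color features
# (e.g. two channels of a color space). "Red dot" samples cluster
# high on the first axis; background samples cluster low. Since the
# dot's color is fairly constant, the clusters separate cleanly.
rng = np.random.default_rng(0)
red_dot = rng.normal(loc=[60.0, 40.0], scale=3.0, size=(50, 2))
background = rng.normal(loc=[5.0, 10.0], scale=3.0, size=(50, 2))

X = np.vstack([red_dot, background])
y = np.array([1] * 50 + [0] * 50)   # 1 = red dot, 0 = background

clf = SVC(kernel="linear")          # a linear boundary suffices here
clf.fit(X, y)
```

Because the dot color barely varies, a linear kernel should be enough; the learned boundary then acts as the dynamic threshold.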

Below is an updated schedule:

 

3/2/19 Suann Progress Report

This week I spent 7 hours meeting with my teammates to work on the design paper submission and make decisions about the timeline of our project. We also wrote the team progress update together. I have reached out to Shreyas about setting up a Raspberry Pi on CMU Wi-Fi, since it is not a straightforward task.

I have fully finished reading up on Lucas-Kanade this week.

For this upcoming week, since it is already March, I will move on to working with the Raspberry Pi and setting up the piezo sensor circuit. Since the Pi cannot take analog input directly, I will research whether we want to stick with the Pi or move to an Arduino instead. If the Pi is still viable, I will order the necessary parts to set up a circuit this week and get the Pi running with Python. If not, I will alert my teammates that I will be moving on to the Arduino instead.

Tanushree Status Report 2/23

This week I continued to work on the Determinant of Hessian (DoH) blob detection algorithm. After talking to Prof. Savvides, I realized that I could fix the scale of the blur level, i.e., fix the sigma of the Gaussian kernel used to blur the image. Thus, the algorithm I have implemented can be considered more of a "circle" detector than a true blob detection algorithm. Regardless, it suits our purposes.
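With sigma fixed to the expected dot scale, the detector reduces to a single pass of second-derivative filtering. A minimal sketch using scipy's Gaussian derivative filters (this is an illustration of the technique, not my actual Matlab implementation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def detect_circle(img, sigma):
    """Fixed-scale Determinant-of-Hessian detector.

    With sigma fixed to the expected dot size, the global maximum of
    det(H) = Ixx * Iyy - Ixy^2 marks the dot center. `img` is a 2-D
    grayscale float array.
    """
    ixx = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2
    iyy = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2
    ixy = gaussian_filter(img, sigma, order=(1, 1))  # d2/dxdy
    doh = ixx * iyy - ixy ** 2
    return np.unravel_index(np.argmax(doh), doh.shape)  # (row, col)
```

On a synthetic Gaussian blob of the matching scale, the returned coordinate lands at the blob center, which is the behavior we want for the projected red dot.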

As has been mentioned in earlier posts, Isha and I are using the same set of images to test our algorithms. These test images were taken in an environment that closely represents our demo setup. Below are the results:

Currently, my algorithm works for some images (tests 12 and 14) but not for others (tests 11 and 13). Following this, one design decision we have made is to color the buttons of our GUI white and keep the background darker. This seems sound, given that we only want to detect the user's finger when he/she taps on a button.

Apart from this, I also helped my team set up the camera and projector in order to obtain the above-mentioned test images. Further, we all worked on creating the slides for the design review.

Next week I will continue to test my algorithm on different images to see where it breaks and tune parameters accordingly. I will also start working on a K-means clustering approach that will make use of both color detection and blob detection. We hope that this clustering algorithm will provide us with robust results.
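A rough sketch of how such clustering could look, assuming scikit-learn's `KMeans` on synthetic pixel colors (the values are illustrative, not our test data):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two illustrative pixel-color clusters: "red dot" vs. background.
# Real input would be RGB values gathered from our test images.
rng = np.random.default_rng(1)
pixels = np.vstack([
    rng.normal([200.0, 30.0, 30.0], 5.0, size=(40, 3)),  # reddish pixels
    rng.normal([40.0, 40.0, 45.0], 5.0, size=(40, 3)),   # dark background
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)

# The cluster whose center has the largest red channel is the dot.
dot_cluster = int(np.argmax(km.cluster_centers_[:, 0]))
```

New pixels could then be assigned to a cluster with `km.predict`, combining color evidence with the blob detector's location estimate.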

I am currently on schedule.

Suann Update 2/23

This week I worked on setting up the website and writing the project introduction post in accordance with the guidelines. It took me some time to figure out the website settings and get the hang of using WordPress, as I had never used it before. We also worked on writing the design presentation slides together.

Unfortunately, I do not have much progress on Lucas-Kanade this week, since I had an on-site interview. I'm behind now, but fortunately I have much more free time this upcoming week, so I will be able to dedicate a lot of it to Lucas-Kanade. The end of next week is my deadline for completing Lucas-Kanade; if I have not finished it by then, I will move on to setting up the Raspberry Pi, since the piezo circuit is a more crucial part of the project.

I will get Lucas-Kanade working by next week.

Isha Status Report 2-23

I helped my team set up our camera and projector and compile a bank of test images. We also worked on our presentation slides for the design review. Since I will be presenting, I have been writing a script for the presentation.

I have also been experimenting with LAB and HSV color space results. Right now, I am using simple thresholding with the new test data we took this week. Below is a table of results I have compiled. The red dot in the HSV and LAB images indicates the detected coordinate of the red dot. The HSV result for image Test11 is significantly off the desired coordinate. I will begin to work more with the camera and test both color spaces on real-time data. Next week, I hope to make a data-driven decision on which color space is best suited for our purposes. My progress is on schedule.
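A simplified sketch of the HSV-side thresholding, using matplotlib's `rgb_to_hsv` rather than my actual Matlab code; the threshold values are illustrative:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv


def detect_red_hsv(img):
    """Centroid of red pixels found by simple HSV thresholding.

    `img` is a float RGB array with values in [0, 1]. Red hue wraps
    around 0, so both ends of the hue axis are kept. The threshold
    values are illustrative placeholders, not our tuned numbers.
    """
    hsv = rgb_to_hsv(img)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = ((h < 0.05) | (h > 0.95)) & (s > 0.5) & (v > 0.3)
    coords = np.argwhere(mask)
    if coords.size == 0:
        return None
    return coords.mean(axis=0)  # (row, col) centroid of the red region
```

The LAB-side test would follow the same shape, with the mask built from the a/b channels instead of hue and saturation.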

Team Status Report 2/23

Significant Risks:

Tanushree’s laptop’s hard drive had to be replaced. This added a delay to our schedule; however, since her work mainly used Matlab, she was still able to work on the cluster computers.

Our projector stand was not tall or stable enough to project onto the lab bench, so to mitigate this risk we changed the projection surface to a poster board. We will now be creating a GUI for our final interactive poster.

Changes to Existing Design of the System:

We have made changes to our individual task assignments. Now, Suann is solely working on implementing Lucas Kanade, while Tanushree is focusing on blob detection. Further, Isha will be exploring different color spaces to decide which one best fits our purposes.

We received our parts well before the expected date, so we have started assembling our demo setup. This has allowed us to create a standardized bank of test images taken in our prospective demo environment. Isha and Tanushree are both using these images as input for testing their respective algorithms. Another advantage of having a primitive setup is that we were able to choose a color scheme for our GUI that suits our algorithms and prevents occlusions.

Updated Schedule