
Isabella Brahm (ibrahm), Devin Gund (deg), Cayla Wanderman-Milne (cwanderm)

Week 8 (3/5 - 3/8)

Bella: This week, I mainly focused on understanding how to simulate mouse interaction. I spent a fair bit of time setting up a VM that could run Windows and Visual Studio, then looked into WinForms. After our team collectively decided to use the OpenKinect software, I was able to switch to simulating mouse input on a Mac. I wrote Python scripts to perform left click, right click, and drag, get the screen size, and get the current cursor position, then uploaded them all to a shared Git repository. Additionally, I updated our timeline, which had to change because of our decisions about sensor positioning and using the OpenKinect software.

Cayla: This week, I primarily worked on the physical location algorithm for our project using the Kinect. Right now, we're working on detecting where the wall is, so that in the future we can detect a hand touching the wall. We determine where the wall is by reading depth data from the Kinect, taking the median depth across two horizontal lines in the middle of the depth stream, and deriving a plane from those two lines/depths. Once we know where the wall is, we can detect an object by looking at the depth of a given pixel in the depth stream and determining whether that pixel is on the plane (i.e., part of the wall) or not. This week I worked on debugging the part of the algorithm that determines where the wall-plane is, and on visually displaying on the depth stream whether the algorithm considers a pixel to be on the wall-plane or not.
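
As a rough sketch of that idea (the 512x424 frame layout, the helper names, and the 30 mm tolerance below are assumptions for illustration, not our exact code):

    // Sketch: fit a plane through the median depths of two horizontal lines
    // near the middle of the depth frame, then classify pixels against it.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    constexpr int WIDTH = 512, HEIGHT = 424;   // Kinect v2 depth resolution

    // Median depth of one row of the depth frame.
    float rowMedian(const std::vector<float>& depth, int row) {
        std::vector<float> vals(depth.begin() + row * WIDTH,
                                depth.begin() + (row + 1) * WIDTH);
        std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
        return vals[vals.size() / 2];
    }

    // Classify each pixel as wall (true) or not.
    std::vector<bool> classifyWall(const std::vector<float>& depth) {
        int rowA = HEIGHT / 2 - 40, rowB = HEIGHT / 2 + 40;
        float dA = rowMedian(depth, rowA), dB = rowMedian(depth, rowB);
        float slope = (dB - dA) / (rowB - rowA);        // depth change per row

        std::vector<bool> isWall(depth.size());
        for (int y = 0; y < HEIGHT; ++y) {
            float expected = dA + slope * (y - rowA);   // plane depth at this row
            for (int x = 0; x < WIDTH; ++x) {
                float d = depth[y * WIDTH + x];
                isWall[y * WIDTH + x] = std::fabs(d - expected) < 30.0f;  // ~30 mm tolerance
            }
        }
        return isWall;
    }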

Devin: This week, after native testing on my macOS computer and consultation with the team, I decided to transition our Kinect drivers from the standard Microsoft library to OpenKinect's open-source libfreenect2 library. This now allows us to support macOS, Linux, and Windows, meeting our goal of simplicity and interoperability. I then wrote a C++ program to interface with the Kinect through the libfreenect2 library and configure the sensors to transmit color, depth, infrared, and aligned color+depth data to the computer for display through a visualizer. This will serve as the starter file for our future algorithm development work. Finally, I created a Makefile to simplify the process of building and linking against the libfreenect2 and OpenGL libraries. All of this is stored in a GitLab repository for version control and collaboration between all group members. My personal commits are:
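
For reference, a condensed sketch of what the libfreenect2 setup looks like, based on the library's own Protonect example; our actual starter file adds the visualizer and error handling:

    #include <libfreenect2/libfreenect2.hpp>
    #include <libfreenect2/frame_listener_impl.h>
    #include <libfreenect2/registration.h>

    int main() {
        libfreenect2::Freenect2 freenect2;
        if (freenect2.enumerateDevices() == 0) return -1;   // no Kinect attached
        libfreenect2::Freenect2Device* dev = freenect2.openDefaultDevice();

        // Listen for color, infrared, and depth frames.
        libfreenect2::SyncMultiFrameListener listener(
            libfreenect2::Frame::Color | libfreenect2::Frame::Ir | libfreenect2::Frame::Depth);
        dev->setColorFrameListener(&listener);
        dev->setIrAndDepthFrameListener(&listener);
        dev->start();

        // Registration aligns the color and depth streams.
        libfreenect2::Registration registration(dev->getIrCameraParams(),
                                                dev->getColorCameraParams());
        libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);

        libfreenect2::FrameMap frames;
        for (int i = 0; i < 100; ++i) {                     // grab 100 frames
            listener.waitForNewFrame(frames);
            libfreenect2::Frame* rgb   = frames[libfreenect2::Frame::Color];
            libfreenect2::Frame* depth = frames[libfreenect2::Frame::Depth];
            registration.apply(rgb, depth, &undistorted, &registered);
            // ... hand the frames to the visualizer here ...
            listener.release(frames);
        }

        dev->stop();
        dev->close();
        return 0;
    }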

Week 9 (3/9 - 3/18)

Spring Break

Week 10 (3/19 - 3/25)

Bella: This week, I worked with Devin and Cayla on brainstorming the physical location algorithm, which is our biggest task at the moment. Additionally, I set up OpenKinect on my computer, got the mouse input working in C++ (which I had previously written in Python), and found a GUI library called wxWidgets that I can use to develop our testing interface. I am currently working on designing and implementing a basic testing/calibration interface.
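
As an illustration, a minimal sketch of how a left click, the cursor position, and the screen size can be obtained with the Quartz CGEvent/CGDisplay APIs on macOS (compile with -framework ApplicationServices); this shows the general approach rather than the exact structure of our mouse code:

    #include <ApplicationServices/ApplicationServices.h>

    // Simulate a left click at screen coordinates (x, y).
    void leftClick(double x, double y) {
        CGPoint point = CGPointMake(x, y);
        CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown,
                                                  point, kCGMouseButtonLeft);
        CGEventRef up   = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp,
                                                  point, kCGMouseButtonLeft);
        CGEventPost(kCGHIDEventTap, down);   // press
        CGEventPost(kCGHIDEventTap, up);     // release
        CFRelease(down);
        CFRelease(up);
    }

    // Current cursor position.
    CGPoint cursorPosition() {
        CGEventRef e = CGEventCreate(NULL);
        CGPoint p = CGEventGetLocation(e);
        CFRelease(e);
        return p;
    }

    // Main-display size in pixels.
    CGSize screenSize() {
        return CGSizeMake(CGDisplayPixelsWide(CGMainDisplayID()),
                          CGDisplayPixelsHigh(CGMainDisplayID()));
    }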

Cayla: This week, I worked on downloading and installing OpenKinect onto Windows so that the whole team can use the same framework despite having both Macs and Windows machines. I also brainstormed with Devin and Bella about the structure and organization of our code, i.e., laying out how the user interaction, physical location algorithm, virtual location algorithm, and calibration will interact with each other. We also worked on fleshing out the physical location algorithm and being able to detect the wall, and when a user is touching the wall.
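
As a hypothetical sketch of those module boundaries (the class and method names here are illustrative, not the ones in our repository):

    #include <vector>

    struct PhysicalPoint { float row, col, depth; };   // location in Kinect space
    struct VirtualPoint  { int x, y; };                // location in screen pixels

    // Physical location algorithm: find wall touches in a depth frame.
    class PhysicalLocationAlgorithm {
    public:
        virtual std::vector<PhysicalPoint> detectInteractions(const float* depthFrame) = 0;
        virtual ~PhysicalLocationAlgorithm() = default;
    };

    // Virtual location algorithm: map a physical touch to screen coordinates,
    // using data gathered during calibration.
    class VirtualLocationAlgorithm {
    public:
        virtual VirtualPoint toScreen(const PhysicalPoint& touch) const = 0;
        virtual ~VirtualLocationAlgorithm() = default;
    };

    // User interaction: turn virtual coordinates into mouse events.
    class UserInteraction {
    public:
        virtual void click(const VirtualPoint& p) = 0;
        virtual ~UserInteraction() = default;
    };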

Devin: This week, I worked on the physical algorithm code for detecting the surface plane. First, I implemented functionality to save and reload Kinect depth data from files, so that we can save example inputs and test our code without using the Kinect every time. Our theory is that we can detect interactions through discontinuities in the slope of the surface plane. The most difficult part has been compensating for the angle and position of the Kinect on the ground at the bottom of the surface, which skews the depth data exponentially as you move up the surface. My personal commits are:
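
As an illustration of the save/reload idea, a small sketch that writes a raw depth buffer to a binary file and reads it back; the file format here is an assumption, not necessarily what our code uses:

    #include <fstream>
    #include <string>
    #include <vector>

    // Dump a raw depth frame so tests can run without the Kinect attached.
    void saveDepthFrame(const std::string& path, const std::vector<float>& depth) {
        std::ofstream out(path, std::ios::binary);
        out.write(reinterpret_cast<const char*>(depth.data()),
                  depth.size() * sizeof(float));
    }

    // Reload a previously saved frame of known pixel count.
    std::vector<float> loadDepthFrame(const std::string& path, size_t pixels) {
        std::vector<float> depth(pixels);
        std::ifstream in(path, std::ios::binary);
        in.read(reinterpret_cast<char*>(depth.data()), pixels * sizeof(float));
        return depth;
    }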

Week 11 (3/26 - 4/1)

Bella: This week I worked on planning our testing and configuration interfaces and setting up wxWidgets. I ran through a number of tutorials to get comfortable with the library and learned how to add and manipulate some controls, which is the first step towards having working interfaces.

Cayla: This week, I worked on getting OpenKinect working, and worked with Devin on the physical algorithm. We've been working on improving the algorithm that determines what in the depthFrame is surface and non-surface. We have two algorithms for doing this. The first one compares each pixel's depth with the expected depth for the surface at that point, and marks each pixel as surface or non-surface based on that. The second algorithm compares the difference in depth between each pixel and its upper neighbor with what it expects the difference in depth to be for the surface at that point (i.e., it compares the depth derivative at each pixel to the expected depth derivative). We found that the second algorithm works better, and we are currently using it to determine what in the depth frame is surface and what is not.
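
A small sketch of the second (derivative-based) check; the expectedDelta argument and the 5 mm tolerance are placeholders for illustration:

    #include <cmath>
    #include <vector>

    constexpr int WIDTH = 512;   // Kinect v2 depth frame width

    // A pixel is surface if its vertical depth change matches the change the
    // surface model predicts at that row, within a small tolerance.
    bool isSurfacePixel(const std::vector<float>& depth, int x, int y,
                        float expectedDelta /* predicted depth change per row here */) {
        float here  = depth[y * WIDTH + x];
        float above = depth[(y - 1) * WIDTH + x];
        return std::fabs((here - above) - expectedDelta) < 5.0f;
    }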

Devin: This week, I implemented surface and abnormality detection for the physical detection algorithm. This took a lot of thinking and work because the angle and forced-perspective view of the Kinect mean that the surface does not follow a linear slope relative to the Kinect's camera. Ultimately, I implemented a power series regression to predict the location of the surface and find anomalies (things that are not the surface). My personal commits are:
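
As a rough sketch of the regression idea, a low-degree polynomial fit of depth against row by least squares; the degree, sampling, and tolerance in our actual code may differ:

    #include <array>
    #include <cmath>
    #include <utility>
    #include <vector>

    // Fit depth = c0 + c1*y + c2*y^2 to (row, depth) samples via normal equations.
    std::array<double, 3> fitSurface(const std::vector<std::pair<double, double>>& samples) {
        double A[3][4] = {};                            // augmented normal-equation matrix
        for (const auto& s : samples) {
            double powers[3] = {1.0, s.first, s.first * s.first};
            for (int i = 0; i < 3; ++i) {
                for (int j = 0; j < 3; ++j) A[i][j] += powers[i] * powers[j];
                A[i][3] += powers[i] * s.second;
            }
        }
        // Naive Gaussian elimination (fine for a 3x3 system).
        for (int i = 0; i < 3; ++i) {
            for (int k = i + 1; k < 3; ++k) {
                double f = A[k][i] / A[i][i];
                for (int j = i; j < 4; ++j) A[k][j] -= f * A[i][j];
            }
        }
        std::array<double, 3> c{};
        for (int i = 2; i >= 0; --i) {                  // back substitution
            double sum = A[i][3];
            for (int j = i + 1; j < 3; ++j) sum -= A[i][j] * c[j];
            c[i] = sum / A[i][i];
        }
        return c;
    }

    // A pixel is an anomaly if its depth deviates too far from the fitted surface.
    bool isAnomaly(const std::array<double, 3>& c, double row, double depth) {
        double predicted = c[0] + c[1] * row + c[2] * row * row;
        return std::fabs(depth - predicted) > 30.0;     // tolerance is illustrative
    }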

Week 12 (4/2 - 4/8)

Bella: Over the past week, in addition to talking through our algorithms for tap detection with Cayla and Devin, I worked on getting our testing and configuration interfaces working. We are using a C++ GUI library called wxWidgets. Currently we have two interfaces for users to interact with, one that simply indicates where to click for calibration, and another that randomly places a button on the screen so we can test how well our algorithm is working at different locations.
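
As a minimal sketch of the random-button test window in wxWidgets (the class names, window size, and button placement here are illustrative):

    #include <wx/wx.h>
    #include <cstdlib>

    class TestFrame : public wxFrame {
    public:
        TestFrame() : wxFrame(nullptr, wxID_ANY, "Tap Test",
                              wxDefaultPosition, wxSize(800, 600)) {
            button = new wxButton(this, wxID_ANY, "Tap me");
            button->Bind(wxEVT_BUTTON, &TestFrame::OnTap, this);
            MoveButton();
        }
    private:
        void OnTap(wxCommandEvent&) { MoveButton(); }   // relocate after each hit
        void MoveButton() {
            wxSize area = GetClientSize();
            button->Move(std::rand() % (area.GetWidth() - 100),
                         std::rand() % (area.GetHeight() - 40));
        }
        wxButton* button;
    };

    class TestApp : public wxApp {
    public:
        bool OnInit() override { (new TestFrame())->Show(true); return true; }
    };
    wxIMPLEMENT_APP(TestApp);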

Cayla: This week we primarily worked on getting our system ready for the demo, which meant being able to tell the coordinates of a finger touching the wall. We were already able to determine what was surface and what was not, so our next task was to tell whether a non-surface anomaly (i.e., a hand) was touching the wall or just hovering near it. We first worked on recognizing a significant non-surface anomaly by searching for a group of at least 700 non-surface pixels. We then decided to determine whether these pixels were touching the wall by looking at the variance of the depths of the pixels around the potential interaction point (i.e., the tip of the finger). If the variance is low, the depths are similar and the finger is touching the wall; if the variance is high, the depths differ more and the finger is not touching the wall.
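
A sketch of the variance test: sample depths in a small window around the candidate fingertip and compare their variance to a threshold (the window size and threshold below are placeholders, not our tuned values):

    #include <vector>

    constexpr int WIDTH = 512;   // Kinect v2 depth frame width

    // True if the depths around (cx, cy) are tightly clustered, which we take
    // to mean the fingertip is flush with the wall rather than hovering.
    bool isTouching(const std::vector<float>& depth, int cx, int cy) {
        const int r = 5;                       // half-width of the sample window
        double sum = 0, sumSq = 0; int n = 0;
        for (int y = cy - r; y <= cy + r; ++y) {
            for (int x = cx - r; x <= cx + r; ++x) {
                float d = depth[y * WIDTH + x];
                sum += d; sumSq += d * d; ++n;
            }
        }
        double mean = sum / n;
        double variance = sumSq / n - mean * mean;
        return variance < 25.0;                // low variance -> touching
    }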

Devin: This week I completed the physical detection algorithm for the Midpoint demo. The algorithm determines a pixel is an interaction if: it is an anomaly (does not match the surface regression), it is not on the edge of the surface, it is connected to a significant number of other pixels that are anomalies, and the depth variance with other pixels around it is low (so it is touching the surface). My personal commits are:

Week 13 (4/9 - 4/15)

Bella: This week, I worked on revamping the testing interface based on assumptions that are being made in our other algorithms. I also cleaned up the code and made it so that it can be easily instantiated from another file.

Cayla: This week I worked on the virtual algorithm. We decided to determine the height of the physical projected screen on the wall by determining the arc length of the Kinect's "wall curve" between the two calibration points. We then determine the virtual y-position of an interaction by finding the arc length between the bottom of the screen and the physical y-coordinate of the point, and comparing that to the total length of the curve. We determine the virtual x-coordinate of an interaction by looking at the physical x-coordinate and the x-coordinates of the edges of the screen at that y-coordinate.
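
As a rough sketch of the arc-length idea (helper names are illustrative, and the row step and depth values are assumed to be in comparable units):

    #include <cmath>
    #include <vector>

    // Arc length along the wall curve from rowStart up to rowEnd (rowStart < rowEnd),
    // where wallDepth[r] is the wall depth the Kinect sees at row r.
    double arcLength(const std::vector<double>& wallDepth, int rowStart, int rowEnd) {
        double length = 0;
        for (int r = rowStart; r < rowEnd; ++r) {
            double dDepth = wallDepth[r + 1] - wallDepth[r];
            length += std::sqrt(1.0 + dDepth * dDepth);   // one row step plus depth change
        }
        return length;
    }

    // Virtual y in [0, 1]: fraction of the screen's arc length below the touch row.
    double virtualY(const std::vector<double>& wallDepth,
                    int bottomRow, int topRow, int touchRow) {
        return arcLength(wallDepth, bottomRow, touchRow) /
               arcLength(wallDepth, bottomRow, topRow);
    }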

Devin: This week I implemented a number of changes to the virtual detection algorithm to optimize it and lower our latency. These changes include computing the surface bounds and regression using a fixed reference frame, reordering the variance check in the algorithm, and improving the depth smoothing and interaction detection values. My personal commits are:

Week 14 (4/16 - 4/22)

Bella:

Cayla: This week I continued to work on implementing the virtual algorithm. I determined that the Kinect sees the vertical edges on the sides of the screen as tilted but straight lines, meaning the Kinect sees the screen as a trapezoid. This is used to determine the x-coordinates of the left and right edges of the screen at arbitrary y-coordinates, so that the x-value of the virtual coordinates of arbitrary taps on the screen can be determined. This week I tested the virtual monitor with physical coordinate data that we got from the Kinect to make sure it's working.
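
A small sketch of the trapezoid interpolation (names are illustrative, not our actual code):

    struct Corner { double x, y; };   // calibration corner in physical (Kinect) coordinates

    // X position of a straight (tilted) screen edge at height y, interpolated
    // between its bottom and top calibration corners.
    double edgeXAt(const Corner& bottom, const Corner& top, double y) {
        double t = (y - bottom.y) / (top.y - bottom.y);
        return bottom.x + t * (top.x - bottom.x);
    }

    // Virtual x in [0, 1] for a touch at physical (x, y), given the four corners
    // bottom-left, top-left, bottom-right, top-right.
    double virtualX(const Corner& bl, const Corner& tl,
                    const Corner& br, const Corner& tr, double x, double y) {
        double left  = edgeXAt(bl, tl, y);
        double right = edgeXAt(br, tr, y);
        return (x - left) / (right - left);
    }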

Devin: This week I added a top-level UI using wxWidgets for starting/stopping live interaction detection. This involved adding multithreading to the program, where the graphics are on the main thread and the interaction detection is on a background thread. My personal commits are:
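
As a rough sketch of that threading split, assuming a std::thread worker and wxWidgets' CallAfter to hand results back to the UI thread (names are illustrative):

    #include <wx/wx.h>
    #include <atomic>
    #include <thread>

    class MonitorFrame : public wxFrame {
    public:
        MonitorFrame() : wxFrame(nullptr, wxID_ANY, "Virtual Monitor") {}

        void StartDetection() {
            running = true;
            worker = std::thread([this] {
                while (running) {
                    // ... read a depth frame and run interaction detection ...
                    CallAfter([this] { Refresh(); });   // update the UI on the main thread
                }
            });
        }

        void StopDetection() {
            running = false;
            if (worker.joinable()) worker.join();
        }

    private:
        std::atomic<bool> running{false};
        std::thread worker;
    };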

Week 15 (4/23 - 4/29)

Bella:

Cayla: This week we put all the pieces of the Virtual Monitor together (i.e., determining the physical coordinates of a tap, calibrating, and determining the virtual coordinates of a tap). I changed the virtual monitor to take an arbitrary number of calibration points in a grid, instead of just four calibration points at the corners of the screen. We are continuing to try different constants in our program to increase the accuracy of determining the position of taps.

Devin: This week I focused on integration, adding the mouse control functionality and integrating the calibration process (with Bella). By the end of the week, the calibration process was working in tests, although there were crash bugs to analyse. My personal commits are:

Week 16 (4/30 - 5/6)

Bella:

Cayla:

Devin: This week I resolved the crash bugs and successfully completed the program. In tests, the Virtual Monitor performs with a relatively low latency, although tweaks to accuracy can be made before the demo. My personal commits are: