Anushka’s Status Report for 4/30

This week, I mainly focused on different hologram designs. We have three options right now (a rough sizing sketch follows the list):

  1. The angles in the pyramid are 53-53-127-127, and the dimensions are for a hologram that fits an iPad.
  2. The angles in the pyramid are 53-53-127-127, and the dimensions are for a hologram that fits a monitor.
  3. The angles in the pyramid are 60-60-120-120, and the dimensions are for a hologram that fits an iPad.
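
To compare the options, a small geometry helper can turn an angle choice and a display size into panel dimensions. This is only a rough sketch under assumptions that aren't in this report: each side of the pyramid is an isosceles trapezoid whose long edge runs along one edge of the display, the listed angles are the trapezoid's interior angles, and the example widths below are placeholders rather than our measured dimensions.

```python
import math

def panel_height(base_width_mm, top_width_mm, base_angle_deg):
    """Slant height of one trapezoidal panel, given its base/top widths and base angle."""
    horizontal_offset = (base_width_mm - top_width_mm) / 2.0
    return horizontal_offset * math.tan(math.radians(base_angle_deg))

# Placeholder widths purely for illustration (not our actual iPad/monitor measurements).
for angle in (53, 60):
    h = panel_height(base_width_mm=160, top_width_mm=30, base_angle_deg=angle)
    print(f"{angle}-{angle}-{180 - angle}-{180 - angle} panel: slant height ~ {h:.0f} mm")
```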

So far, we have tried all three, and the last one works the best, but only in total darkness. Next week, we're going to test new angles before demo day, then probably pick the one with the greatest visual effect. I've also been working on the final sewing of the device onto the wristband. We adjusted the height of the sensors since our hand was interfering with the input. I have also been working on the poster for our final demo.

This is the final official status report of the semester. We have all the parts completed sufficiently, but we know there are always areas of improvement. We're going to try to make minor changes throughout the week before demo day, but otherwise, we have a completed product! I am most concerned about how demo day will go overall. We have tested our product amongst ourselves, but we are aiming to test it on several participants before demo day and the final paper. We are excited to show everyone our progress and what we've learned!

Anushka’s Status Report for 4/23

This week was a tough week as I got COVID and had to participate in everything remotely. Since I was incapacitated, I mainly worked on the final presentation.

The main part of the presentation I'm working on is the tradeoffs. We made a lot of algorithmic tradeoffs, so I had to revisit all the old models: the pre-ML model, the SVM model trained on the old arm data, and the time series classification models. Other things I wanted to mention are the use of the Jetson and the different hologram angles. I designed new hologram sides with slots and holes so that we don't have to glue the sides together and the sides are more secure.

Since we have to finish the presentation tomorrow, I am still on track to finish this week's goals. In the next week, I hope to be able to go back in person, work on the hologram display, and reattach Tony to the exercise band.

Anushka’s Status Report for 4/16

This week was a grind week for the team. We tried many different algorithms for determining what gesture is occurring. There are two things I tried this week:

  1. We queue 5 data points of what fingers are currently being detected. From there, we determine what gesture is occurring. If at least 4 of the readings agree on the number of fingers, then the gesture is classified as such (0 for noise/rest, 1 for swipes, and 2 for pinches). If the number of fingers goes from 1 to 2, then the gesture is a pinch out, and if the reverse occurs, then the gesture is a pinch in. I set up a few cases that took care of the noise, but ultimately, it still caused a lot of errors. The frequency of data collection is too low; a gesture can span only 2-10 data points, so the number of fingers detected before or after the gesture can affect the window of 5 and yield an inaccurate result. (A sketch of this windowing approach follows the list.)

  2. I tried a time series classification library called sktime: https://sktime.org/. This library is really cool because I can give the model a set of inputs over time across different classifications, and the model can predict which class a time series belongs to. Splitting the new arm data into training and testing sets, I was able to create a model with 92% accuracy that can distinguish between pinch in, pinch out, rest, swipe left, and swipe right. However, this model needs 24 data points in advance, and as discussed above, there aren't that many data points associated with a gesture, so we would have to predict the gesture a considerable amount of time before it is performed. (A hedged sketch of this approach also follows the list.)
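
As a rough illustration of approach 1, here is a minimal sketch of the 5-sample voting window; the vote threshold of 4 and the 0/1/2 class encoding come from the description above, but the class and variable names are placeholders.

```python
from collections import Counter, deque

WINDOW = 5     # recent finger-count readings to vote over
MIN_VOTES = 4  # readings that must agree before we commit to a class

class GestureVoter:
    """Sketch of the voting window described in approach 1."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)
        self.prev_count = None  # last stable finger count (0, 1, or 2)

    def update(self, finger_count):
        """Feed one finger-count reading; return a gesture label or None."""
        self.recent.append(finger_count)
        if len(self.recent) < WINDOW:
            return None  # not enough readings yet

        count, votes = Counter(self.recent).most_common(1)[0]
        if votes < MIN_VOTES:
            return None  # window too noisy to classify

        if self.prev_count == 1 and count == 2:
            gesture = "pinch out"  # finger count went 1 -> 2
        elif self.prev_count == 2 and count == 1:
            gesture = "pinch in"   # finger count went 2 -> 1
        elif count == 1:
            gesture = "swipe"
        else:
            gesture = "rest"       # 0 fingers / noise
        self.prev_count = count
        return gesture
```

For approach 2, the training loop is roughly the following. This is a hedged sketch rather than our actual script: the classifier choice and import path vary across sktime versions, the input is assumed to be 24-sample windows stored as a 3D array, and the file names are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sktime.classification.kernel_based import RocketClassifier  # classifier choice is an assumption

# Hypothetical files: X has shape (n_windows, n_sensors, 24); y holds the five gesture labels.
X = np.load("arm_windows.npy")
y = np.load("arm_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RocketClassifier()
clf.fit(X_train, y_train)
print("accuracy:", (clf.predict(X_test) == y_test).mean())
```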

As a team, we’ve decided to work on different implementations until Sunday, then make a final decision on what algorithm we should select. It comes down to a tradeoff between accuracy and latency, both important metrics for our device.

I also helped Joanne build a bigger hologram pyramid this weekend. We decided to hold off on the casing until we determine the final height of the pyramid, which we plan to explore a bit more.

Currently, I am a bit behind schedule, but whatever tasks remain in development and testing have to be accomplished this week, as our final presentation is due next week. Everything essentially has to be done, so I'm in a bit of a panic, but I have faith in Tony, Holly, and our team.

Anushka’s Status Report for 4/10

This week, we had our interim demos. I worked on updating our schedule, which can be seen in our Interim Demo post. I was also in charge of making sure that we had all the parts together for the demo.

The night before demos, I attempted to work with the Jetson Nano again. I was able to connect to the Internet, but I was having a tough time downloading Python packages in order to train the machine learning model. I thought it might have been related to location, so I tested the connection in the UC with Edward. However, I still wasn’t able to download packages, even though I was connected. I will be speaking with Professor Mukherjee next week to see if he is able to help.

The team decided to take a small break for Carnival, but we will get back to working on integration and the machine learning model next week. We created a state diagram of how our gesture detection algorithm should work after detecting how many fingers are present. Once the gesture is determined, we will use metrics such as minimum or maximum to decide how much of the gesture is being performed. We have the foundations for both, but we have more information on the latter than the former.

State Diagram of Gesture Detection Algorithm. Swipes refer to the finger detection algorithm returning 1 finger, pinches returning 2, and none returning 0 or undetermined
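
A minimal code rendering of what the diagram implies, based only on the caption and the paragraph above; the exact transitions and the choice of min vs. max for the magnitude metric are assumptions, not the actual diagram.

```python
def gesture_state(finger_count):
    """Map the finger-detection output to a gesture family (per the caption above)."""
    if finger_count == 1:
        return "SWIPE"
    if finger_count == 2:
        return "PINCH"
    return "NONE"  # 0 fingers or undetermined

def gesture_magnitude(readings, state):
    """Rough "how much of the gesture" metric; using min for swipes and max for pinches is a guess."""
    if state == "SWIPE":
        return min(readings)
    if state == "PINCH":
        return max(readings)
    return 0
```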

We are currently on schedule, but since it’s the final few weeks of building, I want to push myself to finish our project by the end of next week and begin final testing. We are hoping to finish the gesture detection algorithm by Tuesday so that we can make incremental improvements for the remainder of our Capstone time.

Interim Demo Schedule

Here is a copy of our most recent schedule:

It can also be found here.

Major tasks that are remaining include:

  • Improving the finger and gesture detection algorithms using the machine learning models we’ve been working with over the past few weeks,
  • Providing the ability to upload 3D models from the web application,
  • Testing our sensors once mounted on the wristband,
  • Cutting the final version of the hologram, and
  • Collecting latency data.

Team Status Report for 4/2

This week was a better week for the team. We were able to tackle many problems regarding our capstone.

Our biggest concern was the Jetson. After reading online tutorials on using Edimax, we were able to finally connect the Jetson to WiFi. Next week, we’ll be focusing on connecting the sensors to the Jetson and the Jetson to the Unity web application so that we can accomplish our MVP and start formally testing it.

Since we have time now, we went back to our gesture algorithm and sought to improve the accuracy. We explored different machine learning models and decided to see if Support Vector Machines (SVMs) would be effective. On the data that we already collected, the model seems to be performing well (95% accuracy on cross-validation). However, we know that the data is not representative of the different locations and speeds of the gestures. We collected more data and plan on training our model on it over the next week. Training takes a lot of computational power, so we will try it on either a Jetson or a Shark machine. AWS was also suggested to us, and we might look into that as well.
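
For reference, the SVM experiment looks roughly like the sketch below, assuming scikit-learn; the feature layout (one row of sensor distance readings per sample, labeled with the finger count), the kernel, and the file names are placeholders rather than our exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder files: each row of X is one snapshot of sensor readings; y is the finger count.
X = np.load("sensor_snapshots.npy")
y = np.load("finger_labels.npy")

clf = SVC(kernel="rbf")  # kernel choice is an assumption
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validation accuracy:", scores.mean())
```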

The biggest risk now is latency. We want to make sure that everything is fast with the addition of the Jetson. We have a Raspberry Pi as a backup, but we're hoping that the Jetson works. Therefore, we haven't made any changes to our implementation. We also went ahead and ordered the band to mount the sensors on. We plan on collecting more data and training on it next week, after we find an effective way to train the model.

We should have a working(ish) demo for the interim demo on Monday. We also seem to be on track, but we will need to further improve our detection algorithms and refine our approach to have a final, usable product.

Anushka’s Status Report for 4/2

This week, we worked more on the gesture recognition algorithm. We figured it would be best to go back to the basics and figure out a more concrete finger detection algorithm, then develop a gesture algorithm on top of that.

Currently, we have abandoned the finger detection algorithm in favor of tracking either the minima or maxima in the signal, determining the relationship between them, and giving a metric to Unity to perform a translation. However, this metric is highly inaccurate. Edward suggested using SVMs for finger detection. The difference between pinches and swipes is the number of fingers present, so we can use existing data to train a model that tells us whether the sensor readings represent one, two, or no fingers.

I added some new, more comprehensive data to the existing data set. I also added some noise so that the model knows what to classify when there are too many distractions.
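
The noise step could be read two ways: adding noisy copies of existing samples, or adding pure-noise samples labeled as their own class. Here is a minimal sketch of the first reading, with a placeholder noise level.

```python
import numpy as np

def augment_with_noise(X, y, noise_std=5.0, copies=1, seed=0):
    """Append Gaussian-noised copies of the samples so the model also sees "distracted" readings."""
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, noise_std, size=X.shape) for _ in range(copies)]
    return np.vstack([X, *noisy]), np.concatenate([y] * (copies + 1))
```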

Afterwards, I trained the model on different subsets of the new data combined with the old data. The reason was that training on the new data took a lot of time: it took over 4 hours and a lot of power to train on 200 lines of new data.

Next week, I’m going to train the model on the Jetson with all the data. Jetsons are known for high machine learning capabilities, so maybe using it will make our computation go faster. We managed to add wifi to the Jetson this week, so it’ll be easy to download our existing code from Github. I am concerned about training the new model with the new data. With only the old data, we have a 95% accuracy, but hopefully with the new data, we’ll be prepared for more wild circumstances.

I think I’m personally back on schedule. I want to revisit the hologram soon so that we can complete integration next week. I’m excited to hear feedback from interim demo day since we have time for improvement and will likely add or improve parts of our project after Monday.

Anushka’s Status Report for 3/26

This week was a cool week. I learned a lot from the ethics lesson on Wednesday. After the discussion with the other teams, I learned that the biggest concern for most engineering projects is data collection and data usage, which is something that may be lacking in our project explanation. I will keep this in mind for our final presentation.

I spent a lot of time improving the gesture recognition algorithm. From a single data collection, we are most likely not able to identify which gesture is being performed. I improved the algorithm by looking at the frequency of the gestures guessed over n data collections. The accuracy improved for every gesture except zoom out, which makes sense because the beginning of that gesture looks like a swipe.

We collected more data so that we can see if the algorithm fits different variations of the gesture. We noticed that there is high variability in the accuracy of our algorithm based on the speed and location at which we move our fingers. To accommodate these two discoveries, I decided to look into varying the n data collections and the order of the polynomial that the data is currently fitted to. I am working with the data in Excel and plan on looking at statistics to determine which combination yields the best results.

Screenshot of pinch-in data, comparing n data collections against the order of the fitted polynomial
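
Mechanically, the sweep looks something like the sketch below, using numpy's polyfit on a synthetic stand-in signal; the ranges of n and polynomial order, and whether the fit runs over time or over the sensor axis, are placeholders rather than the actual Excel analysis.

```python
import numpy as np

def fit_error(window, order):
    """Mean squared residual of a degree-`order` polynomial fit to one window of readings."""
    x = np.arange(len(window))
    coeffs = np.polyfit(x, window, order)
    return float(np.mean((np.polyval(coeffs, x) - window) ** 2))

# Illustrative sweep on a fake signal standing in for one recorded gesture.
stream = 200 + 100 * np.sin(np.linspace(0, 3, 60))
for n in (5, 7, 9, 11):
    for order in (2, 3, 4):
        errors = [fit_error(stream[i:i + n], order) for i in range(0, len(stream) - n, n)]
        print(f"n={n}, order={order}: mean residual {np.mean(errors):.2f}")
```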

I think that although this is a slow week, I’m going to be working hard to improve the algorithm before the interim demo day. Although our project is in the process of being integrated, this is a critical part of achieving our design metrics. I’m planning on meeting with my team outside of class on Monday and Tuesday so that we can work together to improve the algorithms.

Apart from algorithm updates, I need to talk to a professor about the Jetson. I’ve started playing with the Raspberry Pi, which is easier to work with since I have prior experience. I will spend Wednesday working on this part. Next week will surely be busy, but these deliverables are critical to the success of the interim demo.

Anushka’s Status Report for 3/19

This week was a productive week for our capstone project. Over Spring Break, I began working on building the gesture recognition algorithms. That week, we generated some sample data from a few gestures, so I observed the general behavior of the data points. I noticed that on an x-y plane, if the y-axis is where the sensors are lined up and the x-axis is the distance that the sensors are measuring, then a finger creates a sideways U-shape, as seen in the picture below. If two fingers are present, then the U-shapes intersect to create a sideways W-shape.

Figure 1: Example of Swiping Data

I began looking at which properties of this graph stayed consistent. With both shapes, I wanted to focus on the local minima of the curves as a method of finger identification. However, the local minima would sometimes be inaccurately calculated, or the values were arranged in an increasing or decreasing manner such that no local minimum was detected. This was especially the case when two fingers were present. However, Edward pointed out that even though local minima were hard to detect with two fingers present, a local maximum was always present where the two Us intersected. An example is shown in the image below.

Figure 2: Example of Pinching Data

After observing a few examples of pinching data, I reasoned that the velocity of that maximum could also serve as an indicator of which direction the user is pinching and how fast their fingers are moving. If no local maxima are present, then we can guess that the gesture is a swipe, calculate the local minimum's velocity, and determine which direction the finger is moving and how fast it is swiping. We have yet to code the specifics of each gesture, but we are confident about this method of identification and will most likely move forward with it.
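
A rough sketch of this idea, assuming SciPy is available and that one frame of data is the distance reading at each sensor position; the prominence threshold and the function names are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

def classify_frame(distances):
    """Label one frame as a pinch or swipe candidate and return the index to track.

    One finger shows up as a single dip (local minimum) in the distance profile;
    two fingers produce two dips with a local maximum in between.
    """
    distances = np.asarray(distances, dtype=float)
    minima, _ = find_peaks(-distances, prominence=10)  # dips = finger candidates
    maxima, _ = find_peaks(distances, prominence=10)   # bump between two dips
    if len(minima) >= 2 and len(maxima) >= 1:
        return "pinch", int(maxima[0])   # track the in-between maximum
    if len(minima) == 1:
        return "swipe", int(minima[0])   # track the single minimum
    return "none", None

def extremum_velocity(prev_index, cur_index, dt):
    """Change in the tracked index per second; the sign gives the direction of motion."""
    if prev_index is None or cur_index is None:
        return 0.0
    return (cur_index - prev_index) / dt
```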

I also spoke with Professor Savvides about machine learning methods that would serve as viable solutions. He suggested Markov methods, but those were unfamiliar to me. Professor Sullivan also recommended curve fitting, which is why you see a few polynomials fitted to the graphs above. We are going to look into that method over the next week, but at least we have a starting point, as opposed to before.

Because of the work put into gesture recognition, I would say we are back on track. I put the Jetson Nano aside for a week because I couldn't get it to work on a monitor again, so that will definitely be my first task. I might reach out to a professor if I'm still struggling with the Jetson by Monday, because that will be my main deliverable next week. I see us bringing all the parts together very soon, and it's something I'm definitely looking forward to.

Anushka’s Status Report for 3/5

This week was definitely a busy week. Our goal was to make as much progress before Spring Break, as we aren’t able to work on the hardware side of our project over break. I began this week by going over what was needed for the design review document. I made notes on what I wanted to cover in the presentation, which we ultimately incorporated into the final paper.

I began working on the Jetson Nano again. I had a feeling that something was wrong with how I wrote the SD card, so I requested one from the professors on Wednesday to try. Once I rewrote it, we tried connecting the Nano to the monitor, but with no success. Edward suggested I try using minicom as an interface to see the Nano, and we were both able to successfully install and run the package, thus finally giving us an SSH-like way of accessing the Nano.

I added details to the design report, including the gesture recognition algorithm and the hologram pyramid design choices. There were more things I wanted to do with the report, especially on the gesture recognition side, and after we get our feedback, I plan on incorporating them into the paper along with my changes. This is mostly for us, so that we understand the reasoning behind our choices and can eliminate options if they don't work.

I feel like this is the first week I'm behind. With the sensors coming in on Monday, most of us leaving on Friday, and the paper also due on Friday, there was not a lot of time to test the sensors and gauge whether our current algorithm works. The team talked to the professor about ideas on gesture recognition, and machine learning was suggested to a) help identify zooming actions, and b) help with user calibration in the beginning. I'm not too familiar with models that can make predictions on a stream of data, so I plan on talking to Professor Savvides about potential options. I would say the gesture algorithm is the biggest risk at this point, because if it doesn't work, we will have to choose a machine learning model and train it on a huge dataset that we would have to produce ourselves. I think this will be my main focus over the next two weeks so that we can test as soon as we come back from Spring Break.