Anushka’s Status Report for 4/23

This week was a tough week as I got COVID and had to participate in everything remotely. Since I was incapacitated, I mainly worked on the final presentation.

The main part of the presentation I’m working on is the tradeoffs. We made a lot of algorithmic tradeoffs, so I had to revisit all the old models: the pre-ML model, the SVM model with old arm data, and the time series classification models. Other things I wanted to mention are the usage of the Jetson and the different angles of the holograms. I designed new hologram sides with slots and holes so that we don’t have to glue the sides together and the sides are more secure.

Since we have to finish the presentation tomorrow, I am still on track to finish this week’s goals. In the next week, I hope to be able to go back in person, work on the hologram display, and reattach Tony to the exercise band.

Joanne’s Status Report for 4/23/2022

This week was a busy week since we are wrapping up our project and we had some COVID cases that delayed our work schedule.

Edward and I acquired black acrylic sheets to encase our hologram pyramid and display. We tried gluing the cut-out plexiglass (for the hologram); however, the glue we used from Tech Spark did not apply as nicely as we hoped (nor did it stick). We then thought of a new design to laser cut the hologram layout so it would have a puzzle-like attachment feature at the edges and could connect to the other sides of the hologram pyramid without all of the glue and tape. We drafted the new version to cut. There were some COVID-related issues in our group, so we are in the midst of working out a schedule so we can go into Tech Spark and cut the new version.

We are still working on our model gestures. Since our gesture algorithm did not work out and we are relying on our finger detection algorithm for our gestures, we moved noise filtering and data queueing to the Unity side of things. The model works well on the test data for “good” swipes and zoom gestures. However, during live use, noise causes a lot of erratic model movement that we are trying to resolve. The swipe gestures look okay live, but the pinches have some unexplained behavior. Resolving that has been my primary goal this week. To filter out noise, I have tried taking the average of the points coming in for a specific gesture and comparing that against new points to identify a factor for gesture movement. It seemed to work well in reducing noise in the swipes, but not in the pinches for some reason.
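As a rough sketch of this averaging idea (not our actual Unity code; the window size and threshold below are made-up placeholders), the filter looks roughly like this in Python:

```python
from collections import deque

WINDOW = 5            # number of recent points to average (assumed value)
MOVE_THRESHOLD = 3.0  # minimum averaged displacement to count as real movement (assumed value)

recent_points = deque(maxlen=WINDOW)

def filtered_movement(new_point):
    """Return an averaged displacement for the gesture, or 0.0 if it looks like noise."""
    recent_points.append(new_point)
    if len(recent_points) < WINDOW:
        return 0.0
    avg = sum(recent_points) / len(recent_points)      # average of points for this gesture
    displacement = recent_points[-1] - avg             # compare newest point to the average
    return displacement if abs(displacement) >= MOVE_THRESHOLD else 0.0
```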

I have been working on finalizing the web app UI and included visual components that help with the user interface (like a visualization of where your finger is on the trackpad, and battery info). As a group, we talked about collecting data for testing and have been working on our final presentation.

Team Status Report for 4/23/2022

This week, we had much to do. Redesigns, bugs, and COVID have pushed us behind schedule. Next week is our last slack week.

Edward and Joanne have been working on fine-tuning the finger detection and model rotation. The system does not work well on live “real” data and seems to have erratic and random behaviors at times. Hopefully, by the end of this week, we can have a really good version working. We will spend the majority of next week working on this.

Edward and Joanne have also bought black acrylic for the hologram. They attempted to glue the acrylic together, but the glue was not as sticky as we wanted. So, Anushka drafted a new version of the hologram using slots to fit the walls together like a puzzle. Hopefully, we can get this cut and built next week.

We’ve also been working on the final presentation. It’s a good thing we have some time before demo day, but we’re excited to present our findings and what we’ve done so far in the semester.

In good news, the hardware has not bugged out on us yet. It seems fairly reliable so far. We still need to properly sew the board onto the wristband. This should be done by early next week.

Next week will be an extremely busy week for us. Wish us luck!

Edward’s Status Report for April 23

This week, I worked on the final project presentation as well as further improving the gesture-to-3D-model action algorithm. Anushka has been sick this week, so Joanne and I have been trying to finish up everything before the demo. I have been collecting more training data to retrain our SVM whenever an issue comes up. I think the current model is able to distinguish between one and two fingers pretty well. Some of the tricks I used to improve robustness are: removing the noise label from the model (since it can get swipes and noise confused, and noise is easy to detect separately), and training on data where I just place my fingers without moving them or performing gestures.
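A rough sketch of this retraining step is below. The file names, feature layout, and label encoding are assumptions for illustration, not our actual pipeline, but it shows the key trick of dropping the noise class before fitting the SVM:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix and labels (0 = noise, 1 = one finger, 2 = two fingers).
X = np.load("sensor_features.npy")
y = np.load("finger_labels.npy")

# Drop the noise label from training; noise is handled by a separate check.
keep = y != 0
X, y = X[keep], y[keep]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```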

I started collecting data for the testing section of our presentation. We discussed as a group how to determine our system’s accuracy, gesture precision, and latency.

I also worked with Joanne to cut the acrylic for our hologram, but the glue that Tech Spark had looked really bad and tape would show up under light, so we redesigned our hologram to use slots instead of adhesives. I am not the best at cutting, so I hope Anushka recovers soon so she can cut and help construct the hologram.

The gesture algorithm has proven to be much harder than we thought, especially with just the two of us this week. Hopefully, next week we can have something demo-able. We are using up our slack weeks.

Anushka’s Status Report for 4/16

This week was a grind week for the team. We tried many different algorithms for determining what gesture is occurring. There are two things I tried this week:

1. We queue 5 data points of which fingers are currently being detected. From there, we determine which gesture is occurring. If at least 4 of the points agree on the number of fingers, then the gesture is classified accordingly (0 for noise/rest, 1 for swipes, and 2 for pinches). If the number of fingers goes from 1 to 2, then the gesture is a pinch out, and if the reverse occurs, then the gesture is a pinch in. I set up a few cases that took care of the noise, but ultimately, it still caused a lot of errors. The frequency of data collection is too low; a gesture can span only 2-10 data points, so the number of fingers detected before or after the gesture can affect the window of 5 and yield an inaccurate result. (See the first sketch after this list.)

2. I tried a time series classification library called sktime: https://sktime.org/. This library is really cool because I can give the model a set of inputs over time across different classes, and the model can predict which class a time series belongs to. Splitting the new arm data into training and testing sets, I was able to create a model with 92% accuracy that can distinguish between pinch in, pinch out, rest, swipe left, and swipe right. However, this model needs 24 data points per prediction, and as discussed before, there aren’t that many data points in a gesture, so we would have to start collecting data well before the gesture is performed. (See the second sketch after this list.)
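As a rough illustration of item 1, here is a minimal Python sketch of the 5-point queue with a majority vote. The label values follow the description above (0 = noise/rest, 1 = swipe, 2 = pinch), but the transition handling is simplified and the thresholds are not our tuned values:

```python
from collections import deque, Counter

window = deque(maxlen=5)  # last 5 finger-count readings

def classify_window(finger_count):
    """Push the latest finger count and return a gesture label, if one is detected."""
    window.append(finger_count)
    if len(window) < 5:
        return None
    label, freq = Counter(window).most_common(1)[0]
    if freq >= 4:                      # at least 4 of 5 points agree
        return {0: "rest/noise", 1: "swipe", 2: "pinch"}[label]
    if window[0] == 1 and window[-1] == 2:
        return "pinch out"             # finger count went from 1 to 2
    if window[0] == 2 and window[-1] == 1:
        return "pinch in"              # finger count went from 2 to 1
    return None
```

For item 2, a minimal sktime sketch is below, assuming each sample is a single univariate series of 24 readings stored in hypothetical .npy files; our actual features and classifier choice may differ:

```python
import numpy as np
from sktime.classification.interval_based import TimeSeriesForestClassifier

X = np.load("gesture_windows.npy")   # hypothetical array, shape (n_samples, 24)
y = np.load("gesture_labels.npy")    # hypothetical labels: pinch in/out, rest, swipe left/right

clf = TimeSeriesForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:800], y[:800])            # simple train/test split for illustration
print("test accuracy:", clf.score(X[800:], y[800:]))
```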

As a team, we’ve decided to work on different implementations until Sunday, then make a final decision on what algorithm we should select. It comes down to a tradeoff between accuracy and latency, both important metrics for our device.

I also helped Joanne with building a bigger hologram pyramid this weekend. We decided to hold off on the casing until we determine the final height of the pyramid, which we’ve decided to explore a bit more.

Currently, I am a bit behind schedule, but whatever tasks remain in development and testing have to be accomplished this week as our final presentation is due next week. Everything essentially has to be done, so I’m in a bit of a panic, but I have faith in Tony, Holly, and our team.

Team Status Report for 4/16/2022

This week, we attached the board and the sensors to a wristband we bought. We collected new data with the board on our arm and retrained our number-of-finger detection model with this new data. The sensors need to be elevated and slanted up so that they don’t confuse the skin on the arm for a finger.

We are having a bit of trouble detecting swipes and pinches reliably, so we have been discussing removing pinches entirely from our implementation. Our number-of-finger classifying SVM model gets confused between swipes and pinches, and that causes a lot of problems for gesture detection. We created a demo of a “swipes-only” system, and that seems to work much better, since it only has to detect “none” or “one finger.” Anushka and Edward (mostly Anushka) have been trying out various new techniques to detect fingers and/or gestures, but we are having trouble finding an optimal one. Early next week, we will discuss our future plans for finger detection.

Anushka and Joanne have cut new acrylic sheets for the hologram and will work on creating an enclosure this week.

We are a bit behind schedule this week as we are working on refining our “swipes-only” system. We may create two modes for our system: one that uses the swipes-only model so that we are guaranteed a smooth performance, and one that has both swipes and pinches, which isn’t as smooth but does have more functionality. We are also considering changing the gesture for the zooms while still using our two-finger model, but we have to decide by early next week whether and how to implement this.

We’ve decided to scrap the Jetson, as the latency of communication between the sensors and the Jetson was much higher than the latency between the sensors and our laptops. We were looking forward to using the Jetson for machine learning, but the network issues we’ve been having, together with the fact that we can train the models on our own laptops, led us to this conclusion. We have learned a lot from using it, and we hope that in the future we can find whatever the issue behind it is.

Joanne’s Status Report for 4/16/2022

This week we worked together a lot on testing our integrated product and fixing individual bugs from each of our parts. We mounted our board onto the wristband we bought. We have been testing our algorithm on the new data that reflects user input taken from the arm. Right now we see that there is a lot of noise in the data, which causes the model to move erratically. I am working on trying to smooth out the noise on my end, while Edward and Anushka work on filtering the data on their end as well.

I have created a new rotation algorithm that makes the model rotate at a much smoother pace. When tested on ideal data (data with no noise), it moves at a very smooth, consistent rate relative to how fast the user swiped. Before, I only had a rough rotation algorithm where the model moved based on the finger distances given to me. Now I take into account the timestamps of when the data was taken so I can approximate the relative speed of the user. This change only applies to rotations limited to the X axis.
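Here is a rough sketch of the timestamp-aware rotation idea, written in Python for illustration even though the real logic lives in Unity; the sensitivity constant is an assumed placeholder. The model is then rotated by this rate each rendered frame, so faster swipes rotate it faster:

```python
SENSITIVITY = 0.5  # degrees of rotation per (distance unit / second); assumed placeholder

def rotation_speed(prev_pos, curr_pos, prev_time, curr_time):
    """Approximate swipe speed from consecutive samples; the model rotates at a rate proportional to it."""
    dt = curr_time - prev_time
    if dt <= 0:
        return 0.0                                   # ignore out-of-order or duplicate samples
    return SENSITIVITY * (curr_pos - prev_pos) / dt  # rotation rate about the X axis, in degrees/second
```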

Due to some problems in gesture detection on the sensor side, we are now planning on getting rid of pinches, since the pinching and swiping motions confuse our ML model. Thus we are thinking of implementing rotation in all degrees to add additional functionality, and I have added that functionality for now. However, the translation of the finger locations does not map intuitively to the model rotation (it rotates in the right direction but not by the right angles). I am working on how to make the swipe made by the user look more like the 3D rotation we would expect to see. We have been talking as a group about what other functionalities we can include given what we have now and the time frame left. Some ideas are detecting taps, or creating a new zoom-in scheme that does not involve the traditional pinching motion.

I am also currently working on new ways of getting rid of the unexpected rotation spikes due to noise. I graphed the data points and decided to try averaging the data points in a swipe and using that average as the baseline for finger location, so that I can reduce the effect of noise in the data. I will test that implementation out this week.

Anushka and I have also recut the hologram from plexiglass to fit the dimensions of the display we are going to use for the presentation. We are planning to create the encasing (for light blocking) this week.

Edward’s Status Report for April 16

This week, I continued working on the finger and gesture detection and further integrated the wearable with the Webapp. Our data is extremely noisy, so I have been working on smoothing out the noise.

Originally, the Photon was unable to send the timestamp at which it collected all 10 sensor values, since the RTC on the Photon does not have millisecond accuracy. But I found a way to get an approximate time of sensor data collection by summing up elapsed time in milliseconds. This made the finger detection data sent to the Webapp appear smoother.
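A conceptual sketch of this timestamp approximation, written in Python rather than actual Photon firmware and using placeholder values, looks roughly like this: take one coarse RTC reading, then accumulate elapsed milliseconds between samples to tag each reading with a finer timestamp.

```python
import time

base_epoch_ms = int(time.time()) * 1000   # stand-in for one coarse, second-level RTC reading
elapsed_ms = 0

def tag_sample(sample, loop_duration_ms):
    """Attach an approximate millisecond timestamp to a sensor sample."""
    global elapsed_ms
    elapsed_ms += loop_duration_ms        # accumulated milliseconds since the base reading
    return {"data": sample, "timestamp_ms": base_epoch_ms + elapsed_ms}
```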

I am not super confident in our SVM implementation of finger detection, so I want to remove pinches entirely in order to reduce noise. Two fingers that are close together often get confused with one finger and vice-versa. Because of this fact, many swipes get classified as pinches, which throws off everything. So, I decided to work on an implementation without pinches and it seemed to work much better. As a group, we are deciding whether or not this is the direction to go. If we are to continue with only detecting swipes, we will need to have a full 3D rotation rather than only rotating along the y-axis.

This week, we also attached the board to a wristband we ordered and have been working off of that. The sensors need to be elevated slightly, and I am not sure how annoying that will be to a user. This elevation also means users can’t just swipe on their skin casually; they need to deliberately place their finger so that the sensors can catch it.

This week, I will work on trying to filter and reduce noise in our swipes-only implementation, which I think we may stick with for the final project. This will be a tough effort.

I am a bit behind schedule as the project demo comes closer. Detected finger locations are a bit shaky, and I’m not sure if I can figure out a way to clean them up. I will work with Joanne to come up with algorithms to reduce noise, but I’m not too confident we can make a really good one. Removing pinches limits the functionality of our device, and I’m not sure if it’s even “impressive” with just swipes. We will need to either explore more gestures or broaden our use case to make up for this fact.

Team Status Report for 4/10/2022

This week was demo week, so we worked as a group on integrating our individual setups and testing our prototype. Before the demo we had a small hardware problem that turned out to be a soldering mistake; Edward fixed this before our demo, thankfully. We further worked on a better plan for our gesture detection algorithm. The new finger detection algorithm using the ML approach provided better accuracy than our previous approach, so we are hoping that the new gesture detection algorithm we planned out will also provide better results.

We realized that our data was lagging between the hardware and the Webapp due to some spotty WiFi problems in the lab. Professor Tamal mentioned during the demo that he could help us out with that, so we are planning to talk to him soon. Other than that, we also planned out a list of things to add to the web application, such as battery life, battery status, and finger location. We are also planning to implement a new strategy for translating the data we get from gesture detection into smoother model movements. Because of the large amount of data and noise, the translations need to account for that, so we have planned out a new approach and are planning to implement it this week.

We are planning to mount our board to the wristband we ordered soon (probably tomorrow). We got feedback from our demo session that we should see as soon as possible how live data taken from our arm translates to our model. Thus we are planning to test that as a group early in the week.

We think we are on track and are currently refining the parts of our project.

Joanne’s Status Report for 4/10/2022

This week was demo week, so we focused on integrating the parts of our project. We conducted more testing on how the live sensor data translates to our model transformations (rotate and zoom). We tried to mimic as many gestures as possible from different angles and observed whether or not they performed the correct functionality on the model. We found out there was a slight lag between the user input and the model transformations because data transmission was delayed by the spotty WiFi in the lab. Professor Tamal mentioned that he could help us out with the connectivity issue, so we are planning to talk to him this week about that as well.

Other than demo work, I also started taking in more info from the hardware to display things such as battery life and on/off status. Once I receive the information via MQTT from the hardware, it updates itself on the web app. I am also planning this week to create a visual layout of where the finger is in relation to our sensor layout.
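A minimal sketch of the subscription side of this MQTT flow is below, using the Python paho-mqtt client; the broker address and topic names are placeholders, not our real configuration.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # e.g. push the new battery level / on-off status to the web UI
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()                        # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.com", 1883)    # placeholder broker address
client.subscribe("wearable/battery")          # hypothetical topic name
client.subscribe("wearable/status")           # hypothetical topic name
client.loop_forever()
```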

I thought of a new algorithm to help translate the data into smoother model movements. I could not translate my new approach into code yet due to Carnival week. However, I hope this new approach will help the model move more smoothly even in the presence of noise, and at a rate more consistent with how fast the user is swiping/pinching.

I believe we are on track, and as a group this week we are also planning to mount our hardware onto a wristband so that we can see what actual data taken from our arm would look like.