Anushka’s Status Report for 4/30

This week, I mainly focused on different hologram designs. We have three options right now:

  1. The angles in the pyramid are 53-53-127-127, and the dimensions are for a hologram that fits an iPad.
  2. The angles in the pyramid are 53-53-127-127, and the dimensions are for a hologram that fits a monitor.
  3. The angles in the pyramid are 60-60-120-120, and the dimensions are for a hologram that fits an iPad.

So far, we have tried all three, and the last one works the best, but only in total darkness. Next week, we're going to test new angles before demo day, then pick the one with the greatest visual effect. I've also been working on the final sewing of the device onto the wristband. We adjusted the height of the sensors since our hand was interfering with the input. I have also been working on the poster for our final demo.

This is the final official status report of the semester. We have all the parts completed sufficiently, but we know there are always areas for improvement. We're going to make minor changes throughout the week before demo day, but otherwise, we have a completed product! I am most concerned about how the demo will go overall. We have tested our product amongst ourselves, but we are aiming to test it on several participants before demo day and the final paper. We are excited to show everyone our progress and what we've learned!

Joanne’s Status Report for 4/30/2022

This week was final presentation week, so as a group we worked a lot on polishing the slides and testing our prototype. We conducted a lot of testing on latency, performance, and accuracy, and that was our focus for the latter half of the week.

In the early part of the week, Edward and I took time to debug our Unity/gesture code to provide better model translations. Previously, there was a bug that caused irregular spikes in the model's movement when given gestures. We went through our whole codebase and figured out what caused the issue: we were not resetting the appropriate values after the completion of one gesture, which threw off the data calculations. Our integrated model now responds fairly well to gestures!

Since then, we have been focusing on testing and the hologram portion. I added new features to the web app, including changing models (there are now three models that the user can switch between) and incorporating our previous sensor data visualization into the current web app. This week I will continue to finish up the UI for the web app and help with finalizing other portions of the project, such as the display/hologram creation, as well as the final report and poster.
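For illustration, here is a minimal sketch of the kind of state-reset fix described above. Our actual code lives on the Unity/C# side; the Python below and all of its names are hypothetical, not our real variables.

```python
# Hypothetical sketch: per-gesture accumulators must be cleared once a
# gesture completes, otherwise leftover values bleed into the next gesture's
# calculations and show up as sudden spikes in the model's movement.
class GestureState:
    def __init__(self):
        self.reset()

    def reset(self):
        self.points = []         # finger samples for the gesture in progress
        self.start_time = None   # timestamp of the first sample
        self.total_delta = 0.0   # accumulated movement applied to the model

    def finish_gesture(self):
        delta = self.total_delta
        self.reset()             # the missing step: clear state for the next gesture
        return delta
```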

Anushka’s Status Report for 4/23

This week was a tough week as I got COVID and had to participate in everything remotely. Since I was incapacitated, I mainly worked on the final presentation.

The main part of the presentation I'm working on is the tradeoffs. We made a lot of algorithmic tradeoffs, so I had to revisit all the old models: the pre-ML model, the SVM model trained on the old arm data, and the time series classification models. Other things I wanted to mention are the usage of the Jetson and the different hologram angles. I also designed new hologram sides with slots and holes so that we don't have to glue the sides together, and the sides are more secure.

Since we have to finish the presentation tomorrow, I am still on track to finish this week's goals. Next week, I hope to be able to go back in person, work on the hologram display, and reattach Tony to the exercise band.

Joanne’s Status Report for 4/23/2022

This week was a busy week since we are wrapping up our project, and we had some COVID cases that delayed our work schedule.

Edward and I acquired black acrylic sheets to encase our hologram pyramid and display. We tried gluing the cut-out plexiglass (for the hologram); however, the glue we got from Tech Spark did not apply as nicely as we hoped (nor did it stick). We then came up with a new design for laser cutting the hologram layout so that it has a puzzle-like attachment feature at the edges and can connect to the other sides of the hologram pyramid without all of the glue and tape. We drafted the new version to cut. There were some COVID-related issues in our group, so we are in the midst of working out a schedule so we can go to Tech Spark and cut the new version.

We are still working on our model gestures. Since our original gesture algorithm did not work out and we are relying on our finger detection algorithm for gestures, we moved noise filtering and data queueing to the Unity side of things. The model works well on test data for "good" swipes and zoom gestures. However, during live use, noise causes a lot of erratic model movement that we are trying to resolve. The swipe gestures look okay live, but the pinches have some unexplained behavior. Resolving that has been my primary goal this week. To filter out noise, I have tried taking the average of the points coming in for a specific gesture and comparing against it to identify a factor for the gesture movement (a rough sketch follows this paragraph). It seemed to work well in reducing noise for swipes, but not for pinches, for some reason.
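A rough sketch of that averaging idea, written in Python for brevity (the real filtering runs in Unity); the threshold value and function name are illustrative assumptions.

```python
# Illustrative noise filter: average the finger positions reported during one
# gesture and discard samples that deviate wildly from that average, so a few
# noisy readings don't produce large jumps in the model's movement.
def movement_factor(points, noise_threshold=0.5):
    """points: 1-D finger positions collected for the current gesture."""
    if len(points) < 2:
        return 0.0
    avg = sum(points) / len(points)
    filtered = [p for p in points if abs(p - avg) <= noise_threshold]
    if len(filtered) < 2:
        return 0.0
    # Net movement across the surviving samples drives the model translation.
    return filtered[-1] - filtered[0]
```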

I have been working on finalizing the web app UI and included visual components that help with the user interface (like a visualization of where your finger is on the trackpad, and battery info). As a group, we talked about collecting data for testing and have been working on our final presentation.

Anushka’s Status Report for 4/16

This week was a grind week for the team. We tried many different algorithms for determining what gesture is occurring. There are two things I tried this week:

  1. We queue 5 data points of which fingers are currently being detected. From there, we determine what gesture is occurring. If at least 4 of the readings agree on the number of fingers, the gesture is classified accordingly (0 for noise/rest, 1 for swipes, and 2 for pinches). If the number of fingers goes from 1 to 2, the gesture is a pinch out, and if the reverse occurs, it is a pinch in (see the first sketch after this list). I set up a few cases that took care of the noise, but ultimately, it still caused a lot of errors. The frequency of data collection is too low; a gesture can span only 2-10 data points. However, the number of fingers detected before or after the gesture can affect the window of 5 and yield an inaccurate result.

  2. I tried using a time series classification library called sktime: https://sktime.org/. This library is really cool because I can give the model a set of inputs over time across different classifications, and the model can predict which class a time series belongs to. Splitting the new arm data into training and testing sets, I was able to create a model with 92% accuracy that can distinguish between pinch in, pinch out, rest, swipe left, and swipe right. However, this model needs 24 data points up front, and as discussed before, there aren't that many data points associated with a gesture, so we would have to predict the gesture a considerable amount of time before it is actually performed (a second sketch follows this list).
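Here is a minimal Python sketch of approach 1. The 5-sample window, the 4-of-5 voting rule, and the class meanings come from the description above; the helper names and the exact transition check are illustrative.

```python
# Illustrative sketch of approach 1: vote over the last 5 finger-count
# readings (0, 1, or 2 fingers) and map the result to a gesture class.
from collections import Counter, deque

WINDOW = 5

def classify_window(counts):
    """counts: the last WINDOW finger-count readings, oldest first."""
    if len(counts) < WINDOW:
        return "rest"
    # A 1 -> 2 transition inside the window is a pinch out, 2 -> 1 a pinch in.
    if counts[0] == 1 and counts[-1] == 2:
        return "pinch_out"
    if counts[0] == 2 and counts[-1] == 1:
        return "pinch_in"
    value, votes = Counter(counts).most_common(1)[0]
    if votes >= WINDOW - 1:  # at least 4 of the 5 readings agree
        return {0: "rest", 1: "swipe", 2: "pinch"}.get(value, "rest")
    return "rest"

# Usage: push each new reading into a deque(maxlen=WINDOW) and re-classify.
recent = deque([1, 1, 2, 2, 2], maxlen=WINDOW)
print(classify_window(recent))  # -> "pinch_out"
```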
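And a hedged sketch of approach 2 with sktime. The 24-sample window and the five gesture classes come from the report; the specific classifier (TimeSeriesForestClassifier) and the input shape are assumptions and may need to change with the sktime version.

```python
# Hedged sktime sketch: train a time series classifier on fixed-length
# windows of sensor readings, each labeled with one of the five gestures.
import numpy as np
from sktime.classification.interval_based import TimeSeriesForestClassifier

WINDOW = 24  # samples the model needs per prediction, as noted above

def train_gesture_model(X_train, y_train):
    """X_train: (n_windows, WINDOW) array; y_train: labels such as
    'pinch_in', 'pinch_out', 'rest', 'swipe_left', 'swipe_right'."""
    clf = TimeSeriesForestClassifier(n_estimators=200)
    clf.fit(X_train, y_train)
    return clf

def predict_gesture(clf, window):
    """Classify a single WINDOW-sample slice of sensor data."""
    return clf.predict(np.asarray(window, dtype=float).reshape(1, -1))[0]
```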

As a team, we’ve decided to work on different implementations until Sunday, then make a final decision on what algorithm we should select. It comes down to a tradeoff between accuracy and latency, both important metrics for our device.

I also helped Joanne build a bigger hologram pyramid this weekend. We decided to hold off on the casing until we determine the final height of the pyramid, which we've also decided to explore a bit more.

Currently, I am a bit behind schedule, but whatever tasks remain in development and testing have to be accomplished this week, as our final presentation is due next week. Everything essentially has to be done, so I'm in a bit of a panic, but I have faith in Tony, Holly, and our team.

Joanne’s Status Report for 4/16/2022

This week we worked together a lot on testing our integrated product and fixing individual bugs in each of our parts. We mounted our board onto the wristband we bought, and we have been testing our algorithm on the new data that reflects user input taken from the arm. Right now we see a lot of noise in the data that causes the model to move erratically. I am working on smoothing out the noise on my end, while Edward and Anushka work on filtering the data on their end as well.

I have created a new rotation algorithm that makes the model rotate at a much smoother pace. When tested on ideal data (data with no noise), it moves at a very smooth, consistent rate relative to how fast the user swiped. Before, I only had a rough rotation algorithm where the model moved based on the finger distances given to me. Now I take into account the timestamps of when the data was taken so I can approximate the user's relative speed (a rough sketch follows). This change currently applies only to rotations limited to the X axis.
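A rough Python sketch of that timestamp-based idea (the actual implementation is a Unity script); the degrees-per-unit constant is made up for illustration.

```python
# Illustrative sketch: estimate swipe velocity from the timestamps of
# consecutive finger samples, then rotate the model each frame at a rate
# proportional to that velocity (X-axis rotation only, as described above).
DEGREES_PER_UNIT = 90.0  # assumed scaling from finger travel to rotation

def swipe_velocity(prev_x, prev_t, curr_x, curr_t):
    """Finger velocity (position units per second) between two samples."""
    dt = curr_t - prev_t
    return 0.0 if dt <= 0 else (curr_x - prev_x) / dt

def rotation_this_frame(velocity, frame_dt):
    """Degrees to rotate the model this rendered frame."""
    return velocity * DEGREES_PER_UNIT * frame_dt
```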

Due to some problems with gesture detection on the sensor side, we are currently planning to get rid of pinches, since the pinching and swiping motions confuse our ML model. We are therefore thinking of implementing rotation about all axes to add functionality, and I have added that in. However, the finger locations do not translate intuitively to the model rotation (it rotates in the right direction but not by the right angles). I am working on making the swipe made by the user look more like the 3D rotation we would expect to see. We have been talking as a group about what other functionality we can include with what we have now and in the time frame left. Some ideas are detecting taps, or creating a new zoom-in scheme that does not involve the traditional pinching motion.

Right now I am also working on new ways of getting rid of the unexpected rotation spikes due to noise. I graphed the data points and decided to try averaging each data point in a swipe and using that as the reference for finger location, so that I can reduce the effect of noise in the data. I will test that implementation this week.

Anushka and I have also recut the hologram from plexiglass to fit the dimensions of the display we are going to use for the presentation. We are planning to create the encasing (for light blocking) this week.

Team Status Report for 4/10/2022

This week was demo week, so we worked as a group on integrating our individual setups and testing our prototype. Before the demo we had a small hardware problem that turned out to be a soldering mistake; thankfully, Edward fixed it before our demo. We also worked on creating a better plan for our gesture detection algorithm. The new finger detection algorithm using the ML approach provided better accuracy than our previous approach, so we are hoping that the new gesture detection algorithm we planned out will also provide better results.

We realized that our data was lagging between the hardware and the web app due to some spotty WiFi problems in the lab. Professor Tamal mentioned during the demo that he could help us with that, so we are planning to talk to him soon. Other than that, we also planned out a list of things to add to the web application, such as battery life, battery status, and finger location. We are also planning to implement a new strategy for translating the data we get from gesture detection into smoother model translations. Because of the large amount of data and noise, we need the translations to account for that, so we have planned out a new approach and are planning to implement it this week.

We are planning to mount our board onto the wristband we ordered soon (probably tomorrow). We got feedback from our demo session that we should see as soon as possible how live data taken from our arm translates to our model, so we are planning to test that as a group early in the week.

We think we are on track and are currently refining parts of our project.

Joanne’s Status Report for 4/10/2022

This week was demo week, so we focused on integrating the parts of our project. We conducted more testing on how the live sensor data translates to our model transformations (rotate and zoom). We tried to mimic as many gestures from different angles as possible and observed whether or not they performed the correct functionality on the model. We found there was a slight lag between the user input and the model translations, due to data being sent over spotty WiFi in the lab. Professor Tamal mentioned that he could help us out with the connectivity issue, so we are planning to talk to him about that this week as well.

Other than demo work, I also started taking in more info from the hardware to display things such as battery life and on/off status. Once I get the information from the hardware via MQTT, it updates itself on the web app. I am also planning this week to create a visual layout of where the finger is in relation to our sensor layout.
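For context, a hedged sketch of that kind of MQTT subscription, using Python's paho-mqtt (1.x-style callbacks) rather than our actual web app stack; the broker address and topic names are placeholders.

```python
# Illustrative subscriber: receive battery/status messages published by the
# hardware and hand them to whatever refreshes the web app display.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # placeholder broker address
TOPICS = [("device/battery", 0), ("device/status", 0)]  # placeholder topics

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPICS)

def on_message(client, userdata, msg):
    # e.g. topic "device/battery" with payload b"87" (percent remaining)
    print(f"{msg.topic}: {msg.payload.decode()}")  # update the UI here instead

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```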

I thought of a new algorithm to help translate the data into smoother translations. I could not translate my new approach into code yet due to Carnival week. However, I hope this new approach will help make the model move more smoothly, even in the presence of noise, and at a rate more consistent with how fast the user is swiping/pinching.

I believe we are on track, and as a group this week we are also planning to mount our hardware onto a wristband so that we can see what actual data taken from our arm looks like.


Anushka’s Status Report for 4/10

This week, we had our interim demos. I worked on updating our schedule, which can be seen in our Interim Demo post. I was also in charge of making sure that we had all the parts together for the demo.

The night before demos, I attempted to work with the Jetson Nano again. I was able to connect to the Internet, but I was having a tough time downloading the Python packages needed to train the machine learning model. I thought it might be related to location, so I tested the connection in the UC with Edward. However, I still wasn't able to download packages, even though I was connected. I will be speaking with Professor Mukherjee next week to see if he is able to help.

The team decided to take a small break for Carnival, but we will get back to working on integration and the machine learning model next week. We created a state diagram of how our gesture detection algorithm should work after detecting how many fingers are present. Once the gesture is determined, we will use metrics such as the minimum or maximum to decide how much of the gesture is being performed. We have the foundations for both, but we have more information on the latter than the former.

State Diagram of Gesture Detection Algorithm. Swipes correspond to the finger detection algorithm returning 1 finger, pinches to it returning 2, and none to it returning 0 or an undetermined reading.
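A compact Python sketch of how that state diagram might read in code. The finger-count-to-gesture mapping mirrors the caption above; the state names and the min/max amount calculation are illustrative.

```python
# Illustrative state machine: pick the gesture state from the finger count
# reported by the finger detection algorithm, then size the gesture later.
def next_state(fingers):
    if fingers == 1:
        return "SWIPE"
    if fingers == 2:
        return "PINCH"
    return "NONE"  # 0 fingers or an undetermined reading

def gesture_amount(samples):
    """Once the gesture is known, use min/max of its samples to decide
    how much of the gesture was performed (e.g. swipe distance)."""
    return max(samples) - min(samples) if samples else 0.0
```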

We are currently on schedule, but since it’s the final few weeks of building, I want to push myself to finish our project by the end of next week and begin final testing. We are hoping to finish the gesture detection algorithm by Tuesday so that we can make incremental improvements for the remainder of our Capstone time.

Interim Demo Schedule

Here is a copy of our most recent schedule:

It can also be found here.

Major tasks that are remaining include:

  • Improving the finger and gesture detection algorithms using the machine learning models we’ve been working with over the past few weeks,
  • Providing the ability to upload 3D models from the web application,
  • Testing our sensors once mounted on the wristband,
  • Cutting the final version of the hologram, and
  • Collecting latency data.