Team Status Report for 4/30/2022

This week, we finally finished a working demo of the wearable and Webapp! Joanne and Edward spent several hours debugging and squashing bugs in order to properly filter the incoming finger location data from the wearable. Eventually, they converged on a working system. See video here.

The main remaining issue is that WiFi is fairly slow and negatively affects latency. It isn't too bad, but it makes our system less real-time than we wanted. If we were to do this again, we would use BLE or perhaps some other wireless protocol.

Anushka worked on refining the hologram design. She designed several versions of the hologram and cut them out to see which one gave the best effect. So far, a pyramid with angles above 60 degrees on a big screen works the best. She has also started working on the enclosure for the hologram. At this point, we're not really sure how to improve the visualization without changing it altogether, so we might try bigger angles on smaller screens throughout the week to see what gives the best effect.

We are on schedule to have something for the demo. This week we will work on polishing everything up and making sure things are in order so nothing blows up on demo day.

Edward’s Status Report for April 30

After I submitted last week's status update, Joanne and I worked on fixing the bugs in the Webapp. We spent several hours cleaning up the code and trying to see what was causing the issues. We had been seeing the model freeze up or move in the opposite direction when the wearable sent its finger locations. After debugging, we finally found the issue and fixed it. This video was made right after we fixed all the bugs!

This week, I prepared for and worked on the final presentation. Anushka was feeling better, so I gave her the wearable so she could properly sew it on. Without any hardware on hand, I spent most of this week working on the final presentation.

I am on schedule and pretty much done with the individual tasks I needed to get done this semester! Next week, I will help polish everything for the final demo. We still need to build the hologram pyramid, so I will be helping Anushka with that. Super excited to show off our system at the final demo!

Team Status Report for 4/23/2022

This week, we had a lot to do. Redesigns, bugs, and COVID have pushed us behind schedule. Next week is our last slack week.

Edward and Joanne have been working on fine-tuning the finger detection and model rotation. The system does not work well on live "real" data and at times behaves erratically. Hopefully, by the end of this week, we can have a really good version working. We will spend the majority of next week working on this.

Edward and Joanne have also bought black acrylic for the hologram. They attempted to glue the acrylic together, but the glue was not as strong as we wanted. So, Anushka drafted a new version of the hologram that uses slots to fit the walls together like a puzzle. Hopefully, we can get this cut and built next week.

We've also been working on the final presentation. It's a good thing we have some time before demo day, and we're excited to present our findings and what we've done so far this semester.

In good news, the hardware has not bugged out on us yet. It seems fairly reliable so far. We still need to properly sew the board onto the wristband. This should be done by early next week.

Next week will be an extremely busy week for us. Wish us luck!

Edward’s Status Report for April 23

This week, I worked on the final project presentation as well as further improving the gesture-to-3D-model-action algorithm. Anushka has been sick this week, so Joanne and I have been trying to finish everything up before the demo. I have been collecting more training data and retraining our SVM whenever an issue comes up. I think the current model is able to distinguish between one and two fingers pretty well. Some of the tricks I used to improve robustness are: removing the noise label from the model (since it can get swipes and noise confused, and noise is easy to detect separately), and training on data where I just place my fingers without moving them or performing gestures.
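Below is a hedged sketch of what the first trick could look like in a training script: drop the noise-labeled samples before fitting the SVM, on the assumption that noise frames are screened out separately before classification. The file name, label names, and array keys are placeholders, not our actual code.

    import numpy as np
    from sklearn.svm import SVC

    data = np.load("gesture_training_data.npz")    # hypothetical dump of collected samples
    X, y = data["X"], data["labels"]               # labels such as "noise", "one", "two"

    keep = y != "noise"                            # drop the noise label before training,
    clf = SVC(kernel="rbf").fit(X[keep], y[keep])  # since noise is screened out separately

The second trick lives in the data itself: the static "fingers placed, not moving" recordings simply get added to X and y before fitting.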

I started collecting data for the testing section of our presentation. As a group, we discussed how to determine our system's accuracy, gesture precision, and latency.
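For the accuracy and precision numbers, something along these lines could work once we have labeled trials: log the gesture the tester intended and the gesture the system reported, then tally them up. The example labels below are made up, and latency would be measured separately from timestamps.

    from sklearn.metrics import accuracy_score, precision_score

    # Hypothetical trial log: what the tester intended vs. what the system reported.
    intended = ["swipe_left", "swipe_right", "swipe_left", "pinch", "swipe_right"]
    reported = ["swipe_left", "swipe_right", "pinch",      "pinch", "swipe_right"]

    labels = sorted(set(intended))
    accuracy = accuracy_score(intended, reported)
    precision = precision_score(intended, reported, labels=labels, average=None)
    print(accuracy, dict(zip(labels, precision)))   # overall accuracy and per-gesture precision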

I also worked with Joanne to cut the acrylic for our hologram, but the glue that TechSpark had looked really bad, and tape would show up under light, so we redesigned our hologram to use slots instead of adhesives. I am not the best at cutting, so I hope Anushka gets better soon so she can cut and help construct the hologram.

The gesture algorithm has proven to be much harder than we thought, especially with just the two of us this week. Hopefully, next week we can have something demo-able. We are using up our slack weeks.

Team Status Report for 4/16/2022

This week, we attached the board and the sensors to a wristband we bought. We collected new data with the board on our arm and retrained our number-of-finger detection model with this new data. The sensors need to be elevated and slanted upward so that they don't mistake the skin on the arm for a finger.

We are having a bit of trouble detecting swipes and pinches reliably, so we have been discussing removing pinches entirely from our implementation. Our number-of-finger-classifying SVM model gets confused between swipes and pinches, and it causes a lot of problems for gesture detection. We created a demo of a "swipes-only" system, and that seems to work much better, since it only has to distinguish between "none" and "one finger." Anushka and Edward (mostly Anushka) have been trying out various new techniques to detect fingers and/or gestures, but we are having trouble finding an optimal one. Early next week, we will discuss our future plans for finger detection.

Anushka and Joanne have cut new acrylic sheets for the hologram and will work on creating an enclosure this week.

We are a bit behind schedule this week as we are working on refining our "swipes-only" system. We may create two modes for our system: one that uses the swipes-only model, so that we are guaranteed smooth performance, and one that has both swipes and pinches, which isn't as smooth but has more functionality. We are also considering changing the gesture for the zooms while still using our two-finger model, but we have to decide by early next week whether and how to implement this.

We've decided to scrap the Jetson, as the latency of communication between the sensors and the Jetson was much higher than the latency between the sensors and our laptops. We were looking forward to using the Jetson for machine learning, but the network issues we've been having, along with the fact that we are able to train the models on our own laptops, led us to this conclusion. We have learned a lot from using it, and we hope that in the future we can figure out what the issue was.

Edward’s Status Report for April 16

This week, I continued working on the finger and gesture detection and further integrated the wearable with the Webapp. Our data is extremely noisy, so I have been working on smoothing out the noise.

Originally, the Photon was unable to send the timestamp at which it collected all 10 sensor values, since the RTC on the Photon does not have millisecond accuracy. But I found a way to get an approximate time of sensor data collection by summing up elapsed time in milliseconds. This made the finger detection data sent to the Webapp appear smoother.
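The rough idea, sketched here on the receiving side: if each reading carries the milliseconds elapsed since the previous one, approximate absolute timestamps can be reconstructed by accumulating those deltas from the time the stream started. The field names below are hypothetical, and the real accumulation happens in the Photon firmware.

    import time

    def attach_timestamps(readings, start_time=None):
        """readings: iterable of dicts like {"values": [...], "elapsed_ms": 12}."""
        t = start_time if start_time is not None else time.time()
        stamped = []
        for r in readings:
            t += r["elapsed_ms"] / 1000.0          # accumulate elapsed time in seconds
            stamped.append({"t": t, "values": r["values"]})
        return stamped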

I am not super confident in our SVM implementation of finger detection, so I want to remove pinches entirely in order to reduce noise. Two fingers that are close together often get confused with one finger, and vice versa. Because of this, many swipes get classified as pinches, which throws everything off. So, I decided to work on an implementation without pinches, and it seemed to work much better. As a group, we are deciding whether or not this is the direction to go. If we continue with only detecting swipes, we will need full 3D rotation rather than rotation along the y-axis only.
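As a rough illustration of what swipe-driven full 3D rotation could look like (this is not a settled design), horizontal swipe displacement could drive yaw and vertical displacement could drive pitch; the sensitivity constant and axis mapping below are assumptions.

    SENSITIVITY = 0.5  # degrees of rotation per unit of finger displacement (assumed)

    def swipe_to_rotation(dx, dy, yaw, pitch):
        """dx, dy: finger displacement over the swipe; yaw/pitch: current angles in degrees."""
        yaw = (yaw + SENSITIVITY * dx) % 360.0                     # horizontal swipe -> rotate about y-axis
        pitch = max(-90.0, min(90.0, pitch + SENSITIVITY * dy))    # vertical swipe -> rotate about x-axis
        return yaw, pitch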

This week, we also attached the board to a wristband we ordered and have been working off of that. The sensors need to be elevated slightly, and I am not sure how annoying that will be for a user. This elevation also means that users can't just swipe on their skin casually; they need to deliberately place their finger in a way that the sensors can catch.

This week, I will work on trying to filter and reduce noise in our swipes-only implementation, which I think we may stick with for the final project. This will be a tough effort.

I am a bit behind schedule as the project demo comes closer. Detected finger locations are a bit shaky, and I'm not sure if I can figure out a way to clean them up. I will work with Joanne to come up with algorithms to reduce noise, but I'm not too confident we can make a really good one. Removing pinches limits the functionality of our device, and I'm not sure if it's even "impressive" with just swipes. We will need to either explore more gestures or broaden our use case to make up for this.

Edward’s Status Report for April 10

This week, we demo'ed our prototype to the instructor and TA. There was a small soldering mistake that I fixed before our demo, but our hardware should work reliably from now on. An issue we found in our setup is connectivity. WiFi in the lab is pretty spotty, and sometimes there are pauses while data is being streamed. We hope this will not happen much in the UC when we demo, but I am not sure it will go away. I plan to talk to Tamal about this in the coming week.

Other than the demo, we also worked on hashing out a basic plan for the gesture detection and did further integration with Joanne’s Webapp.

We did not do much coding this week due to Carnival, but next week I will help Anushka code up our new gesture detection algorithm. We plan to take a new approach where, if a new gesture is detected n times in a row, we declare that the current gesture is that new gesture.
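A minimal sketch of that debouncing idea is below; the value of n and the label names are placeholders to be tuned once we implement it for real.

    class GestureDebouncer:
        def __init__(self, n=5):
            self.n = n                   # consecutive detections required to switch (assumed value)
            self.current = "none"
            self._candidate = None
            self._count = 0

        def update(self, detected):
            """detected: gesture label predicted for the latest sensor frame."""
            if detected == self.current:
                self._candidate, self._count = None, 0
            elif detected == self._candidate:
                self._count += 1
                if self._count >= self.n:            # seen n times in a row -> commit
                    self.current = detected
                    self._candidate, self._count = None, 0
            else:
                self._candidate, self._count = detected, 1
            return self.current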

We also decided to ditch the Jetson Nano due to poor WiFi performance, but Tamal said this should not be an issue, so I may chat with him about this in the coming week.

Possibly tomorrow, we also plan to attach our hardware to a wristband we bought. This will be done using tape or sewing.

We are on track if we work on and refine the gesture detection for the next two weeks. This is the toughest part of this project and it needs to work well for the project to function.

Edward’s Status Report for April 2

This week, I did a lot of group work.

I worked with Anushka to set up the Jetson Nano. We had some trouble setting up WiFi, but eventually figured out how to connect to CMU-DEVICE. I plan to set up the MQTT broker on the Jetson in order to reduce latency over WiFi. Since the broker is closer to the wearable and on the same network, it should be much faster in theory. This weekend, I plan to test this out and see if there is any noticeable speedup in data transmission.
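One way the speedup check could go, sketched with paho-mqtt: publish a timestamp to a topic we are also subscribed to, let the broker route it back, and measure the round trip. The broker address and topic name are placeholders, and this only approximates one hop through the broker rather than the full end-to-end path.

    import time
    import paho.mqtt.client as mqtt

    BROKER_HOST = "jetson.local"    # placeholder for the Jetson's address
    TOPIC = "wearable/ping"

    def on_message(client, userdata, msg):
        sent = float(msg.payload.decode())
        print(f"round trip: {(time.time() - sent) * 1000:.1f} ms")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER_HOST, 1883)
    client.subscribe(TOPIC)
    client.loop_start()                       # handle incoming messages in the background

    for _ in range(20):                       # send 20 pings and print each round-trip time
        client.publish(TOPIC, str(time.time()))
        time.sleep(0.5)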

We decided on training a machine learning model to classify the number of fingers on the forearm, since fitting a curve and finding the min and max to classify swipes and pinches was too complicated. So, I worked with Anushka and Joanne to create an SVM classifier to classify the number of fingers. I wrote a training script, trained it on data we had collected a while ago, and got about 95% accuracy in cross-validation! It seems to perform well on live data, but it isn't perfect. SVM seems to work well for this task, so we collected some more data and are retraining to see if we can get any improvements. The next steps are to improve finger detection given that we know the approximate number of fingers, and then improve gesture detection given that we know the approximate locations of the fingers. I am not too confident that finger detection will be as good as we want, so we may have to deal with that this week when we fully implement it. Maybe we need to apply some filtering technique or throw out bad detections.
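A simplified sketch of what such a training script could look like: flatten each sample's 10 sensor readings into a feature vector, fit an SVM, and report 5-fold cross-validation accuracy. The file name, array keys, and SVM parameters are assumptions rather than our actual setup.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    data = np.load("finger_data.npz")          # hypothetical dump of collected samples
    X, y = data["X"], data["y"]                # X: (n_samples, 10 sensors), y: number of fingers

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"cross-validation accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")

    clf.fit(X, y)                              # final model trained on all of the data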

I also further integrated the wearable with Joanne's Webapp. The improved finger detection powered by the SVM seemed to be better. Fewer swipes are classified as pinches, which means the Webapp rotates the model more consistently.

I am on track for this week, but things are piling up…

Team Status Report for 3/26/2022

We attended the ethics discussion on Wednesday. It was nice hearing what other teams outside of section D are doing and hearing their feedback on our project. The discussions did push us to think about the ethical implications of our project, which we hadn't thought about much before. To be fair to our users, we will be clearer about what data we are collecting and how we are using it.

This week, we worked on a swipe detection demo and on improving our gesture recognition algorithm. We are having issues with our algorithm. Since we are trying to avoid using machine learning, we are testing out different curve-fitting equations and changing the parameters of how we collect our data. We are limited by how fast our sensors can collect data, which is not good when a user performs a gesture quickly. We are currently recording accuracy as we tune hyperparameters and will map these out over the next week to determine which parameters offer the highest accuracy.
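To give a flavor of the curve-fitting direction (this is only an illustrative sketch, not our actual algorithm): fit a quadratic to the finger position over a gesture window and use the fitted extremum to separate a roughly monotonic swipe from a pinch that reverses direction mid-gesture. The quadratic form and threshold are assumptions.

    import numpy as np

    def classify_gesture(t, pos, curvature_thresh=0.5):
        """t: sample times; pos: finger positions over one gesture window."""
        t, pos = np.asarray(t, dtype=float), np.asarray(pos, dtype=float)
        a, b, c = np.polyfit(t, pos, 2)              # fit pos ~ a*t^2 + b*t + c
        t_ext = -b / (2 * a) if a != 0 else None     # time of the fitted extremum
        interior = t_ext is not None and t.min() < t_ext < t.max()
        if interior and abs(a) > curvature_thresh:
            return "pinch"                           # position reverses direction mid-gesture
        return "swipe"                               # roughly monotonic motion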

The most significant risk at the moment is the algorithm. If we are able to get the Jetson working by next Wednesday, we might be able to afford better parameters for higher accuracy, even if they take longer to compute, since we would have a faster, external computer. We might switch from the Jetson to a Raspberry Pi if we aren't able to get the Jetson working, but we aren't going to make that decision right now. The schedule has changed, since we weren't expecting to have this much trouble with the Jetson. However, this is why we allocated a lot of time for it.

We have started integration between the basic gesture detection from live sensor readings and the web application portion of the project. We streamed data via the MQTT protocol to the Django app and confirmed that we could get the data into Unity and use it to rotate the model. A demo of the rotation on live data is shown in the video at the following Google Drive link: https://drive.google.com/file/d/1h3q5Os-ycagcWNNDkVVVS57qLM31H1Ls/view?usp=sharing.
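On the Webapp side, the receiving end can be as simple as an MQTT subscriber that keeps the latest reading available for the app to consume. The sketch below uses paho-mqtt with a placeholder topic, payload format, and broker address, so it is an illustration rather than our actual integration code.

    import paho.mqtt.client as mqtt

    latest_fingers = []                            # latest parsed reading, read by the webapp

    def on_message(client, userdata, msg):
        global latest_fingers
        latest_fingers = [float(v) for v in msg.payload.decode().split(",")]

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)              # broker address is a placeholder
    client.subscribe("wearable/fingers")           # topic name is a placeholder
    client.loop_start()                            # keep listening in the background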

We are continuing to work on smoothing out the rotation based on sensor data, since it doesn't rotate as smoothly as it should (although it does rotate in the correct direction), and we are planning to test out the zoom in/out on live data this week as well.
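One smoothing option we may try is a simple exponential moving average over the target rotation angle; the smoothing factor below is an assumption to be tuned, and lower values trade responsiveness for smoothness.

    class AngleSmoother:
        def __init__(self, alpha=0.2):
            self.alpha = alpha          # lower alpha -> smoother but laggier rotation (assumed value)
            self.value = None

        def update(self, raw_angle):
            """Exponential moving average of the incoming rotation angle."""
            if self.value is None:
                self.value = raw_angle
            else:
                self.value = self.alpha * raw_angle + (1 - self.alpha) * self.value
            return self.value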