Team Status Report for 4/16/2022

This week, we attached the board and the sensors to a wristband we bought. We collected new data with the board on our arm and retrained our number-of-finger detection model with this new data. The sensors need to be elevated and slanted upward so that they don't mistake the skin on the arm for a finger.

We are having a bit of trouble detecting swipes and pinches reliably, so we have been discussing removing pinches entirely from our implementation. Our number-of-finger SVM classifier confuses swipes with pinches, which causes a lot of problems for gesture detection. We created a demo of a “swipes-only” system, and that seems to work much better, since it only has to detect whether there is “none” or “one finger.” Anushka and Edward (mostly Anushka) have been trying out various new techniques to detect fingers and/or gestures, but we are having trouble finding an optimal one. Early next week, we will decide on our future plans for finger detection.
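To show why the swipes-only system is so much easier, here is a minimal sketch of the idea: each frame only needs a binary finger-present decision, and the swipe direction then falls out of how the active sensor index moves over time. The presence threshold, direction convention, and function names are illustrative assumptions, not our actual implementation.

```python
# Minimal sketch of the "swipes-only" simplification: decide per frame whether
# a finger is present, then infer swipe direction from how the closest sensor
# index moves over time. Threshold and direction mapping are assumed values.
import numpy as np

NUM_SENSORS = 10        # sensors on the wristband
PRESENCE_MM = 120       # assumed: anything closer than this counts as a finger

def finger_present(frame_mm):
    """frame_mm: one distance reading per sensor, in millimeters."""
    return min(frame_mm) < PRESENCE_MM

def swipe_direction(frames):
    """Return 'left', 'right', or None from a short window of frames."""
    active = [int(np.argmin(f)) for f in frames if finger_present(f)]
    if len(active) < 2:
        return None
    delta = active[-1] - active[0]
    if delta > 0:
        return "right"   # assumed mapping of sensor order to direction
    if delta < 0:
        return "left"
    return None

# Example: a finger moving across the array from sensor 2 to sensor 6.
frames = [[200.0] * NUM_SENSORS for _ in range(5)]
for t, idx in enumerate(range(2, 7)):
    frames[t][idx] = 60.0
print(swipe_direction(frames))   # -> "right"
```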

Anushka and Joanne have cut new acrylic sheets for the hologram and will work on creating an enclosure this week.

We are a bit behind schedule this week as we are working on refining our “swipes-only” system. We may create two modes for our system: one with the swipes-only model, so that we are guaranteed smooth performance, and one with both swipes and pinches, which isn't as smooth but offers more functionality. We are also considering changing the gesture for the zooms while still using our two-finger model, but we have to come to a decision by early next week on whether and how we should implement this.

We've decided to scrap the Jetson, as the latency of communication between the sensors and the Jetson was much higher than the latency between the sensors and our laptops. We were looking forward to using the Jetson for machine learning, but the network issues we've been having, combined with the fact that we can train the models on our own laptops, led us to this conclusion. We have learned a lot from using it, and we hope that in the future we can find whatever the underlying issue is.
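One simple way to quantify this kind of difference is a round-trip timing over MQTT run from each machine, along the lines of the sketch below. The broker address and topic name are placeholders, and the client construction assumes the paho-mqtt 1.x API we have been using.

```python
# Rough latency-comparison sketch: publish a timestamped ping over MQTT and
# measure the round trip through the broker. Broker and topic are placeholders.
import time
import paho.mqtt.client as mqtt

BROKER = "localhost"                 # assumed: our own broker
TOPIC = "gesturewear/latency-test"   # hypothetical topic name

def on_message(client, userdata, msg):
    sent = float(msg.payload.decode())
    print(f"round trip: {(time.time() - sent) * 1000:.1f} ms")

client = mqtt.Client()               # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_start()

for _ in range(20):                  # send 20 pings, half a second apart
    client.publish(TOPIC, str(time.time()))
    time.sleep(0.5)

client.loop_stop()
```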

Team Status Report for 4/10/2022

This week was demo week, so we worked as a group on integrating our individual setups and testing our prototype. Before the demo we had a small hardware problem that turned out to be a soldering mistake; thankfully, Edward fixed it before our demo. We also worked on creating a better plan for our gesture detection algorithm. The new finger detection algorithm using the ML approach provided better accuracy than our previous approach, so we are hoping that the new gesture detection algorithm we planned out will also provide better results.

We realized that our data was lagging between the hardware and the web app due to spotty WiFi in the lab. Professor Tamal mentioned during the demo that he could help us out with that, so we are planning to talk to him soon. We also planned out a list of things to add to the web application, such as battery life, battery status, and finger location. In addition, we are planning to implement a new strategy for translating the data we get from gesture detection into smoother model translations. Because the data is high-volume and noisy, the translations need to account for that, so we have planned out a new approach and intend to implement it this week.
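One plausible shape for that smoothing is an exponentially weighted moving average over the per-frame gesture deltas before they are applied to the model, as sketched below. The smoothing factor, the outlier clamp, and the class name are assumptions for illustration, not our final design.

```python
# Sketch of smoothing noisy per-frame gesture deltas with an exponentially
# weighted moving average before they drive the model.
class DeltaSmoother:
    def __init__(self, alpha=0.3, max_step=15.0):
        self.alpha = alpha        # lower alpha = smoother but laggier output
        self.max_step = max_step  # clamp implausible single-frame jumps (assumed)
        self.value = 0.0

    def update(self, raw_delta):
        # Discard spikes that are almost certainly sensor noise.
        raw_delta = max(-self.max_step, min(self.max_step, raw_delta))
        self.value = self.alpha * raw_delta + (1 - self.alpha) * self.value
        return self.value

smoother = DeltaSmoother()
for raw in [0.0, 12.0, -40.0, 9.0, 10.0]:   # fake per-frame rotation deltas
    print(round(smoother.update(raw), 2))
```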

We are planning to mount our board to the wristband we ordered soon (probably tomorrow). We got feedback from our demo session that we should see as soon as possible how live data taken from our arm translates to movement in our model, so we are planning to test that as a group early in the week.

We think we are on track and are currently refining parts of our project.

Interim Demo Schedule

Here is a copy of our most recent schedule:

It can also be found here.

Major remaining tasks include:

  • Improving the finger and gesture detection algorithms using the machine learning models we’ve been working with over the past few weeks,
  • Providing the ability to upload 3D models from the web application,
  • Testing our sensors once mounted on the wristband,
  • Cutting the final version of the hologram, and
  • Collecting latency data.

Team Status Report for 4/2/2022

This week was a better week for the team. We were able to tackle many problems regarding our capstone.

Our biggest concern was the Jetson. After reading online tutorials on using the Edimax WiFi adapter, we were finally able to connect the Jetson to WiFi. Next week, we'll be focusing on connecting the sensors to the Jetson and the Jetson to the Unity web application so that we can accomplish our MVP and start formally testing it.

Since we have time now, we went back to our gesture algorithm and sought to improve its accuracy. We explored different machine learning models and decided to see if a Support Vector Machine (SVM) would be effective. On the data that we already collected, the model seems to be performing well (95% accuracy on cross-validation). However, we know that the data is not comprehensive of different locations and speeds of the gestures. We collected more data and plan on training our model on it over the next week. Training takes a lot of computational power, so we will try training on either a Jetson or a Shark machine. AWS was also suggested to us, which we might look into.
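For reference, the experiment boils down to fitting a multi-class SVM on windows of distance readings and scoring it with cross-validation, roughly as sketched below. The feature shape, labels, and hyperparameters are placeholders rather than our actual training setup, and the data is random just to keep the snippet self-contained.

```python
# Rough sketch of the SVM experiment: classify the number of fingers from a
# window of distance readings and score with 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))       # 200 samples of a 10-sensor x 5-frame window (assumed shape)
y = rng.integers(0, 3, size=200)     # 0, 1, or 2 fingers

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print("mean cross-validation accuracy:", scores.mean())
```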

The biggest risk now is latency. We want to make sure that everything stays fast with the addition of the Jetson. We have a Raspberry Pi as a backup, but we're hoping that the Jetson works, so we haven't made any changes to our implementation. We also went ahead and ordered the band to mount the sensors on. We plan on collecting more data and training on it next week, after we find an effective way to train the model.

We should have a working(ish) demo for the interim demo on Monday. We also seem to be on track, but we will need to further improve our detection algorithms and refine our thinking to arrive at a final, usable product.

Team Status Report for 3.26.2022

We attended the ethics discussion on Wednesday. It was nice hearing what other teams are doing outside of section D and hearing their feedback on our project. The discussions pushed us to think about the ethical implications of our project, which we hadn't thought much about before. To be fair to our users, we will be clearer about what data we are collecting and how we are using it.

This week, we worked on a swipe detection demo and on improving our gesture recognition algorithm. We are having issues with the algorithm: since we are trying to avoid using machine learning, we are testing out different curve-fitting equations and changing the parameters of how we collect our data. We are limited by how fast our sensors can collect data, which is a problem when a user performs a gesture quickly. We are currently recording accuracy as we tune parameters and will map these out over the next week to determine which parameters offer the highest accuracy.
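A minimal sketch of what such a parameter sweep could look like is below: fit a low-degree polynomial to each recorded distance trace and record accuracy for every combination of fit degree and window size. The toy decision rule, the parameter values, and the fake recordings are assumptions purely for illustration; the real sweep runs on our labeled gesture recordings.

```python
# Sketch of the non-ML parameter sweep: polynomial fit per trace, accuracy
# recorded for every (degree, window) combination.
import itertools
import numpy as np

def fit_trace(trace, degree):
    t = np.arange(len(trace))
    return np.polyval(np.polyfit(t, trace, degree), t)

def accuracy(params, recordings):
    degree, window = params
    correct = 0
    for trace, label in recordings:
        fitted = fit_trace(trace[:window], degree)
        predicted = "swipe" if fitted[-1] < fitted[0] else "none"   # toy rule
        correct += (predicted == label)
    return correct / len(recordings)

# Fake labeled recordings: (distance trace in mm, gesture label).
recordings = [(np.linspace(150, 40, 12) + np.random.randn(12), "swipe"),
              (np.full(12, 150.0) + np.random.randn(12), "none")]

for params in itertools.product([2, 3, 4], [8, 10, 12]):
    print(params, accuracy(params, recordings))
```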

The most significant risk at the moment is the algorithm. If we are able to get the Jetson working by next Wednesday, we might be able to afford better parameters for higher accuracy, even if they take longer to compute, since we would have a faster, external computer. We might switch from the Jetson to a Raspberry Pi if we aren't able to get the Jetson working, but we aren't going to make that decision right now. The schedule has changed, since we weren't expecting to have this much trouble with the Jetson; however, this is why we allocated a lot of time for it.

We have started integration between the basic gesture detection from live sensor readings and the web application portion of the project. We streamed data via the MQTT protocol to the Django app and confirmed we could get data into Unity and that it can rotate the model. A demo of the rotation on live data is shown in the video at the Google Drive link: https://drive.google.com/file/d/1h3q5Os-ycagcWNNDkVVVS57qLM31H1Ls/view?usp=sharing.
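On the receiving side, the flow amounts to subscribing to a gesture topic and turning each incoming swipe into a rotation delta for the model, roughly as in the sketch below. The broker address, topic name, payload format, and degrees-per-swipe mapping are all illustrative assumptions (and the client construction assumes paho-mqtt 1.x); in the real system the value is forwarded into Unity rather than printed.

```python
# Rough sketch of the receiving side: swipe messages become rotation deltas.
import json
import paho.mqtt.client as mqtt

BROKER = "localhost"
TOPIC = "gesturewear/gestures"    # hypothetical topic
DEGREES_PER_SWIPE = 15.0          # assumed mapping

rotation_y = 0.0

def on_message(client, userdata, msg):
    global rotation_y
    gesture = json.loads(msg.payload)           # e.g. {"type": "swipe", "dir": "left"}
    if gesture.get("type") == "swipe":
        sign = -1.0 if gesture.get("dir") == "left" else 1.0
        rotation_y += sign * DEGREES_PER_SWIPE
        print("model rotation (deg):", rotation_y)

client = mqtt.Client()                          # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER)
client.subscribe(TOPIC)
client.loop_forever()
```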

We are continuing to work on smoothing out the rotation based on sensor data, since it doesn't rotate as smoothly as it should (although it does rotate in the correct direction), and we are planning to test out the zoom in/out on live data this week as well.

Team Status Report for 3.19.2022

The hardware for the wearable is complete, and a testbed for the physical sensor configuration and MQTT sensor data streaming has been fully set up! The device reads from all 10 sensors in a round-robin fashion (to prevent IR interference), applies an EWMA filter to the sensor readings, and sends the data over MQTT to be received by a Python program or a website. The device can also send power information like battery percentage and voltage. After this was finished, we moved on to thinking about the web app and gesture recognition while Edward works on polishing up the hardware.
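The firmware itself runs on the Particle board, but the sampling loop boils down to something like the Python illustration below: poll the sensors one at a time so their IR emitters don't interfere, and smooth each channel with an EWMA before publishing. The read function is a stand-in for the real VL6180X driver call, and the smoothing factor is an assumed value.

```python
# Host-side illustration of the firmware's sampling loop: round-robin reads
# plus per-channel EWMA smoothing.
import random

NUM_SENSORS = 10
ALPHA = 0.4                       # assumed smoothing factor
filtered = [0.0] * NUM_SENSORS

def read_sensor(i):
    # Placeholder for the actual I2C range read on sensor i (millimeters).
    return random.uniform(20, 200)

def sample_round_robin():
    for i in range(NUM_SENSORS):
        raw = read_sensor(i)
        filtered[i] = ALPHA * raw + (1 - ALPHA) * filtered[i]
    return list(filtered)

print(sample_round_robin())       # one smoothed frame, ready to publish over MQTT
```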

For gesture recognition, we have made a lot of progress since receiving the sensors. We began by observing the general behavior of the sensors and brainstorming ideas on how we can map them to different gestures. We noticed a few mathematical characteristics of the zooming gestures that differ from those of a swipe, so we will most likely use those as the basis of our gesture identification. We have a few theories as to how we can distinguish these further, and those will be our next step.

For the Unity web application part of our project, we started working on data communication. In order to prepare for integration, we made sure that data could be streamed to our web application via the MQTT protocol. For now, we just sent over dummy data from a Python script over MQTT. We then made sure that we can communicate smoothly with the Unity interface that is embedded in the Django web app. Now we are continuing to work on how to translate the gesture data into more precise gestures applied to the model.
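The dummy publisher is only a few lines; a sketch in the spirit of what we used is below. The broker address, topic, publish rate, and payload shape are illustrative assumptions, and the client construction again assumes paho-mqtt 1.x.

```python
# Sketch of a dummy-data publisher for integration testing: push a fake frame
# of 10 distance readings to the broker a few times per second.
import json
import random
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()            # paho-mqtt 1.x style constructor
client.connect("localhost")       # assumed local broker

while True:
    frame = [round(random.uniform(20, 200), 1) for _ in range(10)]   # mm
    client.publish("gesturewear/sensors", json.dumps(frame))         # hypothetical topic
    time.sleep(0.2)
```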

The next steps are to build out the actual wearable (buy wrist straps, create an enclosure for the hardware, attach the hardware to the strap, etc.). We also need to flesh out gesture recognition and make it more robust to the various ways of performing specific gestures, like swiping left and right or pinching in and out.

Our biggest concern remains the Jetson. However, since we know the connection between the device and our computers works, we at least have one avenue for communication. That will most likely be our contingency plan, but we're still hoping that the Jetson will work for faster computation. Our schedule remains the same, and we're planning on beginning integration over the next week.

Team Status Report for 2.26.2022

We delivered our design review presentation on Wednesday. Overall, it went pretty well. Creating the presentation gave us a lot of insight into what the next few weeks are going to look like, and we discussed our schedule in more depth after the presentation to cover ground in areas that we still need to work on.

The PCB is being manufactured and will be shipped next week, and we're currently testing individual sensors to see how they behave. Additionally, we decided on using MQTT since it is the most commonly used IoT communication protocol, especially with the Particle Photon. We began testing the sensors and developed a visual interface using JavaScript and HTML to see how the sensors detect nearby objects. For the most part, we're able to tell which sensors are going off when a finger comes close, and we can almost guess which direction we are swiping. This is a good indication that our rotation gesture recognition algorithm may actually work.

We also started the web application portion of our project. We have figured out how to embed the Unity application into our Django web app and communicate data between our web application and Unity.

Together, we brainstormed how the web interface will look and how it will be implemented. Joanne will primarily be working on getting a demo of that working soon. Edward will test the PCBs once they are shipped. Anushka will work on getting the Jetson to boot and run code. We seem to be on schedule this week.

An issue that arose is that the VL6180X sensor reads data in a 25-degree cone, and when a finger gets too far away from the sensors (75-150mm range), all of the sensors will say that they see an object. We have no idea exactly how this will look, so we created a visualization to see what would happen when a finger gets too far away. Would the finger look like a line? Would it have a noticeable dip? We will need to explore this once the PCB is fully done and verified, which might take a while and put us behind schedule.
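A quick back-of-the-envelope check shows why this happens: with a 25-degree field of view, the detection cone is roughly 2 * d * tan(12.5°) wide at distance d, so a finger 75-150mm away falls inside the cones of several neighboring sensors. The sensor spacing used below is an assumed value for illustration only.

```python
# Cone-width check for the 25-degree VL6180X field of view at a few distances.
import math

SENSOR_PITCH_MM = 10.0            # assumed spacing between adjacent sensors
HALF_ANGLE = math.radians(25 / 2)

for d in (25, 75, 150):           # finger distance in mm
    cone_width = 2 * d * math.tan(HALF_ANGLE)
    print(f"at {d} mm: cone ~{cone_width:.0f} mm wide, "
          f"~{cone_width / SENSOR_PITCH_MM:.1f} sensor pitches covered")
```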

Team Status Report for 2.19.2022

This week, as a group, we primarily discussed future design goals. Some major topics we focused on were how we would write the gesture recognition algorithm, more precise testing measures, and a more in-depth flow chart of our device. More design details that we discussed will be on our design presentation slides, which we worked on afterwards. We also got samples of the sensor that will be on our custom PCB, so we can start working on getting sensor data to the Jetson and then to our web application this week. We also placed the order for the custom PCB that Edward designed this week.

Individually, we worked on three different areas of our project: the PCB for collecting sensor data, the Jetson Nano, and the 3D modeling interface (Unity web application).

Joanne worked on the Unity section. She scripted one of the three gestures the model should perform when a user inputs our target gesture: we got the rotation gesture (which is triggered by the user's swiping gesture) to work, and the translations work well when projected onto the hologram pyramid as well. She will be working on finishing up the other two gestures (zooming in and out) this week.

Edward worked on finalizing the PCB and testing out the VL6180 distance sensors. He made a small testbench using three breakout boards we bought. The sensors seem fairly sensitive and a pretty good fit for our use case: they detect up to around 150-200mm, which fits on a human forearm. Honestly, with some clever math, we might be able to detect finger movements with just three sensors and might not need all 10. So if the PCB ends up not working out, it might be OK to just use the breakout boards? We will need to explore this further next week.

Anushka worked on setting up the Jetson Nano and looking into communication protocols to implement. This will be helpful later this week once we work on sensor communication with our test sensors. She will be using this information to guide the web application interface throughout this week. She will also be presenting the design review at the end of the week.

The schedule has moved around a lot since last week, and we’ve updated our current Gantt chart to reflect that, as seen below. The team will meet next week to discuss these updates.

Schedule Updates

We haven't made any major design changes since last week, but we did flesh out what we want the user experience to look like. We will be discussing these details during the design review this week. The most significant risk is whether the board will work, which is why we ordered it now, so that we have some time if we have to make changes to the PCB. We will test as soon as the PCB arrives, but we purchased single sensors in the meantime so that we can test the general behavior of the distance sensors.

Team Status Report for 2.12.2022

Welcome to our project website! We’re so excited to share everything we are doing.

This week we focused on initial designs of different components: the PCB, Unity, and the hologram pyramid. We designed the PCB and are ready to order it, but we also decided to place an order for two sensor breakout boards so that we can start testing our measurements as soon as possible, since that order will most likely arrive before our boards do. The Unity environment was intuitive to work with, and we were able to quickly set up the environment we needed for the hologram: we created the layout necessary to project our 3D model onto the hologram pyramid. However, our biggest challenge this week was with the results of the hologram pyramid.

After tinkering with the hologram, we are concerned about the scope of this component. We aren't satisfied with the projection of the 3D image, and we are considering either improving it or replacing it with another medium altogether. A demo video of the hologram pyramid is linked here: https://drive.google.com/file/d/1ip7AXfh7wN9jhmTKIRnzeADvgepjKqEY/view?usp=sharing. We will make a decision on what to do after our testing this week. Since we are working on this component earlier than planned, we have enough time to make modifications, and it does not pose a risk to our project at the moment. This may change our requirements, but we cannot make that decision at this time.

Since we were prototyping with multiple pyramids, we had to incur the cost of the plastic. It isn’t a huge cost, since our overall projected budget is low, but we do not want to spend too much just cutting new pyramids after every modification. We will try to do more research ahead of time on what dimensions serve best before cutting more pyramids.

 

Next week, we want to accomplish several things, and below is our modified schedule. We decided that we're going to work on parts in parallel to prevent us from scrambling at the end.

  1. We’re going to be working on improving the hologram and seeing if we can get the image to be continuous across an axis instead of disjoint.
  2. We’re going to be placing the PCB order as soon as we get it approved.
  3. We’ll continue tinkering with Unity and exploring how external inputs can be applied to objects.
  4. We are planning to try out a different size design for the hologram pyramid to see if it improves the quality of 3D model viewing.

Modified Gantt Chart with this week’s tasks and progress