Tanushree Status Report 5/4

This week our team spent all of our time implementing the new GUI. The new implementation was tested repeatedly to fix the remaining bugs and issues. Through testing we realized that each sensor required a different threshold, so we decided to apply a separate threshold to each individual sensor for accurate tap detection, as sketched below.
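A minimal sketch of the per-sensor thresholding idea; the sensor IDs and threshold values here are hypothetical placeholders, not our calibrated numbers, and the real readings come from our tap detection circuit:

```python
# Illustrative sketch of per-sensor thresholding: each sensor gets its own
# threshold rather than a single global one. Values below are placeholders.
THRESHOLDS = {0: 310, 1: 285, 2: 340, 3: 295}

def is_tap(sensor_id, reading):
    """Flag a tap when this sensor's reading exceeds its own threshold."""
    return reading > THRESHOLDS[sensor_id]
```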

Future tasks: test our system with the new projector, make the demo video, create an elevator pitch, and write the final paper.

Team Status Report 4/27

The main risk we might face is that our tap detection circuit breaks; we only have one spare sensor left. We hope to avoid this by making sure not to disturb the circuit now that our setup is finalized.

We redesigned the GUI to make it more visually appealing and engaging. Further, we added more sensors to the tap detection circuit mentioned above to improve our tap detection accuracy over a greater surface area of the poster.


Tanushree Status Report 4/27

The following tasks were accomplished this week:

1. Isha and I soldered four more sensors to the tap detection circuit. This pushed our accuracy rate to 100%.

2. Our team also brainstormed ideas to improve our UI and UX.

3. Our team also gathered numbers to validate our metrics.

4. We prepared the presentation slides for the final presentation this week.

5. I will be presenting this time, so I have been practicing for that as well.

Our goal for next week is to make all the final changes to the GUI and freeze the final product that we will be demoing next week. We are all on schedule.

Tanushree Status Report 4/20

This week I have been working with my team to debug some of the functionality errors. We made the decision to switch to Ubuntu as our platform instead of macOS; this helped eliminate most of our latency issues. We also restructured our code by migrating code between the different classes we created for the GUI, which helped in debugging the other issues we had identified with our system implementation.

Our next tasks include making the UI more visually appealing and allowing greater user interaction and engagement. We also need to increase the tap detection accuracy rate: currently, the user often needs to tap a few times before the vibrations are picked up by the sensor. We hope to improve this accuracy by adding more sensors to our tap detection circuit. Lastly, we need to iteratively test our system and make sure it can run for a long period of time without failing. We are currently on schedule.

Tanushree Status Report 4/14

Now that we have integrated all our subsystems, all our remaining time will be used for debugging the functionality errors and refining the user interface and experience. Thus, going forward, we will essentially be working as a group rather than assigning individual tasks, unless necessary. This week we met on Thursday to resolve a couple of known bugs, but we also identified new ones.

My progress is on schedule. As mentioned, all the remaining time will now be used for debugging and refinement.

Tanushree Status Report 4/6

Isha and I tested our integrated subsystems last Sunday with real-time input. The integrated system works with a certain margin of error. Isha has implemented a second iteration of the calibration function to reduce these errors.

During Wednesday’s lab, all three of us tested the tap detection subsystem to calculate an appropriate threshold value. We conducted this test in three different environments and eventually settled on the one that worked best, which we hope to use for our final demo. A sketch of how such a threshold might be derived is shown below.
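The following sketch illustrates one way a threshold could be derived from ambient readings recorded in a given environment; it is an assumption about the approach, not our actual calibration code, and the factor k is a tunable placeholder:

```python
import statistics

def compute_threshold(ambient_readings, k=4):
    """Set the tap threshold k standard deviations above the ambient mean.

    ambient_readings: sensor values captured in the target environment
    with no taps occurring.
    """
    mean = statistics.mean(ambient_readings)
    stdev = statistics.pstdev(ambient_readings)
    return mean + k * stdev
```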

I also worked on integrating Suann’s tap detection code with the rest of our system. Thus, now we have all three subsystems: GUI, color detection and tap detection, integrated. My future tasks include:

1. testing this integration with real-time input

2. refining the GUI to make it more appealing to the user

I am currently on schedule.

Tanushree Status Report 3/30

I worked with Isha to integrate my GUI subsystem with her detection subsystem. I added the following functionalities to my GUI in order for the integration to work:

1. After receiving the calibration coordinates from Isha’s part of the code, I set them as reference points and used them to calculate the observed width and height of the GUI as seen by the camera used for the detection algorithm.

2. On receiving the coordinates of the red dot from Isha’s detection code, I calculate the position of the user’s finger with respect to the actual GUI coordinates. Thus, the red dot observed by the camera needs to be transformed to a scale that matches the GUI dimensions; a minimal sketch of this mapping is shown after this list.
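A minimal sketch of the mapping, assuming the calibration gives the top-left and bottom-right corners of the GUI as seen by the camera; the function and parameter names (and the example numbers) are illustrative, not our exact implementation:

```python
def map_to_gui(dot, cal_top_left, cal_bottom_right, gui_width, gui_height):
    """Map a red-dot coordinate from camera pixels to GUI pixels.

    dot, cal_top_left, cal_bottom_right are (x, y) tuples in camera pixels;
    gui_width and gui_height are the GUI's actual dimensions in pixels.
    """
    observed_w = cal_bottom_right[0] - cal_top_left[0]
    observed_h = cal_bottom_right[1] - cal_top_left[1]
    # Normalize the dot relative to the calibrated region, then rescale.
    gx = (dot[0] - cal_top_left[0]) / observed_w * gui_width
    gy = (dot[1] - cal_top_left[1]) / observed_h * gui_height
    return int(gx), int(gy)

# Example with hard-coded synthetic values (hypothetical numbers):
# map_to_gui((420, 310), (100, 80), (1180, 760), 1920, 1080)
```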

I tested my code using synthetic data: I hard-coded the calibration coordinates and red dot coordinates according to the test images we took on Tuesday. The mapping code works, but with some degree of error. Thus, my future tasks are the following:

1. I will be working on testing my code on real-time data and making appropriate changes to improve the accuracy.

2. I will also be working on integrating the tap detection subsystem with the GUI and detection subsystems.

Below is an example test image:

Below is the GUI; the mapped coordinate of the red dot is shown with a red ‘x’. Since it falls within the button, the application launches the modal containing the text from the appropriate file:

I am currently on schedule.


Team Status Report 3/30

Currently, the most significant risk is integrating all three of our subsystems together and achieving a high accuracy for our overall system. We have already integrated two subsystems and will be integrating the third this week. Thus, we will have about a month to test our system and make appropriate changes in order to improve its accuracy. We want to also ensure smooth user interaction with our interface without interruptions.

We have decided to stick with Arduino instead of moving to the Pi in order to stay on schedule. If we had switched to the Pi, we would have needed to wait a week for additional circuit components to come in.

No changes were made.

Tanushree Status Report 3/23

I finished writing the code for the first iteration of the GUI using PyQt. I have implemented the following functionalities:

1. The GUI application launches in full-screen mode by default and uses the dimensions of the user’s screen as the baseline for all calculations regarding button sizes and placement. This allows the GUI to adapt to any screen size, making it scalable and thus portable.

2. Since we plan on demonstrating our project as an interactive poster board, on receiving the coordinates where a tap occurred, the application identifies which button, if any, was tapped and creates a modal displaying information from the text file corresponding to the topic associated with that button. Once the user is done reading this information, he/she can press the return button to go back to the original screen.

Currently, the GUI has six buttons corresponding to six different headings. The background has been set to black, while the buttons are a dark grey, and the buttons have been spaced out enough to help us achieve better accuracy when detecting which button was tapped. These choices are open to change as we keep testing over the next few weeks. A rough sketch of this structure is shown below.
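A minimal sketch of this structure in PyQt5; the topic labels, file names, colors, and spacing value are placeholders for illustration, not our final design:

```python
import sys
from PyQt5.QtWidgets import (QApplication, QWidget, QPushButton,
                             QGridLayout, QMessageBox)

TOPICS = ["Topic 1", "Topic 2", "Topic 3", "Topic 4", "Topic 5", "Topic 6"]

class PosterGUI(QWidget):
    def __init__(self):
        super().__init__()
        self.setStyleSheet("background-color: black;")
        layout = QGridLayout(self)
        layout.setSpacing(80)  # generous spacing to ease tap disambiguation
        self.buttons = []
        for i, topic in enumerate(TOPICS):
            btn = QPushButton(topic)
            btn.setStyleSheet("background-color: #333; color: white;")
            btn.clicked.connect(lambda _, t=topic: self.show_info(t))
            layout.addWidget(btn, i // 3, i % 3)
            self.buttons.append(btn)

    def show_info(self, topic):
        # Placeholder for reading the topic's text file into a modal.
        QMessageBox.information(self, topic,
                                f"Contents of {topic}.txt would appear here.")

    def handle_tap(self, x, y):
        # Given GUI-space tap coordinates, find which button (if any) was hit.
        for btn in self.buttons:
            if btn.geometry().contains(x, y):
                btn.click()
                return

if __name__ == "__main__":
    app = QApplication(sys.argv)
    gui = PosterGUI()
    gui.showFullScreen()  # full-screen so screen size drives all layout
    sys.exit(app.exec_())
```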

I am currently on schedule. My next tasks include:

1. creating a function that would help map the coordinates detected by the color detection algorithm to the coordinates in the GUI, using the green border coordinates that Isha will be detecting.

2. finding a way to pass the detected coordinates to the main GUI application after it has been launched. I will look into PyQt built-in mechanisms that allow this; one candidate approach is sketched below.
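One candidate mechanism (an assumption on my part, not a settled design) is PyQt’s signal/slot system, which would let the detection code push coordinates into the already-running GUI:

```python
from PyQt5.QtCore import QObject, pyqtSignal

class CoordinateBridge(QObject):
    # Emitted with GUI-space x, y whenever a tap position is detected.
    tap_detected = pyqtSignal(int, int)

# In the detection code:  bridge.tap_detected.emit(x, y)
# In the GUI setup code:  bridge.tap_detected.connect(gui.handle_tap)
```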

Tanushree Status Report 3/9

I am working on the GUI, which will be created using PyQt. I re-familiarized myself with the OOP paradigm in Python, as it has been a while since I last used Python for OOP. I am also learning about PyQt, which I have not worked with before. Since I am assigned to work on the entire GUI setup, I decided to break this broad task down into smaller tasks covering the kinds of functionality we need for our project and the UI that will work best for our color detection algorithm.

This breakdown may change as I familiarize myself with PyQt further, leading to a change in my approach. For the next week or so, I will continue to work towards implementing the tasks I have outlined above, testing the system along the way and again at the end when it is integrated with real-time data. I am so far on schedule.