Isha Status Report 4/20

This week I have been debugging the system with Tanu and Suann. We found that most of the latency issues we were seeing were caused by running on macOS. When we switched to running Ubuntu on my laptop, these issues went away. To make this switch, we needed a new way of zooming in and turning off autofocus on the Logitech webcam, since the webcam has no Logitech software support on Ubuntu. We were able to avoid zooming altogether by repositioning our camera and projector, which also cuts a step out of the initial system setup.
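
One way to turn off autofocus on Ubuntu without the Logitech software is through the V4L2 controls that OpenCV exposes. A minimal sketch of that approach (not necessarily our exact method; device index 0 and driver support for these properties are assumptions, though most Logitech UVC cameras honor them):

```python
# Minimal sketch: disable autofocus on a UVC webcam via OpenCV's V4L2 backend.
# Device index 0 and driver support for these properties are assumptions.
import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # open the webcam with the V4L2 backend
cap.set(cv2.CAP_PROP_AUTOFOCUS, 0)       # 0 turns continuous autofocus off
cap.set(cv2.CAP_PROP_FOCUS, 0)           # optionally pin focus to a fixed value

ret, frame = cap.read()
if ret:
    print("captured frame with fixed focus:", frame.shape)
cap.release()
```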

Next Steps:
The main issue we are still facing is the sensor's slow response to taps on the board. After receiving feedback from some peers, we are also prioritizing making the system more intuitive and visually appealing. This next week we will solder together more sensor circuits to expand the range of tap detection on the poster board and work on improving our GUI.
We are on schedule.

Tanushree Status Report 4/20

This week I have been working with my team to debug some of the functionality errors. We decided to switch our platform from macOS to Ubuntu, which eliminated most of our latency issues. We also restructured our code by migrating code between the classes we created for the GUI, which helped in debugging the other issues we had identified with our system implementation.

Our next tasks include making the UI more visually appealing so that it invites greater user interaction and engagement. We also need to increase the tap detection accuracy rate: currently, the user often needs to tap a few times before the vibrations are picked up by the sensor. We hope to improve this accuracy by adding more sensors to our tap detection circuit, as sketched below. Lastly, we need to test our system iteratively and make sure it can run for a long period of time without failing. We are currently on schedule.
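
To illustrate the multi-sensor idea (a sketch, not our actual code): if the Arduino reports one comma-separated line of analog readings per sample, the host can register a tap whenever any channel crosses the threshold, so each added sensor widens coverage. The port name, baud rate, message format, and threshold below are all assumptions.

```python
import serial  # pyserial

PORT = "/dev/ttyACM0"  # assumed Arduino serial port
THRESHOLD = 100        # assumed analog threshold; would be tuned per setup

with serial.Serial(PORT, 9600, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        readings = [int(v) for v in line.split(",") if v.isdigit()]
        # A tap on ANY sensor counts, so adding sensors widens coverage.
        if any(r > THRESHOLD for r in readings):
            print("tap detected, readings:", readings)
```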

Suann Status Report 4/14

Working with my teammates, I discovered several major bugs, including extra black screens that appear when a modal closes and an overly sensitive tap threshold.
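
One common cause of leftover blank windows in PyQt is a dialog that gets hidden on close but never destroyed. A hedged sketch of that pattern (an illustration, not a confirmed diagnosis of our bug):

```python
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QDialog, QLabel, QVBoxLayout

app = QApplication([])

def show_modal():
    dialog = QDialog()
    dialog.setAttribute(Qt.WA_DeleteOnClose)       # destroy, don't just hide, on close
    dialog.setWindowModality(Qt.ApplicationModal)  # blocks the rest of the UI
    QVBoxLayout(dialog).addWidget(QLabel("Modal content"))
    dialog.show()  # show() avoids the exec()/delete-on-close pitfall

show_modal()
app.exec_()
```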

My goal for the next week is to help my teammates figure out tap detection bugs. I will do this by familiarizing myself with PyQt functionality so I can contribute more ideas during debugging sessions.

Team Status Update 4/14

Thresholding for our tap detection system is currently incorrect. This can be fixed with more testing, especially in our final demo environment. Another large risk is the latency of our system: calibration takes a long time, and so does modal loading. We plan to fix this by trying different QtWidget classes and possibly optimizing our detection algorithm.

No changes were made to the system.

Isha Status Report 4/14

We finished integrating our third subsystem last Sunday, so this week we worked as a group to identify and fix several bugs in the system. The system works, but it is slow due to various inefficiencies; response time after the initial integration averaged several seconds.

Our progress is on schedule.

Next Steps:
Continue working as a group to fix bugs and optimize performance

Tanushree Status Report 4/14

Now that we have integrated all our subsystems, all our remaining time will be used for debugging the functionality errors and refining the user interface and experience. Thus, going forward, we will essentially be working as a group rather than assigning individual tasks, unless necessary. This week we met on Thursday and resolved a couple of known bugs, but we also identified new ones.

My progress is on schedule. As mentioned, all the remaining time will now be used for debugging and refinement.

Suann Status Report 4/6

The Arduino code was tied together this week. The tap subsystem code was tested to see whether it worked as expected; the subsystem was able to send user tap data accurately and quickly every time a tap was sensed. However, the tap threshold varied greatly depending on the poster board setup: the threshold was higher when the poster board was placed in its stand and assembled at full height, and lower when the same stand setup was only partially assembled and placed on a table.
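
Since the threshold depends so strongly on the physical setup, a per-setup calibration pass is one way to handle it: sample ambient vibration for a few seconds, then place the threshold a margin above the noise peak. A rough sketch of that idea (the port, baud rate, one-reading-per-line format, and margin are assumptions, not our firmware's actual values):

```python
import time
import serial  # pyserial

def calibrate_threshold(port="/dev/ttyACM0", seconds=5.0, margin=1.5):
    """Return a tap threshold placed `margin`x above the ambient noise peak."""
    peak = 0
    with serial.Serial(port, 9600, timeout=1) as ser:
        end = time.time() + seconds
        while time.time() < end:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if line.isdigit():  # expects one analog reading per line
                peak = max(peak, int(line))
    return int(peak * margin)

print("calibrated threshold:", calibrate_threshold())
```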

I will tune the tap threshold in the setting of the final demo to get the best thresholding. I will also help verify our product requirements.

Tanushree Status Report 4/6

Isha and I tested our integrated subsystems last Sunday with real-time input. The integrated system works with a certain margin of error. Isha has implemented a second iteration of the calibration function to reduce these errors.

During Wednesday’s lab, all three of us tested the tap detection subsystem to calculate an appropriate threshold value. We conducted this test in three different environments and eventually settled on the best one, which we hope to use for our final demo.

I also worked on integrating Suann’s tap detection code with the rest of our system, so all three subsystems (GUI, color detection, and tap detection) are now integrated. My future tasks include:

1. testing this integration with real-time input

2. refining the GUI to make it more appealing to the user

I am currently on schedule.

Isha Status Report 4/6

We decided to do our real-time testing with the second classifier, which takes about 0.8 seconds on average to finish its computations. I tried a few other classification methods, svm.LinearSVC and the SGD classifier, since they are supposed to scale better. However, the results of our original classifier remain significantly better, so we decided these methods are not worth pursuing further.
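
For context, the comparison looked roughly like the sketch below. The data here is a random placeholder standing in for the feature vectors our color detector produces; only the classifier classes and the accuracy comparison are the point.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((500, 10))         # placeholder features (ours come from color detection)
y = rng.integers(0, 2, size=500)  # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for clf in (LinearSVC(), SGDClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te, y_te))
```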

Tanu and I finished debugging our detection and GUI integration on 3/31. Our system now works in real time. As a placeholder for taps, we added on-screen buttons that trigger color detection.
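
The placeholder wiring is simple: a button's click signal stands in for a tap and calls the detection routine. A toy sketch (run_color_detection here is a hypothetical stand-in for our detector):

```python
from PyQt5.QtWidgets import QApplication, QPushButton

def run_color_detection():
    print("color detection triggered")  # hypothetical stand-in for the real detector

app = QApplication([])
button = QPushButton("Tap here")
button.clicked.connect(run_color_detection)  # button click stands in for a tap
button.show()
app.exec_()
```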

The first iteration of the calibration system that I wrote last week is working. For this system, we detect the coordinates of the top-left and bottom-right corners of the GUI screen using our existing color detection. After repeated real-time testing, we realized that since a user has to position the red dot at the corner of the screen for calibration, there is significant variability in the detected coordinates. Thus the mapping of coordinates from detection space to the GUI is not accurate; each mapped coordinate is consistently a fixed distance away from its target coordinate.

To adjust for this error, I wrote a second iteration of calibration. This version adds a splash screen that appears when the program loads. A third coordinate is detected at the center of the screen to correct for the error in the detected coordinates of the two corners of the GUI.
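
To make the geometry concrete, here is a sketch of the mapping as described: the two detected corners define a scale-and-offset map from camera coordinates to GUI pixels, and the third, central detection estimates the fixed offset error to subtract. All coordinates in the example are made up.

```python
def make_mapper(cam_tl, cam_br, gui_w, gui_h, cam_center=None):
    """Map camera (x, y) points to GUI pixels using two detected corners,
    with an optional center detection to correct a fixed offset error."""
    sx = gui_w / (cam_br[0] - cam_tl[0])  # horizontal scale
    sy = gui_h / (cam_br[1] - cam_tl[1])  # vertical scale

    dx = dy = 0.0
    if cam_center is not None:
        # Where the detected screen center lands before correction:
        cx = (cam_center[0] - cam_tl[0]) * sx
        cy = (cam_center[1] - cam_tl[1]) * sy
        dx, dy = gui_w / 2 - cx, gui_h / 2 - cy  # offset that re-centers it

    def to_gui(pt):
        return ((pt[0] - cam_tl[0]) * sx + dx,
                (pt[1] - cam_tl[1]) * sy + dy)
    return to_gui

# Example with made-up detections and a 1280x800 GUI:
to_gui = make_mapper((102, 75), (598, 430), 1280, 800, cam_center=(355, 250))
print(to_gui((355, 250)))  # lands on the GUI center (640.0, 400.0) by construction
```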

Next steps:

  1. Test the second iteration of the calibration function in real time. If it does not give good results, possibly change it to project and detect red squares at the corners of the screen so that they are always at fixed positions.
  2. Compile the detection results from the 16 test images we took on Wednesday into a table

I am on schedule.

Team Status Report for 4/6

The most significant risk is the error rate of the calibration function. We are currently trying to mitigate this through different approaches, such as re-adjusting the reference coordinates using a third, central coordinate to recalculate the width and height. Additionally, the thresholding for tap detection seems fairly arbitrary given the data we have collected so far, so testing at the final demo site is necessary.

We have added an additional step to the calibration function: detecting a third coordinate at the center of the GUI screen to correct for calibration error. To implement this change cleanly, we added a splash screen that is solely meant for calibration. After calibration is complete, the user is directed to the main screen, which consists of the six buttons required for our poster.

No schedule changes have occurred this week.