This week, I mainly worked on implementing the actual drumstick detection code and integrating it with the code that Ben wrote.
The code Ben wrote calculates the x-value, y-value, and radius of each drum ring at the beginning of the program and stores them in a list so that they don’t have to be recalculated later. I then pass this list into a function that calls another short function, which calculates the exponentially weighted moving average of the ring detections from the most recent 20 video frames, like so:
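Roughly, that helper looks like the sketch below. Aside from bufIndex and bufSize, the names (detectionBuf, ALPHA) and the smoothing factor’s value are placeholders rather than our exact code:

```python
bufSize = 20                   # number of recent frames we average over
detectionBuf = [-1] * bufSize  # ring detection result (-1 to 3) per frame
bufIndex = 0                   # index of the most recently written frame
ALPHA = 0.3                    # smoothing factor (placeholder value)

def weighted_average():
    """Exponentially weighted moving average over the last bufSize frames,
    walking the circular buffer from oldest to newest."""
    avg = 0.0
    for i in range(bufSize):
        # (bufIndex + 1) % bufSize holds the oldest entry, so stepping
        # forward from there visits the frames in chronological order
        sample = detectionBuf[(bufIndex + 1 + i) % bufSize]
        avg = ALPHA * sample + (1 - ALPHA) * avg
    return avg
```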
It may seem strange, but accessing the buffer in this way is optimal because of how I am putting data into it:
Currently, with every video frame read by OpenCV’s video capture module, I first determine which drumstick’s thread is calling the function using threading.current_thread().name, since the threads are named “red” and “green” when spawned; this decides whether a red or green mask is applied to the image. I then use findContours() to acquire the detected x and y location of that drumstick. Afterwards, I pass these x and y values to another function that uses the aforementioned drum ring location list to determine which bounding circle the drumstick tip is in. This returns a number between 0 and 3 (inclusive) if the tip is detected within the bounds of the corresponding drum, and -1 otherwise. Finally, this number is put into the buffer at index bufIndex, a global variable updated using the formula bufIndex = (bufIndex + 1) % bufSize, which maintains the circular aspect of the buffer.
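Put together, the per-frame path looks roughly like the sketch below. It reuses the buffer variables from the earlier sketch; the HSV color ranges, the ring coordinates, and the helper names are all placeholders, and I am assuming the convention that bufIndex is advanced before the write so it always points at the newest entry:

```python
import threading

import cv2
import numpy as np

# (x, y, radius) of each drum ring, computed once at startup by Ben's
# code; these particular coordinates are made up for illustration
drumRings = [(120, 300, 60), (260, 300, 60), (400, 300, 60), (540, 300, 60)]

COLOR_RANGES = {  # placeholder HSV bounds for the two drumstick tips
    "red":   (np.array([0, 120, 120]),  np.array([10, 255, 255])),
    "green": (np.array([45, 120, 120]), np.array([75, 255, 255])),
}

def find_ring(x, y):
    """Return the index (0-3) of the ring containing (x, y), or -1."""
    for i, (cx, cy, r) in enumerate(drumRings):
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
            return i
    return -1

def process_frame(frame):
    global bufIndex
    # The thread is named "red" or "green" when spawned, so its name
    # selects which color mask to apply
    lower, upper = COLOR_RANGES[threading.current_thread().name]
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ring = -1
    if contours:
        # Treat the largest contour as the stick tip; use its centroid
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] != 0:
            ring = find_ring(m["m10"] / m["m00"], m["m01"] / m["m00"])
    # Advance the index first (assumed convention) so that bufIndex points
    # at the newest entry and (bufIndex + 1) % bufSize at the oldest
    bufIndex = (bufIndex + 1) % bufSize
    detectionBuf[bufIndex] = ring
```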
As a result, it is entirely possible for the ring detection from a more recent video frame to sit at a lower index than those of older frames. Thus, we start at the current value of (bufIndex + 1) % bufSize (which should be the least recent frame) and loop around the buffer in order, applying the formula as we go.
I am also using this approach because I want to find out whether there is any significant difference between calculating the drum location of each video frame as it is read and then putting that value into the buffer, versus putting each raw frame into the buffer as it is read and determining the drum location afterwards. I have both versions implemented, and plan to time them in the near future in order to reduce latency as much as possible.
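That comparison will probably be something as simple as the harness below, where process_frame stands in for either pipeline and recorded_frames is a hypothetical list of pre-captured frames:

```python
import time

def time_pipeline(process, frames):
    """Average per-frame latency (in seconds) of a pipeline function."""
    start = time.perf_counter()
    for frame in frames:
        process(frame)
    return (time.perf_counter() - start) / len(frames)

# e.g. time_pipeline(process_frame, recorded_frames) for each variant
```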
Based on our Gantt chart, we should currently be finishing the general drum ring/ring-entry detection code, as well as starting to determine the accelerometer “spike” threshold values. Therefore, I believe I am roughly on track, since we could (hypothetically) plug the actual red/green drumstick tip color range values into the code I wrote, connect the webcam, and detect whether a stick is within the bounds of a drum ring. However, I do have to put more time into testing for accelerometer spike values as soon as we are able to transmit its data.
Therefore, next week, I plan to start testing with the accelerometer to determine an acceptable “threshold value” for a spike/hit. This would essentially consist of hitting the air, a drum pad, and/or the table with a machined drumstick, and observing how the output data changes over time.
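As a starting point, the check itself could be as simple as the sketch below; the threshold value here is a pure placeholder to be replaced by whatever the testing reveals, and how we actually receive the (ax, ay, az) samples is still to be determined:

```python
import math

THRESHOLD = 2.5  # placeholder spike threshold, to be tuned from test data

def is_spike(ax, ay, az):
    """Flag a candidate hit when the acceleration magnitude spikes."""
    return math.sqrt(ax**2 + ay**2 + az**2) > THRESHOLD
```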