Anna’s Status Update for 10/31
This past week, I played around with different arrangements and orientations of the microphones. I first tried a very simple linear arrangement of the three microphones, expecting the microphone nearest to me (the acoustic source) to register the sound first, but for some reason, they appeared to be detecting my voice out of order. I’m still not quite sure why this is happening, but if it persists, it will obviously throw a wrench into my acoustic location algorithm. I thought it might have been the added latency of the print statements, but a little test showed that reading the inputs from the three microphones only achieves a maximum of 93 samples per second (total across the three microphones, so only 31 samples per second each), which… sucks.

We purchased the ADS1115 ADC for its 16-bit sampling at 860 samples per second, so what we’re getting is not great. I did some research on how to maximize our sampling rate, and I found that, while the default I2C bus speed is 100 kbit/s, it can be increased to 1 Mbit/s without error. I tried that and managed to get 39 samples per second per microphone when reading all three inputs. While the increase helps, it’s still very concerning: the acoustic location calculations need extreme precision, and time delays as small as 0.0001 s between microphones need to be captured. Resolving a 0.0001 s delay would require on the order of 10,000 samples per second per microphone, so at the rate we’re getting, we cannot achieve this.
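For reference, here’s a minimal sketch of the kind of timing test I ran, using Adafruit’s ADS1x15 CircuitPython library (via Blinka on the Jetson). The one-second window and the A0–A2 pin assignments are just for illustration:

```python
import time

import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

# ADS1115 on the default I2C bus (address 0x48 when the ADDR pin is tied to GND).
i2c = busio.I2C(board.SCL, board.SDA)
ads = ADS.ADS1115(i2c)
ads.data_rate = 860  # the chip's maximum internal conversion rate (samples/s)

# One AnalogIn per microphone, on inputs A0-A2.
mics = [AnalogIn(ads, ADS.P0), AnalogIn(ads, ADS.P1), AnalogIn(ads, ADS.P2)]

# Count how many full passes over all three microphones fit in one second.
passes = 0
start = time.monotonic()
while time.monotonic() - start < 1.0:
    _ = [mic.value for mic in mics]  # each .value is a separate I2C transaction
    passes += 1

print(f"{passes} samples/s per microphone, {passes * len(mics)} samples/s total")
```

Each read is its own I2C round trip, which is why the per-microphone rate drops every time another channel gets polled.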
I am considering alternatives now, and one idea came from learning that we had misunderstood how the ADC connects the mics to the Jetson. Originally, I thought the ADC had an input and an output for each microphone, so we’d only be able to have one microphone per I2C port; in fact, it has four inputs and only one I2C output. Using Adafruit’s ADS1115 Python module, I can read each of these four inputs individually over one I2C port. Thus, we can have up to (2 ADCs) * (4 microphones/ADC) = 8 microphones total, which is way more than we previously thought we could use! However, given how significant a hit the per-microphone sampling rate takes each time we add a microphone to read, I don’t want to put a fourth microphone on an ADC. What I’m thinking is that we use two ADCs, each connected to three microphones, for a total of six, and place them around the base of the iContact in a wide-diameter circle (the greater the diameter, the more discernible the time difference between microphones). Then, since we can’t rely on the precision of our previous algorithm’s acoustic location calculations at such a low sampling rate, our algorithm simply becomes checking which microphone detects the sound of the speaker first, and the motors can turn in the direction of that microphone. Splitting the yaw axis into six angular regions seems adequate, as CV should be able to compensate by centering on the speaker. A sketch of what this could look like is below.
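To make this concrete, here’s a rough sketch of the direction-finding loop. It assumes the two ADS1115s share one I2C bus at addresses 0x48 and 0x49 (selected with the ADDR pin), and the baseline calibration, amplitude threshold, and mic-to-angle mapping are all hypothetical placeholders that would need tuning on the real hardware:

```python
import board
import busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn

i2c = busio.I2C(board.SCL, board.SDA)

# Two ADS1115s on one bus; the ADDR pin selects 0x48 (GND) or 0x49 (VDD).
ads_a = ADS.ADS1115(i2c, address=0x48)
ads_b = ADS.ADS1115(i2c, address=0x49)
for ads in (ads_a, ads_b):
    ads.data_rate = 860

# Three microphones per ADC, ordered so index i sits at the center of the
# i-th 60-degree region around the base of the iContact (hypothetical layout).
mics = [
    AnalogIn(ads_a, ADS.P0), AnalogIn(ads_a, ADS.P1), AnalogIn(ads_a, ADS.P2),
    AnalogIn(ads_b, ADS.P0), AnalogIn(ads_b, ADS.P1), AnalogIn(ads_b, ADS.P2),
]

# Estimate each mic's resting (DC bias) level while the room is quiet.
baseline = [sum(m.value for _ in range(20)) / 20 for m in mics]

THRESHOLD = 2000  # hypothetical amplitude threshold above baseline; needs calibration


def first_mic_over_threshold():
    """Poll the mics round-robin; return the index of the first one whose
    reading deviates from its quiet-room baseline by more than THRESHOLD."""
    while True:
        for i, mic in enumerate(mics):
            if abs(mic.value - baseline[i]) > THRESHOLD:
                return i


region = first_mic_over_threshold()
print(f"Speaker detected near mic {region}; turn yaw toward the {region * 60}-degree region")
```

One thing we’d want to verify before trusting this: since the mics are polled sequentially, the loop order could bias which one crosses the threshold first, so we should test that the detected region actually tracks the speaker’s position.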
For this week, I’ll order some new ADCs and microphones so I can get to work on this new implementation. While I wait for them to arrive, I’ll figure out why the time delays seem to be off and research other ways to improve our sampling rate.