This week, I got the temperature sensor working with the Raspberry Pi alongside Ayesha. I had never worked with a Raspberry Pi before, and with Angie’s help I learned how to re-image it and connect it to my local Wi-Fi. Although Ayesha and I had previously gotten the temperature sensor working with the Arduino, we wanted to make sure it was taking accurate readings on the RPi as well. We found a guide online for our temperature sensor, got everything connected, and verified its readings against the ambient thermometer we acquired a week ago. I’ll include pictures of that process in the team status report.

Additionally, Ayesha and I worked on getting the GPS sensor running with the RPi to check whether Ayesha’s web app was receiving data correctly. Angie had already written a script to format and send the GPS data to the web app, so we tested that script. We followed the same step-by-step guide Angie used, but we were not able to get the GPS to acquire a fix on a location. Ayesha therefore wrote a dummy script that sent fixed GPS coordinates to her web app, and we confirmed that the data showed up on her end! (A rough sketch of what a dummy sender like that can look like is at the bottom of this post.)

On my own, I worked on integrating the speaker with the RPi. There wasn’t a straightforward guide for the speaker kit, but I realized that speakers are more about the hardware connection than the module itself, since little to no code is involved. I was working on the speakers at home, so I didn’t have the soldering tools and wire cutters needed to actually assemble them. However, I was able to copy over a “.wav” file of me saying, “Wave your arms if you are able. This will help us detect you better.” I also installed the necessary audio libraries on the Pi, so once the speakers are wired up, playing the message should be easy (see the playback snippet below).

Lastly, I retrained the machine learning network with 3600 more samples Angie collected. Unfortunately, the F1 score dropped to 0.33. After discussing it with Angie, we think this is because all of the new human-labeled samples had the subjects deep breathing, which is a hard-to-detect case I don’t think we should focus on right now. So I’m hopeful that with more samples of humans waving their arms, the F1 score will climb back above the 0.5 that was our previous best.
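For reference, here is a minimal sketch of what a dummy GPS sender like the one I described could look like. The endpoint URL, payload fields, and coordinates below are placeholders, not the actual interface of Ayesha’s web app.

```python
import time
import requests

# Placeholder endpoint; the real URL and payload format belong to Ayesha's web app.
WEBAPP_URL = "http://example.com/api/location"

# Fixed "dummy" coordinates standing in for a real GPS fix.
DUMMY_LAT = 40.4433
DUMMY_LON = -79.9436

while True:
    payload = {"lat": DUMMY_LAT, "lon": DUMMY_LON, "timestamp": time.time()}
    try:
        resp = requests.post(WEBAPP_URL, json=payload, timeout=5)
        print("Sent dummy fix, status:", resp.status_code)
    except requests.RequestException as err:
        print("Could not reach the web app:", err)
    time.sleep(10)  # repeat the same fix every 10 seconds
```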
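Similarly, once the speaker hardware is assembled, playing the recorded message should only take a couple of lines. Here is a sketch that assumes the audio file sits at /home/pi/wave_arms.wav (a made-up path) and that aplay from the ALSA utilities is available, which it is on a standard Raspberry Pi OS image:

```python
import subprocess

# Hypothetical path; the actual .wav file name and location may differ.
MESSAGE_WAV = "/home/pi/wave_arms.wav"

# aplay ships with the ALSA utilities included in Raspberry Pi OS.
subprocess.run(["aplay", MESSAGE_WAV], check=True)
```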
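And for anyone wondering where the F1 numbers come from, this is roughly the standard scikit-learn call for computing an F1 score on a held-out set; the labels and predictions below are placeholders, not our actual data:

```python
from sklearn.metrics import f1_score

# Placeholder ground-truth labels (1 = human present) and model predictions.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]

# F1 is the harmonic mean of precision and recall.
print("F1:", f1_score(y_true, y_pred))
```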

My progress is on schedule.

In the next week, I will get the speakers fully working, and I will retrain the network once Angie gives me more data. Once I get an F1 score close to 0.7, I will integrate my part with Ayesha’s.

