This week I mainly focused on continuing development without the Jetson while we wait for it to be up and running. First, I set up the RPi on CMU WiFi over a wired connection; now that the initial connection is configured, setting up the wireless connection will be easier. I also worked on tweaking my detection model to handle objects that pass from the field of view of one sensor into another, so that only one sensor is chosen for the direction sent to the Jetson. For this, I extended my filtering algorithm: if an object is detected on two adjacent sensors within a certain distance range and time frame, the measurement from the sensor farther from the user's north is discounted (a sketch of this logic is below). Finally, I established Bluetooth connectivity from the RPi to the iOS app; this connection is secondary to the Jetson connection and will serve as a backup in case the Jetson fails.
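To illustrate the filtering step, here is a minimal sketch of the adjacent-sensor logic described above. The sensor bearings, thresholds, and reading format are placeholders I chose for illustration, not the exact values or structures in my actual code.

```python
from dataclasses import dataclass

# Assumed bearings (degrees from the user's north) for each sensor slot.
SENSOR_BEARINGS = {0: -60.0, 1: -20.0, 2: 20.0, 3: 60.0}
DIST_TOLERANCE_CM = 15.0   # readings this close are treated as the same object
TIME_WINDOW_S = 0.2        # readings this close in time count as simultaneous

@dataclass
class Reading:
    sensor: int         # sensor index around the headband
    distance_cm: float  # measured range to the object
    timestamp: float    # time the echo was received

def filter_double_detections(readings):
    """Drop the reading farther from north when two adjacent sensors
    see the same object at a similar distance and time."""
    discarded = set()
    for i, a in enumerate(readings):
        for b in readings[i + 1:]:
            adjacent = abs(a.sensor - b.sensor) == 1
            same_dist = abs(a.distance_cm - b.distance_cm) <= DIST_TOLERANCE_CM
            same_time = abs(a.timestamp - b.timestamp) <= TIME_WINDOW_S
            if adjacent and same_dist and same_time:
                # Keep the sensor pointing closer to the user's north.
                farther = max(a, b, key=lambda r: abs(SENSOR_BEARINGS[r.sensor]))
                discarded.add(id(farther))
    return [r for r in readings if id(r) not in discarded]
```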
I’m currently on schedule.
Next week, we hope to have the Jetson up and running, so we will work on connecting the Jetson to the iOS app and on verifying that the serial connection between the RPi and the Jetson works as expected. We are also going to build the actual device that the user wears so we can run tests based on how the product sits on a real user.
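As a first check of that serial link, I expect to run something like the following echo test from the RPi side. This is only a rough sketch assuming a USB serial connection and pyserial; the port name and baud rate are placeholders rather than our final configuration.

```python
import serial

PORT = "/dev/ttyUSB0"  # assumed device node for the RPi-to-Jetson link
BAUD = 115200          # placeholder baud rate

with serial.Serial(PORT, BAUD, timeout=1) as link:
    link.write(b"PING\n")              # RPi sends a known message
    reply = link.readline().strip()    # Jetson side is assumed to echo it back
    print("serial link ok" if reply == b"PING" else f"unexpected reply: {reply!r}")
```

If the echo comes back cleanly, we can move on to sending the actual direction data over the same link.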
Verification and validation:
So far I have completed thorough testing of the range and direction coverage of the ultrasonic sensors. I tested each ultrasonic sensor individually to characterize its angular coverage and its distance measurement capabilities, and I have repeated the same test with multiple sensors. I still aim to do further multi-sensor testing, analyzing how the sensors react when an object overlaps between two sensors or moves from one sensor's field of view into another's. These results will be used to decide the placement of the sensors on the headband and how much spacing there should be between each sensor. I have also used this data to adjust how my code handles double detections on adjacent sensors, so that overlapping detections do not give the user the wrong bearing on an incoming object.
As for testing the capabilities of the camera, I have tested how the camera performs in different environments and whether the image quality is good enough for our ML model to run on. I will also complete latency testing to measure how quickly data is transferred from the RPi to the Jetson and whether it meets our latency requirements. Finally, I have run tests on the portable battery to confirm that the battery life meets our requirements: with my program running, the battery lasts about 4 hours, though I anticipate this will drop once we are testing with the full model running.
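For the latency testing, my rough plan is to time round trips of a fixed-size payload over the serial link and average the results. The sketch below assumes the Jetson echoes the payload back; the port, baud rate, payload size, and trial count are placeholders.

```python
import time
import serial

PORT = "/dev/ttyUSB0"   # assumed device node for the RPi-to-Jetson link
BAUD = 115200           # placeholder baud rate
PAYLOAD = bytes(1024)   # 1 KB dummy payload standing in for real sensor data
TRIALS = 50

with serial.Serial(PORT, BAUD, timeout=2) as link:
    samples = []
    for _ in range(TRIALS):
        start = time.monotonic()
        link.write(PAYLOAD)
        link.read(len(PAYLOAD))   # Jetson side is assumed to echo the payload
        samples.append(time.monotonic() - start)
    mean_ms = 1000 * sum(samples) / len(samples)
    print(f"mean round-trip latency over {TRIALS} trials: {mean_ms:.1f} ms")
```

One-way latency would be roughly half the round-trip figure, which should be enough to tell whether we are within our requirement.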
For the rest of the project, most of my testing will be completed with the device mounted on the actual headband, since results could differ once it is placed on the user's head. The testing for the physical headband will be similar to the testing I have completed on the individual sensors, but it will also account for the user's head movement and body movement in general.