This week I worked on adding new features and developing a visual interface for the fall detection algorithm. For the new features, I used changes in phase instead of the raw phase values, so that the orientation of the device does not affect the features. After combining the phase changes with the magnitudes, the algorithm now correctly classifies jumping and running activities as non-falls. I also combined Jacob's frequency features and tested the algorithm, but they did not improve its accuracy, so Jacob will work on revising the frequency features until they do.
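As a rough sketch of the idea (assuming a 10-sample window and, for illustration only, a phase computed as atan2(y, x) per sample; the exact phase definition in our code may differ), differencing the phase cancels any constant offset that a rotated device adds:

```python
import numpy as np

def window_features(x, y, z):
    """Build the 20-element feature vector for one 10-sample window."""
    mag = np.sqrt(x**2 + y**2 + z**2)          # 10 acceleration magnitudes
    phase = np.arctan2(y, x)                   # raw phase: shifts with device orientation
    dphase = np.diff(phase, prepend=phase[0])  # phase *changes*: a fixed offset cancels out
    return np.concatenate([mag, dphase])       # 10 magnitudes + 10 phase changes = 20

# Example with one random window of 10 samples per axis.
rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 10))
print(window_features(x, y, z).shape)          # (20,)
```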
I implemented the visual interface for next week's demo. Because we do not yet have our components integrated, I thought it would be useful to visualize the fall detection to make it easier to demonstrate that the algorithm works. When the program runs, it plots the input acceleration data, and a rectangle the size of our sliding window moves across the graph, displaying the algorithm's prediction for each window.
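A minimal sketch of how that visualization can work, with a synthetic signal and a simple threshold standing in for the trained SVM (both are placeholders, not our real data or classifier):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Rectangle

WIN = 10                                    # sliding-window length

# Stand-in signal: quiet accelerometer magnitudes plus one fake spike.
rng = np.random.default_rng(0)
mag = 9.8 + rng.normal(0, 0.5, 300)
mag[150:160] += 15                          # placeholder for a real fall

def predict(window):
    # Placeholder for clf.predict() on the window's feature vector.
    return window.max() > 20

fig, ax = plt.subplots()
ax.plot(mag)
box = Rectangle((0, mag.min()), WIN, mag.max() - mag.min(),
                fill=False, edgecolor="red", linewidth=2)
ax.add_patch(box)

def update(i):
    window = mag[i:i + WIN]
    box.set_x(i)                            # slide the rectangle along the graph
    ax.set_title("FALL" if predict(window) else "non-fall")
    return (box,)

anim = FuncAnimation(fig, update, frames=len(mag) - WIN, interval=50)
plt.show()
```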
I also collected some fall and non-fall data to compare with the dummy data that Jacob is collecting. In previous data collection I held my phone in my hand, but this time I placed it in my pocket so the data would include noise and possible orientation changes of the device in the pocket.
Next week, I will send the SVM program to Max, and we will try integrating the fall detection and Raspberry Pi components. We will need to measure how long the algorithm takes to run on the Pi and improve the running time if it is too slow. If it is, I will try reducing the feature size. Currently, the feature array for each sliding window has 20 values: 10 magnitudes and 10 phase changes. If I keep only the maximum value of each feature type, I can reduce this to 2 values per window, which should decrease the run time when the training data is large.
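To get a feel for the timing check and the fallback, here is a sketch with random stand-in data and a default scikit-learn SVM (the real run will use our recorded windows and trained model):

```python
import time
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-ins: random 20-feature windows and labels in place of
# our real training data, just to illustrate the timing measurement.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)
clf = SVC().fit(X, y)

windows = rng.normal(size=(1000, 20))
start = time.perf_counter()
clf.predict(windows)
elapsed = time.perf_counter() - start
print(f"{elapsed * 1000:.1f} ms to classify {len(windows)} windows")

def reduce_window(feat20):
    """Fallback if prediction is too slow on the Pi: keep only the maximum
    of the 10 magnitudes and the maximum absolute phase change."""
    return np.array([feat20[:10].max(), np.abs(feat20[10:]).max()])
```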