Ryan Lee – Week 10 Status Report

Since last week, we have all been working on integrating the ML portion with the Raspberry Pi. Although the code ran well on Ari’s local computer, we ran into many module import issues on the Pi. This week I also had to fix the ALSA audio issues on the Pi so that we could successfully record directly from the stethoscope.
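The exact ALSA fix depends on the Pi’s configuration, but the capture step can be sketched as building an `arecord` command for the stethoscope’s USB audio interface. The device name `hw:1,0` below is an assumption (check `arecord -l` on the Pi for the real card/device index), and the helper name is illustrative, not our actual code:

```python
def build_arecord_cmd(device="hw:1,0", seconds=5, rate=44100, out_path="heart.wav"):
    """Build an arecord command for 16-bit mono capture from a USB audio device.

    The device name "hw:1,0" is an assumption -- run `arecord -l` on the Pi
    to find the card/device index of the stethoscope's USB interface.
    """
    return [
        "arecord",
        "-D", device,      # ALSA capture device
        "-f", "S16_LE",    # 16-bit little-endian samples
        "-c", "1",         # mono
        "-r", str(rate),   # sample rate in Hz
        "-d", str(seconds),# capture duration in seconds
        out_path,
    ]

cmd = build_arecord_cmd()
print(" ".join(cmd))
# On the Pi this could be executed with subprocess.run(cmd, check=True).
```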

Aside from fixing the import issues on the Pi, I also worked on checking the sensitivity and specificity of the ML testing algorithm using a confusion matrix. Eri and I initially found that only 65% of our abnormal test sounds were being classified as abnormal, so we decided to add more abnormal heart sounds to the training set, because the dataset previously contained far more normal heart sounds than abnormal ones. After this fix, we were getting around an 85% true positive rate and an 85% true negative rate, which is what our specifications originally aimed for. The overall validation accuracy was also 89%.
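The sensitivity/specificity computation above boils down to four counts from the confusion matrix. A minimal sketch (assuming the label convention 1 = abnormal, 0 = normal; function and variable names are illustrative):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute true positive rate and true negative rate from binary labels.

    Assumed label convention: 1 = abnormal (positive), 0 = normal (negative).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # fraction of abnormal sounds caught
    specificity = tn / (tn + fp)  # fraction of normal sounds correctly passed
    return sensitivity, specificity

# Toy example: 4 abnormal (1) and 4 normal (0) test sounds
sens, spec = sensitivity_specificity([1, 1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 0, 1])
print(sens, spec)  # → 0.75 0.75
```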

For next week, we as a team have to finish the integration work, as well as verify, using a double-blind experiment, that the system also works with the speaker.

Eri – Week 10

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
    • We reached an accuracy of 89% this week
    • We found the specificity and the sensitivity. The specificity rate was around 95%, but the sensitivity was pretty low (65%).
    • We decided to find more datasets with abnormal heart sounds, and after training our CNN on the larger dataset we were able to raise the sensitivity to 79%.
    • Created a real-time signal display for the heart sounds from the stethoscope
    • This real-time signal shows up on our Raspberry Pi, but it’s a bit slower than we’d like.
  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
    • Our progress is on schedule
  • What deliverables do you hope to complete in the next week?
    • Get the sensitivity rate up to 85% and finish integrating the stethoscope with our code. (We are nearly done with the integration part)
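Besides collecting more abnormal recordings (which is what we actually did), one simple way to reduce this kind of class imbalance during training is to oversample the minority class. A hypothetical sketch, assuming binary 0/1 labels where the minority class is the smaller one:

```python
import random

def oversample_minority(samples, labels, minority_label=1, seed=0):
    """Duplicate minority-class examples until both classes are balanced.

    Illustrative sketch only, not our actual pipeline. Assumes binary 0/1
    labels and that `minority_label` really is the smaller class.
    """
    rng = random.Random(seed)
    minority = [s for s, l in zip(samples, labels) if l == minority_label]
    majority = [s for s, l in zip(samples, labels) if l != minority_label]
    deficit = len(majority) - len(minority)
    extra = [rng.choice(minority) for _ in range(deficit)]  # resample with replacement
    balanced_samples = majority + minority + extra
    balanced_labels = ([1 - minority_label] * len(majority)
                       + [minority_label] * (len(minority) + deficit))
    return balanced_samples, balanced_labels
```

For example, balancing 7 normal against 3 abnormal recordings yields 7 of each, at the cost of repeated abnormal examples (which is why adding genuinely new abnormal data is preferable when available).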

Ryan Lee – Week 9 Status Report

For this week, our main focus was improving our CNN, since in the last few weeks we hit a plateau at about 75% accuracy. Improvements to the preprocessing actually decreased the accuracy, so we had to explore other methods. First, we added more audio data to train on, since we weren’t yet training on the entire PhysioNet dataset. We also experimented with the architecture by changing the number of channels in our last two convolutional layers. We still use five convolutional layers in total with a filter size of three for each one, but in the last two convolutional layers we switched the number of filters from 48 each to 24 and 12, respectively. Once we tested the CNN with these changes, we consistently achieved an accuracy above 85%, as shown by the image below.

However, once we tested again the next day, the CNN was consistently achieving an accuracy above 80% but not always above 85%. Therefore, more testing has to be done to make the final leap above 85%. We plan to test increasing the total number of filters in each convolutional layer, as well as increasing the number of epochs and the batch size, although that will increase our training time significantly.
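To see why changing the last two layers’ filter counts matters for training time, the parameter counts can be tallied directly. The sketch below assumes 2-D 3x3 convolutions over spectrograms; the channel widths of the first three layers (1, 16, 32) are illustrative assumptions, since only the change in the last two layers (48, 48 to 24, 12) is described above:

```python
def conv2d_params(in_ch, out_ch, k=3):
    # weights (in_ch * out_ch * k * k) plus one bias per output channel
    return in_ch * out_ch * k * k + out_ch

# Channel progressions; the first three widths are assumptions --
# only the last two layers' change (48, 48 -> 24, 12) is from the report.
old = [1, 16, 32, 48, 48, 48]
new = [1, 16, 32, 48, 24, 12]

old_total = sum(conv2d_params(a, b) for a, b in zip(old, old[1:]))
new_total = sum(conv2d_params(a, b) for a, b in zip(new, new[1:]))
print(old_total, new_total)  # → 60240 31668
```

Under these assumptions, narrowing the last two layers roughly halves the convolutional parameter count, which trades capacity for faster training.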

However, since our final demo is coming up, we are putting improvements to the CNN on hold and focusing on having it communicate with the actual stethoscope to read real-time data and classify the sound. Our team is meeting on Sunday to finish this task.

Week 9 Team Report

This week our team made good progress; however, we still have a decent amount to do in the upcoming weeks. We are aiming to finish the real-time system tomorrow, Sunday, in lab. We are also going to decide whether we want to do AR/AS vs. MR/MS classification, depending on how much time we have, and we will try to raise our accuracy to consistently above 85%. We were able to reach 85% a few times, but we want to push it higher. Finally, with the remaining time we have, we will focus on complete testing of our finished system to make sure our device works. We do not have a ton of time left, but I think we can finish what we need to get done to have a good final project.

Ari’s Week 9 Status Report

This week was when we began our integration into a real-time system. I personally accomplished a lot this week and am excited to integrate before our demo. The Raspberry Pi that we ordered arrived, along with the touch screen that we will use to run the system. I flashed the Pi and was able to configure the touch screen to work with it. I also started a Python program that will be the basis for our entire project. This codebase will create a simple GUI that will live-display the signal received from the stethoscope and will have a button to begin the analysis. The code will invoke our MATLAB code to classify a heart sound and will display the result. We aim to have this real-time integration ready for our pre-final demo and will work a lot on it tomorrow to make sure it is ready. The main things we have left to do are our testing, after the integration is done, and trying to squeeze a little more accuracy out of our ML.
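A live signal display like the one described above typically keeps only a rolling window of the most recent samples for the plot. A minimal sketch of that data structure (the class name and window size are illustrative; the real GUI and plotting code are separate):

```python
from collections import deque

class LiveSignalBuffer:
    """Rolling window of the most recent audio samples for a GUI plot.

    Old samples fall off the front automatically once the window is full,
    so redrawing the plot only ever touches a fixed number of points.
    """
    def __init__(self, window_samples):
        self.buf = deque(maxlen=window_samples)

    def push(self, chunk):
        """Append a new chunk of samples from the stethoscope stream."""
        self.buf.extend(chunk)

    def snapshot(self):
        """Return the current window as a list, for plotting."""
        return list(self.buf)

buf = LiveSignalBuffer(window_samples=4)
buf.push([1, 2, 3])
buf.push([4, 5])
print(buf.snapshot())  # → [2, 3, 4, 5]
```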

Eri Week 9 Journal

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
    • We finished our double-blind trials on our testing environment this week.
    • By training the SVM and CNN on the larger dataset, we finally were able to reach an average accuracy of around 83% using the CNN, which is close to our goal.
    • We started integrating all of the necessary things for the demo on Monday, and tomorrow we will focus more on transforming our MATLAB code into C.
  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
    • Our progress is behind schedule, since we wanted to be done with classifying AS/MR by the end of this week. We realized we may scrap that completely and just make the stethoscope classify abnormal vs. normal heart sounds, since we do not have a lot of time left.
  • What deliverables do you hope to complete in the next week?
    • Finish integrating our code with our stethoscope and start working on using the Raspberry Pi to display whether the user has a normal or abnormal heart sound.

Ari’s Week 8 Status Report

This week was very productive for me. I was able to accomplish a lot regarding the hardware and the structure of the physical stethoscope, as well as working on the ML algorithm with Ryan and Eri. I also worked on figuring out a way to convert our MATLAB code into C code so that we could run it on a Raspberry Pi. I placed orders for a Raspberry Pi, on which the code will run, and a screen through which a user can interact with the device. Further, this week I was able to begin writing the code that will start the processing on the Raspberry Pi, and I also designed the status reporting system. Finally, I designed the double-blind experiment and scheduled it for next week to figure out whether humans can tell the difference between the sounds from my stethoscope and the other sounds.

Ryan Lee – Week 8 Status Report

This week I worked on training the CNN with extra preprocessing. First, the audio files were trimmed to be 5 seconds long and passed through Eri’s Shannon Expansion noise removal algorithm. I then took the spectrograms of this newly processed audio data and trained my CNN on it. Since the spectrograms now all covered the same 5-second time frame, we expected our accuracy to go up. However, the newly trained CNN produced an accuracy of around 65%, a decrease from the previously trained CNNs with the same number of audio files. More testing has to be done on how to improve this algorithm.
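The spectrogram step can be sketched as a Hann-windowed short-time FFT over the fixed-length 5-second clips. The frame length, hop size, and the 2 kHz sample rate below are illustrative assumptions; the actual parameters in our MATLAB pipeline may differ:

```python
import numpy as np

def magnitude_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT.

    Sketch of the preprocessing idea only; frame length, hop, and window
    here are assumed values, not our pipeline's actual settings.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of the real-valued input
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame_len//2 + 1)

# 5 s of audio at an assumed 2 kHz rate gives a fixed-size input for the CNN
sig = np.random.randn(5 * 2000)
spec = magnitude_spectrogram(sig)
print(spec.shape)  # → (77, 129)
```

Because every clip is trimmed to the same 5-second length first, every spectrogram has the same shape, which is what lets the CNN take them as fixed-size inputs.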

I was travelling last week and this Sunday and Monday for interviews, and Spring Carnival was this weekend, so I could not invest too much time into this project this past week. However, more time will be invested in future weeks to ensure that we reach the desired 85% accuracy by the final demo. Next week I want to research LSTMs to preprocess the data instead of trimming a random 5-second segment.

Eri Week 8 Journal

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).
    • Double-blind trials on the testing environment – so far people cannot tell the difference between the heart sound from our stethoscope and the dataset heart sound
    • Researched ways to find the similarity of the heart sound from stethoscope and the dataset so we can get a percentage to prove the testing environment is good enough
    • Trained the CNN on the larger dataset from PhysioNet.
  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
    • Behind schedule – I did not work on this project enough this week due to carnival and other commitments.
    • I will put in more time next week and work with Ari and Ryan in lab more.
  • What deliverables do you hope to complete in the next week?
    • We will be working on our CNN for AR/MS to reach an accuracy of 85%.
    • Start training on abnormal heart sounds for MR/AS.
    • Find a percentage accuracy of the similarity for our testing environment
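One candidate for the similarity percentage mentioned above is the peak of the normalized cross-correlation between the stethoscope capture of a played-back heart sound and the original dataset recording; this is only one possible metric, not necessarily the one we will use:

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Peak normalized cross-correlation between two recordings, in [-1, 1].

    A value near 1 means the stethoscope capture closely matches the
    original; this is an illustrative metric, not our final choice.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Zero-mean, unit-normalize so the peak for identical signals is 1.0
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))
```

Multiplying the result by 100 would give the “percentage similarity” figure; a threshold (say, above 90%) could then serve as the pass criterion for the testing environment.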

Ryan Lee – Week 7 Status Report

Earlier this week I worked on sending data from Ari’s stethoscope to my ML algorithm. There were difficulties with this portion because we tried to send real-time data at first. There are no libraries or packages for this in MATLAB, so it was all experimental. The stethoscope was inputting garbled audio data at first, so we switched to a simpler method of just reading audio data from the stethoscope and saving it locally. I then trained my network on the training data and tested our stethoscope data on it to classify. Every time we tested it on our own heartbeats, it was classified as ‘normal’, which is a good sign, but we had no subject with an abnormal heart sound to test on as well. Therefore, this is something we must test more thoroughly once Ari has finished making the testing setup with the speaker. I also plotted the input so that the heart sound could be visualized for viewers during the demo. I was not able to make any improvements to the ML algorithm because I was busy travelling this week, but I will be working on that this coming week.
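The save-locally-then-read approach can be sketched with Python’s standard-library `wave` module (our MATLAB code does the equivalent with its own audio I/O; the 16-bit mono format and the helper names here are assumptions):

```python
import struct
import wave

def save_wav(path, samples, rate=8000):
    """Write floats in [-1, 1] to a 16-bit mono PCM WAV file.

    Sketch of the save-locally step; format and rate are assumed values.
    """
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        pcm = b"".join(struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
                       for s in samples)
        w.writeframes(pcm)

def load_wav(path):
    """Read a 16-bit mono WAV file back into floats in [-1, 1]."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    return [v / 32767 for v in struct.unpack("<%dh" % (len(raw) // 2), raw)]
```

Saving the capture to disk first decouples recording from classification, which is what made this route simpler than streaming real-time data into MATLAB.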

For next week, I want to improve the ML portion by adding Eri’s denoising algorithm to the preprocessing. I also want to improve the communication between MATLAB and the stethoscope so that we can analyze data directly from the input instead of having to save it locally.