Status Update 11/24/2018

Aayush:

  • I tried changing the resolution of the image taken by the Pi camera to speed up the eye detection. An input of size (1024, 768) took ~10 seconds while (256, 200) still took ~9.8 seconds, so I did not bother tweaking the algorithm any further. Instead, I focused on finding the right plank position for mounting the Raspberry Pi camera and on the ideal lighting. For the algorithm to work properly, the image needs to be in portrait mode with at least soft yellow light (anything brighter works equally well).
  • I tried automating the Bluetooth setup but could not, because we lack the necessary permissions on some config files. Next week, I need to look into how to get around this. Worst case, we can set up the Bluetooth manually.
  • I tested the server-client communication by running the server on the Pi and the client in the Android app, and it works well.
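
For reference, here is a minimal sketch of the kind of exchange we tested. The host, port, and message contents are placeholders, not our actual values:

```python
import socket

HOST = "0.0.0.0"  # placeholder: the Pi listens on all interfaces
PORT = 5000       # placeholder port

def run_server():
    """Receive one message from the app and send an acknowledgment back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(1024)
            print("received from app:", data.decode())
            conn.sendall(b"ack")

def run_client(pi_address):
    """The Android app plays this role; shown in Python for testing."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((pi_address, PORT))
        cli.sendall(b"hello from app")
        print("server replied:", cli.recv(1024).decode())

if __name__ == "__main__":
    run_server()
```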

Angela:

  • I worked out some candidate designs for the setup and for the baby's wearable. I found some scrap pieces of fabric and ordered materials to make the wearable. I obtained a piece of wood for the mount in case we need it. I also worked with Aayush to make sure the potential setups would not interfere with the eye open/closed detection.
  • I helped with some of the testing of the server-client communication, as well as with automating the Bluetooth setup.
  • I thought about how to make sure everything runs sequentially, because the Raspberry Pi only has one processor. The main plan is to run each algorithm in turn: at the beginning of each cycle, get data from the Arduino, and if an eye open/closed test hasn't run in the last 15 seconds, run one. Results are then pushed through the server to the app if the app is open; if not, they will not update until the app is opened. (A sketch of this loop follows the goals list below.)
  • Goals for the next week:
    • Iron out the setup design, implement it, and make sure all of the components are ordered
    • Start building the wearable
    • Figure out what needs to be changed to automate the Bluetooth setup
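
Here is a rough sketch of the sequential loop described above. The detection functions are stubs standing in for our actual algorithms, and the server push is just a print here:

```python
import random
import time

EYE_TEST_INTERVAL = 15  # seconds between eye open/closed checks

# Stubs standing in for our actual algorithms.
def read_arduino_data():
    return [random.random() for _ in range(100)]

def detect_heart_rate(data):
    return 120  # bpm, dummy value

def detect_sleep_wake(data):
    return "sleep"

def detect_eyes_open():
    return False  # the slow (~10 s) camera + detection step

def push_to_server(results):
    print(results)  # in the real system, this goes out through the server

def main_loop():
    last_eye_test = 0.0
    while True:
        data = read_arduino_data()  # 1. get data from the Arduino
        results = {
            "heart_rate": detect_heart_rate(data),  # 2. run each algorithm in turn
            "sleep_state": detect_sleep_wake(data),
        }
        # 3. run the slow eye test at most once every 15 seconds
        if time.time() - last_eye_test > EYE_TEST_INTERVAL:
            results["eyes_open"] = detect_eyes_open()
            last_eye_test = time.time()
        push_to_server(results)  # 4. the app picks this up whenever it is open
        time.sleep(1)

if __name__ == "__main__":
    main_loop()
```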

Priyanka:

  • I got part of the circuit soldered, and now that both sensors are connected to the I2C pins on the Teensy, I can get started on reading data from both sensors at the same time.
  • I had to figure out how to add the accelerometer to the I2C bus in my Arduino code. It wasn't sufficient that the sensors were wired to the I2C pins, because the bus was only identifying one of the sensors.
  • I also figured out that we had not ordered all the parts for the battery (booster, charger, and the LiPo battery), so I ordered those.
  • I also worked on automating the Bluetooth connection so that it reconnects to the Pi on reboot. That's still not working, and we will have to make some changes to source files to make it possible.
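
One workaround we are considering is a small script, launched at boot, that keeps retrying the Bluetooth serial port until the link comes up. This is only a sketch: the device path and baud rate are assumptions (HC-06 defaults), not verified settings.

```python
import time

import serial  # pyserial

PORT = "/dev/rfcomm0"  # assumed device node after an rfcomm bind; ours may differ
BAUD = 9600            # HC-06 default baud rate; an assumption

def wait_for_bluetooth():
    """Retry opening the Bluetooth serial port until the link comes up."""
    while True:
        try:
            return serial.Serial(PORT, BAUD, timeout=1)
        except serial.SerialException:
            time.sleep(2)  # link not up yet; try again

if __name__ == "__main__":
    link = wait_for_bluetooth()
    print("Bluetooth link up:", link.name)
```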

Status Update 11/17/2018

Aayush

  • I finally got the eye open/close detection to work on the Pi after struggling for over three weeks to install the required modules. It turns out the Pi is about 10 times slower than the MacBook I had been timing on: the Pi takes ~10 seconds per image taken with the Raspberry Pi camera at a resolution of (1024, 768).

Going forward, I plan to reduce the resolution of the image and see if that improves the processing time. I also want to mount the Raspberry Pi on the crib so that I can make the necessary adjustments to light, exposure, etc. to ensure consistency across images.
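
For reference, the capture step we are timing looks roughly like this with the picamera library (the output path and warm-up delay are placeholders):

```python
import time

from picamera import PiCamera

with PiCamera() as camera:
    camera.resolution = (1024, 768)  # the resolution we are timing; will try lower
    camera.start_preview()
    time.sleep(2)                    # let the sensor settle on an exposure

    start = time.time()
    camera.capture("frame.jpg")      # placeholder output path
    print("capture took %.2f s" % (time.time() - start))
```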

  • I wrote a basic server in Python and modified the app to send/receive data from the server over the internet. Below is a demonstration of data transfer from the app to the server.

Going forward, I need to wait for Priyanka to automate the Bluetooth link that carries the sensor data to the Pi, so that I can include it in the Python script and load everything at boot time.

Angela

  • I have continued to work on the crying detection. I installed packages such as PyAudio to do the audio file transformation.
  • I am working on integrating my sleep-wake algorithm and heart rate detection with the packets that Priyanka is sending (a parsing sketch follows this list). The packets are in the form of:
    • start character
    • sampling rate
    • data stream
    • end character
  • My plan for next week is to finish this, so that we can start automating the detection algorithms after we come back from break. I will also work with some materials to turn the circuitry into a wearable.
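
For reference, here is a sketch of how we might parse these packets in Python. The '<' and '>' delimiters and the comma-separated encoding are placeholders; the real start/end characters and field layout may differ:

```python
def parse_packet(raw: str):
    """Parse one sensor packet of the form <rate,v1,v2,...>.

    The '<' and '>' delimiters and comma separation are stand-ins for
    the actual start character, end character, and field encoding.
    """
    if not (raw.startswith("<") and raw.endswith(">")):
        raise ValueError("malformed packet: missing start/end character")
    fields = raw[1:-1].split(",")
    sampling_rate = float(fields[0])       # first field: sampling rate in Hz
    data = [float(x) for x in fields[1:]]  # remaining fields: the data stream
    return sampling_rate, data

# Example: a 25 Hz packet carrying four samples
rate, samples = parse_packet("<25,0.1,0.3,0.2,0.4>")
print(rate, samples)  # 25.0 [0.1, 0.3, 0.2, 0.4]
```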

Priyanka

  • Worked on connecting the Pi over WiFi to the app that Aayush has been building. Spent most of the time researching what to do, since I needed to get an endpoint of the app from Aayush.
  • Worked on automating the start-up so that everything connects properly on boot (still a work in progress).
  • Also need to solder the wearable hardware together so that I can get readings from both sensors at the same time.

Status Update 11/10/2018

Aayush

  • Worked on setting things up for the midpoint demo: installing the necessary modules on the Pi, helping with the Bluetooth setup, and timing the various algorithms to check that we meet our stated requirements.
  • Currently, processing each image for eye detection takes about 1.2 seconds. I tried reducing the resolution of the input image to speed up the detection.
  • I set up the framework for the Android app. The various screens have dummy values for now, but the app is fully functional. For example, the home screen has a small button leading to the settings menu, where the user can select one of two modes: verbose or notifications.
  • The goal for next week is to start integrating the various parts, since that is one of the hardest parts of our project. I also plan to solder the circuit that Priyanka built, and to discuss with Angela the format in which data will be sent to the app.

Angela

  • I tested the heart rate algorithm on real heart rate data gathered from Priyanka with the SparkFun MAX30105 sensor. So happy it works!! The measured heart rate was 112 bpm; one algorithm calculated 114 bpm and the other 119 bpm. Both calculated heart rates are within our goal of less than 10 bpm of error. We put it onto the Raspberry Pi, and it works fine there: processing a minute's worth of data takes 0.04 seconds, under our goal of 0.1 seconds. (A sketch of this style of beat detection follows the list.)
  • We also tested the sleep-wake algorithm for speed: 20 minutes of data took 0.4 seconds, i.e. 0.02 seconds per minute, again under our goal of 0.1 seconds.
  • I helped Priyanka debug the data transmission over Bluetooth between the sensors and the Raspberry Pi
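
For context, the flavor of beat detection involved looks roughly like the sketch below, using SciPy's peak finder on a synthetic signal. Our actual algorithms differ in their filtering details, and the parameters here are illustrative:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_bpm(signal, fs):
    """Estimate heart rate by counting peaks in a PPG-like signal.

    signal: 1-D array of sensor samples; fs: sampling rate in Hz.
    The minimum peak spacing assumes heart rates stay below ~220 bpm.
    """
    min_gap = int(fs * 60 / 220)  # fewest samples between beats at 220 bpm
    peaks, _ = find_peaks(signal, distance=min_gap, prominence=0.5)
    duration_min = len(signal) / fs / 60
    return len(peaks) / duration_min

# Synthetic test: 60 s of a 112 bpm sinusoid at 50 Hz plus noise
fs, bpm = 50, 112
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * (bpm / 60) * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_bpm(sig, fs)))  # ~112
```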

Priyanka

  • Gathered heart rate data for Angela so that she could test her algorithms. It was gathered with the SparkFun MAX30105 sensor. I initially struggled to log the data into a CSV/text file, so I'm happy that we can now do that.
  • Another thing I wanted to accomplish before the midpoint demo was to connect the Teensy board wirelessly to the Raspberry Pi. By last Sunday, I was able to connect the Teensy wirelessly to my computer and thought the Raspberry Pi would be just as easy, but I was wrong 🙁
  • I had to change the serial port configuration in the config settings; it took me a while to figure out that the Raspberry Pi was not receiving the data because of its initial setup. Also, I was sending data to the Pi using print statements, which were not adding a line ending to the data, so I had to send a "\n" as well in the byte array.
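
The line ending matters because the Pi-side reader is line-delimited. Here is a sketch of that reader; the port name and baud rate are assumptions, not our exact configuration:

```python
import serial  # pyserial

# Port name and baud rate are assumptions; ours depend on the Pi's serial config
with serial.Serial("/dev/serial0", 9600, timeout=2) as port:
    while True:
        line = port.readline()        # blocks until b"\n" arrives (or timeout)
        if not line:
            continue                  # timed out with no complete line
        print(line.decode().strip())  # one sensor reading per line
```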

Status Update 10/27/2018

Aayush:

  • Used Prof. Low’s suggestion to do some pre-processing on the image before feeding it into the eye detection algorithm. In particular, I made sure the algorithm works well on images at resolutions below the maximum of the camera we plan to use, and did some testing to confirm that it does. I also did a basic timing analysis, using the specs of my laptop's processor and the Raspberry Pi's, to figure out how the algorithm will scale when we move from laptop to Pi. My rough estimate is that each image would take about 1.5 seconds. (A sketch of the pre-processing step follows this list.)
  • Looked into how to receive data over WiFi in an Android app. Started thinking about the various screens for the app.
  • Going forward, I plan to develop the app and plot graphs using dummy data.
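
Here is a sketch of this kind of pre-processing step using OpenCV; the target width and filename are illustrative, not our chosen values:

```python
import cv2

def preprocess(image, target_width=512):
    """Downscale and grayscale an image before eye detection.

    target_width is illustrative; we are still choosing the real value.
    """
    h, w = image.shape[:2]
    scale = target_width / w
    small = cv2.resize(image, (target_width, int(h * scale)),
                       interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

# Usage: the detector then runs on the smaller grayscale image
frame = cv2.imread("crib.jpg")  # placeholder filename
gray = preprocess(frame)
```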

Angela:

  • I gathered more samples of accelerometer data. Some samples included rocking motions, regular daily motions (of an adult, not a baby), and some different gestures. I also tried to include different types of jerky movements to see how well my current algorithm tolerates them.
  • I discovered that the accelerometer data gathered from my phone has inconsistent sampling rates, which can affect the results of the convolution-and-thresholding algorithm. I have been trying different ways of interpolating and also downsampling the data, because it is currently sampled at such a high frequency that using FFTs does not seem practical. (A sketch of the resample-then-convolve idea follows this list.)
  • The current time it takes to determine whether a sample is from sleep or wake is less than 1 second for a sample with 20,000 data points.
  • I have tinkered with my parameters for the convolution box sizes and thresholds.
  • This week I plan on trying different time periods and sampling rates to see how well they perform at detecting sleep on my current data samples.
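
Here is a sketch of the resample-then-convolve idea in NumPy. The sampling rate, window size, and threshold are placeholders, not values validated against my data:

```python
import numpy as np

def resample_uniform(t, x, fs=50):
    """Interpolate irregularly sampled accelerometer data onto a uniform grid."""
    t_uniform = np.arange(t[0], t[-1], 1 / fs)
    return t_uniform, np.interp(t_uniform, t, x)

def sleep_wake(x, fs=50, win_s=30, threshold=0.05):
    """Label each sample sleep (True) or wake (False) by smoothed motion energy.

    win_s and threshold are placeholder parameters, not tuned values.
    """
    motion = np.abs(np.diff(x))                       # sample-to-sample movement
    box = np.ones(int(win_s * fs)) / (win_s * fs)     # convolution box filter
    smoothed = np.convolve(motion, box, mode="same")  # moving average of motion
    return smoothed < threshold                       # True where still (asleep)

# Usage on irregular timestamps t (seconds) and acceleration magnitudes a
t = np.sort(np.random.uniform(0, 600, 5000))
a = 0.01 * np.random.randn(5000)
t_u, a_u = resample_uniform(t, a)
asleep = sleep_wake(a_u)
```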

Priyanka:

  • I tried to get the Bluetooth module to work with the Teensy board and send data to the computer.
  • However, I'm having trouble getting the HC-06 to stay paired with the computer for longer than 9-10 seconds. In that time, only one message is sent and echoed back. I'm trying to diagnose the issue by playing with the baud rate, USB connections, etc., but nothing has worked so far.
  • Also, the accelerometer is able to connect to the Teensy but does not always register changes in direction and velocity. I need to diagnose this too, but it is secondary to the Bluetooth situation.

Status Update 10/13/2018

Aayush

  • Worked on the design review presentation and the design document
  • Came across a paper on “Real-time eye blinking detection using facial landmarks” and used its idea, along with OpenCV and dlib, to implement an eye open/close detection algorithm. The algorithm maps 68 facial landmarks onto the input image.

Eye markers for open and closed eyes look like:

The algorithm then computes distances between the landmark features for each eye and averages them into a single number. I trained it on various baby images, and it seems to work really well: only 1 out of 100 was incorrectly classified.
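
In code, that distance computation is essentially the eye aspect ratio (EAR) from the paper. Below is a sketch using dlib's standard 68-landmark model; the 0.2 threshold is the commonly cited default, not a value we tuned on baby images:

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-landmark model for dlib (downloaded separately)
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|); it shrinks as the eye closes."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def eyes_open(gray, ear_threshold=0.2):
    """Return True/False for the first detected face, or None if none found."""
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
        left, right = pts[42:48], pts[36:42]  # dlib's left/right eye landmarks
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        return ear > ear_threshold
    return None
```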

Correctly detecting open eyes:

Correctly detecting closed eyes:

  • I also developed a simple Android application to save accelerometer data into a file on the phone, so that we can use it to test the sleep detection algorithm. The app currently saves acceleration along the three axes: x, y, and z.

Angela

  • Worked on the design review presentation and the design document
  • Identified important types of patterns that distinguish sleep and wake accelerometer data for the use cases that we were interested in
    • Sleep with short bursts of movement around once per minute
    • Regular activity, irregular acceleration
    • Regular activity, sinusoidal acceleration for motion artifact
  • Ran simulations on simulated accelerometer data to measure accuracy
    • Got an accuracy of 99%
  • Ran simulations on simulated heart rate data with motion artifact and noise
    • Got an accuracy of 98%
    • BPM of 120 to 150
  • Reevaluated some testing data sets
  • Read papers on sensor fusion for computer vision
    • Goals for next week are to see how to apply this to our problem

Status Update 9/22/2018

Summary

All of us worked to finalize the project requirements and came up with hardware/software specifications. We spent some time developing the slide deck for the presentation which gave us an opportunity to think through the integration, algorithms, risk factors and unknowns once again. We got constructive feedback which we are going to take into account going forward.

Aayush

A breakdown of how I spent my time:

  • About 50% designing the app screens, coming up with various settings, and looking into how to receive/send data using Bluetooth
  • About 20% looking into some real time eye open/close detection algorithms
  • Rest of the time thinking about the form factor and performance metrics for the design

Angela

Here’s a breakdown of my time this week:

  • About 50% figuring out the best method for beat detection and implementation on both MATLAB and Python
  • About 30% researching methods for sleep prediction and finding ways to optimize ML on a Raspberry Pi
  • About 20% working on the presentation

Priyanka

Here is how I spent my time this week:

  • 50% of time trying to figure out how the different hardware components will connect and work together
  • 30% of my time looking for hardware components that were small but met all the requirements (ADC conversion, Bluetooth module, etc.)
  • 20% of my time working on the presentation.