Status Update 11/24/2018

Aayush:

  • I tried changing the resolution of the image taken by the Pi camera to speed up the eye detection. An input of size (1024, 768) took ~10 seconds while (256, 200) still took ~9.8 seconds, so I did not bother tweaking the algorithm further. Instead, I focused on finding the right plank position to hold the Raspberry Pi camera and on the ideal lighting. For the algorithm to work properly, the image needs to be in portrait mode under soft yellow light (anything brighter works equally well).
  • I tried automating the Bluetooth setup but could not, because I do not have the necessary permissions on certain config files. Next week, I need to look into how to get around this. Worst case, we can set up the Bluetooth manually.
  • I tested the server-client communication by running the server on the Pi and the client in the Android app, and it works well.

Angela:

  • I worked out some options for the setup design as well as the wearable for the baby. I found some scrap pieces of fabric and ordered materials to make the wearable. I obtained a piece of wood for the mount in case we need it. I also worked with Aayush to make sure that the potential setups would not interfere with the eye open/closed detection.
  • I helped with some of the testing for the server-client communication as well as automating the Bluetooth setup.
  • I thought through how to make sure everything runs sequentially, since the Raspberry Pi only has one processor. The plan is to run each algorithm sequentially: at the beginning of each pass, get data from the Arduino, and if an eye-open test hasn’t run in the last 15 seconds, run one. Results are then pushed through the server to the app if the app is open; if not, the app will not update until it is opened.
  • Goals for the next week:
    • Iron out the setup design, implement it, and make sure all of the components are ordered
    • Start building the wearable
    • Figure out what needs to be changed to automate the Bluetooth setup
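The sequential loop described above can be sketched in Python; the function names and the sensor/eye-test hooks here are placeholders, not our actual code:

```python
EYE_TEST_INTERVAL = 15  # seconds between eye open/closed checks

def step(last_eye_test, now, read_sensors, run_eye_test):
    """One pass of the sequential loop: always read the Arduino data,
    and re-run the (slow) eye test only if the last result is stale."""
    results = {"sensors": read_sensors()}
    if now - last_eye_test >= EYE_TEST_INTERVAL:
        results["eyes"] = run_eye_test()
        last_eye_test = now
    return results, last_eye_test
```

Each pass always reads the sensor data; the expensive eye test only runs when the previous result is more than 15 seconds old, so everything fits on one core.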

Priyanka:

  • I got part of the circuit soldered, and now that both sensors are connected to the I2C pins on the Teensy, I can start reading data from both sensors at the same time.
  • I had to figure out how to add the accelerometer to the I2C bus in my Arduino code. Wiring the sensors to the I2C pins wasn’t sufficient, as the bus was only identifying one of the sensors.
  • I also discovered that we had not ordered all the parts for the battery (booster, charger, and the LiPo battery), so I ordered those.
  • I also worked on automating the Bluetooth connection so that it reconnects to the Pi on reboot. That’s still not working; we will have to make some changes to source files to make it possible.

Status Update 11/17/2018

Aayush

  • I finally got the eye open/close detection to work on the Pi after struggling for over 3 weeks to install the required modules. It turns out that the Pi is 10 times slower than my MacBook, on which I had been doing the timing. The Pi takes ~10 seconds for each image taken with the Raspberry Pi camera at a resolution of (1024, 768).

Going forward, I plan to reduce the resolution of the image and see if that improves the processing time. I also want to mount the Raspberry Pi on the crib so that I can make the necessary adjustments to light, exposure, etc. to ensure consistency across images.

  • I wrote a basic server in Python and modified the app to send/receive data from the server over the internet. Below is a demonstration of data transfer from the app to the server.

Going forward, I need to wait for Priyanka to automate the Bluetooth link used to send data from the sensors to the Pi so that I can include that in the Python script and load it at boot time.
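For reference, a basic server along these lines can be sketched with the standard library; the port number and the line-based echo protocol here are illustrative assumptions, not our actual setup:

```python
import socket
import threading

HOST, PORT = "0.0.0.0", 5000  # port is an assumption, not our real config

def handle_client(conn):
    """Echo each newline-terminated message back with an ack so the
    app can confirm the round trip."""
    with conn:
        for line in conn.makefile("rb"):
            conn.sendall(b"ack: " + line)

def serve(sock):
    """Accept clients forever, one handler thread per connection."""
    while True:
        conn, _addr = sock.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

def start_server(host=HOST, port=PORT):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen()
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock
```

The app side just opens a TCP connection to the Pi, writes a line, and reads the ack back.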

Angela

  • I have continued to work on the crying detection. I installed packages such as PyAudio to do the file transformation.
  • I am working on integrating my sleep-wake algorithm and heart rate detection with the packets that Priyanka is sending. The packets have the form:
    • start character
    • sampling rate
    • data stream
    • end character
  • My plans for the next week are to finish this, so that we can start automating the detection algorithms after we come back from break. I will also experiment with some materials to turn the circuitry into a wearable.
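A parser for that packet layout might look like the sketch below; the actual start/end characters and field encoding aren't fixed yet, so the 'S'/'E' delimiters and comma-separated ASCII values are illustrative assumptions:

```python
def parse_packet(raw: bytes):
    """Parse one packet laid out as:
        start character | sampling rate | data stream | end character
    Assumed wire format (illustrative only): b"S<rate>,<v1>,<v2>,...E"
    """
    if not (raw.startswith(b"S") and raw.endswith(b"E")):
        raise ValueError("missing start/end character")
    fields = raw[1:-1].split(b",")
    rate = int(fields[0])          # sampling rate in Hz
    data = [int(v) for v in fields[1:]]  # the data stream
    return rate, data
```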

Priyanka

  • Worked on connecting the Pi via WiFi with the app that Aayush has been building. Spent most of the time researching what to do, since I needed to get an endpoint of the app from Aayush.
  • Worked on automating the start-up so that everything connects properly on boot. (Still a work in progress.)
  • Also need to solder the wearable hardware together so that I can get readings from both sensors at the same time.

 

Status Update 11/10/2018

Aayush

  • Worked on setting things up for the midpoint demo: installing necessary modules on the Pi, helping with the Bluetooth setup, and timing various algorithms to see if we meet our stated requirements
  • Currently, processing each image for eye detection takes about 1.2 seconds. I tried reducing the resolution of the input image to speed up the detection
  • I set up the framework for the Android app. The various screens have dummy values for now, but the app is fully functional. For example, the home screen has a small button leading to the settings menu, where the user can select one of two modes: verbose or notifications.
  • Goal for next week is to start integrating the various parts together, since that is one of the hardest parts of our project. I also plan to solder the circuit that Priyanka built, and I need to discuss with Angela the format in which the data will be sent to the app.

Angela

  • I tested the heart rate algorithm on real heart rate data gathered from Priyanka with the SparkFun MAX30105 sensor. So happy it works!! The measured heart rate was 112 bpm; one algorithm calculated 114 bpm and the other 119 bpm. Both calculated heart rates are within our goal of less than 10 bpm error. We put it onto the Raspberry Pi, and it works fine there too. Processing a minute’s worth of data takes 0.04 seconds, under our goal of 0.1 seconds.
  • We tested the sleep-wake algorithm for speed. For 20 minutes of data, it took 0.4 seconds, i.e., 0.02 seconds per minute of data, which is under our goal of 0.1 seconds.
  • I helped Priyanka debug the data transmission over Bluetooth between the sensors and the Raspberry Pi
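The two heart-rate algorithms aren't detailed here, but a simple approach in the same spirit is counting threshold crossings on the pulse waveform; a rough numpy sketch, where the 50%-of-range threshold is an illustrative choice rather than our tuned value:

```python
import numpy as np

def estimate_bpm(signal, fs):
    """Estimate heart rate by counting upward threshold crossings.
    fs is the sampling rate in Hz; the halfway-between-min-and-max
    threshold is an illustrative choice."""
    x = np.asarray(signal, dtype=float)
    thresh = x.min() + 0.5 * (x.max() - x.min())
    above = x > thresh
    # a "beat" is a rising edge: below-threshold -> above-threshold
    beats = np.count_nonzero(~above[:-1] & above[1:])
    duration_min = len(x) / fs / 60.0
    return beats / duration_min
```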

 

Priyanka

  • Gathered heart rate data for Angela so that she could test her algorithms. It was gathered with the SparkFun MAX30105 sensor. I had struggled to collect the data into a csv/text file, so I’m happy that we can now do that.
  • Another thing I wanted to accomplish before the midpoint demo was to wirelessly connect the Teensy board with the Raspberry Pi. By last Sunday, I was able to wirelessly connect the Teensy board with the computer and thought it should be fairly easy with the Raspberry Pi, but I was wrong 🙁
  • I had to change the serial port configurations in the config settings. It took me a while to figure out that the Raspberry Pi was not getting the data because of its initial setup. Also, I was sending data to the Pi using print statements, which do not append a line ending, so I had to include a “\n” in the byte array I was sending.
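The line-ending issue comes down to framing: the receive side splits the byte stream on b"\n", so every packet must end with one. A small sketch of the receive-side split (the helper name is ours, not from our codebase):

```python
def split_frames(buffer: bytes):
    """Split a raw serial byte stream into complete newline-terminated
    frames, returning the frames plus any trailing partial frame that
    should stay buffered until more bytes arrive."""
    *frames, rest = buffer.split(b"\n")
    return frames, rest
```

Without the trailing "\n" from the sender, everything sits in `rest` forever and no frame is ever delivered, which is exactly the symptom we saw.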

Status Update 11/3/2018

Aayush:

  • I developed the first few screens of the mobile application.
  • I set up the Raspberry Pi, installed the necessary software, and connected it to the network
  • Installed OpenCV on the Pi and loaded the eye detection algorithm
  • Made sure that the Raspberry Pi camera works
  • Also took pictures of the doll and tested the eye detection algorithm on them to ensure things are set for the midpoint demo
  • Going forward, the plan is to make the required tweaks to the eye detection algorithm so that it works with the Raspberry Pi camera

Angela:

  • I worked on fine tuning the parameters of the sleep wake algorithm using my sleep and wake accelerometer data. Currently, the sleep wake algorithm is able to correctly classify my data using majority vote with acc_x, acc_y, and acc_z. There is occasional misclassification especially at the beginning and end of sleep cycles, but since I am not sure exactly when I fall asleep and wake up, this does not seem too significant.
  • I worked on an audio detection algorithm similar to the sleep-wake one: a convolution with a box signal. If more than a certain number of samples exceed a threshold, the segment is classified as crying (or talking) rather than sleeping. This will be used in conjunction with the accelerometer data; the goal is for these two components to be the main decision-making sources when the lights are off at night. I am still looking for audio of babies crying to test it on.
  • I worked with Aayush to get the Raspberry Pi running. I registered it with the CMU WiFi system, so it should be able to use CMU WiFi. We will be working on running our code on it before the midpoint demo.
  • Goal: Integrate the programs onto the Raspberry Pi and make sure it works.
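The box-signal convolution plus thresholding idea can be sketched in numpy; the window length and both thresholds below are illustrative placeholders, not our tuned parameters:

```python
import numpy as np

def classify_awake(samples, box_len=50, amp_thresh=1.0, count_thresh=5):
    """Smooth the rectified signal with a box filter, then call the
    segment 'awake' (or 'crying', for audio) if enough smoothed samples
    exceed the amplitude threshold. All three parameters are
    illustrative, not tuned values."""
    x = np.abs(np.asarray(samples, dtype=float))
    box = np.ones(box_len) / box_len          # box signal for convolution
    smoothed = np.convolve(x, box, mode="same")
    return np.count_nonzero(smoothed > amp_thresh) >= count_thresh
```

The same routine applies to either accelerometer or audio samples; only the thresholds would differ between the two.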

Priyanka:

  • I was able to get the Bluetooth module to work. The problem was that the Teensy module was not connected to ground, so the Bluetooth module could receive data but not send it.
  • I am able to connect the accelerometer to the Teensy module and get the acceleration values g_x, g_y, and g_z.
  • I am able to connect the pulse monitor and temperature sensor to the Teensy module.
  • As of now, the Teensy gets the data from the accelerometer and pulse sensor and creates a string with starting and ending bits to send via Bluetooth.
  • The computer’s parsing of the info is not correct as of now. The information being sent is correct, and the computer is able to identify the starting and ending bits of the array (and hence separate out the different pieces of information correctly), but the values themselves are not parsed correctly. Instead of integers representing acceleration in the different directions and the heart rate, we see weird characters.
  • Goal: Correct the parsing of the info.
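Those "weird characters" are what raw binary looks like when displayed as text: the bytes arrive intact but must be reinterpreted as integers rather than ASCII characters. A sketch of the decode step; the little-endian 16-bit field layout is an assumption for illustration, not our actual packet format:

```python
import struct

def decode_payload(payload: bytes):
    """Interpret the payload bytes between the start/end bits as
    little-endian signed 16-bit integers, e.g. (acc_x, acc_y, acc_z, bpm).
    The field layout and endianness are illustrative assumptions."""
    n = len(payload) // 2
    return struct.unpack("<" + "h" * n, payload[: 2 * n])
```

For example, a four-field payload built with `struct.pack("<4h", 10, -3, 980, 120)` decodes back to the integers `(10, -3, 980, 120)` instead of garbage characters.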

 

Status Update 10/27/2018

Aayush:

  • Used Prof. Low’s suggestion to do some pre-processing on the image before feeding it into the eye detection algorithm. In particular, I made sure the algorithm works well with images at resolutions below the max resolution of the camera we plan to use, and did some testing to confirm it. I also did a basic timing analysis using the specs of my laptop’s processor and the Raspberry Pi’s to estimate how the algorithm would scale as we move from laptop to Raspberry Pi. My rough estimate suggests that each image would take about 1.5 seconds.
  • Looked into how to receive data over WiFi in an Android app, and started thinking about the various screens for the app.
  • Going forward, I plan to develop the app and plot graphs using dummy data.
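The back-of-the-envelope scaling model behind such an estimate can be written down directly; the linear-in-pixels, inverse-in-clock assumptions are crude, and the inputs would be placeholder spec-sheet numbers:

```python
def estimate_pi_time(laptop_time_s, laptop_ghz, pi_ghz,
                     laptop_pixels, pi_pixels):
    """Crude model: runtime scales linearly with pixel count and
    inversely with clock speed. Ignores memory bandwidth, SIMD, etc."""
    return laptop_time_s * (laptop_ghz / pi_ghz) * (pi_pixels / laptop_pixels)
```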

Angela:

  • I gathered more samples of accelerometer data. Some samples included rocking motions, regular daily motions (of an adult, not a baby), and some different gestures. I also tried to include different types of jerky movements to see how well my current algorithm tolerates them.
  • I discovered that the accelerometer data gathered from my phone has inconsistent sampling rates, which can affect the results of the convolution and thresholding algorithm. I have been trying different ways of interpolating and also downsampling the data, because my data is currently sampled at a very high frequency, which makes FFTs impractical.
  • The current time that it takes to determine whether a sample is during sleep or wake is less than 1 second for a sample with 20,000 data points.
  • I have tinkered with my parameters for the convolution box sizes and thresholds.
  • This week I plan on trying different time periods and sampling rates to see how well they perform at detecting sleep on my current data samples.
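One way to put the irregularly timed phone samples on a uniform grid is linear interpolation; a numpy sketch, where the target rate is an example rather than our chosen value:

```python
import numpy as np

def resample_uniform(timestamps, values, target_hz=50.0):
    """Linearly interpolate irregularly sampled data onto a uniform
    time grid, so convolutions and FFTs see a constant sampling rate.
    target_hz is an example rate, not our tuned choice."""
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    t_uniform = np.arange(t[0], t[-1], 1.0 / target_hz)
    return t_uniform, np.interp(t_uniform, t, v)
```

Downsampling to a modest rate also shrinks the data, which helps keep the per-minute processing time low.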

 

Priyanka:

  • I tried to get the Bluetooth module to work with the Teensy board and send data to the computer
  • However, I’m having trouble getting the HC-06 to stay paired with the computer for longer than 9–10 seconds. In that time, only one message is sent and echoed back. I’m trying to diagnose the issue by playing with baud rates, USB connections, etc., but nothing has worked so far.
  • Also, the accelerometer is able to connect to the Teensy but is not always registering changes in direction and velocity. I need to diagnose this too, but it is secondary to the Bluetooth situation.

 

Status Update 10/20/2018

Priyanka:

  • Set up the MPU-6050 accelerometer and tried to set up the MAX30105 for getting readings.
  • The MAX30105 component we have is not working, so we are going to have to wait for a new one to come in.

Angela:

  • Converted code to Python
  • Recorded accelerometer data from my sleep
  • Updated the version of OpenCV
  • Updated the design doc
  • Planned demo components
  • Updated parameters in the sleep-wake algorithm

Aayush:

  • Worked on an Android app to gather accelerometer data for testing purposes. Later realized that it is faster to just download an app from the Play Store, and stopped working on the app
  • Tweaked the eye detection algorithm to make it work in dimly lit conditions, using a couple of filters along with gamma correction
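A gamma-correction pass for brightening dim images can be sketched with a lookup table; the gamma value below is illustrative, not the one we settled on:

```python
import numpy as np

def gamma_correct(image, gamma=1.8):
    """Brighten a dim uint8 image: out = 255 * (in/255)^(1/gamma).
    gamma > 1 lifts the midtones; 1.8 is an illustrative value."""
    table = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return table[np.asarray(image, dtype=np.uint8)]
```

Applying this before the detector makes landmarks in dim frames easier to find without touching the detection code itself.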

Status Update 10/13/18

Aayush

  • Worked on the design review presentation and the design document
  • Came across the paper “Real-Time Eye Blink Detection using Facial Landmarks” and used its idea along with OpenCV and dlib to implement the eye open/close detection algorithm. The algorithm maps 68 facial landmarks onto the input image.

Eye markers for open and closed eyes look like:

The algorithm then computes distances between the landmarks of each eye and averages them into a single score. I trained it on various baby images and it seems to work really well (only 1 out of 100 was incorrectly classified).

Correctly detecting open eyes:

Correctly detecting closed eyes:
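The distance computation is the paper's eye aspect ratio (EAR): for the six landmarks p1..p6 around an eye, EAR = (|p2−p6| + |p3−p5|) / (2·|p1−p4|), which drops toward 0 as the eye closes. A sketch:

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points p1..p6 around one eye, as in the
    68-point dlib layout. EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|);
    the vertical distances collapse when the eye closes."""
    d = lambda a, b: math.dist(a, b)
    p1, p2, p3, p4, p5, p6 = eye
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))
```

In practice the two eyes' EARs are averaged and compared against a threshold to decide open vs. closed.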

  • I also developed a simple Android application to save accelerometer data into a file on the phone so that we can use it to test the sleep detection algorithm. The app currently saves acceleration along the x-, y-, and z-axes.

Angela

  • Worked on the design review presentation and the design document
  • Identified important types of patterns that distinguish sleep and wake accelerometer data for the use cases that we were interested in
    • Sleep with short bursts of movement around once per minute
    • Regular activity, irregular acceleration
    • Regular activity, sinusoidal acceleration for motion artifact
  • Ran different simulations on simulated accelerometer data to measure accuracy
    • Got an accuracy of 99%
  • Ran simulations on simulated heart rate data with motion artifact and noise
    • Got an accuracy of 98%
    • BPM of 120 to 150
  • Reevaluated some testing data sets
  • Read papers on sensor fusion for computer vision
    • Goals for next week are to see how to apply this to our problem

Status update 10/6/18

Angela :
I implemented simple signal processing algorithms to identify sleep-wake cycles, tried to find different data sets for sleep-wake training, and downloaded OpenCV.

Priyanka:

I set up the Teensy board and the Arduino IDE to program it. I also got all the parts and am figuring out how to program the different sensors.

Aayush:
I downloaded OpenCV and implemented a basic face detection algorithm and a partially working eye open/close detection.

Status Update 9/29/18

Aayush

  • I looked into different ways to detect eye open/close along with face detection using OpenCV
  • Read research papers on various algorithms that can be used for heart rate detection and sleep detection.
  • I plan to start writing the actual algorithms next week

Angela

  • Most of my time was spent deciding on algorithms and reading papers
  • Will spend more time this week on implementation

Priyanka

  • I did more investigation into video streaming on the Raspberry Pi
  • I ordered more parts for the networking and hardware aspects of the project
  • Hopefully, some of the parts will come in by next week and I can start playing around with them