Status Update 11/24/2018

Aayush:

  • I tried changing the resolution of the image taken by the Pi camera to speed up the eye detection. Input of size (1024, 768) took ~10 seconds while (256, 200) still took ~9.8 seconds, so resolution is not the bottleneck and I did not modify the algorithm further (see the capture sketch after this list). Instead, I focused on finding the right plank position to hold the Raspberry Pi camera and on the ideal lighting. For the algorithm to work properly, the image needs to be in portrait mode with soft yellow light (anything brighter works equally well).
  • I tried automating the Bluetooth setup but could not because I do not have the required permissions on certain config files. Next week, I need to look into how to get around this. Worst case, we can set up the Bluetooth manually.
  • I tested the server-client communication by running the server on the Pi and the client on the Android app, and it works well.
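
The capture sketch mentioned above, as a rough illustration only: the picamera resolution setting is what we experimented with, but the Haar cascade here is a generic stand-in for our actual detector, and the file paths are assumptions.

    # Sketch of timing eye detection at a reduced capture resolution.
    # The Haar cascade is a stand-in for our detector; paths are assumed.
    import time
    import cv2
    import picamera

    with picamera.PiCamera() as camera:
        camera.resolution = (256, 200)        # also tried (1024, 768)
        camera.capture('frame.jpg')

    eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
    img = cv2.imread('frame.jpg', cv2.IMREAD_GRAYSCALE)

    start = time.time()
    eyes = eye_cascade.detectMultiScale(img)  # stand-in detection call
    print('found %d eye(s) in %.1f s' % (len(eyes), time.time() - start))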

Angela:

  • I worked out some candidate designs for the setup and for the baby's wearable. I found some scrap pieces of fabric and ordered materials to make the wearable. I also obtained a piece of wood for the mount in case we need it, and worked with Aayush to make sure that the potential setups would not interfere with the eye open/closed detection.
  • I helped test the server-client communication and helped automate the Bluetooth setup.
  • I thought about how to make sure everything runs sequentially, since the Raspberry Pi only has one processor. The plan is to run each algorithm in sequence: at the start of each cycle we get data from the Arduino, and if an eye open/closed test hasn't run in the last 15 seconds, we run one. Results are then pushed through the server to the app if the app is open; otherwise, nothing updates until the app is opened. (See the loop sketch after this list.)
  • Goals for the next week:
    • Iron out the setup design, implement it, and make sure all of the components are ordered
    • Start building the wearable
    • Figure out what needs to be changed to automate the Bluetooth setup
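
A rough sketch of the sequential loop described above; every function here is a placeholder standing in for one of our real routines, not actual project code.

    # Sketch of the sequential main loop on the Pi; all functions are
    # placeholders standing in for our real modules.
    import time

    EYE_TEST_INTERVAL = 15.0       # seconds between eye open/closed tests

    def read_sensor_packet():      # placeholder: reads the Teensy data
        return {'acc': (0, 0, 0), 'hr': 0}

    def run_eye_test():            # placeholder: camera capture + detection
        pass

    def run_algorithms(data):      # placeholder: sleep/wake, heart rate, crying
        return data

    def push_to_app(results):      # placeholder: send via the server; no-op
        pass                       # if the app is not connected

    last_eye_test = 0.0
    while True:
        data = read_sensor_packet()
        if time.time() - last_eye_test > EYE_TEST_INTERVAL:
            run_eye_test()
            last_eye_test = time.time()
        push_to_app(run_algorithms(data))
        time.sleep(1.0)            # everything runs strictly in sequence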

Priyanka:

  • I got part of the circuit soldered, and now that both sensors are connected to the I2C pins on the Teensy, I can start reading data from both sensors at the same time.
  • I had to figure out how to add the accelerometer to the I2C bus in my Arduino code. Wiring both sensors to the I2C pins wasn't sufficient; the bus was only identifying one of the sensors until the other was initialized in code as well.
  • I also realized that we had not ordered all the parts for the battery (booster, charger, and the LiPo battery itself), so I ordered those.
  • I also worked on automating the Bluetooth connection so that it reconnects to the Pi on reboot. That's still not working; we will have to make some changes to source files to make it possible.

Status Update 11/17/2018

Aayush

  • I finally got the eye open/close detection to work on the Pi after struggling for over three weeks to install the required modules. It turns out that the Pi is 10 times slower than my MacBook, on which I had been timing the algorithm. The Pi takes ~10 seconds per image taken with the Raspberry Pi camera at a resolution of (1024, 768).

Going forward, I plan to reduce the resolution of the image and see if that improves the processing time. I also want to mount the Raspberry Pi on the crib so that I can make the necessary adjustments to light, exposure, etc. to ensure consistency in the images.

  • I wrote a basic server in Python and modified the app to send and receive data from the server over the internet; below is a rough sketch of this kind of server.
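
A minimal sketch, assuming a plain TCP socket and an arbitrary port (our actual server differs in details):

    # Minimal TCP server sketch; the port and message handling are
    # illustrative, not our production code.
    import socket

    HOST, PORT = '0.0.0.0', 5000           # port chosen for illustration

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)

    while True:
        conn, addr = srv.accept()          # the Android app connects here
        data = conn.recv(1024)             # message from the app
        print('from app:', data)
        conn.sendall(b'ack')               # reply so the app can confirm
        conn.close()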

Going forward, I need to wait for Priyanka to automate the Bluetooth connection that sends data from the sensors to the Pi, so that I can include it in the Python script and load it at boot time.

Angela

  • I have continued to work on the crying detection. I installed packages such as PyAudio to do the audio file transformations.
  • I am working on integrating my sleep-wake algorithm and heart rate detection with the packets that Priyanka is sending (a parsing sketch follows this list). The packets are in the form of:
    • start character
    • sampling rate
    • data stream
    • end character
  • My plan for next week is to finish this so that we can start automating the detection algorithms when we come back from break. I will also work with some materials to make the circuitry into a wearable.
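
A rough sketch of parsing that packet layout. The '<' and '>' start/end characters and the comma separators are assumptions for illustration; the real delimiters may differ.

    # Sketch of a parser for the <start><sampling rate><data...><end>
    # packet layout; delimiters are assumed, not our actual protocol.
    def parse_packet(raw):
        text = raw.decode('ascii')
        if not (text.startswith('<') and text.endswith('>')):
            raise ValueError('incomplete packet')
        fields = text[1:-1].split(',')
        rate = int(fields[0])                    # sampling rate
        samples = [int(x) for x in fields[1:]]   # the data stream
        return rate, samples

    rate, samples = parse_packet(b'<50,512,498,503>')
    print(rate, samples)                         # 50 [512, 498, 503]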

Priyanka

  • Worked on connecting the Pi via WiFi to the app that Aayush has been working on. I spent most of the time researching what to do, since I needed to get an endpoint of the app from Aayush.
  • Worked on automating the start-up so that everything connects properly on boot. (Still a work in progress.)
  • Also need to solder the wearable hardware together so that I can get readings from both sensors at the same time.


Status Update 11/10/2018

Aayush

  • Worked on setting things up for the midpoint demo: installing the necessary modules on the Pi, helping with the Bluetooth setup, and timing the various algorithms to see if we meet our stated requirements.
  • Currently, processing each image for eye detection takes about 1.2 seconds. I tried to reduce the resolution of the input image to speed up the detection.
  • I set up the framework for the Android app. In particular, the various screens have dummy values set for now, but the app is fully functional. For example, the home screen has a small button leading to the settings menu, where the user can select one of two modes: verbose or notifications.
  • The goal for next week is to start integrating the various parts, since that is one of the hardest parts of our project. I also plan to solder the circuit that Priyanka built, and I need to discuss with Angela the format in which the data will be sent to the app.

Angela

  • I tested the heart rate algorithm on real heart rate data gathered from Priyanka. This data was gathered with the SparkFun MAX30105 sensor. So happy it works!! The measured heart rate was 112 bpm, and the calculated heart rate from one algorithm was 114; the other algorithm I tried got 119. Both calculated heart rates are within our goal of less than 10 bpm error. We put it onto the Raspberry Pi, and it works fine. Processing a minute's worth of data takes 0.04 seconds, which is under our goal of 0.1 seconds.
  • We tested the sleep-wake algorithm for speed. For 20 minutes of data, it took 0.4 seconds; per minute, that is 0.02 seconds, which is also under our 0.1-second goal.
  • I helped Priyanka debug the data transmission over Bluetooth between the sensors and the Raspberry Pi.


Priyanka

  • Gathered heart rate data for Angela so that she could test her algorithms. It was gathered with the SparkFun MAX30105 sensor. I had struggled to collect the data into a CSV/text file, so I am happy that we can now do that.
  • Another thing I wanted to accomplish before the midpoint demo was to wirelessly connect the Teensy board with the Raspberry Pi. By last Sunday, I was able to wirelessly connect the Teensy board with the computer, and I thought it would be just as easy with the Raspberry Pi, but I was wrong 🙁
  • I had to change the serial port configurations in the config settings. It took me a while to figure out that the Raspberry Pi was not able to get the data because of its initial setup. Also, I was sending data to the Pi using print statements, and println was not adding a line ending to the data being sent, so I had to include a “\n” in the byte array myself (see the read sketch below).
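
On the Pi side, a minimal sketch of reading those newline-terminated packets with pyserial; the device path and baud rate are assumptions.

    # Sketch of reading newline-terminated sensor packets on the Pi;
    # the serial device path and baud rate are assumed for illustration.
    import serial

    ser = serial.Serial('/dev/rfcomm0', 9600, timeout=2)

    while True:
        line = ser.readline()        # blocks until "\n" or timeout
        if line.endswith(b'\n'):     # only handle complete packets
            print('packet:', line.strip())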

Status Update 11/3/2018

Aayush:

  • I developed the first few screens of the mobile application.
  • I set up the Raspberry Pi, installed the necessary software, and was able to connect it to the network.
  • Downloaded OpenCV on the Pi and put the eye detection algorithm on it.
  • Made sure that the Raspberry Pi camera works.
  • Also took pictures of the doll and tested the eye detection algorithm on them to ensure things are set for the midpoint demo.
  • Going forward, the plan is to make the required tweaks to the eye detection algorithm so that it works with the Raspberry Pi camera.

Angela:

  • I worked on fine-tuning the parameters of the sleep-wake algorithm using my sleep and wake accelerometer data. Currently, the algorithm correctly classifies my data using a majority vote over acc_x, acc_y, and acc_z. There is occasional misclassification, especially at the beginning and end of sleep cycles, but since I am not sure exactly when I fall asleep and wake up, this does not seem too significant.
  • I worked on an audio detection algorithm similar to the sleep-wake one: I convolve the signal with a box signal, and if a certain number of samples exceed a threshold, the result is classified as crying (or talking) rather than sleeping (see the sketch after this list). This will be used in conjunction with the accelerometer data; the goal is for these two components to be the main decision-making sources when the lights are off at night. I am still looking for audio of babies crying to test it on.
  • I worked with Aayush to get the Raspberry Pi running. I registered it with the CMU WiFi system, so it should be able to use CMU WiFi. We will be working on running our code before the midpoint demo.
  • Goal: Integrate the programs onto the Raspberry Pi and make sure it works.
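
A rough numpy sketch of the box-signal convolution described above; the window size and both thresholds are made-up values, not our tuned parameters.

    # Sketch of the box-convolution audio classifier; window size and
    # thresholds are illustrative, not our tuned parameters.
    import numpy as np

    def classify_audio(samples, win=256, amp_thresh=0.5, count_thresh=20):
        box = np.ones(win) / win                   # box signal
        smoothed = np.convolve(np.abs(samples), box, mode='same')
        loud = np.sum(smoothed > amp_thresh)       # samples above threshold
        return 'crying/talking' if loud > count_thresh else 'sleeping'

    # Example: a quiet signal with a loud burst in the middle.
    sig = np.concatenate([np.zeros(1000), 0.9 * np.ones(500), np.zeros(1000)])
    print(classify_audio(sig))                     # -> crying/talking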

Priyanka:

  • I was able to get the Bluetooth module to work. The problem seemed to be that the Teensy was not connected to ground, so the Bluetooth module was able to receive data but not send it.
  • I am able to connect the accelerometer to the Teensy and get the acceleration values g_x, g_y, and g_z.
  • I am able to connect the pulse monitor and temperature sensor to the Teensy.
  • As of now, the Teensy gets the data from the accelerometer and pulse sensor and creates a string with starting and ending bits to send via Bluetooth.
  • The computer's parsing of the info is not correct as of now. The information being sent is correct, and the computer is able to identify the starting and ending bits of the array (and hence separate out the different pieces of information correctly), but the values themselves are not parsed correctly: instead of integers representing acceleration in the different directions and the heart rate, we see weird characters (see the sketch after this list).
  • Goal: Correct the parsing of the info.
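
"Weird characters" are often raw binary values being displayed as text. If the Teensy were sending each value as, say, a signed 16-bit little-endian integer, the Python side would need to unpack the bytes rather than print them; a sketch under that assumption (the byte layout and field order are made up):

    # Sketch of decoding a raw binary payload; the 16-bit little-endian
    # layout and field order are assumptions for illustration.
    import struct

    payload = b'\x0c\x00\xf4\xff\x08\x00\x70\x00'   # example raw bytes

    # Four signed 16-bit values: g_x, g_y, g_z, heart rate.
    g_x, g_y, g_z, hr = struct.unpack('<4h', payload)
    print(g_x, g_y, g_z, hr)                        # 12 -12 8 112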