Adriana’s Status Report 05/08

This week we finished the final presentation, which Evann delivered. We all worked on the final demo video: the script has been written and most of the footage we are using has been filmed. The final demo video takes place in the car, where the program's audio and alerts can be heard and assessed in real time. I also worked on the hardware setup, mounting the board on the car dashboard and placing the camera at a high enough angle to get footage of the driver. Lastly, I worked on the poster for the final demo, which is almost complete.

Progress: I am currently on schedule. The project is wrapping up nicely and we are almost done.

In the upcoming days I hope to finish:

  1. Editing the video
  2. Adding the remaining details to the poster
  3. More testing that we are all planning to do and include in our final report

Team’s Status Report for 05/08

We finished our final presentation, which Evann delivered on Monday. We also worked on our final demo video and poster, which are almost complete. Lastly, we have completed our car setup, which consists of a small box placed on the car dashboard with the screen held up by a phone holder.

  1. Significant Risks: At this stage there are not many significant risks that could jeopardize our project. There are only small things left, such as improving our testing and our dataset.
  2. Changes to the Existing Design: No changes for this week except we did add
  3. Schedule: There are no changes to the schedule.


Adriana’s Status Report 05/01

This week, we all worked on our last code changes for the project. For me in particular, this involved implementing the code for the UI and user flow that Jananni and I designed. This meant creating the on-click functionality for the buttons on each page. At the start, the user sees the starting screen; to continue, they press “start”. They are then directed to the “calibration” page. Once they click the “start calibration” button, the calibration process starts, which requires the user to place their head inside the circle. Once their head placement is correct, the user is automatically taken to the page where they are ready to start driving. To begin the CarMa alert program, they press “start”, and the program then classifies whether the user appears to be distracted or not! The full process can be seen here:

https://drive.google.com/file/d/1pIs0zBR5DGlLfyH3uHI5zbVwX-ey5mxl/view?usp=sharing

One key point is that the program only starts when the user clicks the start button. This design came out of the ethics conversation, in which the students and the professor mentioned that it should be clear when the program is starting / recording, so the user has a clear understanding of what the program is doing. Lastly, we worked on our final presentation, which will take place this upcoming week, and made sure we had all the data and information ready for it.
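
To make the flow concrete, here is a minimal sketch of the screen transitions described above, written with Tkinter; the class, method, and screen names are illustrative assumptions rather than our actual UI code.

    # Hypothetical sketch of the start -> calibration -> drive screen flow.
    # Names like CarMaApp and start_alert_program are placeholders.
    import tkinter as tk

    class CarMaApp(tk.Tk):
        def __init__(self):
            super().__init__()
            self.title("CarMa")
            self.frames = {}
            for name, button_text, command in [
                ("start", "Start", self.show_calibration),
                ("calibration", "Start Calibration", self.run_calibration),
                ("drive", "Start", self.start_alert_program),
            ]:
                frame = tk.Frame(self)
                tk.Label(frame, text=name.title() + " screen").pack(pady=10)
                tk.Button(frame, text=button_text, command=command).pack(pady=10)
                self.frames[name] = frame
            self.show("start")

        def show(self, name):
            for frame in self.frames.values():
                frame.pack_forget()
            self.frames[name].pack(fill="both", expand=True)

        def show_calibration(self):
            self.show("calibration")

        def run_calibration(self):
            # Placeholder: the real app runs head-placement calibration here and
            # advances automatically once the head is inside the circle.
            self.show("drive")

        def start_alert_program(self):
            # Placeholder: classification only begins after this explicit click.
            print("Classification started")

    if __name__ == "__main__":
        CarMaApp().mainloop()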

Progress: I am currently on schedule. We ended up completing the UI early and started working on smaller remaining items. This project is wrapping up nicely and no sudden hiccups have occurred.

In the upcoming days I hope to complete the final presentation and the final demo video. The final demo video will take place in the car, where the program's audio and alerts can be heard and assessed in real time. Lastly, we hope to have a clean hardware setup for the screen and the rest of the materials so everything is packaged nicely in the car.

Adriana’s Status Report 04/24

This week, I worked on breaking up the landmarking function to find out where the bottleneck is, in terms of what is taking the code so long to run. This led to me working on having our landmarking functions, specifically the prediction model loaded from TensorFlow, run on the GPU of our Xavier board. We then turned on the appropriate flags and settings on the board to enable TensorFlow's GPU support.
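
As a rough illustration of the kind of configuration involved, here is a sketch of checking that TensorFlow sees the GPU and pinning inference to it; the model path and exact calls are assumptions and depend on the TensorFlow version installed on the Jetson.

    # Hypothetical check that TensorFlow can see the Xavier's GPU and will run
    # the landmark model there; not our actual configuration code.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible to TensorFlow:", gpus)

    # Avoid grabbing all GPU memory up front on the Jetson's shared memory.
    for gpu in gpus:
        tf.config.experimental.set_memory_growth(gpu, True)

    # Pin the landmark model's inference to the GPU explicitly.
    with tf.device("/GPU:0"):
        # model = tf.keras.models.load_model("landmark_model.h5")  # hypothetical path
        # predictions = model.predict(preprocessed_frame)
        pass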

Similarly, I researched what our threshold should be for how far the face can be turned while still having accurate eye and mouth detection. The mouth detection turned out to be very accurate even when the face was turned about 75 degrees from the center of the camera. However, the eye detection began giving inaccurate results once one of the eyes was no longer visible, which usually happened around 30 degrees. Therefore, we have concluded that the best range in which our application should work is when the user's head is within +/- 30 degrees of the center of the camera.
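
As a small illustration of how these thresholds could gate the classifier, here is a sketch; the yaw estimate itself would come from the head-pose / landmarking step, and the function name is hypothetical.

    # Hypothetical gating check based on the angle thresholds described above.
    MAX_EYE_YAW_DEG = 30    # eye detection degrades once one eye leaves view
    MAX_MOUTH_YAW_DEG = 75  # mouth detection stayed accurate much further out

    def detections_reliable(yaw_deg):
        """Return which detectors can be trusted at the current head angle."""
        return {
            "eyes": abs(yaw_deg) <= MAX_EYE_YAW_DEG,
            "mouth": abs(yaw_deg) <= MAX_MOUTH_YAW_DEG,
        }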

Lastly, I worked on the user flow for how the user will turn the application on / off and what would be least distracting for them. For next steps, Jananni and I are creating the user interface for turning the device on / off. This should be completed by Thursday, which is when we are putting a “stop” on coding.

I am currently on schedule, and our goal is to be completely done with our code by Thursday. We are then going to focus the remainder of our time on testing in the car and gathering real-time data.

Adriana’s Status Report for 4/10

This week I finished writing the video testing script that we are distributing to friends and family in order to gather more data. We have begun testing our algorithm locally using the videos in our testing suite to analyze how accurate our program is.

One of our biggest goals is to improve the performance of our algorithm on the board. During our weekly meeting with Professor Savvides, we agreed to prioritize optimizing our program by making it faster and increasing our fps. This week in particular, I looked into having OpenCV's “Deep Neural Network” (DNN) module run on NVIDIA GPUs. We use the dnn module to get inferences on where the best face in the image is located. The example tutorial we are following states that running it on the GPU gives >211% faster inference than on the CPU. I have added the code for dnn to run on the GPU, and we are currently getting the dependencies to fully work on the board so we can measure our actual speed increase, which we will showcase during our demo next week.
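
For reference, the change amounts to pointing the dnn face detector at the CUDA backend, roughly as in the tutorial linked below; the model file names here are the tutorial's and may differ from what we end up shipping.

    # Sketch of moving OpenCV dnn face detection from the CPU to the NVIDIA GPU.
    # Requires an OpenCV build with CUDA / cuDNN support.
    import cv2

    net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                                   "res10_300x300_ssd_iter_140000.caffemodel")

    # These two calls select the CUDA backend and target for inference.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

    # Typical usage on a frame:
    # blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))
    # net.setInput(blob)
    # detections = net.forward()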

I would say that I am currently on track with our schedule. The software optimization is my biggest task at the moment, but it is looking pretty good so far! For next week, I am hoping to see some big fps improvements on the board. This would greatly improve our algorithm and make the application more user friendly.

Source: https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/

Adriana’s Status Report for 4/3

This week I was able to spend a lot of my time improving the classification in our eye tracking algorithm. Before, one of the major problems we were having was not being able to accurately detect closed eyes when there was a glare on my glasses. To address this issue, I rewrote our classification code to use the eye aspect ratio instead of the average eye height to determine whether the user is sleeping. In particular, this involved computing the euclidean distances between the two pairs of vertical eye landmarks and the pair of horizontal eye landmarks in order to calculate the eye aspect ratio (as shown in the image below).
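
For reference, here is a minimal sketch of the eye aspect ratio computation, assuming the standard six-point eye landmark ordering; our actual landmark indexing may differ.

    # Eye aspect ratio from six (x, y) eye landmarks ordered around the eye.
    from scipy.spatial import distance as dist

    def eye_aspect_ratio(eye):
        A = dist.euclidean(eye[1], eye[5])  # first vertical landmark pair
        B = dist.euclidean(eye[2], eye[4])  # second vertical landmark pair
        C = dist.euclidean(eye[0], eye[3])  # horizontal landmark pair
        return (A + B) / (2.0 * C)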

Now our program determines that a person's eyes are closed when the eye aspect ratio falls below a certain threshold (calculated at calibration), which is more accurate than just looking at the eye height. Another issue I solved was removing the frequent notifications to the user when they blink. Before, whenever a user blinked, our program would alert them that they were falling asleep. I fixed that by adding a consecutive-frame threshold for closed eyes that is longer than a typical blink. This means the user is ONLY notified if their eye aspect ratio falls below their normal eye aspect ratio (taken at the calibration stage) AND it stays that way for about 12 frames. Those 12 frames ended up corresponding to just under 2 seconds, which matches the notification time in our requirements. Lastly, I fully integrated Jananni's calibration process with my eye + mouth tracking process so our programs connect seamlessly. After the calibration process finishes and writes the necessary information to a file, my part of the program starts, reads that information, and begins classifying the driver.

https://drive.google.com/file/d/1CdoDlMtzM9gkoprBNvEtFt83u75Ee-EO/view?usp=sharing

[Video of improved Eye tracking w/ glasses]
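
Below is a simplified sketch of the blink-filtering logic described above; the threshold would come from the calibration file, and the class and variable names are assumptions.

    # Alert only after the eye aspect ratio has stayed below the calibrated
    # threshold for a sustained run of frames (~12 frames, just under 2 s).
    class EyeClosureMonitor:
        def __init__(self, ear_threshold, consec_frames=12):
            self.ear_threshold = ear_threshold
            self.consec_frames = consec_frames
            self.closed_counter = 0

        def update(self, ear):
            """Return True only when the eyes have stayed closed long enough."""
            if ear < self.ear_threshold:
                self.closed_counter += 1
            else:
                self.closed_counter = 0  # a quick blink resets the counter
            return self.closed_counter >= self.consec_frames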

I would say that I am currently on track with our schedule. The software is looking pretty good on our local machines.

For next week, I intend to spend some of my time being more involved with the board integration so that the code Jananni and I worked on is functional on the board. As we write code, we have to be mindful of efficiency, so I plan to do another round of code refactoring to see if there are parts that can be rewritten to be more compatible with the board and to improve performance. Lastly, we are growing our test data suite by asking our friends and family to send us videos of themselves blinking and moving their mouths. I am writing the “script” of the different scenarios we want to make sure we capture on video for testing. We are all hoping to get more data to verify our results.

Adriana’s Status Report for 3/27

This week the majority of my time went to working on our classification algorithm. Now that we have a working mouth and eye detection implementation, we need to decide when the user should be notified that they appear distracted. For distraction concerning the eyes, we look for when their eyes are closed: given the eye calibration value that Jananni has worked on, I use that number as a baseline and detect when the current eye opening is about half of it. Similarly, for the mouth, I check whether the user appears to be yawning by comparing their current mouth opening against the open-mouth calibration value and checking whether it is larger. When either of these conditions is met, an alert sounds to warn the user. A simplified version of this check is sketched below.
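
The sketch is illustrative only; the exact ratios and variable names are assumptions, and the calibration values come from Jananni's calibration step.

    # Hypothetical distraction check against the calibrated baseline values.
    def is_distracted(eye_opening, mouth_opening, calib_eye_open, calib_mouth_open):
        eyes_closed = eye_opening < 0.5 * calib_eye_open   # ~half the calibrated opening
        yawning = mouth_opening > calib_mouth_open         # wider than calibrated open mouth
        return eyes_closed or yawning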

This week I also helped both Evann and Jananni put our eye detection algorithm on the board. We are currently improving how the code runs on the board, and Evann is spearheading that effort. Another portion of the project I worked on is the integration of our eye tracker and mouth tracker. They used to be two separate algorithms, but I have combined them so that they can run simultaneously. This involved refactoring a lot of the code that we had, but it is something that will be very useful for us. My current progress is on schedule, and I will be looking into how to play an actual sound for an alert instead of showing text on the screen. I also hope to start working on improving our algorithm in both accuracy and speed, since putting it on the board will degrade performance.

Adriana’s Status Report for 03/13

This week I worked on the design review presentation along with the design review paper. That is where the majority of my time went, along with testing the face detection algorithm locally, which we will be migrating to our board at the start of next week. Now that we have the majority of our parts, we can begin assembling everything together. This will be our first big task in seeing how the different components of our project work together. So far, I am on schedule and I'm making good progress.

Team’s Status Report for 03/06

As a team, we all worked on our design review presentation. We had the chance to flesh out all our block diagrams for the hardware and software components of our project. We decided on utilizing AWS compute power in case the board fails or is not powerful enough for our project. This also allowed us to obtain a better breakdown of our project and how all of our components are going to interact with one another. We are currently following the original schedule that we planned out. Now that we have our Xavier board, we can begin making some exciting progress by writing our preliminary code for it.

Adriana’s Status Report for 03/06

This week, I was able to look further into how we are going to implement the software component of our project, specifically the face detection / eye tracking algorithm using OpenCV and a DNN. I installed the necessary modules and got some of OpenCV's eye detection algorithms working on my local computer. I also worked on the design review presentation, which I will be delivering next week. The progress I have made is on schedule. Additionally, we ordered and received our parts for the project, which means we can begin integrating the software modules we need onto our board and start our preliminary code writing next week. For next week, we hope to have tested and written some code on our board in order to learn how to use it.
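
As a rough idea of the kind of local experiment this involved, here is a minimal OpenCV eye-detection loop using the built-in Haar cascades; it is a sketch of a quick local test, not our final DNN-based pipeline.

    # Minimal local webcam test of OpenCV's built-in Haar cascade eye detector.
    import cv2

    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(0)  # local webcam

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("eyes", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()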