Adriana’s Status Report 05/08

This week we finished the final presentation, which Evann presented. We all worked on the final demo video: the script is written and most of the footage we are using has been filmed. The final demo video takes place in the car, where the program's audio and alerts can be heard and assessed in real time. I also worked on the hardware setup, mounting the board on the car dashboard and placing the camera at a high enough angle to capture the driver. Lastly, I worked on the poster for the final demo, which is almost complete.

Progress: I am currently on schedule. The project is wrapping up nicely and is almost complete.

In the upcoming days I hope to finish:

  1. Editing the final demo video
  2. Adding the remaining details to the poster
  3. Running more tests, which we all plan to include in our final report

Team’s Status Report for 05/08

We finished our final presentation, which Evann presented on Monday. We also worked on our final demo video and poster, which are almost complete. Lastly, we have completed our car setup, which consists of a small box placed on the car dashboard with the screen held up by a phone holder.

  1. Significant Risks: At this stage there are not many significant risks that could jeopardize our project. Only small items remain, such as improving our testing and our dataset.
  2. Changes to the Existing Design: No changes this week, aside from the in-car mounting setup (box and phone holder) described above.
  3. Schedule: There are no changes to the schedule


Evann’s Status Report for 05/08

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I worked mainly on creating our final demo video. I ensured that all the hardware would work inside the car and mounted all of the components. I also edited all of the footage we recorded and recorded voice-over explanations of the system. Finally, I collected more test data and performed more testing and validation to update our results.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress is on schedule. We've completed all of our major goals and are just finishing up the final report.

What deliverables do you hope to complete in the next week?

We are planning on doing more testing as well as finishing up the final report.

Team Status Report for 5/1

  1. Significant Risks: At this stage there are not many significant risks that could jeopardize our project. The main remaining issue is classifying the eyes accurately enough. We are still working on improving this algorithm across different user-to-camera distances and lighting conditions in the car.
  2. Changes to the Existing Design: No changes for this week
  3. Schedule: There are no changes to the schedule
  4. Short demo Video:

https://drive.google.com/file/d/1gHNVyxSBz6iphaPCnnBXD8wDqWc6w8aR/view?usp=sharing

This shows the program working on our touchscreen. Essentially, after calibration, a sound is played (through headphones in this video) when the user has their eyes closed or mouth open for an extended period of time.

Jananni’s Status Report for 5/1

  1. Accomplished this week: This week I first worked on the UI with Adriana, then focused on tweaking the calibration and classification code for our device. For the UI, we first researched integrating a toolkit such as tkinter with our OpenCV code, but we realized this would be difficult. So we decided to make our own buttons by drawing a box on the frame and triggering an action when a click lands inside the box (see the sketch after this list). We then designed our very simple UI flow. I also spent some time working on a friendly logo for our device. Our flow starts at the top-left box with our CarMa logo, then moves to the calibration step below, and then tells the user it is ready to begin. Once we designed this, we coded up the connections between pages and hooked the UI into our current calibration and main-process code. Connecting the calibration step took extra work because it requires the real-time video feed, whereas for the main process we are not planning on showing the user any video; we only sound the alert when required. After we completed the UI, I started working on tweaks to the calibration and classification code for the board. When we ran our code, some small things were off, such as the drawn red ellipse the user aligns their face with. I adjusted the red ellipse dimensions and also made it easier to change the x, y, w, h constants later without digging through the code to understand the values. Once I completed this, I started adjusting our classification to fit the board. I also started working on the final presentation PowerPoint slides.
  2. Progress: I am currently ahead of our most recent schedule. We planned to have the UI completed this week; we completed it early and started working on auxiliary items.
  3. Next week I hope to complete the final presentation and potentially work on the thresholding. I am wondering if the thresholding is the reason the eye-direction classification is not great. We also need to complete a final demo video.
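
The sketch below shows how a click-in-a-box button like the one described in item 1 can be built with OpenCV's mouse callback. This is a minimal illustration, not our actual UI code; the window name, box coordinates, and label are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical button box: x, y, width, height (our real constants differ)
BTN = (50, 50, 200, 80)

def on_mouse(event, x, y, flags, param):
    bx, by, bw, bh = BTN
    # Treat a left-click inside the drawn box as a button press
    if event == cv2.EVENT_LBUTTONDOWN and bx <= x <= bx + bw and by <= y <= by + bh:
        param["clicked"] = True

state = {"clicked": False}
cv2.namedWindow("CarMa")
cv2.setMouseCallback("CarMa", on_mouse, state)

while not state["clicked"]:
    canvas = np.zeros((480, 640, 3), dtype=np.uint8)
    bx, by, bw, bh = BTN
    cv2.rectangle(canvas, (bx, by), (bx + bw, by + bh), (0, 200, 0), 2)
    cv2.putText(canvas, "Calibrate", (bx + 15, by + 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.imshow("CarMa", canvas)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc exits
        break

cv2.destroyAllWindows()
```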

Team Status Report for 4/24

  1. One risk that could potentially impact our project at this point is the buzzer integration. If there is an issue with this aspect of the project, we may need to find a new way to alert the user; one option is an on-screen alert.
  2. No, this week there were no changes to the existing design of the system.  For testing and verification, we more precisely defined what we meant by 75% accuracy against our test suite.
  3. This week we hope to complete the buzzer integration and the UI of our project. Next week we hope to create a video of the head-turn thresholding to showcase its accuracy and how it changes as you turn your head. We also need to potentially work on pose detection and more test videos. Finally, we are going to test our application in a car to obtain real-world data.

Adriana’s Status Report 04/24

This week, I worked on profiling the landmarking function piece by piece to find where the bottleneck is, i.e., what is making the code take so long to run. This led me to work on running our landmarking functions, specifically the TensorFlow model we use for predictions, on the GPU of our Xavier board. We then turned on the appropriate flags and settings on the board to enable TensorFlow-GPU.
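
As a rough illustration of the flags-and-settings step, here is a sketch of how one can verify that TensorFlow sees the Xavier's GPU and enable memory growth before running the model. This assumes a TensorFlow 2.x build with CUDA support and is not our exact configuration code; "landmark_model" is a hypothetical placeholder.

```python
import tensorflow as tf

# List GPUs visible to TensorFlow; on the Xavier this should show the
# integrated GPU if the CUDA-enabled TensorFlow build is installed.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

# Enable memory growth so TensorFlow does not grab all GPU memory up front
# (useful on a memory-constrained board).
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Pin the landmark model's inference to the GPU explicitly.
with tf.device("/GPU:0"):
    # predictions = landmark_model.predict(input_batch)
    pass
```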

Similarly, I researched the threshold for how far the face can be turned while still getting accurate eye and mouth detection. The mouth detection turned out to be very accurate even with the head turned about 75 degrees from the center of the camera. However, the eye detection began giving inaccurate results once one of the eyes was no longer visible, which usually happened around 30 degrees. We have therefore concluded that our application works best when the user's head is within +/- 30 degrees of the camera's center.
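
To make the +/- 30 degree gate concrete, here is a sketch of one common way to estimate head yaw from facial landmarks with cv2.solvePnP and skip eye classification outside the usable range. The 3D reference points, approximate camera intrinsics, and Euler convention below are generic assumptions, not our calibrated values.

```python
import cv2
import numpy as np

MAX_YAW_DEG = 30.0  # from our tests: eye detection degrades past ~30 degrees

# Generic 3D reference points for six landmarks (nose tip, chin, outer eye
# corners, mouth corners), in an arbitrary model coordinate system.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def estimate_yaw_deg(image_points, frame_w, frame_h):
    """image_points: 6x2 float array of the matching 2D landmark positions."""
    # Approximate intrinsics from the frame size (no lens calibration).
    focal = frame_w
    camera_matrix = np.array([[focal, 0, frame_w / 2],
                              [0, focal, frame_h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Yaw (rotation about the vertical axis) under one common Euler convention.
    return float(np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))))

def eyes_classifiable(yaw_deg):
    # Only trust eye classification while the head is near the camera's center.
    return yaw_deg is not None and abs(yaw_deg) <= MAX_YAW_DEG
```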

Lastly, I worked on the user flow for turning the application on and off, and on what would be least distracting for the user. For next steps, Jananni and I are creating the user interface for turning the device on and off. This should be completed by Thursday, which is when we are putting a "stop" on coding.

I am currently on schedule, and our goal is to be completely done with our code by Thursday. We will then focus the remainder of our time on testing in the car and gathering real-time data.

Evann’s Status Report 4/24

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I completed the accelerometer integration. I was able to make a physical connection between the accelerometer and board using some spare Arduino wires. After ensuring that the connection was established, I then checked that the board was receiving data from the accelerometer by checking that the memory was mapped to the device. I then used the smbus python library to read the data coming in from the memory associated with the accelerometer. I also completed the sound output. This was accomplished by using the playsound python library. I also worked on implementing pose detection using the facial landmarks we already use for eye detection.
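
Here is a rough sketch of the two pieces described above: reading raw accelerometer data over I2C with the smbus library, and playing the alert sound with playsound. The bus number, device address, and register map below follow an MPU-6050-style part and are assumptions; the actual values depend on the specific accelerometer and the board wiring.

```python
import smbus
from playsound import playsound

I2C_BUS = 1        # bus number is board-specific (assumption)
ACCEL_ADDR = 0x68  # typical MPU-6050 address; ours may differ

bus = smbus.SMBus(I2C_BUS)
bus.write_byte_data(ACCEL_ADDR, 0x6B, 0)  # wake the device (MPU-6050 PWR_MGMT_1)

def read_axis(reg):
    """Read one signed 16-bit axis value (high byte first)."""
    hi = bus.read_byte_data(ACCEL_ADDR, reg)
    lo = bus.read_byte_data(ACCEL_ADDR, reg + 1)
    val = (hi << 8) | lo
    return val - 65536 if val & 0x8000 else val

# MPU-6050-style register offsets for the X/Y/Z acceleration values
ax, ay, az = read_axis(0x3B), read_axis(0x3D), read_axis(0x3F)
print("raw accel:", ax, ay, az)

# Sound output: playsound blocks until the file finishes playing
playsound("alert.wav")  # "alert.wav" is a placeholder filename
```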

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress on the board is slightly ahead of schedule. We’ve completed all of our major goals and have begun work on stretch goals.

What deliverables do you hope to complete in the next week?

We are planning to focus work next week to get our system working in a vehicle for a demo. I want to finish the pose detection next week as well.

Jananni’s Status Report 4/24

  1.  This week I first worked on the classification rule that if a driver's eyes are pointed away from the road for over two seconds, the driver is classified as distracted. To do this, I take the computed direction values of the eyes, and if the eyes point up, down, left, or right for over two seconds, I send an alert (a sketch of this timing logic follows this list). After committing this code, I worked on changing our verification and testing document. I wanted to incorporate the professor's recommendation not to take averages of error rates, because averages imply we have a lot more precision than we actually have. Instead, I recorded the total number of events that occurred and the number of error events, then took the total number of errored events across all actions as a percentage. Now I am researching a simple UI to integrate with what we currently have.
  2. I am currently on track with our schedule.  We wrote out a complete schedule this week for the final weeks of school, including the final report and additional testing videos.
  3. This week I hope to finish implementing the UI for our project with Adriana.
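
A minimal sketch of the two-second timing logic mentioned in item 1 above; the direction labels and the classifier/alert hooks in the usage comment are placeholders, not our actual function names.

```python
import time

DISTRACTED_AFTER_S = 2.0  # eyes off the road for more than 2 s => distracted
AWAY = {"up", "down", "left", "right"}

class DistractionMonitor:
    def __init__(self):
        self.away_since = None

    def update(self, direction):
        """Call once per frame with the classified eye direction.
        Returns True when the driver should be alerted."""
        if direction in AWAY:
            if self.away_since is None:
                self.away_since = time.monotonic()
            elif time.monotonic() - self.away_since > DISTRACTED_AFTER_S:
                return True
        else:
            self.away_since = None  # eyes back on the road: reset the timer
        return False

# Usage inside the main loop (hypothetical classifier and alert calls):
# if monitor.update(classify_eye_direction(frame)):
#     sound_alert()
```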

Adriana’s Status Report for 4/10

This week I finished writing the video testing script that we are distributing to friends and family to gather more data. We have begun testing our algorithm locally using the videos in our testing suite to analyze how accurate our program is.

One of our biggest goals is to improve the performance of our algorithm on the board. During our weekly meeting with Professor Savvides, we agreed to prioritize optimizing our program by making it faster and increasing our FPS. This week in particular, I looked into running OpenCV's "Deep Neural Network" (DNN) module on NVIDIA GPUs. We use the dnn module to infer where the best face in the image is located. The tutorial we are following states that running it on the GPU gives over 211% faster inference than on the CPU. I have added the code for the dnn module to run on the GPU, and we are currently getting the dependencies fully working on the board to measure our actual speedup, which we will showcase during our demo next week.
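
For reference, the backend/target switch for OpenCV's dnn module looks like the sketch below, following the tutorial linked at the end of this post. It requires an OpenCV build compiled with CUDA and cuDNN support; the model file names are placeholders for our face detector.

```python
import cv2

# Load the face-detection network (placeholder file names).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

# Route inference through CUDA instead of the default CPU backend.
# This only takes effect if OpenCV was built with CUDA/cuDNN enabled.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

def detect_faces(frame):
    # Preprocess the frame into the 300x300 blob the SSD model expects.
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    return net.forward()  # detections with confidences and box coordinates
```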

I would say that I am currently on track with our schedule. The software optimization is my biggest task at the moment, but it is looking pretty good so far! I am hoping to land some big FPS improvements on the board in the upcoming days. This would greatly speed up our algorithm and make the application more user-friendly.

Source: https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/