Evann’s Status Report for 4/10

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I spent the majority of my time integrating the updated application onto the board. There was an issue with how some of the calibration code was interacting with the images captured by the onboard camera. I also spent time collecting performance data and working to optimize the landmarking. There are some dependency issues with converting the landmarking model to a TensorRT-optimized graph, and I am currently working through these.
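
For reference, the conversion path I am working through looks roughly like the following (a sketch only: the directory names are placeholders, and it assumes the landmarking model has been exported as a TensorFlow SavedModel and that the TensorRT libraries the converter depends on are installed):

    # Rough sketch of the TF-TRT conversion step; paths are placeholders.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    # Convert the SavedModel of the landmarking network into a TensorRT-optimized graph.
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(precision_mode="FP16")
    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir="landmark_model_saved",   # placeholder path
        conversion_params=params,
    )
    converter.convert()
    converter.save("landmark_model_trt")  # optimized graph, loadable with tf.saved_model.load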

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress on the board is slightly ahead of schedule. We finished integration this week, as well as testing and validation. We are planning to start work on some of our reach goals.

What deliverables do you hope to complete in the next week?

I hope to finish accelerometer input and sound output by next week.

Jananni’s Status Report for 4/10

  1. This week I worked on finishing up integration with the main process of our project.  Once that was completed, I worked on collecting more videos of people with various face shapes for our test suite.  Finally, I started researching how to optimize our code for the GPU.  This should ideally bring our frame rate above the 5 frames per second in our original requirement.  I also worked on putting together material for our demo next week.  Here is one of the important links I have been reading regarding optimizing our code specifically for the GPU: https://forums.developer.nvidia.com/t/how-to-increase-fps-in-nvidia-xavier/80507.  The article (similar to many others) talks about using jetson_clocks and changing the power management profile.  Changing the power management profile maximizes performance at the cost of higher energy usage, and jetson_clocks then sets the CPU, GPU and EMC clocks to their maximum frequencies, which should ideally improve the frames per second in our testing.  A rough sketch of how these settings could be applied is included after this list.
  2. Currently as a team we are a little ahead of schedule, so we revised our weekly goals and I am on track for those updated goals.  I am working with Adriana to improve our optimization.
  3. By next week, I plan to finish optimizing the code and start working on our stretch goal.  After talking to the professor, we are deciding between pose detection and lane-change detection using the accelerometer.
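
As a reference for myself, here is a minimal sketch of how the settings above could be scripted on the board (not yet part of our code; the power-mode index varies between Jetson models, and both commands need sudo):

    # Sketch of applying the Jetson performance settings discussed above
    # before starting the detector; not yet part of our codebase.
    import subprocess

    def maximize_jetson_performance():
        # Select a high-performance power management profile
        # (the mode index differs between Jetson models).
        subprocess.run(["sudo", "nvpmodel", "-m", "0"], check=True)
        # Pin the CPU, GPU and EMC clocks to their maximum frequencies.
        subprocess.run(["sudo", "jetson_clocks"], check=True)

    if __name__ == "__main__":
        maximize_jetson_performance()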

Team Status Report for 4/10

  1. The most significant risk to our project is not meeting one of our main requirements of at least 5 frames per second.  We are working on improving our current rate of 4 frames per second by having some OpenCV algorithms run on the GPU, in particular the dnn module that detects where a face is in the image (a sketch of this change is included after this list).  If this optimization does not noticeably increase our performance, then we plan on using our back-up plan of moving some of the computation to AWS so that the board is faster.
  2. This week we have not made any major changes to our designs or block diagrams.
  3. In terms of our goals and milestones, we are a little ahead of our original plan.  Because of this, we are deciding which of our stretch goals are feasible within the remaining time and which we should start working on.  We are currently deciding between pose detection and phone detection.
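
For item 1, the GPU change we are testing is sketched below (illustrative only: the model file names are placeholders, and it assumes our OpenCV build on the board has CUDA support):

    # Sketch of moving the OpenCV dnn face detector onto the GPU.
    # Model file names are placeholders for the detector we already use.
    import cv2

    net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "face_detector.caffemodel")
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # run layers with the CUDA backend
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)    # keep tensors on the GPU

    def detect_faces(frame):
        # Same preprocessing as the CPU path; only the backend/target change.
        blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 1.0,
                                     (300, 300), (104.0, 177.0, 123.0))
        net.setInput(blob)
        return net.forward()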

Adriana’s Status Report for 4/3

This week I was able to spend a lot of my time improving the classification for our eye tracking algorithm. Previously, one of the major problems we were having was not being able to accurately detect closed eyes when there was glare on my glasses. To address this issue, I rewrote our classification code to use the eye aspect ratio instead of the average eye height to determine whether the user is sleeping. In particular, this involved computing the Euclidean distances between the two pairs of vertical eye landmarks and the pair of horizontal eye landmarks in order to calculate the eye aspect ratio (as shown in the image below).

Now our program can determine that a person’s eyes are closed when the eye aspect ratio falls below a certain threshold (calculated at calibration), which is more accurate than just looking at the eye height. Another issue that I solved was removing the frequent notifications to the user when they blink. Before, whenever a user blinked, our program would alert them that they were falling asleep. I was able to fine-tune that by requiring the eyes to stay closed for a larger number of consecutive frames. This means that the user will ONLY be notified if their eye aspect ratio falls below their normal eye aspect ratio (taken at the calibration stage) AND stays there for about 12 consecutive frames. At our current frame rate, 12 frames works out to just under 2 seconds, which is the notification time in our requirements. (A condensed sketch of this logic is included after the video below.) Lastly, I was able to fully integrate Jananni’s calibration process with my eye and mouth tracking process so that our programs connect seamlessly. After the calibration process finishes and writes the necessary information to a file, my part of the program starts, reads that information, and begins classifying the driver.

https://drive.google.com/file/d/1CdoDlMtzM9gkoprBNvEtFt83u75Ee-EO/view?usp=sharing

[Video of improved Eye tracking w/ glasses]
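
Here is the condensed sketch of the logic mentioned above (the landmark ordering assumes the common 68-point layout and the names are illustrative, not our exact code; the real threshold comes from the calibration file):

    # Condensed sketch of the eye-aspect-ratio check; illustrative names only.
    import numpy as np

    def eye_aspect_ratio(eye):
        # eye: the six (x, y) landmarks of one eye, ordered around the eye.
        a = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
        b = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
        c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
        return (a + b) / (2.0 * c)

    class DrowsinessChecker:
        def __init__(self, ear_threshold, closed_frames=12):
            self.ear_threshold = ear_threshold   # taken from the calibration stage
            self.closed_frames = closed_frames   # ~2 seconds at our frame rate
            self.counter = 0

        def update(self, left_eye, right_eye):
            # Return True once the EAR stays below the calibrated threshold
            # for the required number of consecutive frames.
            ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
            self.counter = self.counter + 1 if ear < self.ear_threshold else 0
            return self.counter >= self.closed_frames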

I would say that I am currently on track with our schedule. The software is looking pretty good on our local machines.

For next week, I intend to use some of my time to be more involved with the board integration so that the code Jananni and I worked on is functional on the board. As we write code we have to be mindful of efficiency, so I plan to do another refactoring pass to see if there are parts of the code that can be rewritten to be more compatible with the board and to improve performance. Lastly, we are working on growing our test data suite by asking our friends and family to send us videos of themselves blinking and moving their mouths. I am working on writing the “script” of different scenarios that we want to make sure we capture on video for testing. We are all hoping to get more data to verify our results.

Evann’s Status Report for 4/3

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I spent the majority of my time integrating the primary application onto the board. There was an incompatibility between the version of the Jetson SDK and the version of TensorFlow we were using, which required reflashing our SD card with a new OS image. Using Nvidia’s JetPack SDK required us to change some of our dependencies, such as switching to Python 3.6 and TensorFlow 2.4, which meant refactoring our code to handle some of the deprecated functions. Working through these issues, I was able to get the preliminary code running on the board. A demo is shown below.
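
Separately from the demo, here is an illustration of the kind of change the refactor involved (not necessarily a call from our codebase; session-style TF 1.x code has to be rewritten or routed through the compat.v1 shim under TensorFlow 2.4):

    # Illustrative only: TF 1.x session code no longer works as-is under TF 2.4.
    import tensorflow as tf

    # Old TF 1.x style:
    #   sess = tf.Session()
    #   out = sess.run(output_tensor, feed_dict={input_tensor: frame})

    # Minimal TF 2.4 equivalent when a full eager-style rewrite is not practical:
    tf.compat.v1.disable_eager_execution()   # keep graph/session semantics
    sess = tf.compat.v1.Session()            # route the old call through compat.v1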

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress on the board is slightly behind schedule. We intended to finish integration earlier this week; however, issues with dependencies and platform differences caused some unforeseen delays. I will use some of the dedicated slack time to adjust the schedule.

What deliverables do you hope to complete in the next week?

I hope to implement accelerometer input and sound output by next week. I also intend to begin work on pose estimation if time allows.

Team Status Report for 4/3

  1. This week we focused on integrating with the board.  We finalized our calibration and main software processes.  We did more testing on how lighting conditions affect our software and optimized our eye tracker by using the eye aspect ratio.  One big risk that we are continuously watching is the speed of our program.  On our laptops the computation is very quick, but if it runs too slowly on the board our contingency plan is to use AWS to speed it up.
  2. For the board we are now using TensorFlow 2.4 and Python 3.6, which changes a few built-in functions.  This is necessary for compatibility with the Nvidia JetPack SDK we are using on the board.  There are no costs for this; only minor changes in the code are necessary.
  3. So far we are keeping to the original schedule.  We will be focusing on optimization and next start working on our stretch goals.
  4. Eyes closed detection:

Jananni’s Status Report for 4/3

  1.  This week my primary goal was working through and fleshing out the calibration process, specifically making it more user friendly and putting the pieces together.  I first spent some time figuring out the best way to interact with the user.  Initially I thought tkinter would be easy and feasible.  My initial approach was to have a button that the user would click when they want to take their initial calibration pictures.  But after some initial research, coding, and a lot of bugs, I realized cv2 does not easily fit into tkinter.  So instead I decided to work with just cv2 and take the picture for the user once their head is within a drawn ellipse.  This is easier for the user, as they just have to position their face, and it guarantees we get the picture and values we need.  Once the calibration process is over, I store the eye aspect ratio and mouth height in a text file.  Adriana and I decided to do this to transfer the data from the calibration process to the main process so that the processes are independent of each other.  A rough sketch of this capture flow is included after this list.  Here is a screen recording of the progress.

  2. Based on our schedule I am on track.  I aimed to get the baseline calibration working correctly with a simple user interface.

  3. Next week I plan to start working on pose detection and possible code changes in order to be compatible with the board.
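
Here is the rough sketch of the capture flow referenced in item 1 (detect_face and measure_face are stand-ins for our existing detection and landmarking code, and the ellipse size and dummy return values are illustrative):

    # Rough sketch of the cv2-only calibration capture; detect_face and
    # measure_face stand in for our existing detection and landmarking code.
    import cv2

    def detect_face(frame):
        # Placeholder for our dnn face detector; returns (x, y, w, h) or None.
        return None

    def measure_face(frame, box):
        # Placeholder for the landmarking step; returns (eye_aspect_ratio, mouth_height).
        return 0.3, 20.0

    def box_inside_ellipse(box, center, axes):
        # Treat the face as captured when its center lies inside the guide ellipse.
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        return ((cx - center[0]) / axes[0]) ** 2 + ((cy - center[1]) / axes[1]) ** 2 <= 1.0

    def run_calibration(output_path="calibration.txt"):
        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            h, w = frame.shape[:2]
            center, axes = (w // 2, h // 2), (w // 5, h // 3)
            cv2.ellipse(frame, center, axes, 0, 0, 360, (0, 255, 0), 2)

            box = detect_face(frame)
            if box is not None and box_inside_ellipse(box, center, axes):
                ear, mouth_height = measure_face(frame, box)
                with open(output_path, "w") as f:   # hand-off file read by the main process
                    f.write(f"{ear}\n{mouth_height}\n")
                break

            cv2.imshow("Calibration", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break

        cap.release()
        cv2.destroyAllWindows()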

Adriana’s Status Report for 3/27

This week the majority of my time went to working on our classification algorithm. Now that we have a working mouth and eye detection implementation, we need to know when the user should be notified that they appear distracted. For distraction concerning the eyes, we look for when their eyes are closed. Given the eye calibration value that Jananni has worked on, I use that number as a baseline and flag the eyes as closed when their height appears to be about half of that value. Similarly, for the mouth detection, I check whether the user appears to be yawning by comparing their current mouth opening against the open-mouth calibration value and checking whether it is larger. When either of these conditions is met, an alert sounds to warn the user.
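
A small sketch of these checks (the names and the one-half scaling are illustrative of the idea, not our exact code):

    # Illustrative sketch of the distraction checks described above.
    def should_alert(eye_height, mouth_height, calib_eye_height, calib_mouth_height):
        eyes_closed = eye_height < 0.5 * calib_eye_height   # eyes at roughly half the calibrated height
        yawning = mouth_height > calib_mouth_height         # mouth opened wider than the calibrated value
        return eyes_closed or yawning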

This week I also managed to help both Evann and Jananni with putting our eye detection algorithm on the board. We are currently in the process of improving how the code runs on the board, and Evann is spearheading that progress. Another portion of the project that I have worked on is the integration of our eye tracker and mouth tracker. Before, they were two separate algorithms, but I have combined them so that they can run simultaneously. This involved refactoring a lot of the code that we had, but it is something that will be very useful for us. My current progress is on schedule, and I will be looking into how to incorporate an actual sound for the alert instead of text on the screen. I also hope to start improving our algorithms in both accuracy and speed, since putting this on the board will degrade their performance.

Team Status Report for 3/27

  1. This week we continued integration, and the biggest potential issue we ran into was the compute power of the board.  Once the camera was connected, we quickly realized that the board had a delay of about 2 seconds.  After decreasing the frame rate to 5 frames per second, the delay dropped to about half a second.  We are worried that with our heavy computation the delay will get larger.  We expected this and are continuing forward with our plan to optimize for faster speeds and, worst case, put some computation on AWS to lighten the computation load.
  2. No changes were made to our requirements, specs or block diagrams.
  3. This week we have caught up to our original schedule and we plan to continue front loading our work so that integration will have more time.

Jananni’s Status Report for 3/27

  1. Accomplished this week:  This week I have been focused on building the calibration process for our project.  This process asks the user to take pictures of their front and side profiles so that our algorithms can be more accurate for the specific user.  This week’s goal was to get the user’s eye height (in order to check for blinking) and the user’s mouth height (in order to check for yawning).

I first did research on various calibration code I could potentially use.  One potential idea was using this project (https://github.com/rajendra7406-zz/FaceShape), which outputs certain face dimensions.

Unfortunately these dimensions were not enough for our use case.  So I focused on using the DNN algorithm for a single frame and calculating the dimensions I required from the landmarks.

Then I started working on building the backend for the calibration process.  I spent a lot of time understanding the different pieces of code, including the eye tracking and mouth detection.  Once I understood this, I worked on integrating the two and retrieving the specific values we need.  For the eye height, I calculate the average of the left and right eye heights, and for the mouth I return the max distance between the outer mouth landmarks (a compact sketch of these measurements is included after this list).

  2. Progress according to schedule:  My goal for this week was to build the basics of the calibration process and get the user’s facial dimensions, which was accomplished on schedule.

  3. Deliverables for next week:  Next week I hope to start working on the frontend of the calibration process and the user interface.
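
Here is the compact sketch of the measurements referenced above (the landmark indices assume the common 68-point layout and are only illustrative of the idea; “max distance” is shown here as the vertical span of the outer mouth points):

    # Illustrative sketch of the calibration measurements; landmarks is assumed
    # to be an (N, 2) NumPy array in the common 68-point layout.
    import numpy as np

    def eye_height(eye_pts):
        # Vertical opening of one eye: mean of the two vertical landmark distances.
        return (np.linalg.norm(eye_pts[1] - eye_pts[5]) +
                np.linalg.norm(eye_pts[2] - eye_pts[4])) / 2.0

    def calibration_measurements(landmarks):
        left_eye, right_eye = landmarks[36:42], landmarks[42:48]
        outer_mouth = landmarks[48:60]
        avg_eye_height = (eye_height(left_eye) + eye_height(right_eye)) / 2.0
        # One reading of "max distance between the outer mouth landmarks":
        # the maximum vertical span of the outer lip points.
        mouth_height = outer_mouth[:, 1].max() - outer_mouth[:, 1].min()
        return avg_eye_height, mouth_height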