Adriana’s Status Report 05/08

This week we finished the final presentation, which Evann presented. We all worked on the final demo video: the script was written and most of the footage we are using has been filmed. The final demo video takes place in the car, where the program's audio alerts can be heard and assessed in real time.  I also worked on the hardware setup, mounting the board on the car dashboard and positioning the camera at a high enough angle to capture footage of the driver. Lastly, I worked on the poster for the final demo, which is almost complete.

Progress: I am currently on schedule.  This project is wrapping up nicely and we are almost done.

In the upcoming days I hope to finish:

  1. Editing the video
  2. Adding the remaining details to the poster
  3. More testing, which we are all planning to do and include in our final report

Team’s Status Report for 05/08

We finished our final presentation, which Evann presented on Monday. We also worked on our final demo video and poster, which are almost complete. Lastly, we have completed our car setup, which consists of a small box placed on the car dashboard with the screen held up by a phone holder.

  1. Significant Risks:  At this stage there are not many significant risks that can jeopardize our project.  There are small remaining items, such as improving our testing and our dataset.
  2. Changes to the Existing Design: No changes for this week except we did add
  3. Schedule: There are no changes to the schedule


Jananni’s Status Report for 05/08

  1. What did you personally accomplish this week on the project?  This week we focused on fine-tuning our project.  I focused on adjusting some thresholding variables so that our classification is slightly better.  However, this was under a single lighting condition, so I still need to test with different lighting conditions.  After this, we built the case for our device.  We used a Kleenex box and placed the Jetson board inside.  I cut out the cardboard so that it was usable and positioned the camera such that it is not too intrusive for the driver.  Once we finished putting the device together, Evann and I tested the project to make sure everything was still working.  Meanwhile, we also started writing a script for our video, including what shots we want with which voiceover.  Evann and I also experimented with the screen orientation so the device would fit well in the car.  Once we completed this, we took our device out to the car to film clips for our video.
  2. Is your progress on schedule or behind?  We are currently on track with our schedule.
  3. What deliverables do you hope to complete in the next week?  Next week we should complete our Public Demo and work on our Final Report.
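The "thresholding variables" above likely refer to aspect-ratio style cutoffs for the eyes and mouth. As a hedged sketch of what tuning one such variable involves, here is an eye-aspect-ratio (EAR) check; the landmark layout and the 0.21 cutoff are illustrative assumptions, not the project's actual values.

```python
# Hypothetical sketch of an eye-aspect-ratio (EAR) threshold check.
# The six-point eye layout and the 0.21 cutoff are illustrative
# assumptions, not the project's tuned values.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye,
    ordered corner, top, top, corner, bottom, bottom."""
    vertical1 = dist(eye[1], eye[5])
    vertical2 = dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return (vertical1 + vertical2) / (2.0 * horizontal)

EAR_THRESHOLD = 0.21  # the kind of constant retuned per lighting condition

def eyes_closed(eye_landmarks):
    return eye_aspect_ratio(eye_landmarks) < EAR_THRESHOLD
```

Because lighting shifts the detected landmark positions, a cutoff tuned under one lighting condition may misclassify under another, which is why retesting across conditions matters.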

Evann’s Status Report for 05/08

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I worked mainly on creating our final demo video. I worked on ensuring that all the hardware would work inside the car, as well as mounting all of the components. I also worked on editing all of the footage that we recorded, as well as voicing over explanations of the system. I also collected more test data and performed more testing and validation to update our results.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress on the board is on schedule. We’ve completed all of our major goals and are just finishing up the final report.

What deliverables do you hope to complete in the next week?

We are planning on doing more testing as well as finishing up the final report.

Adriana’s Status Report 05/01

This week, we all worked on our last code changes for the project. For me in particular, this involved implementing the code for the UI and user flow that Jananni and I designed.  This meant creating the on-click functionality for the buttons on each page. At the start, the user sees the starting screen; to continue, they press “start”. They are then directed to the “calibration” page. Once they click the “start calibration” button, the calibration process begins, which requires the user to place their head inside the circle. After their head placement is correct, the user is automatically taken to the page indicating they are ready to start driving. To begin the CarMa alert program, they press “start”, and the program then classifies whether the user appears distracted or not! The full process can be seen here:

https://drive.google.com/file/d/1pIs0zBR5DGlLfyH3uHI5zbVwX-ey5mxl/view?usp=sharing
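The click-driven page flow described above can be sketched as a small hit-test plus page-transition table. The page names, button rectangles, and layout below are illustrative placeholders (in the real code, the rectangles are drawn with OpenCV and clicks arrive through `cv2.setMouseCallback`).

```python
# Illustrative sketch of the click-driven page flow. Page names and
# button rectangles are made-up placeholders; the actual UI draws
# these boxes with OpenCV and receives clicks via cv2.setMouseCallback.

# page -> (button rectangle as (x, y, w, h), page the click advances to)
BUTTONS = {
    "start":       ((220, 300, 200, 60), "calibration"),
    "calibration": ((220, 300, 200, 60), "ready"),
    "ready":       ((220, 300, 200, 60), "running"),
}

def hit(rect, x, y):
    """True when the click at (x, y) lands inside the rectangle."""
    rx, ry, rw, rh = rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

def on_click(page, x, y):
    """Return the next page if the click lands on this page's button,
    otherwise stay on the current page."""
    if page in BUTTONS:
        rect, next_page = BUTTONS[page]
        if hit(rect, x, y):
            return next_page
    return page
```

Keeping the transitions in one table makes it easy to see that the program only ever advances on an explicit button press, which matches the ethics-driven design point below.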

One key point is that the program only starts when the user clicks the start button. This design was based on the ethics conversation, in which the students and the professor mentioned that it should be clear when the program is starting / recording so the user has a clear understanding of the program. Lastly, we worked on our final presentation, which will take place this upcoming week. We made sure we had all the data and information ready for this presentation.

Progress: I am currently on schedule.  We ended up completing the UI early and started working on smaller remaining items. This project is wrapping up nicely and no sudden hiccups have occurred.

In the upcoming days I hope to complete the final presentation and the final demo video. The final demo video will take place in the car, where the program's audio alerts can be heard and assessed in real time.  Lastly, we hope to have a nice hardware setup to hold the screen and the rest of the materials so everything is packaged neatly in the car.

Evann’s Status Report 05/01

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

This week I integrated the UI onto the board. This required debugging, as we had to figure out how to speed up loading the application as well as remove multiple instances of the video capture application.  I worked through some issues with the sound output (no non-blocking function call support) and the accelerometer input (inconsistent readings). I also continued work on pose detection, identifying the landmarks we want to use. We’ll be using geometric ratios to compute pose. Additionally, I captured more videos to test our application and performed analysis on them. Finally, I spent some time working on the Final Presentation slides and preparing for the presentation.
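One common geometric-ratio approach to head pose, sketched here as an assumption about what the landmark math might look like (the specific landmarks and the 0.5 cutoff are illustrative, not the project's actual choices), compares the horizontal distance from the nose tip to each eye's outer corner:

```python
# Hypothetical geometric-ratio head-pose check: when the face is
# frontal, the nose tip sits roughly midway between the outer eye
# corners, so the left/right span ratio is near 1.0; as the head
# turns, one span shrinks and the ratio falls toward 0.
# Landmark choice and the 0.5 cutoff are illustrative assumptions.

def yaw_ratio(left_eye_outer, right_eye_outer, nose_tip):
    """Ratio in [0, 1]: near 1.0 means roughly frontal,
    near 0 means the head is turned toward one side."""
    left_span = abs(nose_tip[0] - left_eye_outer[0])
    right_span = abs(right_eye_outer[0] - nose_tip[0])
    if min(left_span, right_span) == 0:  # one side fully collapsed
        return 0.0
    return min(left_span, right_span) / max(left_span, right_span)

def facing_camera(left_eye_outer, right_eye_outer, nose_tip, cutoff=0.5):
    return yaw_ratio(left_eye_outer, right_eye_outer, nose_tip) >= cutoff
```

A ratio like this avoids solving a full 3D pose problem, which keeps the per-frame cost low on an embedded board.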

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Our progress on the board is on schedule. We’ve completed all of our major goals and are finishing up stretch goals.

What deliverables do you hope to complete in the next week?

We are planning to capture some videos of our system working in a car next week for our final video. I want to finish the pose detection next week as well.

Team Status Report for 5/1

  1. Significant Risks:  At this stage there are not many significant risks that can jeopardize our project.  There are small things, such as not being able to classify the eyes accurately enough. We are still working on improving this algorithm depending on the user's distance from the camera and the lighting in the car.
  2. Changes to the Existing Design: No changes for this week
  3. Schedule: There are no changes to the schedule
  4. Short demo Video:

https://drive.google.com/file/d/1gHNVyxSBz6iphaPCnnBXD8wDqWc6w8aR/view?usp=sharing

This shows the program working on our touchscreen. Essentially, after calibration, there is a sound played (played on the headphones in this video) when the user has their eyes closed / mouth open for a longer period of time.
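The "longer period of time" condition above amounts to duration logic: the alert only fires after the drowsy state persists for a sustained run of frames. A minimal sketch, assuming a frame-count threshold (the 15-frame value, roughly half a second at 30 fps, is a placeholder rather than the project's tuned number):

```python
# Sketch of the duration logic behind the alert: the sound plays only
# once the eyes have stayed closed (or the mouth has stayed open) for
# a sustained run of frames. The 15-frame threshold (~0.5 s at 30 fps)
# is an illustrative assumption, not the project's tuned value.

ALERT_FRAMES = 15

class DrowsinessMonitor:
    def __init__(self, alert_frames=ALERT_FRAMES):
        self.alert_frames = alert_frames
        self.run_length = 0  # consecutive drowsy frames seen so far

    def update(self, eyes_closed, mouth_open):
        """Feed one frame's classification; return True when the
        sustained-duration condition is met and the alert should sound."""
        if eyes_closed or mouth_open:
            self.run_length += 1
        else:
            self.run_length = 0  # any alert-free frame resets the run
        return self.run_length >= self.alert_frames
```

Requiring a sustained run rather than a single frame keeps a normal blink or a brief mouth movement from triggering the sound.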

Jananni’s Status Report for 5/1

  1. Accomplished this week: This week I first worked on the UI with Adriana, then focused on tweaking the calibration and classification code for our device.  For the UI, we first did some research into integrating a UI toolkit such as tkinter with our OpenCV code, but we realized this could be difficult.  So we decided to make buttons by drawing a box and triggering an action when a click occurs inside the box.  We then designed our very simple UI flow.  I also spent some time working on a friendly logo for our device.  This was what we designed.  Our flow starts at the top left box with our CarMa logo, then goes on to the calibration step below, then tells the user it is ready to begin.  Once we designed this, we coded up the connections between pages and connected the UI to our current calibration and main process code.  We needed to work on connecting the calibration because this step requires the real-time video, whereas for the main process we are not planning to show the user any video; we only sound the alert if required.  After we completed the UI, I started working on tweaks to the calibration and classification code for the board.  When we ran our code, some small things were off, such as the drawn red ellipse the user is attempting to align their face with.  For this, I adjusted the red ellipse dimensions and also made it easier for us later on to simply change the x, y, w, h constants without having to go through the code to understand the values.  Once I completed this, I started adjusting our classification to fit the board.  I also started working on the final presentation PowerPoint slides.
  2. Progress: I am currently ahead of our most recent schedule.  We planned to have the UI completed this week.  We completed it early and started working on auxiliary items.
  3. Next week I hope to complete the final presentation and potentially work on the thresholding.  I am wondering if this is the reason the classification of the eye direction is not great.  We also need to complete a final demo video.
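The x, y, w, h constants for the red calibration ellipse mentioned above might be factored out like this; the geometry values and the point-in-ellipse test are illustrative (in the real code the ellipse would be drawn with `cv2.ellipse`, whose axes argument takes the half-width and half-height):

```python
# Illustrative factoring of the calibration-ellipse geometry into named
# constants, plus a point-in-ellipse test of the kind calibration needs
# to decide whether the detected face center is aligned with the guide.
# All four values below are placeholders, not the project's numbers.

ELLIPSE_X, ELLIPSE_Y = 320, 240   # center of the guide ellipse (pixels)
ELLIPSE_W, ELLIPSE_H = 180, 240   # full width / height of the ellipse

def inside_ellipse(px, py):
    """True when (px, py) lies within the guide ellipse, using the
    standard (dx/a)^2 + (dy/b)^2 <= 1 containment test."""
    a, b = ELLIPSE_W / 2, ELLIPSE_H / 2
    return ((px - ELLIPSE_X) / a) ** 2 + ((py - ELLIPSE_Y) / b) ** 2 <= 1.0
```

With the geometry in one place, retuning the ellipse for a new camera position means editing four constants instead of hunting through the drawing and alignment code.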

Team Status Report for 4/24

  1. One risk that could potentially impact our project at this point is the buzzer integration.  If there is an issue with this aspect of the project, we may need to find a new way to alert the user.  This could potentially be addressed by alerting the user with a message on the screen.
  2. No, this week there were no changes to the existing design of the system.  For testing and verification, we more precisely defined what we meant by 75% accuracy against our test suite.
  3. This week we hope to complete the integration of the buzzer and the UI interface of our project.  Next week we hope to create a video for the head turn thresholding to showcase the accuracy and how it changes as you turn your head.  We also need to potentially work on pose detection and more test videos. Finally, we are going to be testing our application in a car to obtain real world data.

Adriana’s Status Report 04/24

This week, I worked on breaking up the landmarking function and timing its parts independently to find out where the bottleneck exists, i.e., what is taking the code so long to run. This led to me having our landmarking functions, specifically the prediction model we load from TensorFlow, run on the GPU of our Xavier board. We then turned on the appropriate flags and settings on the board to enable TensorFlow GPU support.
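The "flags and settings" for GPU-enabled TensorFlow typically look something like the configuration fragment below. This is a sketch assuming TensorFlow 2.x on the Jetson Xavier, not a record of the exact flags we set.

```python
# Sketch of typical TensorFlow 2.x GPU configuration on an embedded
# board like the Jetson Xavier; treat this as an assumption about the
# setup rather than the project's exact flags.
import tensorflow as tf

# Confirm the board's GPU is visible to TensorFlow.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus)

# Let TensorFlow grow GPU memory on demand instead of grabbing it all
# up front (helps on memory-constrained embedded boards).
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

# Landmark-model predictions can then be pinned to the GPU explicitly.
with tf.device("/GPU:0"):
    pass  # model.predict(...) would run here
```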

Similarly, I researched what our threshold should be for how far the face can be turned while still getting accurate eye and mouth detection. The mouth detection turned out to be very accurate even when the face was turned about 75 degrees from the center of the camera. However, the eye detection began giving inaccurate results once one of the eyes was not visible, which usually happened around 30 degrees. Therefore, we have concluded that the best range in which our application should work is when the user's head is +/- 30 degrees from the center of the camera.

Lastly, I worked on the user flow for how the user will turn the application on / off and what would be least distracting for the user. For next steps, Jananni and I are working on creating the user interface for when the device is turned on / off. This should be completed by Thursday, which is when we are putting a “stop” on coding.

I am currently on schedule and our goal is to be completely done with our code by Thursday. We are then going to focus the remainder of our time to testing in the car and gathering real time data.