Yasser’s Status Report for 03/25/23

This week, I finished writing the eye tracking algorithm, the classification that determines whether a driver has been looking away from the road for more than 2 seconds, and the calibration needed for this process. Preliminary testing revealed some bugs in the algorithm due to noise. After getting advice from my research advisor, I tuned the algorithm to instead calculate the distance from the center of the pupil to the two far endpoints of the eye; if the pupil’s center is very near either endpoint, the driver is classified as looking away. I’ll check with the professor in our weekly meeting next week whether this is sufficient for the eye tracking algorithm. I also started working on the blinking feature.
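The distance-based check described above could be sketched roughly as follows; the function name, coordinate convention, and ratio threshold are my own assumptions for illustration, not the actual implementation:

```python
import math

def is_looking_away(pupil, left_corner, right_corner, ratio_threshold=0.35):
    """Classify gaze as 'away' when the pupil center sits very close to
    either far endpoint (corner) of the eye, relative to the eye's width."""
    eye_width = math.dist(left_corner, right_corner)
    d_left = math.dist(pupil, left_corner)
    d_right = math.dist(pupil, right_corner)
    # If the pupil is within a small fraction of the eye width of either
    # corner, the driver is likely glancing far left or far right.
    return min(d_left, d_right) / eye_width < ratio_threshold

# Centered pupil: not looking away
print(is_looking_away((50, 10), (0, 10), (100, 10)))   # False
# Pupil near the right corner: looking away
print(is_looking_away((92, 10), (0, 10), (100, 10)))   # True
```

In the real system this per-frame result would be accumulated over time, so the alert only fires once the "away" state has persisted past the 2-second limit.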

Despite this progress, I am still behind schedule due to assignments from other classes and testing of the eye tracking algorithm. I plan to focus on the blinking and mouth detection features this week. After these two features are complete, all of the computer vision code will be finished, and testing and integration with the hardware can begin.

Next week, I plan to have the algorithms for the blinking and mouth detection features done, as well as the classification needed to determine whether a driver is sleepy based on blink frequency and yawn duration.
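A hedged sketch of what that classification might look like; the thresholds, window size, and function name here are hypothetical placeholders, not values we have settled on:

```python
def is_drowsy(blink_timestamps, yawn_durations, window_s=60.0,
              max_blinks_per_min=25, yawn_len_threshold_s=4.0):
    """Flag the driver as drowsy if the blink rate over the last
    `window_s` seconds exceeds a threshold, or if any recent yawn
    lasted longer than `yawn_len_threshold_s` seconds."""
    if not blink_timestamps:
        blink_rate_exceeded = False
    else:
        now = blink_timestamps[-1]
        recent = [t for t in blink_timestamps if now - t <= window_s]
        blink_rate_exceeded = len(recent) > max_blinks_per_min * (window_s / 60.0)
    long_yawn = any(d > yawn_len_threshold_s for d in yawn_durations)
    return blink_rate_exceeded or long_yawn

# A single long yawn is enough to trigger the flag
print(is_drowsy([], [5.2]))               # True
# A normal blink rate with no yawns does not
print(is_drowsy([0.0, 10.0, 20.0], []))  # False
```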

Elinora’s Status Report for 03/25/23

This past week, I focused on fleshing out the web application with Sirisha. I implemented all the buttons that we need to navigate through the pages of the application and worked on formatting for the login, registration, and logs pages with Sirisha. I also met with Yasser to discuss what we want to have done by our interim report. Currently, we want to have the eye tracking working with a simplified calibration step for a demo on Yasser’s computer and have the web application fully working with login capabilities and dummy data. After the interim report, we will integrate the CV/ML with our hardware and display the data from the actual DriveWise device on the web application instead of just dummy data.

According to our Gantt chart, my progress is behind schedule. I did not make any progress incorporating the hardware components this week, instead deciding to focus on the web application because our interim demo will likely not include hardware integration. This past week I was scheduled to test the website with dummy data, but we are not yet sure of the exact format we want the data in, so I will discuss it with Sirisha and Yasser in class on Monday and then create some dummy data to test the metrics and logs pages of the web application.

By the end of next week, I hope to complete the testing of the web application with dummy data and individually test the camera and accelerometers since I did not get to that this week.

Team Status Report for 03/25/23

Right now, the most significant risk is that the computer vision code will not be complete and working soon enough; if it isn’t, we won’t be able to progress with the other tasks. The eye tracking code has been completed, but we need approval from our professor this week on whether our method of classifying a driver as looking away from the road is valid.

Since finishing the computer vision algorithms is the main priority right now, Sirisha has been tasked with helping on the mouth detection algorithm to speed up the process.

We also need to make sure all of the hardware components have arrived so we can start integrating soon. The camera has been ordered, after we reconsidered whether the camera we initially planned to order was an adequate size for placement inside a vehicle.

No changes were made to the system design.

The only change to the schedule is pushing the computer vision code and hardware integration back.

Sirisha’s Status Report for 03/25/23

This week, I spent most of my time working on the web application. I got the initial setup of all of the pages done, making the HTML and CSS as close as possible to what we designed a few weeks earlier, making the JavaScript let us navigate between pages and interact with them properly, and figuring out how to display graphs and metrics on the web application. I also worked on integrating the web application with Firebase, but there have been a few configuration issues that I am still working through. Aside from a few finishing touches, the web application is as finished as it can be before we integrate real data.

Progress is a bit behind at the moment due to some unexpected changes in the computer vision code. The teammate working on the eye tracking and facial detection mentioned that implementing a certain part will require a different hardware component, so that now has to be ordered and we need to wait for it. In the meantime, we have still been able to test with our computers’ webcams. Aside from the coding pushing back our hardware integration and the need for new components, everything else is on track. Hopefully, once these new parts are ordered, we will be able to make up any lost time. We also allocated a significant amount of buffer time, so everything will still be able to be completed.

For next week, we hope to have all of the hardware components finalized and delivered as well as the computer vision code done so we can start the integration process.

Team Status Report for 03/18/23

As of right now, the most significant risk that could jeopardize the success of the project is that the algorithms used for eye tracking and facial detection might not be completed on time or run at the required frame rate. This is imperative to finish as soon as possible because many of the other features cannot be completed until that part is done. We have been continuously testing the algorithms to ensure that they work as expected, but in the event that they don’t, we have a backup plan of switching from OpenCV DNN to Dlib.

We were looking into changing the model of the NVIDIA Jetson because the one we currently have could draw more power than a car can provide. If this change needs to happen, it won’t incur any extra costs: other models are available in inventory, and our other hardware components are compatible with them.

Also, between the design presentation and report, we added back the feature of the device working in non-ideal conditions (low lighting and potential obstruction of the eyes by sunglasses or a hat). This was done based on faculty feedback, but we are still unsure whether to keep it, since other faculty/TA feedback suggested we shouldn’t. If it is added back, the cost will be the time spent developing the non-ideal-conditions algorithm, which leaves less time for perfecting the accuracy of the ideal-conditions algorithm.

We are also changing the design slightly to account for edge cases in which the driver is looking away from the windshield or mirrors while the car is moving but is still not considered distracted. This occurs when the driver looks left or right while making a turn, or looks over their shoulder while reversing. For now, we will limit audio feedback for signs of distraction to when the car is above a threshold speed (tentatively 5 mph), replacing our previous condition of the car being stationary. This threshold will be adjusted based on recorded speeds of turns and reversing during user testing. If we have extra time, we are considering detecting whether the turn signal is on or the car is in reverse to more accurately handle these edge cases.
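The speed-gated alert condition described above can be sketched as a small predicate; the function name and the exact comparison are my assumptions, while the tentative 5 mph threshold and 2-second limit come from our design:

```python
SPEED_THRESHOLD_MPH = 5.0  # tentative; to be tuned from recorded turn/reverse speeds

def should_alert(speed_mph, looking_away_s, away_limit_s=2.0,
                 threshold_mph=SPEED_THRESHOLD_MPH):
    """Give audio feedback only when the car is moving above the threshold
    speed AND the driver has looked away longer than the allowed time.
    Below the threshold (turns, reversing), distraction alerts are suppressed."""
    return speed_mph > threshold_mph and looking_away_s > away_limit_s

print(should_alert(30.0, 3.0))  # True: cruising speed, long glance away
print(should_alert(3.0, 3.0))   # False: likely turning or reversing
```

If we later add turn-signal or reverse-gear detection, those would simply become additional suppression conditions in this predicate.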

After our meeting with the professor and TA this week, we are changing the deadline for the completion of the eye tracking and the head pose estimation tracking algorithms to be next week. We made this change because the CV/ML aspects of our project are the most critical to achieve our goals and having the algorithms completed earlier will allow us to have more time for testing/tuning to create a more robust system. Also, we have shifted the task of assembling the device and testing individual components, specifically the accelerometer and camera, to next week after those components have been delivered. Otherwise, we are on track with our schedule.


Elinora’s Status Report for 03/18/23

This week, I mainly focused on translating our web app design into code with Sirisha, primarily working on the login and registration pages. I also looked into how we could use Firebase to store our login information for the web app and read through the intro tutorials on its website. I spent a lot of my time this week outside of class on the ethics assignment as well. 

I am mostly on schedule according to our Gantt chart. We had previously scheduled for me to assemble the device this past week, but we are still waiting on some ordered components to arrive. Last week I had set the goal that I would research and decide on a cellular dongle, but I decided to hold off on that task for a while since we won’t need the cellular dongle for 2 more weeks. 

Next week, I will finish up the structure of the web app with Sirisha and create and feed dummy data into the web app to calculate and display the corresponding graphics on the metrics page. Once we receive the accelerometer and camera, I will test the accelerometer and assemble the device. I will also be working on the second half of the ethics assignment and attending the ethics lecture, so I will make sure to plan ahead and reserve more time to work on tasks for the project outside of class.

Yasser’s Status Report for 03/18/23

This week, I got the eye tracking algorithm working. There is an issue with the frame rate being slow on my webcam, which can likely be attributed to its high resolution, so further testing on our purchased camera will be needed to pin down the issue. Below is a screenshot of the eye tracking (red dots around my eyes; they might not be fully visible in this photo).

[Screenshot: eye tracking output with red dots around the eyes]

My progress is still behind. I had planned to have the classification and calibration step done. However, after meeting with the professor, I need to focus first on the gaze tracking algorithm, determining when a driver’s pupils are looking away from the road, which I plan to finish by Wednesday.

After I’m done with that, I plan to work on the mouth detection, the calibration step, and the classification. By next week, I hope to have the code that determines whether a driver is looking away from the road done and to get started on the calibration step.
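As a rough sketch of what the calibration step might involve, the idea would be to record the pupil’s position over several frames while the driver looks straight at the road and derive a baseline and tolerance band; all names and tolerances here are hypothetical:

```python
from statistics import mean, stdev

def calibrate(pupil_xs):
    """Hypothetical calibration: given pupil x-positions sampled while the
    driver looks straight ahead, return a baseline center and a tolerance
    band for the 'looking at the road' state."""
    center = mean(pupil_xs)
    spread = stdev(pupil_xs) if len(pupil_xs) > 1 else 0.0
    # Allow 3 standard deviations around the baseline, with a small floor
    # so a very steady calibration doesn't produce a zero-width band.
    tolerance = max(3 * spread, 2.0)
    return center, tolerance

def within_baseline(pupil_x, center, tolerance):
    """True while the pupil stays inside the calibrated band."""
    return abs(pupil_x - center) <= tolerance

center, tol = calibrate([49.0, 50.0, 51.0, 50.0, 50.0])
print(within_baseline(50.5, center, tol))  # True: still looking ahead
print(within_baseline(70.0, center, tol))  # False: pupil has drifted away
```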


Sirisha’s Status Report for 03/18/23

This past week, I worked mainly on the web application and on testing and starting to set up the hardware. I continued to implement the page designs we worked on earlier. At the moment, almost all of the pages have been set up, and we are working on making them look like our designs. We are looking into Firebase as well as an authentication system to incorporate into the web application. I also started testing the hardware to make sure each component works individually; in the event we find something that requires a different component, we want enough time for the new order to be delivered and tested as well. We had a concern regarding the Jetson because it could potentially use more power than a standard car can provide, so we have been looking at possible alternatives in case the Jetson cannot work properly in a car. We also had to refine a few design choices, such as some more edge cases to consider and whether some features should actually be included, since we are getting conflicting feedback from faculty and TAs in this area. A good chunk of the week also went to the ethics assignment.

Our progress was mostly on track this past week. Due to the delay in our hardware components being delivered and some refinements we had to make in the computer vision algorithms, we weren’t able to integrate any hardware with software yet, and we are still testing individual components to make sure everything works. This part can’t progress much until the hardware is finalized and delivered, which should happen very soon given the research and testing we did. The hosting of the website is also yet to be done, but that should take very little time.

By next week, we hope to have the eye tracking algorithms complete with the ability to detect the direction the eye is looking, the website hosted, and all of the hardware components delivered and individually tested.

Sirisha’s Status Report for 03/11/23

The majority of this past week was spent on the design report and additional research, as we determined certain changes were needed after refining some aspects of the project. For the design report, I worked on the abstract; the design trade studies; parts of the system implementation; test, verification, and validation; project management; and related work, in addition to assisting my teammates with their sections as needed and reviewing and revising the entire report at the end. Throughout the design report, we all needed to conduct more research in order to be thorough in our descriptions. I also reviewed all of the work we had done up to this report to make sure everyone was on the same page about what we decided for our project and any changes that were made. While working on the design report, we also started to individually test some of our hardware components, but we are still waiting on some parts to be delivered, since changes in our software algorithm decisions necessitated a change in the hardware we needed. I also continued working on the website and implementing our UI designs from last week.

According to the Gantt chart, I was supposed to have finished the device schematics and part selection, component testing, refinements to the design documentation, and the initial web app structure. The component testing needs to be pushed back because we had to change some of the hardware we ordered. Also, because of how much time the design report took up last week, not as much time was allocated to the initial web app structure. Both delays can be made up next week, which is lighter according to the Gantt chart.

By next week, I aim to finish the initial webapp setup and assist the others in their work so we can start integration.

Yasser’s Status Report for 03/11/23

This week, I finished writing my sections of the design report. Specifically, I was tasked with writing the design requirements, the design trade studies for the facial detection models and camera, and the system implementation sections on the calibration process and computer vision. For the system implementation, I also added diagrams and flow charts detailing how the computer vision algorithms will determine whether a driver is attentive or distracted. I was also responsible for citing my sources for the sections I wrote.

My progress is behind schedule. By this week, I should have already had the eye and mouth detection algorithms coded, but most of my attention went to the design document. To catch up, I need to focus on the eye and mouth detection code and finish it by midweek. Afterwards, I’ll start on the calibration step code and the classification. There is also a 3-week buffer embedded in our Gantt schedule, which can be used to offset my delay.

In the next week, I aim to have the eye and mouth detection code written and to get started on the calibration step and classification. If possible, finishing the latter two would put me back on schedule.