Karen’s Status Report for 4/6

This week, I completed the integration of phone pick-up and other-people distraction detection into the backend and frontend of our web application. The phone and other-people distraction types are now displayed on the current session page.

I have also finished the facial recognition implementation. I decided on the Fast MT-CNN model for face detection and the SFace model for facial embeddings, a pairing that gave the best balance between accuracy and speed. These models form the core of the facial recognition module, with the rest of the logic in the run.py and utils.py scripts. The program now detects when the user is no longer recognized or not in frame and reports how long the user was missing.

User not recognized:  08:54:02
User recognized:  08:54:20
User was away for 23.920616388320923 seconds
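For context, here is a minimal sketch of the module's core, assuming the deepface library as the wrapper around Fast MT-CNN and SFace (the real logic lives in run.py and utils.py, and reference.jpg is a hypothetical enrolled photo of the user):

import time
from deepface import DeepFace

def verify_user(frame):
    """Return True/False for a face match, or None if no face is detected."""
    try:
        result = DeepFace.verify(
            img1_path=frame,               # current webcam frame (numpy array)
            img2_path="reference.jpg",     # hypothetical enrolled photo
            model_name="SFace",
            detector_backend="fastmtcnn",
        )
        return result["verified"]
    except ValueError:                     # deepface raises this when no face is found
        return None

away_since = None

def report(frame):
    """Log transitions and away time, as in the output above."""
    global away_since
    if verify_user(frame):
        if away_since is not None:
            print("User recognized: ", time.strftime("%H:%M:%S"))
            print(f"User was away for {time.time() - away_since} seconds")
            away_since = None
    elif away_since is None:
        away_since = time.time()
        print("User not recognized: ", time.strftime("%H:%M:%S"))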

I also found that adding facial recognition significantly slowed down the program, since facial recognition requires a large amount of processing time. Because of this, I implemented asynchronous distraction detection using threading so that consecutive frames can be processed concurrently. I am using the concurrent.futures package to achieve this.

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=8)  # worker threads shared across frames
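As a rough illustration of the pattern (not our exact run.py code), the capture loop hands each frame to the pool and keeps grabbing new frames while earlier ones are still being analyzed; detect_distractions here is a hypothetical stand-in for our per-frame pipeline:

import cv2
from concurrent.futures import ThreadPoolExecutor

def detect_distractions(frame):
    """Stand-in for the real per-frame pipeline (gaze, yawning, phone, faces)."""
    return []  # list of distraction events found in this frame

executor = ThreadPoolExecutor(max_workers=8)
cap = cv2.VideoCapture(0)
pending = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # submit() returns immediately, so capture never blocks on a slow frame;
    # up to 8 earlier frames can still be mid-analysis in worker threads.
    pending.append(executor.submit(detect_distractions, frame))

    # Harvest whatever has finished without waiting on the rest.
    done = [f for f in pending if f.done()]
    pending = [f for f in pending if not f.done()]
    for future in done:
        for event in future.result():
            print("distraction:", event)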

A next step is distinguishing when the user is simply not in frame from when an impostor has taken the user's place (a rough sketch of that logic follows below). After that comes integrating the facial recognition data into the frontend and backend of the web app. In the following week, I will focus on facial recognition integration and on properly testing my individual components.
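One plausible way to draw that distinction, under the same assumptions as the sketch above (deepface wrapping Fast MT-CNN and SFace, with a hypothetical reference.jpg), is to treat "no face detected" as the user being away and "face detected but embedding mismatch" as a possible impostor:

from deepface import DeepFace

def classify_frame(frame):
    try:
        result = DeepFace.verify(
            img1_path=frame,
            img2_path="reference.jpg",
            model_name="SFace",
            detector_backend="fastmtcnn",
        )
    except ValueError:
        return "away"        # the detector found no face at all
    return "present" if result["verified"] else "impostor"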

I have done some initial testing of my distraction detection components. Arnav, Rohan, and I have all used yawning, sleeping, and gaze detection successfully, and from this initial testing, these modules work well across different users and faces. Initial testing of other-people detection has also shown success and robustness for a variety of users. Phone pick-up detection needs more testing with different users and different colored phones, but initial testing shows success on my phone. I also need to begin verifying that face recognition works for different users, though it has worked well for me so far.

I have already performed some verification of individual components, such as the accuracy of the YOLOv8 phone object detector and the accuracy of MT-CNN and SFace. More thorough validation methods for the components integrated in the project as a whole are listed in our team progress report.
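As an illustration only (our actual verification methods and test data are described in the team progress report), a quick spot-check of the YOLOv8 phone detector on a single image could look like the following, assuming the ultralytics package and a hypothetical test image:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # hypothetical choice of YOLOv8 weights
results = model("test_frame.jpg")   # hypothetical test image

for box in results[0].boxes:
    if model.names[int(box.cls)] == "cell phone":  # COCO class used for phones
        print(f"phone detected with confidence {float(box.conf):.2f}")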

In the coming week, I will work on the validation and verification methods. Now that all of the video-processing distraction detections are implemented, I will also work with Arnav on making the web application cleaner and more user-friendly.
