Arnav’s Status Report for 3/16

This week I focused on integrating the camera data with the Django backend and React frontend in real time. I worked mainly on getting the yawning feature to work; the other distractions should be easy to integrate now that the template is in place. The current flow looks like the following: run.py, which detects all distractions (gaze, yawn, phone pickups, microsleep), now sends a POST request with the detection data to http://127.0.0.1:8000/api/detections/ and another POST request with the current session to http://127.0.0.1:8000/api/current_session. The current session is used to ensure that data from previous sessions is not shown for the session the user is currently working on. The data packet currently sent includes the session_id, user_id, distraction_type, timestamp, and aspect_ratio. On the backend, I created a DetectionEventView, CurrentSessionView, and YawningDataView that handle the POST and GET requests and order the data accordingly. Finally, the frontend fetches the data from these endpoints using fetch('http://127.0.0.1:8000/api/current_session') and fetch(`http://127.0.0.1:8000/api/yawning-data/?session_id=${sessionId}`), polling every second so that it catches any distraction event in real time. Below is a picture of the data that is shown on the React page every time a user yawns during a work session:
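To make the client side of this flow concrete, here is a minimal sketch of how run.py could assemble and send the data packet described above. The field values, function names, and the assumption that the Django endpoints accept a JSON body are all illustrative, not taken from the actual codebase:

```python
# Hypothetical sketch of the run.py -> backend flow described above.
# Endpoint paths come from the report; everything else is assumed.
import json
from datetime import datetime, timezone
from urllib import request

API_BASE = "http://127.0.0.1:8000/api"

def build_detection_payload(session_id, user_id, distraction_type, aspect_ratio):
    """Assemble the data packet with the five fields listed in the report."""
    return {
        "session_id": session_id,
        "user_id": user_id,
        "distraction_type": distraction_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "aspect_ratio": aspect_ratio,
    }

def post_json(path, payload):
    """POST a JSON body to the backend; assumes the views parse JSON."""
    req = request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Example values only; the real detector fills these in per event.
    payload = build_detection_payload("abc123", 1, "yawn", 0.61)
    post_json("/detections/", payload)
    post_json("/current_session", {"session_id": payload["session_id"]})
```

The same `post_json` helper would be reused for each distraction type once the remaining detections are wired in.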

The data is ordered so that the latest timestamps are shown first. Once I have all the distractions displayed, I will work on making the data look more presentable.
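The newest-first ordering can be illustrated with a small example. In the Django views this would likely be a queryset ordered with `order_by("-timestamp")` (an assumption about the implementation); the plain-Python equivalent over a list of event dicts looks like this, with made-up sample events:

```python
# Illustrative only: sample events and newest-first sorting, mirroring
# what the backend view does before the frontend fetches the data.
events = [
    {"distraction_type": "yawn", "timestamp": "2024-03-14T10:05:00"},
    {"distraction_type": "yawn", "timestamp": "2024-03-14T10:12:30"},
    {"distraction_type": "yawn", "timestamp": "2024-03-14T09:58:12"},
]

# ISO 8601 timestamps sort lexicographically in chronological order,
# so reverse=True puts the most recent event first.
latest_first = sorted(events, key=lambda e: e["timestamp"], reverse=True)
```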

My progress is on schedule. During the next week, I will continue working on the backend to ensure that all the data is displayed, and I will put the real-time data in a tabular format. I will also try to add a button to the frontend that automatically triggers run.py so that it does not need to be run manually.
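One way the planned button could work on the backend is a small endpoint that launches run.py in a background process. This is only a sketch of that idea, assuming a plain subprocess launch; the guard prevents starting a second copy while one is already running:

```python
# Hypothetical sketch: launch run.py from the backend so the frontend
# button can start detection without a manual terminal command.
import subprocess
import sys

_detector = None  # handle to the running run.py process, if any

def start_detector(script_path="run.py"):
    """Launch run.py once; return True if a new process was started."""
    global _detector
    if _detector is not None and _detector.poll() is None:
        return False  # already running, don't start a duplicate
    _detector = subprocess.Popen([sys.executable, script_path])
    return True

# In a Django view this might be wired up roughly as:
#   def start_run(request):
#       return JsonResponse({"started": start_detector()})
# with the frontend button doing a POST to that URL.
```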
