Arnav’s Status Report for 3/23

This week, I successfully integrated the yawning, gazing, and sleep detection data from the camera and also added a way to store a snapshot of the user at the moment a distraction occurs. The detection data is now stored in a table with three columns: Time, Distraction Type, and Image. Because the React page polls the API endpoints every second, the table updates nearly in real time, within the one-second polling interval. The polling interval can be shortened if the data needs to appear on the React page even faster, but that is most likely unnecessary since the user ideally will not be monitoring this page while they are in a work session. The table appears on the Current Session Page, below the Real-Time Updates table.
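To make the data shape concrete, here is a minimal sketch of a Django model that could back this table. The model name DetectionEvent and its field names are my shorthand here (kept consistent with the view and serializer described below), not necessarily the exact names in our repo:

```python
# Hedged sketch of a model backing the detection table; the model and
# field names are assumptions matching the columns Time, Distraction
# Type, and Image.
from django.db import models

class DetectionEvent(models.Model):
    session_id = models.IntegerField()
    user_id = models.IntegerField()
    timestamp = models.DateTimeField(auto_now_add=True)   # "Time" column
    distraction_type = models.CharField(max_length=32)    # yawning / gazing / sleep
    image = models.ImageField(upload_to="snapshots/")     # "Image" column
```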

I was able to capture and store the snapshot of the user through the following steps:

I first used the run.py Python script to capture frames from the webcam; each frame is stored in current_frame (a NumPy array). Once a distraction state is identified, the script encodes the associated frame into a base64 string. Converting the image to a text-based format lets me send it over HTTP in a POST request to my Django backend via the requests library, along with other data such as the session ID and user ID.
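As a rough sketch of this capture-encode-POST step (the endpoint path, payload field names, and IDs below are hypothetical placeholders, not our exact values):

```python
# Minimal sketch of the capture-and-upload step in run.py.
import base64

import cv2
import requests

cap = cv2.VideoCapture(0)          # open the default webcam
ret, current_frame = cap.read()    # current_frame is a NumPy array (BGR)

if ret:
    # Encode the frame as JPEG, then as a base64 string so it can
    # travel inside a JSON payload over HTTP.
    ok, buffer = cv2.imencode(".jpg", current_frame)
    image_b64 = base64.b64encode(buffer.tobytes()).decode("utf-8")

    payload = {
        "session_id": 42,                 # hypothetical session ID
        "user_id": 7,                     # hypothetical user ID
        "distraction_type": "yawning",    # or "gazing", "sleep"
        "image": image_b64,
    }
    # Hypothetical endpoint handled by DetectionEventView on the backend
    requests.post("http://localhost:8000/api/detection-events/", json=payload)

cap.release()
```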

On the Django side, the DetectionEventView class handles these requests by decoding the base64 string back into binary image data. The DetectionEventSerializer validates the incoming data, and the image is saved under the server’s media path. The backend then generates a URL pointing to the saved image, which is returned in the response payload. To make the images accessible from my React frontend, I configured Django with a MEDIA_URL so the server can serve media files.
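A hedged sketch of what the view and serializer could look like, assuming the DetectionEvent model above and Django REST Framework; the exact fields and behavior in our codebase may differ:

```python
# settings.py (relevant lines, for serving snapshots in development):
#   MEDIA_URL = "/media/"
#   MEDIA_ROOT = BASE_DIR / "media"
import base64
import uuid

from django.core.files.base import ContentFile
from rest_framework import serializers, status
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import DetectionEvent  # hypothetical model from the sketch above

class DetectionEventSerializer(serializers.ModelSerializer):
    class Meta:
        model = DetectionEvent
        fields = ["id", "session_id", "user_id", "distraction_type",
                  "timestamp", "image"]

class DetectionEventView(APIView):
    def post(self, request):
        data = request.data.copy()
        # Decode the base64 string back into binary image data and wrap
        # it in a ContentFile so Django saves it under MEDIA_ROOT.
        image_b64 = data.pop("image", None)
        if image_b64:
            data["image"] = ContentFile(base64.b64decode(image_b64),
                                        name=f"{uuid.uuid4()}.jpg")
        serializer = DetectionEventSerializer(data=data)
        if serializer.is_valid():
            serializer.save()
            # serializer.data includes the URL of the saved image
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```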

Within the React frontend, I implemented a useEffect hook that periodically fetches the latest detection data from the Django backend. This data now includes URLs for the images linked to each detection event. When the React component’s state is updated with the new data, it triggers a re-render, displaying the images via <img> tags in a dynamically created table. To display the images correctly, I concatenate the base URL of my Django server with the relative URLs received from the backend. I then applied CSS to style the table, adjusting image sizing and the overall layout to provide a smooth and user-friendly interface.
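A simplified sketch of the polling hook and table rendering; the endpoint path, response shape, and component name are assumptions, and the CSS is omitted:

```tsx
// Poll the backend once per second and render detection events in a table.
import { useEffect, useState } from "react";

const BACKEND_URL = "http://localhost:8000"; // base URL of the Django server

interface DetectionEvent {
  id: number;
  timestamp: string;
  distraction_type: string;
  image: string; // relative media URL, e.g. /media/snapshots/abc.jpg
}

function RealTimeUpdates({ sessionId }: { sessionId: number }) {
  const [events, setEvents] = useState<DetectionEvent[]>([]);

  useEffect(() => {
    // Fetch the latest detection events every second.
    const interval = setInterval(async () => {
      const res = await fetch(
        `${BACKEND_URL}/api/detection-events/?session=${sessionId}`
      );
      if (res.ok) setEvents(await res.json());
    }, 1000);
    return () => clearInterval(interval); // stop polling on unmount
  }, [sessionId]);

  return (
    <table>
      <thead>
        <tr><th>Time</th><th>Distraction Type</th><th>Image</th></tr>
      </thead>
      <tbody>
        {events.map((e) => (
          <tr key={e.id}>
            <td>{new Date(e.timestamp).toLocaleTimeString()}</td>
            <td>{e.distraction_type}</td>
            {/* Concatenate the Django base URL with the relative media URL */}
            <td><img src={`${BACKEND_URL}${e.image}`} width={120} alt="snapshot" /></td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}

export default RealTimeUpdates;
```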

The Current Session Page looks like the following:

[Screenshot of the Current Session Page showing the detection table with Time, Distraction Type, and Image columns]

I made a lot of progress this week and am definitely on schedule. Next week, I will add data from phone detection and from surrounding distractions, and I will also work on creating some sample graphs with the data we have so far. If I have additional time, I will connect with Rohan and start looking into integrating the EEG data into the backend and frontend in real time.

 
