Cora’s Status Report for 4/26/2025

This week I gave the final presentation, so preparing for it took up a lot of my time. Additionally, I worked on the graph display for the neck angle data, which we were able to get working. Next week, I would like to test receiving inputs for both the blink rate and the neck angle at the same time and make sure both graphs update correctly simultaneously; we have tested them separately but not yet together. I want to get this done before the final video on Wednesday and the final demo on Thursday so that I can demonstrate this functionality then.
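
As a rough sketch of what that simultaneous-update test could look like (the server address, endpoint paths, and element id below are assumptions for illustration, not our actual code), the popup script could request both values together and forward them to the graph page in one message:

    // Hypothetical sketch: fetch the blink rate and neck angle together and
    // hand both to the sandboxed graph page so the two charts update in sync.
    const SERVER = "http://raspberrypi.local:5000"; // assumed Pi address/port

    async function refreshBothGraphs() {
      const [blinkRes, neckRes] = await Promise.all([
        fetch(`${SERVER}/blink_rate`), // assumed endpoint
        fetch(`${SERVER}/neck_angle`), // assumed endpoint
      ]);
      const blink = await blinkRes.json();
      const neck = await neckRes.json();

      // The chart code lives in the sandboxed iframe, so send it both values
      // in a single postMessage and let it redraw both graphs at once.
      const frame = document.getElementById("graph-frame"); // assumed iframe id
      frame.contentWindow.postMessage(
        { blinkRate: blink.value, neckAngle: neck.value },
        "*"
      );
    }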

I am on schedule this week. Next week I hope to test the browser extension with multiple inputs and make sure it still works properly, as well as polish its appearance a bit, most likely by adding CSS to the popup’s HTML.

Cora’s Status Report for 4/19/2025

This week I worked on debugging the security issues I was facing with the iframed graph display script for the blink rate/neck angle data. I was able to resolve these issues by adjusting the CSP (Content Security Policy) of the sandboxed HTML in the manifest.json file.

Here is an image displaying a sample graph that is iframed into the extension’s HTML. I was additionally able to get communication working between the extension code and the sandboxed code via postMessage, which allows me to update the graph with the updated blink rate/neck angle data I’m getting from the server.
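
To show roughly how the two pieces talk to each other (the element id, message shape, and charting library are assumptions, not necessarily our exact code), the extension side posts new data into the sandboxed iframe, and the sandboxed page listens for it and updates its chart:

    // popup.js (extension side): push new data into the sandboxed graph page.
    function sendToGraph(blinkRate) {
      const frame = document.getElementById("graph-frame"); // assumed iframe id
      frame.contentWindow.postMessage({ blinkRate }, "*");
    }

    // graph.js (sandboxed side, declared under "sandbox" in manifest.json):
    // listen for messages and append each value to the chart. "chart" is
    // assumed to be created elsewhere with a Chart.js-style library.
    window.addEventListener("message", (event) => {
      const { blinkRate } = event.data;
      chart.data.labels.push(new Date().toLocaleTimeString());
      chart.data.datasets[0].data.push(blinkRate);
      chart.update();
    });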

Additionally, this week we worked on our final presentation. My progress is on schedule, although I would like the graphs to update automatically (right now the user has to click a button to refresh them). I want to get this figured out before the final presentation next week so that we can show this functionality during the presentation.
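
One way this could work, as a sketch (updateGraphs() here is a hypothetical stand-in for whatever function the button currently triggers, and the 5-second period is a guess), is to refresh as soon as the popup opens and then keep polling while it stays open:

    // Sketch: trigger updates automatically instead of on a button click.
    document.addEventListener("DOMContentLoaded", () => {
      updateGraphs();                  // refresh as soon as the popup opens
      setInterval(updateGraphs, 5000); // then keep polling (assumed period)
    });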

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

I relied heavily on online documentation, both from Chrome for Developers and for JavaScript/HTML/CSS in general, during this project. I had some experience making extensions in the past, but this project required a lot of functionality I had not worked with before; for example, I had never tried to send requests to a server from a Chrome browser extension. I also found it helpful to look at GitHub repos for other extension projects, such as one I used to learn how to adjust the brightness of the user’s screen.

Team Status Report for 4/19/2025

The most significant risk with regard to the pressure sensing is sensor inaccuracy. Specifically, while getting the sensors to work for a single person isn’t difficult, it is hard to adjust the parameters so that they are universally applicable. To mitigate this risk, Kaitlyn is tuning the sensors and updating the algorithm so that it is more general and works for as many people as possible.

The most significant risk for the neck angle sensing is that the gyroscope has too much drift and will affect the results over long periods of use, such as 1+ hours. To mitigate this risk, Lilly is doing additional calibration and fine-tuning of the Kalman filter she is using to process the neck angle data.

For the browser extension, the most significant risk right now is getting the displays to work seamlessly without the user having to click buttons to update the data in the graphs. Currently, the user has to click a button to update the blink rate graph, which fits naturally with Chrome extensions being event-driven. Cora is mitigating this risk by making the extension automatically send update requests when the user opens it, so the user doesn’t need to do this themselves.

There have not been changes to our design, nor are there updates to our schedule. We are trying to do as much testing as possible before the final presentation next week so that, at the very least, we can show our product’s core functionality during the presentation.

Cora’s Status Report for 4/12/2025

Last week I worked on getting the blink rate integrated with the server and the browser extension. I’m running the Python script locally; it uses the OpenCV library to collect the blink rate. This data is sent to the server via an HTTP request, and the server in turn shares it with the browser extension through another HTTP request.

This is a picture of the Python script running (the user will not see this in our final product; it will run in the background). Note that the blink rate is at 2.

This is a picture of the browser extension. After requesting the blink rate from the server via the “Get Updated Blink Count Value” button, the extension’s HTML was updated and “2” was displayed.
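
As a rough sketch of the button handler (the element ids, server address, and endpoint are assumptions for illustration), the popup asks the server for the latest blink count and writes it into the popup’s HTML:

    // Sketch: fetch the latest blink count from the Ras Pi server and show it.
    document.getElementById("get-blink-btn").addEventListener("click", async () => {
      const res = await fetch("http://raspberrypi.local:5000/blink_rate"); // assumed endpoint
      const data = await res.json(); // e.g. { "value": 2 }
      document.getElementById("blink-count").textContent = data.value;
    });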

This week I worked with Lilly on further integrating the neck angle with the browser extension. Additionally, I am currently working on getting the graph display for the blink rate working. I’m running into some issues with using third-party code, which is necessary in order to draw the graphs. The solution I’m debugging at the moment is to sandbox the JavaScript that uses the third-party library so that we can use this code without breaking Google Chrome’s strict security policies regarding external code.

I am on track this week. I think that after getting the graph UI working, the browser extension will be nearly finished as far as basic functionality goes, and the rest will be small tweaks and visual polish. I’m hoping to reuse the graph code to display the neck angle as well, so once I get it figured out for the blink rate this shouldn’t be an issue.

Cora’s Status Report for 3/29/2025

This week I worked on the browser extension’s UI, continued integration with Kaitlyn (pressure sensors), and started integration with Lilly (neck angle).

For the browser extension’s UI, I implemented the CSS for displaying the pressure sensor data. The four ovals correspond to the four pressure sensors that we will have on the seat. The server sends the browser extension four binary values; when a value is 1, it indicates that the user is leaning too much in that area.

This is what is displayed when the user is sitting in the correct posture (the baseline they set, which is their goal posture).

This is just an example of what the circles look like when filled in. When a circle turns red, it means that the user is leaning too much in that direction.
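
A minimal sketch of how the popup could map those four binary values onto the ovals (the element ids, class name, and payload shape are assumptions, not the exact implementation):

    // Sketch: one oval per pressure sensor; a 1 turns that oval red.
    const OVAL_IDS = ["oval-front", "oval-back", "oval-left", "oval-right"]; // assumed ids

    function showPressure(values) { // e.g. values = [0, 1, 0, 0] from the server
      values.forEach((v, i) => {
        const oval = document.getElementById(OVAL_IDS[i]);
        oval.classList.toggle("leaning", v === 1); // .leaning styles the oval red in CSS
      });
    }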

Additionally, this week I continued working on integration with Kaitlyn and started integration with Lilly. For the neck sensor data, we were able to get the angle flowing from the Python code that collects the sensor data, to the server, and then to the browser extension.

I’m on track for this week. For the interim demo next week, I’d like to make sure that everything works seamlessly between the CV and the browser extension, and to demonstrate that the extension can receive data from both the pressure sensors and the neck angle sensor.

Team Status Report for 3/22/2025

The most significant risks currently jeopardizing our project are, for the neck angle sensing, gyroscope drift affecting our results over time and issues with the Ras Pi’s Bluetooth. To prevent these risks from affecting our project, Lilly is testing to ensure that the data from the gyroscope is correct and is currently debugging the Bluetooth.

For the pressure sensing, the most significant risks are variability in the sensors’ data range and general difficulty decoding the raw data we are receiving. Kaitlyn is currently testing the pressure sensors to ensure that the data is consistently correct.

For the browser extension, the most significant risks are latency and syncing issues between the data being received from different sources. To avoid syncing issues, I’m hoping to reduce the latency of the incoming requests by doing the data processing in the script on the Ras Pi and sending only the final result over HTTP to the browser extension, so we aren’t sending huge amounts of data and causing delays.
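
To illustrate that last point (the endpoint and field names below are hypothetical), the idea is that the extension only ever requests a tiny summary object while the raw sensor samples stay on the Ras Pi:

    // Sketch: the Ras Pi does the heavy processing; the extension fetches a summary.
    async function getSummary() {
      const res = await fetch("http://raspberrypi.local:5000/summary"); // assumed endpoint
      // Illustrative shape: { blinkRate: 12, neckAngle: -8, pressure: [0, 0, 1, 0] }
      return res.json();
    }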

There have not been significant changes in the design, although this week we did decide on what kind of mat and visor we want. For the mat, we are going with clear, flexible plastic, like a shower curtain, and are currently just taping the foot pressure sensors to it for testing purposes. We chose this material for the mat since it is flexible and comfortable and will not be so thick as to affect the pressure measurements. For the visor, we are going with a simple visor but ensuring it is large enough and adjustable to account for a diverse range of people using our product.

There have been no changes to the schedule this week.

Cora’s Status Report for 3/22/2025

This week I focused on doing more integration between the Ras Pi server and the browser extension, and also worked on the screen dimming part of the extension.

Kaitlyn and I worked on sending actual pressure sensor data across the network instead of just a simple request. We wanted to verify that the data the browser extension received actually changed when changes were made to the pressure sensors, and that the latency wasn’t too extreme. We did this by having the script that collects the data on the Ras Pi send it to the server; the server then sends this data to the browser extension once it receives a request to do so. The browser extension currently makes this request when I click a button, but this is just for testing purposes and will happen behind the scenes in our finished product.

This image shows the pressure sensor data that the browser extension received back after I clicked a button to send a request. Note this is just raw data for now.

Additionally, this week I worked on the screen dimming part of the browser extension. Specifically, I worked on getting the dimming working for multiple tabs, since this will be necessary for the final product. Previously, I’d have to open the extension in every tab I wanted to dim, but now I can just do it in one tab and the effect applies to all open tabs.

  

This is the before.

  

This is the after. Note that I’m adjusting the brightness via a slider in the extension right now but once again this is just for testing purposes and will be done behind the scenes later.
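
As a rough sketch of how the multi-tab dimming can work (the element ids and the overlay approach are assumptions about the implementation, and this requires the “tabs” permission and a content script in the manifest), the popup broadcasts the slider value to every open tab and each tab’s content script adjusts a full-page overlay:

    // popup.js: send the dim level to all open tabs.
    document.getElementById("dim-slider").addEventListener("input", async (e) => {
      const level = Number(e.target.value) / 100; // 0 = no dimming, 1 = fully dark
      const tabs = await chrome.tabs.query({});
      for (const tab of tabs) {
        // Tabs without the content script (e.g. chrome:// pages) just ignore this.
        chrome.tabs.sendMessage(tab.id, { dimLevel: level }).catch(() => {});
      }
    });

    // content script: keep a black overlay whose opacity is the dim level.
    let overlay;
    chrome.runtime.onMessage.addListener((msg) => {
      if (msg.dimLevel === undefined) return;
      if (!overlay) {
        overlay = document.createElement("div");
        overlay.style.cssText =
          "position:fixed;inset:0;background:#000;pointer-events:none;z-index:2147483647;";
        document.documentElement.appendChild(overlay);
      }
      overlay.style.opacity = msg.dimLevel;
    });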

I’m on track this week. Next week I intend to get the dimming to respond to the ambient brightness of the room (dimming when the room is dark) instead of having to adjust it manually.

Cora’s Status Report for 3/15/2025

This week I worked on integration between the browser extension and the server hosted by the Ras Pi. I was successfully able to send a simple HTTP request and receive a response. The request is triggered by clicking a button that I added to the extension for demonstration purposes for now; later this will happen behind the scenes.

This picture is before clicking the button and sending the request. I’m SSHed into the Ras Pi in the command prompt on the right.

This picture is after clicking the button. The server prints to the console when it receives the request, and I inserted a “Hello!” into the extension’s HTML when it receives a response back from the server.

Additionally, this week I worked on the screen dimming mechanism of the extension. When the camera detects a dark environment, this will trigger the screen to automatically dim to reduce eye strain. I am on track this week. Next week I’d like to finish the screen dimming and finish incorporating cookies into the CV web app so that it can communicate with the extension.

Cora’s Status Report for 3/8/2025

This week I focused on getting the CV working with the web app. I was able to get it working using TensorFlow rather than OpenCV, since there was a convenient npm package I could download that had the functionality we were looking for. Currently, the web app is run locally and, when run, counts the number of blinks the user performs in a 1-minute interval, which I think is a good demonstration of the functionality we will need for our final product.
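
The counting logic itself is fairly simple; here is a sketch of the idea (getEyeOpenness() is a hypothetical helper standing in for the npm package’s landmark output, and the threshold is a guess): a blink is counted whenever the eyes close and then reopen, and the count is reported and reset every minute.

    const CLOSED_THRESHOLD = 0.2; // assumed cutoff for "eyes closed"
    let blinkCount = 0;
    let eyesWereClosed = false;

    // Call this once per video frame (e.g. from requestAnimationFrame).
    async function checkFrame(video) {
      const openness = await getEyeOpenness(video); // hypothetical: 0 = shut, 1 = wide open
      if (openness < CLOSED_THRESHOLD) {
        eyesWereClosed = true;
      } else if (eyesWereClosed) {
        blinkCount++; // eyes reopened after being closed -> one blink
        eyesWereClosed = false;
      }
    }

    // Report blinks per minute, then reset the window.
    setInterval(() => {
      console.log(`Blinks in the last minute: ${blinkCount}`);
      blinkCount = 0;
    }, 60000);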

Next week, I would like to get the web app communicating with the browser extension so that I can display the blink count in the extension instead of just printing it to the console. I would also like to design and implement the screen dimming mechanism of the extension, which will automatically dim the tab the user is currently on if they are in a dark environment. My primary concern here is how I will get the ambient brightness data from the webcam and how accurate that data will be. I’m on time this week since I was able to figure out the CV, and I anticipate the screen dimming will not take long. Looking forward, I hope to get the screen dimming done next week, which leaves ample time for the integration with the Ras Pi server, which I anticipate will take the most time.
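
One plausible approach to the ambient brightness question, sketched below (the canvas size and the 0.3 “dark” threshold are guesses), is to draw a webcam frame onto a small canvas, average the pixel luminance, and treat a low average as a dark room:

    // Sketch: estimate ambient brightness from a single webcam frame.
    function estimateBrightness(video) {
      const canvas = document.createElement("canvas");
      canvas.width = 64;  // downsampled: a rough average is all we need
      canvas.height = 48;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

      const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
      let total = 0;
      for (let i = 0; i < data.length; i += 4) {
        // Standard luma weighting of the R, G, B channels.
        total += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      }
      return total / (data.length / 4) / 255; // 0 = black frame, 1 = white frame
    }

    // Example: trigger dimming if the room looks dark.
    // if (estimateBrightness(videoElement) < 0.3) { /* dim the tab */ }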

Cora’s Status Report for 2/22/2025

This week I attempted to integrate the blink detection algorithm with the browser extension. Last week I was able to get webcam access in the browser extension, with the intention of simply integrating the blink detection script into the extension’s JavaScript. I ran into some issues trying to make this work; specifically, Chrome no longer allows remote scripts to be executed as of Manifest V3. I proceeded to install opencv.js locally and upload it with the browser extension files, but I still ran into issues since opencv.js uses eval, which Chrome prevents from running even when it is called from local code. So I’ve abandoned the idea of running the blink detection algorithm via the extension. Instead, I’ve decided to get it up and working in a simple web app; once that is working properly, the web app will store the data from the blink detection algorithm in a cookie in the user’s browser, and the extension will read the cookie and display the data.
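
As a sketch of that cookie hand-off (the cookie name, web app origin, and element id are assumptions, and the extension would need the “cookies” permission plus host permission for the web app’s origin in manifest.json):

    // Web app side: store the latest count from the blink detection script.
    document.cookie = `blinkCount=${blinkCount}; path=/; max-age=3600`;

    // Extension side: read the cookie and show the value in the popup.
    async function readBlinkCookie() {
      const cookie = await chrome.cookies.get({
        url: "http://localhost:3000", // assumed web app origin
        name: "blinkCount",
      });
      document.getElementById("blink-count").textContent = cookie ? cookie.value : "n/a";
    }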

I definitely wanted to get the blink detection algorithm done this week, but I don’t think this puts me behind quite yet; if I can’t figure it out next week, I will consider myself behind. Ideally, I will also get the automated screen brightness adjustment feature of the extension done next week, since I should not run into security issues with it because it does not require installing a library like OpenCV.