Olivia’s Status Report for 4/27

This week, I added more basic functionality to the GUI to improve usability, such as buttons for returning to previous pages or the main menu, buttons for closing the adjustment or calibration process, and bug fixes for the new pages. I also integrated the focus detection and stretch reminder features into the GUI so that these trackers can be activated and periodically remind users to stretch or take a break. I used multithreading for these features so that each tracker can be started and run outside of the main tkinter thread, periodically checking the user and doing computations on their landmark positions. I implemented notifications using tkinter’s messagebox library: a pop-up appears on the user’s screen after a set time has passed in which they have been looking at their screen or sitting for too long.
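As an illustration of the threading pattern described above (the class and parameter names here are placeholders, not the project’s actual code), the tracker can run on a daemon thread and hand the messagebox call back to the main tkinter thread:

```python
import threading
import time

class ReminderTracker:
    """Counts continuous 'focused' time and flags when a break is due."""

    def __init__(self, threshold_s=20 * 60):
        self.threshold_s = threshold_s
        self.focused_s = 0.0

    def update(self, user_focused, dt_s):
        # Accumulate focused time; a real break resets the counter.
        if user_focused:
            self.focused_s += dt_s
        else:
            self.focused_s = 0.0
        return self.focused_s >= self.threshold_s

def run_tracker(root, tracker, check_focus, interval_s=1.0):
    """Run the periodic check off the main thread; root is the Tk window."""
    from tkinter import messagebox  # imported lazily so logic is testable headless

    def loop():
        while True:
            time.sleep(interval_s)
            if tracker.update(check_focus(), interval_s):
                # tkinter is not thread-safe, so schedule the popup
                # back onto the main loop with root.after(...)
                root.after(0, lambda: messagebox.showinfo(
                    "Break time", "You've been at the screen a while!"))
                tracker.focused_s = 0.0

    threading.Thread(target=loop, daemon=True).start()
```

Scheduling the popup through `root.after` keeps every tkinter call on the main thread while the timing loop stays in the background.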

I was also able to do more tests of the adjustment algorithm using the new motor for height adjustment, and found that the intervals and delays used will need to be tweaked since the new motor’s max speed is much lower than the previous motor’s.

I am a bit behind schedule since I am still testing the algorithm with the fully integrated system. Next week, I plan to continue fine-tuning the process as well as refining the GUI for usability.

Olivia’s Status Report for 4/20

This week I added the Manual Adjustment page to the GUI which will allow users to adjust both angle and height of the stand using button clicks that send serial commands to the Arduino similar to the system used for automatic adjustment.
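A minimal sketch of the button-to-serial path (the single-byte command characters and the port name are assumptions for illustration, not our actual protocol):

```python
# Hypothetical command mapping for the manual adjustment buttons.
COMMANDS = {
    "height_up": b"U",
    "height_down": b"D",
    "angle_up": b"A",
    "angle_down": b"B",
}

def encode_command(action):
    """Translate a GUI button action into the byte sent to the Arduino."""
    try:
        return COMMANDS[action]
    except KeyError:
        raise ValueError(f"unknown action: {action}")

def send_command(ser, action):
    """ser is a pyserial connection, e.g. serial.Serial('/dev/ttyACM0', 9600);
    each button press sends one command byte."""
    ser.write(encode_command(action))
```

Each tkinter button would simply call `send_command(ser, "height_up")` (etc.) in its command callback, mirroring the automatic adjustment path.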

I also created a focus tracking program using OpenCV that tracks the direction in which a user is looking to determine whether they are looking at their screen. The program tracks variations in both the horizontal and vertical distances between key facial landmarks to periodically check the user’s head position; if they have been consistently looking forward at the screen for 20 minutes, not counting small breaks or movements, they receive a notification in the GUI reminding them to take a break to prevent eye strain. I also wrote a similar, simpler program that periodically checks whether a face is detected in frame, which will be used to send GUI notifications when a user should get up and stretch after long periods of sitting in front of their screen.
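A simplified version of the landmark-distance check might look like the following (the landmark keys and tolerance are illustrative, not the real program’s):

```python
def looking_at_screen(landmarks, ref, tol=0.15):
    """Compare current vertical/horizontal landmark spans against the
    calibration reference spans; both within tolerance -> facing screen.
    landmarks/ref: dicts of (x, y) points keyed by illustrative names."""
    def vspan(p):
        return abs(p["nose_tip"][1] - p["brow"][1])

    def hspan(p):
        return abs(p["right_eye"][0] - p["left_eye"][0])

    v_ok = abs(vspan(landmarks) - vspan(ref)) <= tol * vspan(ref)
    h_ok = abs(hspan(landmarks) - hspan(ref)) <= tol * hspan(ref)
    return v_ok and h_ok
```

The periodic checker would call this each interval and only count time toward the 20-minute limit while it returns True.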

I was also able to begin testing the automated adjustment process using the linear actuators on our stand and adjusting the height of the stand manually. I tested the angle adjustment by opening my laptop to several different angles, undergoing calibration, running the angle adjustment code in the integrated subsystem, and tuning the time needed between feedback checks in OpenCV/Python. I found that 250 ms is the smallest delay that can be used between checking landmark distances before the angle adjustment becomes noticeably inaccurate.

I am currently on track, and next week, I plan to integrate the focus tracking features into the GUI. After we are able to supply enough torque for the platform jack’s screw rotation, I will more thoroughly test the height adjustment with the automated system.

New Tools and Knowledge:

Over the course of this project, I learned how to use OpenCV and other computer vision tools such as trained models for object classification and detection. To pick up the basics of CV programs, I relied on online tutorials, videos, and the OpenCV documentation, and I wrote several smaller test programs to experiment with new features and figure out how I could use them in our project.

Olivia’s Status Report for 4/6

This week I continued working on the GUI and added additional pages and features. I added a new page that allows a user to choose whether to calibrate their stand for the first time or to adjust the stand based on prior calibration data. This way, users can skip the calibration after they’ve gone through the process once. I was able to integrate the height and angle adjustment program with the Arduino via serial connection, and we are able to automate the motor and linear actuator movement.

I have done some testing with the height and angle adjustment to make sure that the algorithm we use can determine the correct height and angle for a user when starting from different heights and angles. I will continue testing this when we integrate the software with the stand in order to tune the time between feedback checks and the speed at which we turn the motors, so that we reach the ideal height and angle within our desired time limit.

I am currently on schedule. Next week I plan to add more pages to the GUI to display posture tracking data, as well as add eye tracking to the software so we can notify users when they have been focusing on their screen for long periods of time.

Olivia’s Status Report for 3/30

This week I wrote the Python code that would carry out the height and angle determination of our system. The process will first need a calibration stage in which the user’s landmark data will be captured when their laptop is at the ideal height. Then, any time the user uses the device again, this data will be used as reference for determining whether to increase or decrease the height and/or angle of the laptop. The program uses an iterative process of incrementally increasing or decreasing the angle of the laptop until the vertical distance between landmarks is approximately the same as the reference distance from calibration, and then switching to adjusting the height of the laptop until the y-coordinate of the landmark at the center of the face is within threshold of the position determined at calibration. This method proved to be pretty accurate even when starting at different heights and angles.
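The iterative process can be sketched as a small control loop (all names, step directions, and thresholds here are illustrative stand-ins for the real OpenCV/serial code):

```python
def adjust(read_vdist, read_center_y, step_angle, step_height,
           ref_vdist, ref_y, tol_v=2.0, tol_y=5.0, max_iters=200):
    """First tilt until the vertical landmark distance matches the
    calibration reference, then raise/lower until the face-center
    y-coordinate is within threshold. The callables stand in for the
    camera-feedback and motor-command plumbing."""
    for _ in range(max_iters):
        v = read_vdist()
        if abs(v - ref_vdist) > tol_v:
            step_angle(+1 if v < ref_vdist else -1)
            continue
        y = read_center_y()
        if abs(y - ref_y) > tol_y:
            # y grows downward in image coordinates (sign is an assumption)
            step_height(+1 if y > ref_y else -1)
            continue
        return True  # both measurements within threshold
    return False
```

Because each iteration re-reads the landmarks after a small step, the loop converges regardless of the starting height and angle, which matches what I saw in testing.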

I am also working on a method that uses the positions of key landmarks in the frame’s coordinate system to determine the height and angle adjustments needed. With this method, we wouldn’t need a calibration stage for the user, which could be more convenient and faster, but it has been less accurate in my testing.

I am currently on schedule. Next week, I will continue working on this software and plan to integrate the height and angle adjustment code with the Arduino firmware so it can move the motor and linear actuators according to the adjustment needed.

Olivia’s Status Report for 3/23

This week I continued working on our automatic height and angle adjustment software. After discussing with the team, we decided to use multiple iterations of height and angle adjustment to incrementally reach the correct position rather than calculating the exact height and angle needed for adjustment. The method I am considering uses both the vertical distance between landmarks and the y-coordinate of the facial landmark in the middle of the user’s face to determine whether the screen is positioned at the correct height and angle. Using these two measurements, we can alternate between incrementing the height or angle by an interval and re-checking until we reach an acceptable threshold of vertical landmark distance and face position on the screen.

I also worked on creating the GUI for our system and added the front page with buttons to choose between opening a window to use the adjustment functionality or to view the posture tracking data. When a user clicks the adjustment button, the calibration and height determination program will run.

I am currently on schedule and am aiming to finish the iterative adjustment cycle and have it coded and tested by next week.

Olivia’s Status Report for 3/16

This week, I worked on a function that gets the real-world distance a laptop would need to rise based on a distance in pixels. To do this, I used the camera calibration procedure described in the OpenCV documentation to find my camera’s focal length and used it in my calculations, assuming the distance from the screen is constant.
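The conversion follows the pinhole camera model: a displacement of d pixels on the image plane corresponds to d * Z / f in the scene, where Z is the (assumed constant) user-to-camera distance and f is the focal length in pixels from calibration. A sketch:

```python
def pixels_to_mm(pixel_disp, focal_px, distance_mm):
    """Pinhole-camera conversion from an on-screen pixel displacement
    to a real-world displacement, assuming the user sits at a fixed
    distance from the camera. focal_px comes from OpenCV camera
    calibration (cv2.calibrateCamera)."""
    return pixel_disp * distance_mm / focal_px
```

For example, with a 1000 px focal length and a user 500 mm away, a 100 px shift corresponds to 50 mm of real movement.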

I also integrated the calibration setup for gathering facial landmarks with the current landmark distance calculation, so that we use the calibration data points to calculate the reference distance and compare it against the current landmarks. I’m able to detect whether the camera is lower than, higher than, or within threshold of the height determined at calibration.

However, when neither the height nor the pitch of the laptop is known, it is difficult to determine whether the correct course of action is to raise the stand or to angle the laptop using the linear actuators, since either a change in height or a change in angle can produce similar changes in vertical landmark distance. To solve this, I’m working on using a few vectors between landmarks to measure the skew of certain sections of the user’s face, which should indicate whether the camera is offset in angle or in height.
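One way this skew idea could be sketched (the landmark keys, spans, and tolerance below are illustrative, not the actual measurements I’m using): a pure height change translates all landmarks together, leaving the ratio of upper-face to lower-face vertical spans unchanged, while a pitch change foreshortens the two spans unevenly.

```python
def likely_pitch_change(cur, ref, tol=0.1):
    """Heuristic: if the upper-face span (brow->nose) to lower-face
    span (nose->chin) ratio departs from the calibration ratio, the
    difference is likely an angle (pitch) change rather than a pure
    height change. cur/ref are dicts of (x, y) landmark points."""
    def ratio(p):
        upper = abs(p["nose"][1] - p["brow"][1])
        lower = abs(p["chin"][1] - p["nose"][1])
        return upper / lower

    return abs(ratio(cur) - ratio(ref)) > tol * ratio(ref)
```

When this returns False, the system would attribute the landmark shift to a height difference and drive the height motor instead of the actuators.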

I am currently on schedule, and next week I will continue working on this height vs. angle determination. I also hope to begin creating a starter GUI to display basic functionalities such as calibration and setup.

Olivia’s Status Report for 3/9

This week, I have been working on the Design Review, specifically the Abstract, Introduction, Test, Verification, and Validation, and Project Management sections.

I also wrote a script that gathers all the landmark coordinates of a single frame and writes that information to a text file. After our meeting last week, we decided to include two calibration stages: one for gathering reference landmarks and one for gathering posture information. I plan to use this script for the former by saving the user’s facial landmark data from the calibration stage as a reference for future use when automatically adjusting the stand’s height.
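The save/load round trip might look like this (the one-line-per-landmark format and names are illustrative):

```python
def save_landmarks(landmarks, path):
    """Write one 'index x y' line per landmark so the calibration
    frame can be reloaded later as the adjustment reference."""
    with open(path, "w") as f:
        for i, (x, y) in enumerate(landmarks):
            f.write(f"{i} {x} {y}\n")

def load_landmarks(path):
    """Read the file back into a dict of index -> (x, y)."""
    pts = {}
    with open(path) as f:
        for line in f:
            i, x, y = line.split()
            pts[int(i)] = (float(x), float(y))
    return pts
```

The adjustment code can then call `load_landmarks` at startup instead of requiring a fresh calibration each session.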

I am currently on schedule and plan to integrate these distance calculations and reference checks with motor movement in the upcoming weeks.

Olivia’s Status Report for 2/24

This week, I worked on the OpenCV facial landmark detection for the stand height automation. I wrote a Python program to determine the distance between any two specified landmarks and tried many different face angles to see which landmarks would make good references. I found that the distance between the eyebrows and the tip of the nose is one good pair of landmarks for determining head tilt because it changes significantly and consistently when a user is looking up, straight on, or down.
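A sketch of the distance computation and a straight-on check (the tolerance here is illustrative, not the tuned project value):

```python
import math

def landmark_distance(p1, p2):
    """Euclidean distance between two (x, y) facial landmarks."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def is_straight_on(brow, nose_tip, ref_dist, tol=0.15):
    """True when the brow-to-nose-tip distance is close to the
    reference distance measured while looking straight at the screen;
    tilting the head foreshortens this projected distance."""
    d = landmark_distance(brow, nose_tip)
    return abs(d - ref_dist) <= tol * ref_dist
```

The same `landmark_distance` helper works for any landmark pair, which made it easy to compare candidate reference pairs across face angles.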

I also worked on the Design slides and practiced for the presentation this week.

By next week, I’d like to have determined the acceptable thresholds for the distances between landmarks that signify the ideal height has been reached, and to write a function that decides this.

Olivia’s Status Report for 2/17

We have changed our method of determining the height to raise the stand from detecting eye level to detecting facial landmarks and the distances between points.

So, I’ve spent this week getting more experience with facial landmark detection in OpenCV by installing dlib and using sample code to run the pre-trained detector and mark coordinates on my face.

I have also downloaded OpenPose and looked into the documentation on how to use keypoints and distances between points to detect good posture versus poor posture. I have also been working on a mock up of what the user interface will look like.

Olivia’s Status Report for 2/10

I have been doing more research into OpenCV and how it can be used to locate the eyes on a person’s face. I looked into some open-source projects that use eye tracking to detect whether eyes are open, and I think these resources will be useful later in the project when we incorporate focus detection into the stand’s features.

At the moment, I am more focused on representing the eyes as coordinates on the screen and on how I can use this information to determine the ideal height to raise the stand. From empirical research into ergonomic screen heights, I think that ensuring the approximate y-coordinates of the user’s eyes fall in the top third of the screen’s height should give the optimal height to reach. However, this also depends on the distance between the user and their laptop, so I think we will need to incorporate a distance sensor to guide the user to place their laptop at an ergonomic distance of about 30 inches away.
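In image coordinates, y grows downward, so “eyes in the top third” reduces to a simple inequality. A sketch, using the camera frame as a proxy for the screen (an assumption on my part):

```python
def eyes_in_top_third(eye_y, frame_height):
    """Ergonomic heuristic from above: the user's eye level should
    project into the top third of the frame. Since y increases
    downward, 'top third' means y below one third of the height."""
    return eye_y < frame_height / 3
```

If this returns False, the stand would need to raise (or the user reposition) until the detected eye coordinates satisfy the check.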

However, I would like more certainty that these are good metrics for determining the screen’s ideal height, so I will continue to research this as well as OpenCV eye detection. I am currently on schedule and would like to have these height determination methods solidified by next week.