Team Status Report for 4/30

We continued to perform user studies this week. In particular, we tested the speed of the motor with the projector at a larger distance from the wall (~4.5 meters). Previously, we had only been testing with the projector about 1.5 meters from the wall. We found that the speed we had tuned for 1.5 meters was also very satisfactory at 4.5 meters. However, through our testing, we realized the experience was more pleasant when the motor moved slightly slower over smaller distances. Now, the motor speed depends on the distance it is going to travel: faster for larger distances, slower for smaller ones. We are still tuning this, but the motion has become quite smooth, and we are all very happy about it!
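
For reference, the speed selection itself is conceptually simple. Below is a minimal Python sketch of the idea; the constants and the linear ramp are illustrative placeholders, not the values we actually tuned.

    # Sketch of distance-dependent speed selection. MIN_SPEED, MAX_SPEED,
    # and MAX_PAN_DEGREES are placeholder values, not our tuned constants.
    MIN_SPEED = 20.0        # degrees/second for the smallest pans
    MAX_SPEED = 80.0        # degrees/second for the largest pans
    MAX_PAN_DEGREES = 90.0  # largest pan we expect to command

    def motor_speed(pan_degrees: float) -> float:
        """Scale motor speed with the distance the projection must travel."""
        fraction = min(abs(pan_degrees) / MAX_PAN_DEGREES, 1.0)
        return MIN_SPEED + fraction * (MAX_SPEED - MIN_SPEED)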

Over the next week, we have a poster, a video, the final paper, and a demo to prepare. There is still a lot to get done, but we feel on schedule and are proud and excited to share our project with everyone. The hardest part of the coming week may be preparing for our project demonstration; we need to make it engaging and show all the hard work we have put into this project. We are also still integrating the day/night vision camera, making the project more aesthetically pleasing, and fine-tuning the overall system.

Olivia’s Status Report for 4/30

Since our final presentation, I have been continuing to test the overall system and make fine adjustments. For instance, today I worked with the team to keep the system from moving when the user makes small head movements, mimicking the head motion of a person reading something on the projection (the general idea is sketched below). We successfully completed that. I also helped run user studies to fine-tune the speed of the motor. Now, the motor moves faster for larger panning distances and slower for smaller ones. The motion gets smoother every time we work on it, which is great.
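
We have not reproduced our exact fix here; one common approach is a dead band, where pose changes below some threshold are ignored so that reading motions do not move the projection. A minimal sketch, with an assumed 2-degree threshold for illustration:

    # Minimal dead-band sketch; the threshold value is illustrative only.
    DEAD_BAND_DEGREES = 2.0

    def filtered_target(current_angle: float, new_angle: float) -> float:
        """Ignore head movements smaller than the dead band."""
        if abs(new_angle - current_angle) < DEAD_BAND_DEGREES:
            return current_angle  # hold position; likely just reading motion
        return new_angle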

I am also still working to integrate the day/night vision camera into the overall system. This camera is much more sensitive than the laptop camera we have been using for the user studies. (The laptop camera works quite well.) I hope to have this camera working well with the system by tomorrow. The horizontal motion is working nicely; it is the vertical motion that causes the most trouble.

The team and I also planned out everything we want done this week. We finished a rough draft of the poster today, plan to work on the video on Monday, and will have our demo presentation polished by Wednesday. It is bittersweet seeing this project come to a close. I feel on schedule and am excited for this week.

Team Status Report for 4/23

We have made a lot of progress over the past week. Our entire system is integrated, and we have been performing a lot of testing. More specifically, we began a user survey to find the “magic curve” for moving the projection from point A to point B. The survey has each user experience 6 different combinations of speed and movement. We are continuing the survey into tomorrow and hope to have a finalized “magic curve” in time for our final presentation tomorrow.

While running our entire system this week, we have found quite a few bugs and ideas for improvement. This intensive testing has been extremely beneficial for ensuring we build the system to the best of our ability. For instance, one issue we found was that the motor was a bit jerky as it moved from point A to point B. A great deal of time has been spent making this movement much smoother.

We also got a working motor and added vertical movement. Currently, the motorized projector responds to the user’s changes in pitch and yaw, which is great! The vertical movement of the projection is definitely more sensitive than the horizontal movement, so we have been taking extra care when working in this direction. Another development is with the calibration process. We found the calibration a bit difficult to work with, so we are reworking it to be more user-friendly. For instance, one small improvement is that we added sounds to alert the user when the calibration process changes states.

We still have a lot of progress to make but are extremely excited to see our system working and to gather user feedback!

Olivia’s Status Report for 4/23

Over the past week, I have spent most of my time testing the overall system. This involves running the calibration and translation phases. My main job has been to find issues with the system through extensive testing and to brainstorm ways to fix them. Since we just got the new motor (which powers the vertical movement), I have spent quite a bit of time testing it and tuning values to ensure it moves as smoothly as possible.

I also helped develop the user survey for finding the “magic curve” and have been helping to run these tests. Each session takes about 30 minutes per person: we talk through the system with them, test out the 9 different speed combinations, and collect their feedback. Another action item this week was adding the new camera (with day and night vision) into the system. This camera is more sensitive than the laptop camera we have been testing with, so I have been trying out values and altering the pitch and yaw calculations to get this setup working.

Another improvement is that I added sounds to the calibration phase to make it more user-friendly. I was originally working on a graphic to display on the projection while the user was undergoing calibration, but I found this too difficult in the given time frame because of threading/blocking issues. Now, the system dings every time the lock gesture is detected, alerting the user to look at the projection (a sketch of generating such a ding is below). I feel on schedule and have enough time to keep testing the system and increasing its robustness and accuracy.
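
For reference, a ding like this can be generated in a few lines of Python. The sketch below synthesizes a short sine tone with numpy and plays it with simpleaudio; our actual audio code may differ, and the frequency and duration are illustrative.

    # Hedged sketch of the calibration "ding" using numpy + simpleaudio.
    import numpy as np
    import simpleaudio as sa

    def ding(freq_hz: float = 880.0, seconds: float = 0.15, rate: int = 44100) -> None:
        """Play a short sine-wave beep to mark a calibration state change."""
        t = np.linspace(0.0, seconds, int(rate * seconds), endpoint=False)
        tone = np.sin(2 * np.pi * freq_hz * t)
        fade = np.linspace(1.0, 0.0, tone.size)  # fade out to avoid a click
        audio = (tone * fade * 0.5 * 32767).astype(np.int16)
        sa.play_buffer(audio, 1, 2, rate).wait_done()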

Olivia’s Status Report for 4/16

This past week, I have been working on testing the overall system. This includes running the calibration system and then running the program that follows head movement. Testing the overall system helped us find some bugs and gave us intuition for how to fix them. In particular, we found the “user offset” and “user-to-wall distance” to be incorrect at times. We are currently working on resolving this issue.

While testing the overall system, we found that the motor moved in an annoying fashion when we moved our heads only slightly. This makes sense, since we had not yet added the ‘threshold’ that prevents movement on small head motions. I am currently fixing the CV program to handle these small movements by averaging the past 5 head pose estimations (sketched below). I also got a new camera to add into the system that should work in dim lighting. My goal for the next week is to continue testing the overall system and tweaking it to ensure a clean user experience. We also plan to edit and run the user survey next weekend, once the new motor is delivered.
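
The averaging itself is straightforward. A minimal sketch with a 5-sample window, matching the window size mentioned above (the class and method names are just for illustration):

    # Sliding-window average over the last 5 head pose estimates.
    from collections import deque

    class PoseSmoother:
        def __init__(self, window: int = 5):
            self.yaws = deque(maxlen=window)
            self.pitches = deque(maxlen=window)

        def update(self, yaw: float, pitch: float):
            """Add a new estimate and return the smoothed (yaw, pitch)."""
            self.yaws.append(yaw)
            self.pitches.append(pitch)
            return (sum(self.yaws) / len(self.yaws),
                    sum(self.pitches) / len(self.pitches))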

Olivia’s Status Report for 4/10

Earlier this week, the Wyze v3 camera was finally delivered. I spent time trying to connect the camera to my laptop, which required flashing the Wyze camera firmware. However, once this was done, the camera’s settings could not be changed to the specific night vision settings normally accessible in the Wyze app. Since we need a camera that works in dim lighting, this camera will not be sufficient. I put in a new order request for the Arducam 1080P Day and Night Vision camera, which is supposed to connect easily to an Arduino or laptop and should be delivered this week.

I have been editing my head pose estimation program to make it more robust to natural human movement. For example, I have been handling the case where the person’s head goes out of frame and their head pose can no longer be detected (a sketch of this fallback behavior appears below). I still need to handle the case where a person makes an unintentional movement (such as sneezing); in this case, we do not want the system to follow the person’s head. I have also spent time creating our user experience survey, which we hope to conduct this Friday. We will be testing specific settings such as motor speed, lock gesture commands, movement thresholds, and overall user experience. I may add a few more items to the survey as I continue making my program more accommodating to natural body movement.
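
One simple way to handle a lost detection, sketched below with assumed frame counts rather than our exact code, is to hold the last known pose for a short grace period and then stop commanding the motor entirely if the face stays missing:

    # Sketch: hold the last pose briefly when the face leaves the frame,
    # then freeze the motor. The grace period is an assumed value.
    from typing import Optional, Tuple

    class DropoutHandler:
        GRACE_FRAMES = 15  # roughly 0.5 s at 30 fps

        def __init__(self):
            self.last_pose: Optional[Tuple[float, float]] = None
            self.missing = 0

        def update(self, pose: Optional[Tuple[float, float]]):
            """Return a (yaw, pitch) command, or None to freeze the motor."""
            if pose is not None:
                self.last_pose, self.missing = pose, 0
                return pose
            self.missing += 1
            return self.last_pose if self.missing <= self.GRACE_FRAMES else None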

Olivia’s Status Report for 4/2

This week I implemented the “lock” gesture, which occurs when someone blinks 3 times within a span of 2 seconds. This lock gesture works in conjunction with the calibration process: the user “locks” the projection in place to confirm its location is correctly attuned to their gaze. I implemented this gesture using four Mediapipe landmarks around each of the left and right eyes. In each frame the program processes, I calculate the “eye aspect ratio” and determine whether it is below a set threshold; if so, a blink is detected (a sketch of this logic appears at the end of this report). I also spent time cleaning up my code and making it work well with the overall system.

Sadly, I caught the flu this past week, which made it very difficult to get any other work done or attend class. I personally feel behind schedule because I have not met with my team in person for a while. I hope that changes this coming week, when I can finally pick up the camera, connect it to the overall system, and see our recent progress with my own eyes.
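
Below is a condensed sketch of the blink-detection logic. The Mediapipe landmark indices (top, bottom, and the two corners of each eye) are the commonly used ones and the threshold is illustrative; they are not necessarily the exact values our program uses.

    # Hedged sketch: blink detection from the eye aspect ratio with
    # Mediapipe Face Mesh. Landmark indices and threshold are assumptions.
    import time
    import cv2
    import mediapipe as mp

    LEFT = (159, 145, 33, 133)   # top, bottom, outer corner, inner corner
    RIGHT = (386, 374, 362, 263)
    EAR_THRESHOLD = 0.2
    BLINKS_NEEDED, WINDOW_SECONDS = 3, 2.0

    def eye_ratio(lm, top, bottom, c1, c2):
        """Vertical eye opening divided by eye width."""
        return abs(lm[top].y - lm[bottom].y) / abs(lm[c1].x - lm[c2].x)

    cap = cv2.VideoCapture(0)
    mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
    blink_times, eye_closed = [], False
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            ear = (eye_ratio(lm, *LEFT) + eye_ratio(lm, *RIGHT)) / 2
            if ear < EAR_THRESHOLD and not eye_closed:
                eye_closed = True
                blink_times.append(time.time())  # count at blink onset
            elif ear >= EAR_THRESHOLD:
                eye_closed = False
            blink_times = [t for t in blink_times
                           if time.time() - t <= WINDOW_SECONDS]
            if len(blink_times) >= BLINKS_NEEDED:
                print("lock gesture detected")
                blink_times.clear()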

Team Status Report for 4/2

This past week, we successfully integrated the entire pipeline. This means the system can convert the movement of a user’s head into the movement of a motor. The next step is to mount the projector on the motor and conduct a user study to determine how to tune the system for the best user experience. This will involve gathering 10 people and asking them a range of questions after they test our system. It will be our biggest task for the rest of the month; it is essential that our system achieves positive user satisfaction by the end of this project. We will take our peers’ advice very seriously and alter our implementation as needed. We have also created a Trello board to organize all of our tasks for the rest of the semester, since from now on we will be working closely with the full system implementation and need to maintain good communication within the team.

The main risk is that we may not be able to meet all of the usability requests from our participants. If this happens, we will triage the ones we consider to have the largest impact.

Additional changes were made to the calibration program: the state machine was redrawn, and a lot of the code was rewritten to cope with the data flow. We are now building up and refining the system bit by bit with preliminary user tests, taking refinement notes as we adapt to these changes.

Olivia’s Status Report for 3/26

This past week, I have been refining the head pose estimation program (a condensed sketch of the underlying pose math appears below). I have also begun implementing eye blink detection. Our idea is that 3 eye blinks within 3 seconds will activate a lock or unlock gesture: “locking” pauses the movement of the motor, while “unlocking” allows the motor to follow the movement of a person’s head again.
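
For context, head pose estimation like this typically follows the standard recipe: fit 2-D facial landmarks to a generic 3-D face model with OpenCV’s solvePnP and read yaw and pitch off the rotation matrix. The condensed sketch below uses the commonly published model points and an approximate camera matrix; these are assumptions, and the details (including angle conventions) may differ from our program.

    # Condensed head pose sketch: six 2-D landmarks + generic 3-D face
    # model -> rotation via cv2.solvePnP. Model points and intrinsics
    # are standard approximations, not our calibrated values.
    import numpy as np
    import cv2

    # Nose tip, chin, left/right eye corners, left/right mouth corners.
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
        (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
        (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)], dtype=np.float64)

    def head_pose(image_points: np.ndarray, width: int, height: int):
        """Return (yaw, pitch) in degrees from a (6, 2) array of pixels."""
        focal = width  # common approximation for an uncalibrated camera
        camera = np.array([[focal, 0, width / 2],
                           [0, focal, height / 2],
                           [0, 0, 1]], dtype=np.float64)
        ok, rvec, _ = cv2.solvePnP(MODEL_POINTS, image_points, camera, None)
        rot, _ = cv2.Rodrigues(rvec)
        yaw = np.degrees(np.arctan2(-rot[2, 0], np.hypot(rot[0, 0], rot[1, 0])))
        pitch = np.degrees(np.arctan2(rot[2, 1], rot[2, 2]))
        return yaw, pitch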

Aside from that, I’ve run into quite a few non-technical difficulties. My laptop and iPad broke, which left me unable to work on the program for several days. (Thank goodness all my code was backed up on GitHub.) Additionally, I was told the camera I ordered had been delivered. My plan this week was to connect it to the rest of the system. However, when I picked up the camera, there was a diode in the package instead. I discussed the issue with ECE Receiving and was told the camera should arrive soon. My plan for the next week is to connect the camera and head pose estimation program to the rest of the project and complete an initial prototype so we can begin user testing! I also plan to finish the eye blink detection program this week.

Team Status Report for 3/19

There have been a lot of updates in the past week. We have an initial gaze estimation program set up that can send head pose data over to an Arduino. The Arduino then runs an initial program to calibrate and translate this head pose data into the proper motor movements. Currently, we are focusing on moving the projection horizontally. Once this is working, we will add and integrate the other directions.
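
The laptop-to-Arduino link is just a serial stream. Something like the pyserial sketch below captures the idea; the port name, baud rate, and message format are placeholders, not our actual protocol.

    # Sketch of the laptop -> Arduino link with pyserial. Port, baud rate,
    # and message format are illustrative placeholders.
    import serial

    ser = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

    def send_pose(yaw: float, pitch: float) -> None:
        """Send one newline-terminated 'yaw,pitch' reading to the Arduino."""
        ser.write(f"{yaw:.2f},{pitch:.2f}\n".encode("ascii"))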

We are still working on integrating the Arduino with the motor. We have been testing various motor speeds. One of our main goals is to ensure the movement of the projector is smooth and provides an enjoyable user experience. We are aiming for the motor speed to follow a parabolic profile so the movement of the projection from point A to point B is not jerky.
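
Concretely, a parabolic profile ramps the speed up from zero, peaks at the midpoint of the move, and ramps back down, so the projector never starts or stops abruptly. A minimal sketch of the shape we are aiming for (the peak speed is an illustrative value):

    # Parabolic velocity profile: zero at both endpoints, peak at the
    # midpoint. PEAK_SPEED is illustrative, not our tuned constant.
    PEAK_SPEED = 60.0  # degrees/second

    def speed_at(progress: float) -> float:
        """Speed as a function of move progress in [0, 1]."""
        progress = min(max(progress, 0.0), 1.0)
        return PEAK_SPEED * 4.0 * progress * (1.0 - progress)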

We aim to have our initial prototype complete by Sunday, March 27th. Following this, we will gather roughly 5 individuals to test our system and provide feedback. We will then revise our project based on this feedback and repeat the process. A large part of our project involves designing for the smoothest, most satisfying experience for our users. We also ordered a LiDAR unit and a camera this week.