Team Status Report for 4/30

We continued to perform user studies this week. In particular, we tested the speed of the motor with the projector at a larger distance from the wall (~4.5 meters). Previously, we had only been testing with the projector about 1.5 meters from the wall. We found that the speed we had tuned for 1.5 meters was also satisfactory at 4.5 meters. However, through our testing, we realized the experience was more pleasant when the motor moved slightly slower over smaller distances. Now, the motor speed depends on the distance it is going to travel: for larger distances, the speed is faster, and for smaller distances, the speed is slower. We are still tuning this, but the motion has become quite smooth, and we are all very happy about it!
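Conceptually, the distance-dependent speed described above can be sketched as a simple mapping. This is only an illustrative sketch; the function name and all constants are placeholders, not the values we tuned in our user studies:

```python
def speed_for_distance(distance_deg, min_speed=5.0, max_speed=40.0,
                       max_distance_deg=90.0):
    """Map a panning distance to a motor speed.

    Larger pans get proportionally faster speeds, clamped to a
    comfortable range, so short moves feel gentle and long moves
    do not drag. All constants are illustrative placeholders.
    """
    fraction = min(abs(distance_deg) / max_distance_deg, 1.0)
    return min_speed + (max_speed - min_speed) * fraction
```

A linear ramp like this is the simplest version; the actual curve shape is exactly what our user studies are meant to settle.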

Over the next week, we have a poster, video, final paper, and demo to prepare and finish. There is still a lot to get done, but we feel on schedule and are proud and excited to share our project with everyone. The hardest part of the next week may be preparing for our project demonstration. We need to make it interesting and showcase all the hard work we have put into this project. We are also still integrating the day/night vision camera, making the project more aesthetically pleasing, and fine-tuning the overall system.

Olivia’s Status Report for 4/30

Since our final presentation, I have been continuing to test the overall system and make fine adjustments. For instance, today I worked with the team to keep the system from moving when the user makes small head movements (mimicking the way a person’s head moves while reading something on the projection). We successfully completed that. I also helped perform user studies to fine-tune the speed of the motor. Now, the motor moves faster for larger panning distances and slower for smaller ones. The motion continues to become smoother every time we work on it, which is great.

I am also still working to integrate the day/night vision camera into the overall system. This camera is much more sensitive than the laptop camera we have been using for the user studies. (The laptop camera works pretty well.) I hope to have this camera working well with the system by tomorrow. The horizontal motion is working nicely; it is the vertical motion that causes the most trouble.

The team and I also planned out everything we want done this week. We have a rough draft of the poster done as of today, plan to work on the video on Monday, and will have our demo presentation polished by Wednesday. It is bittersweet seeing this project come to a close. I feel on schedule and am excited for this week.

Team Status Report for 4/23

We have made a lot of progress over the past week. Our entire system is now integrated, and we have been doing a lot of testing. More specifically, we began a user survey to find the “magic curve” for moving the projection from point A to point B. The survey has each user experience 6 different combinations of speed and movement. We are continuing the survey into tomorrow and hope to have a finalized “magic curve” for our final presentation tomorrow.

While running our entire system this week, we have found quite a few bugs and ideas for improvement. This intensive testing has been extremely beneficial for ensuring we build the best system we can. For instance, one issue we found was that the motor was a bit jerky as it moved from point A to point B. A great amount of time has been spent making this movement much smoother.

We also got a working motor and added vertical movement. Currently, the motorized projector responds to the user’s changes in pitch and yaw, which is great! The vertical movement of the projection is definitely more sensitive than the horizontal movement, so we have been taking extra care when working in this direction. Another development is the calibration process. We found the calibration a bit difficult to work with, so we are currently reworking it to be more user-friendly. For instance, one small improvement is that we added sounds to alert the user when the calibration process changes states.

We still have a lot of progress to make but are extremely excited to see our system working and gaining user feedback!

Olivia’s Status Report for 4/23

Over the past week, I have spent most of my time testing the overall system, which involves running the calibration and translation phases. My main job has been to find issues through extensive testing and to brainstorm ways to fix them. Since we just received the new motor (which powers the vertical movement), I have spent quite a bit of time testing it and adjusting values to make its movement as smooth as possible.

I also helped develop the user survey for finding the “magic curve” and have been helping run these tests. Each session takes about 30 minutes per person to walk them through the system, test the 9 different speed combinations, and gather their feedback. Another action item this week was adding the new day/night vision camera into the system. This camera is more sensitive than the laptop camera we have been testing on, so I have been trying out values and altering the pitch and yaw calculations to get this setup working.

Another improvement is that I added sounds to the calibration phase to make it more user-friendly. I was originally working on a graphic to display on the projection while the user undergoes calibration, but this proved too difficult in the given time frame because of threading/blocking issues. Now, the system dings every time the lock gesture is detected, alerting the user to look at the projection. I feel on schedule and have enough time to keep testing the system and improving its robustness and accuracy.

Olivia’s Status Report for 4/16

This past week, I have been testing the overall system, which includes running the calibration system and then running the program that follows head movement. Testing the overall system helped us find some bugs and gave us intuition about how to fix them. In particular, we found the “user offset” and “user-to-wall distance” to be incorrect at times. We are currently working on resolving this issue.

When we were testing the overall system, we found that the motor moved in an annoying fashion when we moved our heads only slightly. This makes sense, since we had not yet added the ‘threshold’ to prevent movement on small head movements. I am currently updating the CV program to handle these small movements by averaging the past 5 head pose estimations. I also got a new camera to add to the system that should work in dim lighting. My goal over the next week is to continue testing the overall system and tweaking it to ensure a clean user experience. We also plan to edit and run the user survey next weekend, when the new motor gets delivered.
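The averaging-plus-threshold idea above can be sketched roughly as follows. This is a minimal illustration, not our exact code: the class name and the threshold value are placeholders (the 5-sample window matches this report), and the real program tracks pitch as well as yaw:

```python
from collections import deque

class PoseSmoother:
    """Average the last N yaw estimates and ignore tiny changes.

    Averaging suppresses frame-to-frame jitter in the head pose
    estimate; the deadband keeps the motor still during small,
    unintentional head movements.
    """
    def __init__(self, window=5, threshold_deg=2.0):
        self.history = deque(maxlen=window)
        self.threshold_deg = threshold_deg  # placeholder; to be tuned
        self.last_output = None

    def update(self, yaw_deg):
        self.history.append(yaw_deg)
        avg = sum(self.history) / len(self.history)
        # Only pass a new angle through when it differs enough
        # from the last angle we acted on.
        if self.last_output is None or abs(avg - self.last_output) > self.threshold_deg:
            self.last_output = avg
        return self.last_output
```

With this in place, small reading-style head motions average out and fall under the deadband, while deliberate turns still move the motor.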

Olivia’s Status Report for 4/10

Earlier this week, the Wyze v3 camera was finally delivered. I spent time connecting the camera to my laptop, which required flashing the Wyze camera firmware. However, once this was done, the camera’s settings could not be changed to the specific night vision settings that are normally accessible in the Wyze app. Since we need a camera that works in dim lighting, this camera will not be sufficient. I put in a new order request for the Arducam 1080P Day and Night Vision camera, which is supposed to connect easily to an Arduino or laptop and should be delivered this week.

I have been editing my head pose estimation program to make it more robust to natural human movement. For example, I have been handling the case where the person’s head goes out of frame and their head pose can no longer be detected. I still need to handle the case where a person makes an unintentional movement (such as sneezing); in that case, we do not want the system to follow the person’s head. I have also spent time creating our user experience survey, which we hope to conduct this Friday. We will be testing specific settings such as motor speed, lock gesture commands, movement thresholds, and overall user experience. I may add a few more items to the survey as I continue to make the program more accommodating to natural body movement.
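The out-of-frame handling mentioned above amounts to holding the last valid pose whenever detection drops out. A minimal sketch, assuming the detector returns `None` when no head is found (class and method names are illustrative, not our actual code):

```python
class DropoutGuard:
    """Hold the last detected head pose while detection drops out.

    When the detector returns None (head out of frame), repeat the
    last known pose so the motor does not chase garbage values;
    the projector stays put until the user reappears.
    """
    def __init__(self):
        self.last_pose = None

    def filter(self, pose):
        if pose is not None:
            self.last_pose = pose
        return self.last_pose
```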

Team Status Report for 4/2

This past week, we successfully integrated the entire pipeline, meaning the system can now convert the movement of a user’s head into the movement of a motor. The next step is to mount the projector on the motor and conduct a user study to determine how to tune the system for the best user experience. This will involve gathering 10 people and asking them a range of questions after they test our system. This will be our biggest task for the rest of the month; it is essential that our system achieves high user satisfaction by the end of the project. We will take our peers’ advice very seriously and alter our implementation as needed. We have also created a Trello board to organize all of our tasks for the rest of the semester, since from now on we will be working closely with the full system implementation and need to maintain good communication within the team.
The main risk is that we may not be able to meet all of the usability requests from our participants. If this happens, we will triage the ones that we consider to make the largest impact.
We also made changes to the calibration program: the state machine was redrawn and much of the code rewritten to handle the data flow. We are now building and refining the system bit by bit with preliminary user tests, taking refinement notes as we go to adapt to these changes.

Olivia’s Status Report for 3/26

This past week, I have been refining the head pose estimation program. I have also begun implementing the detection of eye blinks. Our idea is that 3 eye blinks within 3 seconds will activate a lock or unlock gesture. “Locking” will pause the movement of the motor while “unlocking” will allow the motor to follow the movement of a person’s head again.
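The blink gesture above (3 blinks within 3 seconds toggling lock/unlock) can be sketched as a small event counter. This is an illustrative sketch only: the class name is hypothetical, and in the real system the blink events would come from the eye blink detector rather than being called directly:

```python
from collections import deque

class BlinkToggle:
    """Toggle a lock state when 3 blinks occur within 3 seconds.

    Call on_blink(t) with the timestamp of each detected blink;
    it returns True when the gesture fires and the lock flips.
    While locked, the motor ignores head movement.
    """
    def __init__(self, count=3, window_s=3.0):
        self.count = count
        self.window_s = window_s
        self.times = deque()
        self.locked = False

    def on_blink(self, t):
        self.times.append(t)
        # Drop blinks older than the 3-second window.
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        if len(self.times) >= self.count:
            self.locked = not self.locked
            self.times.clear()  # require a fresh triple for the next toggle
            return True
        return False
```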

Aside from that, I’ve run into quite a few non-technical difficulties. My laptop and iPad broke, which left me unable to work on the program for quite a few days. (Thank goodness all my code was backed up on GitHub.) Additionally, I was told the camera I ordered had been delivered. My plan this week was to connect it to the rest of the system. However, when I picked up the package, there was a diode inside instead of the camera. I discussed the issue with ECE Receiving and was told the camera should be here soon. My plan for next week is to connect the camera and head pose estimation program to the rest of the project and have an initial prototype complete so we can begin user testing! I also plan to finish the eye blink detection program this week.

Isabel’s Status Report for 3/19

This week was fairly productive, as we are beginning to build our full pipeline with the goal of finishing it next week. Most of my hours went toward designing a state machine for the Arduino program, which needs to switch between different parts of the code for the different calibration phases (user height, then left and right measurements). For these phases, I’m using while loops with a timeout mechanism: once the user stops moving the projector around during a calibration phase, it automatically locks in and switches to the next phase. During each phase, the user moves the projector around with their head, which signals the motor to move at a flat rate in the direction the user is facing. This mechanism may be redesigned later for usability, but for our preliminary tests it should work well with the current computer vision. I’ve been commenting ‘//TUNE’ next to constants we may be able to tune once we begin testing our code. I also ordered our LiDAR and did some research on how I will hook it up once it comes in.
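The phase-with-timeout mechanism above can be sketched in Python (the real code is an Arduino program; this only illustrates the logic, and `read_angle` / `move_motor` are hypothetical callbacks standing in for the serial read and motor command):

```python
import time

# Calibration phases, per the state machine described above.
PHASES = ["height", "left", "right"]

def run_phase(read_angle, move_motor, timeout_s=2.0):
    """Run one calibration phase.

    Follow the user's head at a flat rate until the reported angle
    stops changing for timeout_s seconds, then lock in and return
    the measurement. timeout_s is a //TUNE-style placeholder.
    """
    last_angle = read_angle()
    last_change = time.monotonic()
    while time.monotonic() - last_change < timeout_s:
        angle = read_angle()
        if angle != last_angle:
            move_motor(angle)
            last_angle = angle
            last_change = time.monotonic()  # reset the idle timer
    return last_angle  # locked-in measurement for this phase
```

Each phase would be run in sequence over `PHASES`, with the locked-in angles feeding the later translation math.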
Currently I am right on schedule with the calibration, as long as I continue at this pace and finish the first draft of the calibration code within the next week. I’m aiming to have at least a usable version by Wednesday. The main challenge isn’t writing the code, since I’ve already designed it; rather, I’m currently struggling to debug the serial port, since it doesn’t seem to be reading back the angles we’re writing over it. If I’m still stuck on that feature on Monday, I might ask a TA who has used pySerial whether they have any input on why it isn’t working. I am slightly behind schedule on the LiDAR because it might take a while to come in, but my mitigation would be manually setting the wall distance for now.
My main deliverables for next week will be the calibration program and some communication designs for my teammates, since we’ll all be working very closely with this calibration code in the future.

Team Status Report for 3/19

There have been a lot of updates in the past week. We have an initial gaze estimation program set up that can send head pose data over to an Arduino. The Arduino then runs an initial program to calibrate and translate this head pose data into proper motor movements. Currently, we are focusing on moving the projection horizontally. Once this is working, we will add and integrate the other directions.

We are still working on integrating the Arduino with the motor. We have been testing various motor speeds. One of our main goals is to ensure the movement of the projector is smooth and provides an enjoyable user experience. We are aiming for the motor speed to follow a parabolic profile so the movement of the projection from point A to point B is not jerky.
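The parabolic speed profile above can be sketched as a function of progress through the move. This is an illustrative sketch; the function name and `peak_speed` constant are placeholders, not tuned values:

```python
def parabolic_speed(progress, peak_speed=30.0):
    """Speed along a move as a function of progress in [0, 1].

    The parabola 4 * p * (1 - p) starts and ends at zero and peaks
    mid-move, so the projection accelerates out of point A and
    decelerates into point B instead of starting and stopping with
    a jerk. peak_speed is a placeholder constant.
    """
    p = min(max(progress, 0.0), 1.0)  # clamp progress to [0, 1]
    return peak_speed * 4.0 * p * (1.0 - p)
```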

We aim to have our initial prototype complete by Sunday, March 27th. Following this, we will gather roughly 5 individuals to test our system and provide feedback. We will then edit our project based on this feedback and repeat the process. A large part of our project involves iterating on our design to create the smoothest, most satisfying experience for our users. We also ordered a LiDAR and a camera this week.