Team Status Report for 4/30

We continued to perform user studies this week. In particular, we tested the motor speed with the projector at a larger distance from the wall (~4.5 meters); previously, we had only tested at about 1.5 meters. We found that the speed tuned for 1.5 meters was still very satisfactory at 4.5 meters. However, our testing showed that the experience was more pleasant when the motor moved slightly slower over smaller distances. The motor speed now depends on the distance it is going to travel: faster for larger distances, slower for smaller ones. We are still tuning this, but the motion has become quite smooth, and we are all very happy about it!
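A minimal sketch of the distance-dependent speed idea described above (the function name and all numeric constants are illustrative, not our actual tuned values):

```python
def pan_speed(distance_deg, slow=20.0, fast=60.0, full_sweep=90.0):
    """Map a panning distance (degrees) to a motor speed (deg/s).

    Short moves run near `slow`; speed ramps linearly up to `fast`
    for sweeps approaching `full_sweep` degrees. All values here are
    hypothetical placeholders for the constants found in user testing.
    """
    frac = min(abs(distance_deg) / full_sweep, 1.0)
    return slow + (fast - slow) * frac
```

In practice the endpoints would come straight from the user-study feedback, with the linear ramp swapped for whatever "magic curve" the survey settles on.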

Over the next week, we have a poster, a video, the final paper, and a demo to prepare. There is still a lot to get done, but we feel on schedule and are proud and excited to share our project with everyone. The hardest part of the coming week may be preparing for our project demonstration: we need to make it interesting and show all the hard work we have put into this project. We are also still integrating the day/night vision camera, making the project more aesthetically pleasing, and fine-tuning the overall system.

Olivia’s Status Report for 4/30

Since our final presentation, I have continued to test the overall system and make fine adjustments. For instance, today I worked with the team to keep the system from moving when the user makes small head movements (mimicking the head motion of a person reading something on the projection). We successfully completed that. I also helped run user studies to fine-tune the motor speed: the motor now moves faster for larger panning distances and slower for smaller ones. The motion gets smoother every time we work on it, which is great.
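One simple way to ignore small "reading" head motions, as described above, is a dead zone on the pose deltas. A hedged sketch (the threshold value and function name are hypothetical):

```python
def apply_dead_zone(delta_yaw, delta_pitch, threshold=2.0):
    """Suppress motor motion for head movements below `threshold` degrees.

    `threshold` is a made-up placeholder; the idea is that small,
    reading-like head motions produce deltas under it, so the
    projection stays still instead of twitching.
    """
    yaw = delta_yaw if abs(delta_yaw) >= threshold else 0.0
    pitch = delta_pitch if abs(delta_pitch) >= threshold else 0.0
    return yaw, pitch
```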

I am also still working to integrate the day/night vision camera into the overall system. This camera is much more sensitive than the laptop camera we have been using for the user studies (which works pretty well). I hope to have this camera working well with the system by tomorrow. The horizontal motion is working nicely; it is the vertical motion that causes the most trouble.

The team and I also planned out everything we want done for this week. We have a rough draft of the poster done today, plan to work on the video on Monday, and have our demo presentation polished by Wednesday. It is bittersweet seeing this project come to a close. I feel on schedule and am excited for this week.

Isabel’s Status Report for 4/30

This week, we finished our initial user studies on Sunday and then completed a more informal user study today to fine-tune our system. I mainly focused on preparing for the presentation, since I was the designated presenter this time. I worked on the slides and made some small code updates for improvements we came up with during testing: taking some of the jitteriness out of the system, refining the calibration process, and cleaning up the code. Overall, this week was a smaller update compared to previous weeks.

We’re still on schedule and it looks like things will be wrapping up without any trouble next week. Next week I will help with planning and editing the video, putting together and refining our final report, and preparing for our demo!

Isabel’s Status Report for 4/23

This week, I’ve mainly been working with the entire team on debugging and improving the overall system. On Monday, I tested 3 different software serial libraries for reading the lidar values. I ended up going with SoftwareSerial even though it blocks, because it verified the data from the lidar most consistently, so I found a way to quarantine the distance detection in the Arduino’s setup code.

After Monday, the two biggest changes I implemented were smoothing the horizontal movement of the projector and optimizing the pipeline. The projector now moves noticeably smoother and faster, which is good, because we have been finding lots of new bugs to work through during the user tests.
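One common way to smooth motion like this is exponential smoothing of the commanded angle, so the projector eases toward the target instead of jumping. A sketch under that assumption (class name and alpha value are hypothetical, not our actual implementation):

```python
class AngleSmoother:
    """Exponentially smooth the commanded pan angle.

    alpha in (0, 1]: higher tracks the raw target faster;
    lower gives smoother but laggier motion.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.value = None  # no estimate until first reading

    def update(self, target):
        if self.value is None:
            self.value = target
        else:
            # move a fraction of the remaining distance each tick
            self.value += self.alpha * (target - self.value)
        return self.value
```

Each call moves the output a fixed fraction of the way to the target, which also happens to damp sensor jitter for free.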

With the lidar integrated, now I am back on schedule. We anticipated that the last few weeks would be for our whole team to work on refinement, so everything I’ve been doing is falling into that category.
For this next week, I need to redo the block diagrams for our system, prepare for Monday’s presentation, and smooth the vertical movement of the projector. I also hope to make another small redesign of the calibration program to make it much easier on the user: our tests have shown that it can be difficult to move the projector to two different angles and then look at them correctly. Our calibration code is picky and very sensitive to error, so I want to make it as robust as possible before the demo.

Team Status Report for 4/23

We have made a lot of progress over the past week. Our entire system is integrated together and we have been performing a lot of testing. More specifically, we began a user survey to find the “magic curve” for moving the projection from point A to point B. This survey involves having the user experience 6 different combinations of speed and movement. We are continuing the survey into tomorrow and hope to have a finalized “magic curve” for our final presentation tomorrow.

While running our entire system this week, we have found quite a few bugs and ideas for improvement. This intensive testing has been extremely beneficial for making the system the best it can be. For instance, one issue we found was that the motor was a bit jerky as it moved from point A to point B; a great amount of time has been spent making this movement much smoother.
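One standard fix for jerky point-to-point moves is an ease-in/ease-out position profile, so the motor accelerates gently out of point A and decelerates into point B. A sketch of a cosine easing curve (this is an assumed technique, not necessarily the exact curve our system uses):

```python
import math


def eased_position(t, start, end):
    """Cosine ease-in/ease-out between start and end angles.

    t runs from 0.0 (at `start`) to 1.0 (at `end`). The cosine shape
    gives zero velocity at both endpoints, avoiding the jerk of
    jumping straight to full speed.
    """
    t = min(max(t, 0.0), 1.0)  # clamp to [0, 1]
    frac = (1 - math.cos(math.pi * t)) / 2
    return start + (end - start) * frac
```

Sampling this curve once per control tick yields a smooth trajectory that any raw "move at speed X" command can follow.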

We also got a working motor and added vertical movement. The motorized projector now responds to the user’s changes in both pitch and yaw, which is great! The vertical movement of the projection is definitely more sensitive than the horizontal movement, so we have been, and will continue, taking extra care with this direction. Another development is in the calibration process: we found it a bit difficult to work with, so we are reworking it to be more user-friendly. For instance, one small improvement is that we added sounds to alert the user when the calibration process changes states.

We still have a lot of progress to make but are extremely excited to see our system working and gaining user feedback!

Olivia’s Status Report for 4/23

Over the past week, I have spent most of my time testing the overall system, which involves running both the calibration and translation phases. My main job has been to find issues with the system through extensive testing and to brainstorm fixes. Since we just got the new motor (which powers the vertical movement), I have been spending quite a bit of time testing it and tuning values so it moves as well as possible.

I also helped develop the user survey for finding the “magic curve” and have been helping run these tests. Each takes about 30 minutes per person: talking through the system with them, testing the 9 different speed combinations, and getting their feedback. Another action item this week was adding the new day/night vision camera into the system. This camera is more sensitive than the laptop camera we have been testing on, so I have been trying out values and altering the pitch and yaw calculations to get this setup working.

Another improvement is that I added sounds to the calibration phase to make it more user-friendly. I was originally working on a graphic to display on the projection when the user was undergoing calibration, but I found this to be too difficult in the given time frame because of threading/blocking issues. Now, the system dings every time the lock gesture is detected to alert the user to look at the projection. I feel on schedule and have enough time to continue to test out the system and increase the robustness/accuracy.

Team Status Report for 4/16

This week we had an unfortunate incident where the battery overheated and fried our vertical motor. We had to rush-order a new one, and we plan to get it set up as best we can in time for our user study in the second half of this week. If something goes wrong with the replacement, our contingency plan is to default to the horizontal motor and set up the apparatus so that the projector is in line with an average user’s height.

We aren’t making any large changes to the system at this point; we’ve just been coming up with ways to make our program more robust (e.g., averaging the user’s angle during the pairing process to soften the impact of one bad reading). Again, we are planning as a team to run a user study as soon as we can (hopefully) get the new vertical motor set up.

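For the "soften one bad reading" idea, a median over several pairing-phase samples is a simple robust combiner; unlike a plain mean, it throws away a single outlier entirely. A hedged sketch (function name and sample values are illustrative):

```python
import statistics


def robust_angle(readings):
    """Combine several angle readings from the pairing step.

    The median ignores a single wild reading instead of letting it
    drag the estimate off, which a plain average would do.
    """
    return statistics.median(readings)
```
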
We’ve got the full setup on the tripod, and we just need a way to attach the various circuits (lidar, Arduino) to the tripod to make it more portable. The lidar circuit is also now built and confirmed to communicate properly with the Arduino.

Isabel’s Status Report for 4/16

This week, I couldn’t be there in person for the demo due to COVID, but I was still able to work on the project remotely thanks to my Arduino at home. I could also join Zoom during lab time, so I was able to help my teammates run the full pipeline. I’ve mostly been debugging with our team and helping come up with new ideas to improve the pipeline. I was considering setting up some sort of sound system to guide the user through the locking and pairing procedure. Also, I was able to solder and build the lidar circuit when I got out of quarantine on Friday, and got it running with some test Arduino code from https://github.com/TFmini/TFmini-Arduino/blob/master/TFmini_Arduino_SoftwareSerial/TFmini_Arduino_SoftwareSerial.ino, which fixed a checksum issue I was having with the basic reading code that shipped with the tfmini library.
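For reference, the checksum the TFmini uses is simple: each 9-byte frame starts with 0x59 0x59, carries distance and signal strength as little-endian 16-bit values, and ends with the low byte of the sum of the first eight bytes. A Python sketch of that validation (the function name is ours; the frame layout follows the TFmini datasheet):

```python
def parse_tfmini_frame(frame):
    """Validate and parse one 9-byte TFmini frame.

    Layout: 0x59 0x59, distance (low, high), strength (low, high),
    two reserved bytes, then a checksum equal to the low byte of the
    sum of the first eight bytes. Returns the distance in cm, or
    None if the header or checksum is bad.
    """
    if len(frame) != 9 or frame[0] != 0x59 or frame[1] != 0x59:
        return None
    if (sum(frame[:8]) & 0xFF) != frame[8]:
        return None
    return frame[2] | (frame[3] << 8)
```

The Arduino sketch linked above does the equivalent byte-summing in C before trusting a reading, which is what resolved the garbage values.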

With this lidar setup, I’m almost back on schedule. I hope to finish integrating the code by Monday, so that then I will be completely caught up and we’ll be thoroughly in the refinement stage for this last week.

Over the next week, I will complete any debugging or improvement patches to our calibration code, finish integrating the lidar, and help out with the user study we will be conducting in the second half of the week. I will also help with the completion of the final presentation slides.

Olivia’s Status Report for 4/16

The past week, I have been testing the overall system: running the calibration system and then the program that follows head movement. Testing the overall system helped us find some bugs and gave us intuition about how to fix them. In particular, we found the “user offset” and “user-to-wall distance” to be incorrect at times. We are currently working on resolving this issue.

When we were testing the overall system, we found that the motor moved in an annoying fashion when we moved our heads only slightly. This makes sense, since we had not yet added a threshold to prevent movement on small head motions. I am currently fixing the CV program to filter out these small movements by averaging the past 5 head pose estimations. I also got a new camera for the system that should work in dim lighting. My goal over the next week is to continue testing the overall system and tweaking it to ensure a clean user experience. We also plan to edit and run the user survey next weekend, when the new motor gets delivered.
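The "average the past 5 estimations" step above amounts to a sliding-window mean over the pose stream. A minimal sketch (class name is ours; the window size of 5 matches the text):

```python
from collections import deque


class PoseAverager:
    """Average the last `n` head-pose estimates to damp jitter."""

    def __init__(self, n=5):
        self.yaws = deque(maxlen=n)      # deque drops oldest sample itself
        self.pitches = deque(maxlen=n)

    def update(self, yaw, pitch):
        self.yaws.append(yaw)
        self.pitches.append(pitch)
        return (sum(self.yaws) / len(self.yaws),
                sum(self.pitches) / len(self.pitches))
```

Feeding every raw CV estimate through `update` before the motor sees it means one noisy frame shifts the command by at most a fifth of its error.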

Olivia’s Status Report for 4/10

Earlier this week, the Wyze v3 camera was finally delivered. I spent time trying to connect the camera to my laptop, which required flashing the Wyze camera firmware. However, once this was done, the camera’s settings could not be changed to the specific night vision settings normally accessible in the Wyze app. Since we need a camera that works in dim lighting, this camera will not be sufficient. I put in a new order request for the Arducam 1080P Day and Night Vision camera, which is supposed to connect easily to an Arduino or laptop and should be delivered this week.

I have been editing my head pose estimation program to make it more robust to natural human movement. For example, I have been handling the case where the person’s head goes out of frame and their head pose can no longer be detected. I still need to handle unintentional movements (such as sneezing); in that case, we do not want the system to follow the person’s head. I have also spent time creating our user experience survey, which we hope to conduct this Friday. We will be testing specific settings such as motor speed, lock gesture commands, movement thresholds, and overall user experience. I may add a few more items to the survey as I continue making the program more accommodating to natural body movement.
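One common pattern for the out-of-frame case is to hold the last valid pose for a few frames, then stop commanding the motor if detection stays lost. This is only an assumed approach, not necessarily what the program does; all names and values here are hypothetical:

```python
class PoseTracker:
    """Hold the last valid head pose while detection briefly drops out.

    After `max_misses` consecutive failed detections (e.g. the user
    truly leaves the frame), report None so the motor can stop rather
    than chase a stale pose.
    """

    def __init__(self, max_misses=15):
        self.max_misses = max_misses
        self.misses = 0
        self.last = None

    def update(self, pose):
        """`pose` is (yaw, pitch) from the detector, or None on failure."""
        if pose is not None:
            self.last = pose
            self.misses = 0
        else:
            self.misses += 1
            if self.misses > self.max_misses:
                self.last = None
        return self.last
```

A brief occlusion (a hand passing in front of the face) then causes no motion at all, while a real exit from the frame parks the projector after about half a second of misses.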