Olivia’s Status Report for 3/26

This past week, I have been refining the head pose estimation program. I have also begun implementing eye blink detection. Our idea is that three eye blinks within three seconds will activate a lock or unlock gesture: "locking" will pause the movement of the motor, while "unlocking" will allow the motor to follow the movement of a person's head again.
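Roughly, the gesture logic I have in mind looks like the sketch below (the blink detector itself isn't done yet, so blink_detected() is just a hypothetical placeholder; only the three-blinks-within-three-seconds windowing and the lock toggle are shown):

```python
import time
from collections import deque

BLINK_COUNT = 3        # blinks needed to trigger the gesture
WINDOW_SECONDS = 3.0   # blinks must fall within this window

blink_times = deque()
locked = False

def blink_detected(frame):
    """Hypothetical placeholder: returns True on the frame where a blink completes."""
    ...

def update_lock_state(frame):
    """Call once per camera frame; toggles the lock when 3 blinks occur within 3 seconds."""
    global locked
    now = time.monotonic()

    if blink_detected(frame):
        blink_times.append(now)

    # Drop blinks that have fallen outside the 3-second window.
    while blink_times and now - blink_times[0] > WINDOW_SECONDS:
        blink_times.popleft()

    if len(blink_times) >= BLINK_COUNT:
        locked = not locked     # pause or resume motor tracking
        blink_times.clear()     # require a fresh set of blinks for the next toggle
    return locked
```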

Aside from that, I've run into quite a few non-technical difficulties. My laptop and iPad broke, which left me unable to work on the program for quite a few days. (Thank goodness all my code was backed up on GitHub.) Additionally, I was told the camera I ordered had been delivered. My plan this week was to connect it to the rest of the system. However, when I picked up the package, there was a diode inside instead of the camera. I discussed the issue with ECE Receiving and was told the camera should be here soon. My plan for the next week is to connect the camera and head pose estimation program to the rest of the project and have an initial prototype complete. This way, we can begin user testing! I also plan to finish the eye blink detection program this week.

Team Status Report for 3/26

This week, we were able to finish some of the last pieces we need for an end-to-end pipeline. The circuit for our motor-to-Arduino connection was completed, the serial-port code for calibration translation was rewritten and debugged, and our computer vision pipeline now has a non-camera display version and a locking feature in progress. Once our members can meet on Monday, we should be able to put our parts together and begin debugging the entire pipeline, with the goal of presenting it on Wednesday.

The largest change was to the internal design of the calibration system. Instead of sending data over the serial port one value at a time, the byte stream for user pitch and yaw will now be sent as one string and then parsed by the Arduino. This design involves more complex code on the Arduino side, but it makes our system more efficient and less error-prone.
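As a rough illustration of the new scheme (the exact message framing, port name, and baud rate below are placeholders rather than our final protocol), the Python side can pack both angles into a single delimited string and send it in one write:

```python
import serial

# Hypothetical port/baud; adjust to match the Arduino sketch.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_pose(pitch_deg: float, yaw_deg: float) -> None:
    """Send pitch and yaw as one framed message, e.g. "<12.5,-30.0>",
    so the Arduino can parse both values from a single read."""
    msg = f"<{pitch_deg:.1f},{yaw_deg:.1f}>"
    ser.write(msg.encode("ascii"))

send_pose(12.5, -30.0)
```

On the Arduino side, the sketch then splits this string on the delimiter to recover pitch and yaw.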

The biggest risk currently is that unanticipated problems in our system may pop up as we put the pipeline together on Monday, but our team is expecting this, and we can each set aside time to debug whatever issues arise. Once we get through this process, we can begin gathering users for usability testing! Additionally, we are still waiting for our camera and LiDAR to arrive; the camera delivery had an issue where the camera wasn't in the package, so we need to wait a little longer before integrating these tools. Luckily, we don't need them to debug our full system, so this doesn't put us behind schedule, but we may need to put in some extra hours once they come in to learn how to read their data and feed it into our program.

Isabel’s Status Report for 3/26

This week was also fairly productive. While I thought I was facing a single issue with the serial port last week when creating the calibration pipeline, I was in fact facing a set of issues. When I got the serial port running, the bytes that came back would arrive slightly out of order, or would seemingly at random be either what I was expecting or a disorganized mishmash of the codes I was using to signify 'right' (720).

I decided to debug it more closely using a test file, 'py_connectorcode.py'. I researched the serial port bugs further and read through Arduino forums, and found this useful resource, https://forum.arduino.cc/t/serial-input-basics/278284/2, about creating a non-blocking receive function with start and end markers to prevent the data from overwriting itself. The non-blocking element is important so that the functions writing angles to the motor don't stall while waiting for the serial port to finish. I also decided to rework my design to send only one pre-compiled message per round (containing the person's yaw and pitch), and then print only one line back from the Arduino, to clean up the messages.
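On the Python side, the single read-back line can be handled in a non-blocking way in the same spirit; a minimal sketch (the port name and newline framing are placeholders, and this mirrors rather than reproduces my actual code):

```python
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=0)  # timeout=0 -> reads never block

_rx_buffer = ""

def poll_arduino_reply():
    """Return one complete reply line from the Arduino if one is available,
    otherwise None, so the motor-update loop never stalls waiting on the port."""
    global _rx_buffer
    if ser.in_waiting:
        _rx_buffer += ser.read(ser.in_waiting).decode("ascii", errors="replace")
    if "\n" in _rx_buffer:
        line, _rx_buffer = _rx_buffer.split("\n", 1)
        return line.strip()
    return None
```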

The calibration code is complete, but it still needs to be combined with the motor code before we can test it. Our current team plan is to put the pieces together on Monday, so I'll have this code in the head_connector.py file by the end of Sunday. Next week I hope to help the rest of the team debug the code, specifically the translation step, before coming up with ideas for, and most likely debugging, the initial calibration process. Currently I am on schedule with the calibration process, but slightly behind with the LiDAR, since I was planning to integrate it this week but couldn't because it hasn't arrived yet. I've already researched a tutorial on how to wire it up, so once it arrives I have a plan in place and can make extra time in the lab to get that circuit built.

Rama’s Status Report for 3/26

At the beginning of the week, I simplified the circuit and found mini breadboards to use in the final assembly of the projector stand. I wrote the movement functions needed for calibration and drafted the functions needed to move the projection based on the information collected from CV. I attended the National Society of Black Engineers (NSBE) Conference in Anaheim, which started on Wednesday, and I will not be back until Sunday night, so I was not able to make further progress than this. On Monday, I hope to make the last order for parts and test out the draft functions. My goal by the end of the week is to have the physical system assembled so it can be properly tested with my teammates' work and code.

Rama’s Status Report for 3/19

This week I set up the circuit and found the angle and speed ranges for both motors. I also confirmed their power requirements so I can purchase the correct power supply, since we are powering the Arduino and both motors from one external source.

I have decided to use the ServoEasing library to control the speed of the motors. I tested linear, quadratic, and cubic easing of the motors, as well as combinations of these types. I also tested synchronizing the motion of both motors to see whether it follows natural head movement better.
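For reference, these easing types boil down to simple curves; the sketch below is just the math behind them in Python (not the ServoEasing library itself), mapping normalized time to normalized position:

```python
def ease(t: float, kind: str = "cubic") -> float:
    """Map normalized time t in [0, 1] to normalized position in [0, 1].
    "linear" moves at constant speed; "quadratic"/"cubic" ease in and out,
    so the servo starts and stops gently instead of jerking."""
    if kind == "linear":
        return t
    if kind == "quadratic":
        return 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) ** 2
    if kind == "cubic":
        return 4 * t ** 3 if t < 0.5 else 1 - 4 * (1 - t) ** 3
    raise ValueError(kind)

def angle_at(t: float, start_deg: float, end_deg: float, kind: str = "cubic") -> float:
    """Interpolated servo angle at normalized time t along the chosen easing curve."""
    return start_deg + (end_deg - start_deg) * ease(t, kind)
```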

I have created a file in which different easing types are used to move the projection so that we can run a small user study and see which motion people find smoothest and most comfortable. I have also worked on the functions we will be using for horizontal motion. This week I hope to finish those functions and focus on properly securing the projector and cleaning up the circuit so it can be mounted on the stand.

Isabel’s Status Report for 3/19

This week was fairly productive, as we are beginning to build our full pipeline, with the goal of finishing it next week. Most of my hours went toward designing a state machine for the Arduino program, which needs to switch between different parts of the code for the different calibration phases (user height, then left and right measurements). For these phases, I'm using while loops with a timeout mechanism: once the user stops moving the projector around during a calibration phase, it will automatically lock in and switch to the next phase. During each phase, the user moves the projector around with their head, which signals the motor to move at a flat rate in the direction the user is facing. This mechanism may be redesigned later for usability, but for our preliminary tests it should integrate smoothly with the current computer vision code. I've been commenting '//TUNE' next to constants we may be able to tune once we begin testing our code. I also ordered our LiDAR and did some research on how I will hook it up once it comes in.
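The Arduino code itself is too long to include here, but the timeout idea looks roughly like the sketch below (written in Python for brevity; the constants and the read/move helpers are placeholders, not our real functions):

```python
import time

PHASES = ["height", "left", "right"]   # calibration phases, in order
IDLE_TIMEOUT_S = 2.0                   # TUNE: how long the user must hold still to lock in
MOVEMENT_THRESHOLD_DEG = 2.0           # TUNE: below this change, the user counts as "still"

def run_calibration(read_head_angle, move_motor_toward):
    """For each phase, nudge the motor in the direction the user is facing;
    once the head angle stops changing for IDLE_TIMEOUT_S, lock the value in
    and advance to the next phase. Returns the locked-in angle per phase."""
    results = {}
    for phase in PHASES:
        last_angle = read_head_angle()
        last_motion = time.monotonic()
        while True:
            angle = read_head_angle()
            if abs(angle - last_angle) > MOVEMENT_THRESHOLD_DEG:
                last_motion = time.monotonic()   # user is still adjusting
                move_motor_toward(angle)         # flat-rate move toward where they face
                last_angle = angle
            elif time.monotonic() - last_motion > IDLE_TIMEOUT_S:
                results[phase] = angle           # lock in and switch to the next phase
                break
    return results
```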
Currently I am right on schedule with calibration, as long as I continue at this pace and finish the first draft of the calibration code within the next week. I'm aiming to have at least a usable version by Wednesday. The main challenge isn't writing the code, since I've already designed it; rather, I'm currently struggling to debug the serial port, which doesn't seem to be reading back the angles that we're writing over it. If I'm still stuck on that feature on Monday, I might ask a TA who has used pySerial whether they have any input on why it doesn't seem to be working. I am slightly behind schedule for the LiDAR because it might take a while to come in, but my mitigation is to manually set the wall distance for now.
My main deliverables for next week will be the calibration program and some communication designs for my teammates, since we'll all be working very closely with this calibration code in the future.

Team Status Report for 3/19

There have been a lot of updates in the past week. We have an initial gaze estimation program set up that can send head pose data to an Arduino. The Arduino then runs a program to calibrate and translate this head pose data into proper motor movements. Currently, we are focusing on moving the projection horizontally. Once this is working, we will add and integrate the other directions.

We are still working on integrating the Arduino with the motor. We have been testing various motor speeds. One of our main goals is to ensure the movement of the projector is smooth and provides an enjoyable user experience. We are aiming for the motor speed to follow a parabolic profile so the movement of the projector from point A to point B is not jerky.

We aim to have our initial prototype complete by Sunday, March 27th. Following this, we will gather roughly 5 individuals to test out our system and provide feedback. We will then revise our project based on this feedback and repeat the process. A large part of our project involves iterating on our design in order to create the smoothest, most satisfying experience for our users. We also ordered a LiDAR and camera this week.

Olivia’s Status Report for 3/19

At the beginning of the week, I finalized an initial gaze estimation program. Attached are two photos showing me moving my face around. The green face mesh marks the different landmarks that MediaPipe identifies. The blue line extending from my nose marks my estimated head pose (which is what the projection should eventually line up with). This estimated head pose has a corresponding rotation and translation vector. The head pose estimation file can connect to the Arduino and send data to it. Currently, we are only sending the head's rotation about the y-axis ("yaw") to the Arduino. Once we get this working with the motor and projector, we intend to send and integrate the rest of the data.
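For context, the rotation and translation vectors come from solving a PnP problem over a handful of the MediaPipe landmarks; a condensed sketch of that approach is below (the landmark indices, 3D model points, and camera intrinsics are common approximations, not necessarily the exact values in my program):

```python
import cv2
import numpy as np
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

# Commonly used landmark indices: nose tip, chin, eye corners, mouth corners.
LANDMARK_IDS = [1, 152, 263, 33, 287, 57]
# Rough 3D model points (mm) for the same features on a generic head model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0), (225.0, 170.0, -135.0),
    (-225.0, 170.0, -135.0), (150.0, -150.0, -125.0), (-150.0, -150.0, -125.0),
], dtype=np.float64)

def estimate_yaw(frame_bgr):
    """Return the head's yaw in degrees (rotation about the y-axis), or None if no face."""
    h, w = frame_bgr.shape[:2]
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    image_points = np.array([(lm[i].x * w, lm[i].y * h) for i in LANDMARK_IDS],
                            dtype=np.float64)

    # Approximate camera intrinsics from the image size (no lens distortion assumed).
    camera_matrix = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, None)
    if not ok:
        return None

    rot, _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
    yaw = np.degrees(np.arctan2(-rot[2, 0],
                                np.sqrt(rot[0, 0] ** 2 + rot[1, 0] ** 2)))
    return float(yaw)
```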

My personal goal for the next week is to refine the angle estimation and make it more robust to small head movements.  I also ordered a camera that should be able to handle facial recognition in dim lighting, so I plan to integrate that into the system within the next week as well.