Isabel’s Status Report for 4/10

Earlier this week, before Wednesday, I completed the full new version of the calibration state machine and tested that the stages switch through correctly. I also wrote a Python testing suite for all the mathematical functions I was using within the Arduino code; in Python it’s much easier and faster to print out debugging material, and debugging trig and geometric conversion code can be a headache at the best of times, so I wanted as much feedback as possible. On Wednesday during our practice demo, I noticed the locking mechanism happens a bit too fast for a user to properly interact with it, so I designed and implemented a mechanism within the calibration phases that adds a delay between the user locking the projector and the program actually pairing the user/projector angles with each other. I made a diagram of this process:

The “L” still stands for lock and stops the motors, the “S” means the Python program is ‘sleeping’, and the “P” signals the Arduino program to pair the angles. I split this into stages to leave room for a possible Python addition that counts down the seconds until pairing, so the user knows they should look at the projector.
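Here is a minimal Python sketch of that lock/sleep/pair sequence, assuming the single-character codes above; the port name, baud rate, and delay length are placeholders rather than our exact values:

```python
import time
import serial  # pySerial

# Hypothetical port name and baud rate; ours differ per machine.
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def lock_and_pair(delay_s: int = 3) -> None:
    """Send 'L' to stop the motors, sleep, then send 'P' to pair the angles."""
    ser.write(b"L")  # L: lock, the Arduino stops the motors
    for remaining in range(delay_s, 0, -1):
        # The possible countdown addition, so the user looks at the projector.
        print(f"Pairing in {remaining}...")
        time.sleep(1)  # S: the Python program sleeps
    ser.write(b"P")  # P: the Arduino pairs the user/projector angles
```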

Next week, I need to explain to my teammates how to set up the physical system and work the calibration program. I can’t leave my home until Friday because I’ve got COVID, and I’ve been pretty sick but am slowly recovering. The good news is that the bulk of this is built and debugged; we just need to get the physical setup going now. Our goal is to have this ready by the Wednesday demo, and I will support it remotely in whatever way I can.

I’m still on schedule for everything except the LiDAR. Since I had to isolate this weekend, I couldn’t go into the lab and build the LiDAR circuit. I’ll have to do this next weekend.

Isabel’s Status Report for 4/2

In addition to redrawing the state machine for the calibration (shown Wednesday), I’ve since been working on refining the linear motor movement program. Rama has been giving me her interface notes and the functions she wrote for motor movement, so I’ve been incorporating those into the Arduino state machine to move the motor vertically as well as horizontally. Finally, I’ve fixed the issue of the motor not stopping between head turns by adding a ‘center’ signal between ‘left’ and ‘right’, which sends a stop command. I’ve also integrated Olivia’s code for the locking signal, and tested that it makes it over to the Arduino and registers correctly (in the output, ‘locked’ means locked and ‘U’ means unlocked):
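As a sketch of the ‘center’ idea, here is a minimal dead-zone mapping from head yaw to a motor signal; the threshold value and signal characters are placeholders for illustration, not our tuned constants:

```python
YAW_DEADZONE_DEG = 10.0  # hypothetical threshold; a tunable constant in practice

def direction_signal(yaw_deg: float) -> bytes:
    """Map head yaw to a motor signal, with a center band that stops the motor."""
    if yaw_deg > YAW_DEADZONE_DEG:
        return b"R"   # facing right: move right
    if yaw_deg < -YAW_DEADZONE_DEG:
        return b"L"   # facing left: move left
    return b"C"       # near center: send the stop signal
```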

Something I’ve been considering with Olivia is how to improve the accuracy of our locking gesture. Today I had a friend with thick glasses test the program, and it had trouble detecting the three blinks. We might consider a hand gesture or head nod as an alternative to this.

Over the next week I will continue testing each stage of the calibration process and continue making refinement notes (which I am tracking in a document), so we can revisit them when we do end-to-end user testing after this next week. Some examples of these notes are:

-Send floats over the serial port rather than ints (see the sketch below)

-Consider the thresholding for the first head movement phase

-Decrease the threshold for blink detection

And so on. My goal is to finish the testing by the end of the week, and hopefully clean up the code a little so it’ll be more readable to my teammates.
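As an example of the first note, here is a minimal sketch of sending floats rather than ints, assuming the single marker-delimited message format described in the 3/26 reports below; the two-decimal precision is an arbitrary choice:

```python
def make_angle_message(pitch: float, yaw: float) -> bytes:
    """Pack pitch and yaw as floats into one marker-delimited message."""
    # Sending formatted text keeps the Arduino-side parsing simple (atof per field).
    return f"<{pitch:.2f},{yaw:.2f}>".encode("ascii")

# Truncating to ints, as before, would lose the sub-degree precision here.
print(make_angle_message(12.57, -3.24))  # b'<12.57,-3.24>'
```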

Currently I’m on schedule for the calibration, although the testing and refinement phases have been merging into each other more than expected. The LiDAR just arrived, and my plan as of now is to integrate it next weekend, after I walk my teammates through the current calibration code. This is mainly so any roadblocks with the LiDAR don’t jeopardize the demo, since we’ll be doing all the preparation for that this week.

Team Status Report for 4/2

The past week, we successfully integrated the entire pipeline. This means the system can convert the movement of a user’s head into the movement of a motor. The next step is to mount the projector on the motor and conduct a user study to determine how we can tune our system to provide the best user experience. This will involve gathering 10 people and asking them a range of questions after they test our system. This will be our biggest task for the rest of the month; it is essential that our system achieves positive user satisfaction by the end of this project. We will be taking our peers’ advice very seriously and altering our implementation as needed. We have also created a Trello board to organize all of our tasks for the rest of the semester, since from now on we’ll be working closely with the full system implementation and need to maintain good communication within the team.
The main risk is that we may not be able to meet all of the usability requests from our participants. If this happens, we will triage, prioritizing the requests we consider to have the largest impact.
Additional changes were made to the calibration program, with the state machine redrawn and much of the code rewritten to handle the new data flow. We’re now building and refining the system bit by bit with preliminary user tests, and keeping refinement notes so we can adapt to these changes.

Olivia’s Status Report for 3/26

This past week, I have been refining the head pose estimation program. I have also begun implementing the detection of eye blinks. Our idea is that 3 eye blinks within 3 seconds will activate a lock or unlock gesture. “Locking” will pause the movement of the motor while “unlocking” will allow the motor to follow the movement of a person’s head again.
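The detector is still in progress, but to show the counting logic, here is a minimal sketch that assumes we already have a per-frame eye openness value, such as an eye aspect ratio (EAR) computed from the face landmarks; the class name, threshold, and EAR source are illustrative assumptions, not the actual implementation:

```python
import time

BLINKS_TO_TOGGLE = 3   # three blinks...
WINDOW_S = 3.0         # ...within three seconds
EAR_THRESHOLD = 0.2    # hypothetical; below this the eye counts as closed

class BlinkToggle:
    """Count blinks from a per-frame eye-openness stream and toggle the lock."""

    def __init__(self):
        self.blink_times = []
        self.eye_closed = False
        self.locked = False

    def update(self, ear: float) -> bool:
        """Feed one EAR sample per frame; returns the current locked state."""
        now = time.monotonic()
        if ear < EAR_THRESHOLD and not self.eye_closed:
            self.eye_closed = True            # eye just closed
        elif ear >= EAR_THRESHOLD and self.eye_closed:
            self.eye_closed = False           # eye reopened: one full blink
            self.blink_times.append(now)
        # Only keep blinks inside the rolling three-second window.
        self.blink_times = [t for t in self.blink_times if now - t <= WINDOW_S]
        if len(self.blink_times) >= BLINKS_TO_TOGGLE:
            self.locked = not self.locked     # lock <-> unlock
            self.blink_times.clear()
        return self.locked
```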

Aside from that, I’ve run into quite a few non-technical difficulties. My laptop and iPad broke, which left me unable to work on the program for quite a few days. (Thank goodness all my code was backed up on GitHub.) Additionally, I was told the camera I ordered had been delivered, and my plan this week was to connect it to the rest of the system. However, when I picked up the package, there was a diode inside instead of the camera. I discussed the issue with ECE Receiving and was told the camera should be here soon. My plan for the next week is to connect the camera and the head pose estimation program to the rest of the project and have an initial prototype complete. This way, we can begin user testing! I also plan to finish the blink detection program this week.

Team Status Report for 3/26

This week, we were able to finish some of the last pieces we need for an end-to-end pipeline. The circuit for our motor-to-Arduino connection was completed, the serial-port code for the calibration translation was rewritten and debugged, and our computer vision pipeline now has a non-camera display version and a locking feature in progress. Once our members can get together on Monday, we should be able to put our parts together and begin debugging the entire pipeline, with the goal of presenting it on Wednesday.

The largest change was to the internal design of the calibration system. Instead of sending values over the serial port one at a time, the byte stream for the user’s pitch and yaw is now sent as one string, which the Arduino then parses. This design requires more complex code on the Arduino side, but makes our system more efficient and less error-prone.
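To make the format concrete, here is a minimal Python sketch of the sender side, with a stand-in parser showing what the Arduino does in C; the angle-bracket markers and comma separator follow the start/end-marker scheme in Isabel’s report below, though the exact characters are an assumption:

```python
def encode_angles(pitch: int, yaw: int) -> bytes:
    """Combine pitch and yaw into one marker-delimited message."""
    return f"<{pitch},{yaw}>".encode("ascii")

def parse_angles(msg: bytes) -> tuple[int, int]:
    """Stand-in for the Arduino-side parsing: strip markers, split on the comma."""
    pitch_str, yaw_str = msg.decode("ascii").strip("<>").split(",")
    return int(pitch_str), int(yaw_str)

assert parse_angles(encode_angles(12, -7)) == (12, -7)
```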

The biggest risk currently is that unanticipated problems may pop up as we put the pipeline together on Monday, but our team is expecting this, and we can each set aside time to debug the different problems that arise. Once we get through this process, we can begin gathering users for usability testing! Additionally, we are still waiting for our camera and LiDAR to arrive; the camera delivery had an issue where the camera wasn’t in the package, so we need to wait a little longer before integrating these tools. Luckily we don’t need them to debug the full system, so this doesn’t put us behind schedule, but we may need to put in some extra hours once the tools come in to learn how to read their data and feed it into our program.

Isabel’s Status Report for 3/26

This week was also fairly productive. While I thought I was facing a single issue with the serial port last week when creating the calibration pipeline, I was in fact facing a set of issues. When I got the serial port running, the bytes that came back would arrive slightly out of order, or would seemingly at random be either what I was expecting or a disorganized mishmash of the codes I was using to signify ‘right’ (720).

I decided to debug it more closely using a test file, ‘py_connectorcode.py’. I researched the serial port bugs, read through Arduino forums, and found this useful resource, https://forum.arduino.cc/t/serial-input-basics/278284/2, about creating a non-blocking read function with start and end markers to prevent the data from overwriting itself. The non-blocking element is important so that the functions writing angles to the motor don’t stall while waiting for the serial port to finish. I also decided to rework my design to send only one pre-compiled message per round (the person’s yaw and pitch), and to print only one line back from the Arduino, to clean up the messages.
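The forum technique is written for the Arduino side; here is a minimal Python sketch of the same start/end-marker idea in the receiving direction, assuming ‘<’ and ‘>’ as markers (a placeholder choice) and a pySerial connection:

```python
import serial  # pySerial

class MarkedReader:
    """Non-blocking serial reader using '<' and '>' start/end markers,
    so partial or overlapping writes can't corrupt a message."""

    def __init__(self, ser: serial.Serial):
        self.ser = ser
        self.buf = bytearray()
        self.in_message = False

    def poll(self) -> str | None:
        """Consume buffered bytes; return one complete message or None.
        Never blocks, so the motor-writing loop keeps running."""
        while self.ser.in_waiting:            # only read what's already buffered
            byte = self.ser.read(1)
            if byte == b"<":
                self.buf.clear()              # a new message overrides a partial one
                self.in_message = True
            elif byte == b">" and self.in_message:
                self.in_message = False
                return self.buf.decode("ascii")
            elif self.in_message:
                self.buf += byte
        return None                           # no complete message yet
```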

The calibration code is complete, but still needs to be integrated with the motor code before we can test it. Currently our team plan is to put the pieces together on Monday, so I’ll have this code in the head_connector.py file by the end of Sunday. Next week I hope to help the rest of the team debug the code, specifically the translation, before coming up with ideas for and most likely debugging the initial calibration process. Currently I am on schedule with the calibration process, but slightly behind with the LiDAR, since I was planning to integrate it this week but couldn’t because it hasn’t arrived yet. I’ve already researched a tutorial on how to assemble it, so once it arrives I have a plan, and I can make extra time in the lab to get that circuit built.


Isabel’s Status Report for 3/19

This week was fairly productive, as we are beginning to build our full pipeline, with the goal of finishing it next week. Most of my hours went toward designing a state machine for the Arduino program, which needs to switch through different parts of the code for the different calibration phases (user height, then left and right measurements). For these phases, I’m using while loops with a timeout mechanism: once the user stops moving the projector around during one calibration phase, it automatically locks in and switches to the next phase. During each phase, the user moves the projector around with their head, which signals the motor to move at a flat rate in the direction the user is facing. This mechanism may be redesigned later for usability, but for our preliminary tests it should streamline well with the current computer vision. I’ve been commenting ‘//TUNE’ next to constants we may be able to tune once we begin testing our code. I also ordered our LiDAR and did some research on how I will hook it up once it comes in.
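The real state machine lives in the Arduino code, but here is a minimal Python sketch of the timeout idea; the phase names, timeout length, and the get_user_signal stand-in are placeholders for illustration:

```python
import time

PHASES = ["height", "left", "right"]  # hypothetical phase names
TIMEOUT_S = 5.0                       # a '//TUNE' constant in practice

def run_calibration(get_user_signal) -> None:
    """Advance to the next phase once the user stops moving for TIMEOUT_S.

    `get_user_signal` is a stand-in that returns a movement direction,
    or None when the user is holding still."""
    for phase in PHASES:
        last_movement = time.monotonic()
        while time.monotonic() - last_movement < TIMEOUT_S:
            direction = get_user_signal()
            if direction is not None:
                last_movement = time.monotonic()  # user is still adjusting
                # ...move the motor at a flat rate in `direction`...
        print(f"Phase '{phase}' locked in.")      # timed out: lock and advance
```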
Currently I am right on schedule with the calibration, as long as I continue at this pace and finish the first draft of the calibration code within the next week. I’m aiming to at least have a usable version by Wednesday. The main challenge isn’t writing the code, since I’ve already designed it; rather, I’m currently struggling to debug the serial port, since it doesn’t seem to be reading back the angles we’re writing over it. If I’m still stuck on that feature on Monday, I might ask a TA who has used pySerial whether they have any input on why it isn’t working. I am slightly behind schedule for the LiDAR because it might take a while to come in, but my mitigation would be manually setting the wall distance for now.
My main deliverables for next week will be the calibration program and some communication designs for my teammates, since we’ll all be working very closely with this calibration code in the future.

Team Status Report for 3/19

There have been a lot of updates in the past week. We have an initial gaze estimation program set up that can send head pose data over to an Arduino. The Arduino then runs an initial program to calibrate and translate this head pose data into proper motor movements. Currently, we are focusing on moving the projection horizontally. Once this is working, we will add and integrate the other directions.

We are still working on integrating the Arduino with the motor. We have been testing various motor speeds. One of our main goals is to ensure the movement of the projector is smooth and provides an enjoyable user experience. We are aiming for the motor speed to follow a parabolic profile so the movement of the projector from point A to point B is not jerky.
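As an illustration of the idea, here is a minimal sketch of a parabolic speed profile, zero at both endpoints and peaking mid-move; the function name, duration, and units are placeholders (the real ramp runs on the Arduino):

```python
def parabolic_speed(t: float, duration: float, peak_speed: float) -> float:
    """Speed at time t for a move of the given duration: zero at both
    endpoints and maximal halfway through, so starts and stops aren't jerky."""
    if not 0.0 <= t <= duration:
        return 0.0
    x = t / duration                          # normalized time in [0, 1]
    return peak_speed * 4.0 * x * (1.0 - x)   # parabola peaking at x = 0.5

# Example: a 2-second move peaking at 100 steps/s.
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(t, parabolic_speed(t, 2.0, 100.0))  # 0.0, 75.0, 100.0, 75.0, 0.0
```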

We aim to have our initial prototype complete by Sunday, March 27th. Following this, we will gather roughly 5 individuals to test our system and provide feedback. We will then adjust the project based on this feedback and repeat the process. A large part of our project revolves around designing the smoothest, most satisfying experience possible for our users. We also ordered a LiDAR and camera this week.

Olivia’s Status Report for 3/19

At the beginning of the week, I finalized an initial gaze estimation program. Attached are two photos showing me moving my face around. The green face mesh marks the different landmarks that MediaPipe identifies. The blue line extending from my nose marks my estimated head pose (which the projection should eventually be aligned with). This estimated head pose has corresponding rotation and translation vectors. The head pose estimation file can connect to the Arduino and send data to it. Currently, we are only sending the head rotation about the y-axis (“yaw”) to the Arduino. Once we get this working with the motor and projector, we intend to send and integrate the rest of the data.
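For reference, here is a minimal sketch of this kind of pipeline (MediaPipe Face Mesh landmarks fed into OpenCV’s solvePnP); the landmark indices, 3D model points, and approximate camera intrinsics are illustrative assumptions rather than the program’s exact values:

```python
import cv2
import numpy as np
import mediapipe as mp

# Landmark indices commonly used for head pose with MediaPipe Face Mesh
# (nose, chin, eye corners, mouth corners); the program's exact set may differ.
POSE_IDXS = [1, 199, 33, 263, 61, 291]
# Rough 3D reference points for those landmarks in an arbitrary face frame.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),        # nose tip
    (0.0, -63.6, -12.5),    # chin
    (-43.3, 32.7, -26.0),   # left eye outer corner
    (43.3, 32.7, -26.0),    # right eye outer corner
    (-28.9, -28.9, -24.1),  # left mouth corner
    (28.9, -28.9, -24.1),   # right mouth corner
], dtype=np.float64)

def estimate_yaw(frame_bgr, face_mesh) -> float | None:
    """Return head yaw in degrees from one frame, or None if no face is found."""
    h, w = frame_bgr.shape[:2]
    result = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    lms = result.multi_face_landmarks[0].landmark
    image_points = np.array(
        [(lms[i].x * w, lms[i].y * h) for i in POSE_IDXS], dtype=np.float64)
    # Approximate camera intrinsics from the image size (no calibration).
    cam = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, cam, None)
    if not ok:
        return None
    rmat, _ = cv2.Rodrigues(rvec)             # rotation vector -> matrix
    return float(np.degrees(np.arcsin(-rmat[2, 0])))  # rotation about the y-axis

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
```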

My personal goal for the next week is to refine the angle estimation and make it more robust to small head movements. I also ordered a camera that should be able to handle facial recognition in dim lighting, so I plan to integrate it into the system within the next week as well.

Rama’s Status Report for 2/26

My primary focus has been on the design report and coming up with possible designs for the system as well as a more complete list of parts needed. I researched different projector mounts and stands, as well as robotics materials. I decided that we should secure the projector to a plate that is attached to the motor. From here, we can add attachments or straps to the plate to better secure the projector.

There was an issue with ordering the motor off of Amazon, so we had to buy it directly from the supplier. It just came in, so I will be able to pick it up Monday morning. I plan on testing its functionality and motor speeds and writing the base functions we will need. Once the connector code is done, I can test these functions with hardcoded inputs sent through pySerial.
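A minimal sketch of that kind of hardcoded-input test might look like the following; the port name, baud rate, message format, and test angles are all placeholder assumptions:

```python
import time
import serial  # pySerial

# Hypothetical port name; hardcoded yaw angles stand in for real head pose data.
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as ser:
    time.sleep(2)                            # opening the port resets the Arduino
    for yaw in (-30, -15, 0, 15, 30):
        ser.write(f"<0,{yaw}>".encode("ascii"))
        time.sleep(1)                        # give the motor time to respond
        print(ser.readline())                # echo whatever the Arduino prints back
```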