Shengxi’s Status Report for March 8th

What did you personally accomplish this week on the project?
This week, I worked on integrating my reconstruction pipeline onto the Jetson and scoped out the most efficient approach for the rendering pipeline. I also focused on motion tracking to ensure that the rendering stays aligned with the user’s face with minimal drift. Specifically, I refined the alignment of 3D facial landmarks with the face model and calibrated the coordinate transformations using OpenCV and AprilTag. Additionally, I implemented real-time head motion tracking using Perspective-n-Point (PnP) pose estimation to capture both rigid and non-rigid transformations, ensuring that AR filters remain correctly positioned.
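
As a minimal sketch of the PnP step (the function and variable names here are illustrative, not our exact implementation; it assumes 2D landmarks have already been matched to points on the canonical face model):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    // objectPoints: 3D landmark positions on the canonical face model;
    // imagePoints:  the corresponding 2D landmarks detected in the frame.
    cv::Mat estimateHeadPose(const std::vector<cv::Point3f>& objectPoints,
                             const std::vector<cv::Point2f>& imagePoints,
                             const cv::Mat& cameraMatrix,
                             const cv::Mat& distCoeffs) {
        cv::Mat rvec, tvec;
        // Solve for the rigid transform that maps model points onto the image.
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                     rvec, tvec, /*useExtrinsicGuess=*/false,
                     cv::SOLVEPNP_ITERATIVE);

        // Convert the rotation vector to a 3x3 matrix for the renderer.
        cv::Mat R;
        cv::Rodrigues(rvec, R);

        cv::Mat pose = cv::Mat::eye(4, 4, CV_64F);
        R.copyTo(pose(cv::Rect(0, 0, 3, 3)));
        tvec.copyTo(pose(cv::Rect(3, 0, 1, 3)));
        return pose;  // 4x4 camera-from-model transform
    }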

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
My progress is on schedule. The foundation for rendering and motion tracking is in place, and next week, I will move on to implementing the OpenGL rendering.

What deliverables do you hope to complete in the next week?
Next week, I plan to complete the OpenGL-based rendering pipeline. This includes implementing real-time texture blending using OpenGL shaders to seamlessly overlay AR effects onto the user’s face. Additionally, I will refine motion tracking by further improving the hybrid approach for rigid and non-rigid motion estimation, ensuring robustness against rapid movements and partial occlusions.
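
The fragment shader below is a minimal sketch of the blending I have in mind, not the final implementation; the uniform names (uFace, uFilter, uOpacity) are placeholders, and the real shader will also handle the landmark-driven UV mapping:

    // Rough sketch of the blending fragment shader (GLSL), stored as a
    // C++ string for compilation at startup.
    const char* kBlendFrag = R"(
    #version 330 core
    in vec2 vUV;
    out vec4 fragColor;

    uniform sampler2D uFace;    // live camera frame
    uniform sampler2D uFilter;  // AR filter texture, premapped to face UVs
    uniform float uOpacity;     // global blend strength in [0, 1]

    void main() {
        vec4 face = texture(uFace, vUV);
        vec4 filt = texture(uFilter, vUV);
        // Standard alpha blending, scaled by the global opacity.
        float a = filt.a * uOpacity;
        fragColor = vec4(mix(face.rgb, filt.rgb, a), 1.0);
    }
    )";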

Team’s Status Report for March 8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One of the most significant risks is environment compatibility. Each of us is currently coding independently, and some members have to work from the Mac ecosystem, so running our software on the Jetson may present unforeseen challenges, such as driver conflicts or performance limitations.
Mitigation: We will rotate access to the Jetson among team members so that everyone can verify their module runs on it, ensuring smooth integration before full-system testing.

Another, more minor risk is performance bottlenecks: 3D face modeling and gesture recognition involve computationally expensive tasks, which may slow real-time performance.
Mitigation: We are each trying different optimizations, such as SIMD vectorization, and evaluating trade-offs between accuracy and efficiency to ensure the best performance within the required frame-rate bounds.
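
To illustrate the kind of SIMD optimization we mean, here is a hypothetical inner loop using ARM NEON intrinsics, which the Jetson's CPU supports (the function and buffer names are made up for this sketch):

    #include <arm_neon.h>
    #include <cstddef>

    // Hypothetical example: blend two float buffers four lanes at a time,
    // the kind of inner loop that shows up in image processing.
    void blend_f32(const float* a, const float* b, float* out,
                   std::size_t n, float alpha) {
        float32x4_t va_alpha = vdupq_n_f32(alpha);
        float32x4_t vb_alpha = vdupq_n_f32(1.0f - alpha);
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);
            float32x4_t vb = vld1q_f32(b + i);
            // out = a * alpha + b * (1 - alpha), four elements per step
            vst1q_f32(out + i,
                      vmlaq_f32(vmulq_f32(vb, vb_alpha), va, va_alpha));
        }
        for (; i < n; ++i)  // scalar tail for lengths not divisible by 4
            out[i] = a[i] * alpha + b[i] * (1.0f - alpha);
    }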

One risk we already encountered was uploading code to the Arduino. We had anticipated that buying the materials as instructed would leave us ready to program the board. However, we found out that the Arduino Pro Mini has no onboard USB interface, so code cannot be uploaded to it directly; with our leftover budget, we bought 2 USB-to-serial adapters for around $9 so that we can upload the code.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The overall block diagram remains unchanged; only a few details within the software implementation of the pipeline are still being tested (e.g., which exact OpenCV technique we use in each module).

However, we will meet next week to go through our requirements again, to make sure that the performance bottlenecks described above can be mitigated within our test requirements, or to loosen the requirements slightly to ensure a smooth user experience with the computing power we have.

Updates & Schedule Changes

So far we are on schedule. Some changes have been made: UI development has been pulled forward since the hardware parts have not arrived yet. In terms of progress, we are confident we will reach our system integration deadline in time. We will also synchronize between modules weekly to prevent any delays in final integration.

Anna’s Status Report for March 8

What did you personally accomplish this week on the project?

I am working on the setup for the UI, more specifically, generating the build files and building (step 5, the last step: https://github.com/kevidgel/usar-mirror). So far, I have completed all of the previous steps (steps 1-4) and verified that OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/0_index.md#compiling-and-running-openpose-from-source) runs successfully, as shown in the image below.

Right now, I am having trouble with the CMakeLists.txt configuration, which expects CUDA to be installed. I have confirmed with Steven that we will not be using CUDA, so I will ask him how to build without it, since the build still fails without CUDA even after silencing the flags.
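
If it helps, I believe OpenPose's CMake setup exposes a GPU_MODE option that can be set to CPU_ONLY, which should skip the CUDA requirement entirely; something along these lines may work (untested on my machine, and I will confirm the exact flags with Steven):

    cd openpose/build
    cmake -DGPU_MODE=CPU_ONLY ..
    make -j$(nproc)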

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am slightly behind: I still need to solder my PCB with the parts I received the day before spring break, and I am still waiting on my USB-to-serial adapter to upload my code to the Arduino. I am also a little behind on setting up the UI, as I plan to work on the UI after assembling the camera rig. In parallel, I will write and test the Arduino code.

What deliverables do you hope to complete in the next week?

I hope to at least build my camera rig and finish setting up my environment so that I can get started on the UI. Then, I plan to write and test my Arduino code and integrate it with the gesture recognition, along the lines of the sketch below.
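
As a rough placeholder for that integration, something like the following serial command handler is what I have in mind for the Arduino side (the command bytes and pin assignment are hypothetical until we finalize the hardware design):

    // Hypothetical Arduino sketch: listen for one-byte commands from the
    // gesture-recognition host over serial and toggle an output pin.
    const int kOutputPin = 9;  // placeholder pin assignment

    void setup() {
        pinMode(kOutputPin, OUTPUT);
        Serial.begin(9600);  // must match the USB-to-serial adapter settings
    }

    void loop() {
        if (Serial.available() > 0) {
            char cmd = Serial.read();
            if (cmd == '1') digitalWrite(kOutputPin, HIGH);    // gesture "on"
            else if (cmd == '0') digitalWrite(kOutputPin, LOW); // gesture "off"
        }
    }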

Steven’s Status Report for March 8

What did you personally accomplish this week on the project?

I worked on integrating the C++ API for OpenPose into our application, and did some fine-tuning for performance and accuracy. Keypoints are now available to use in our application for eye tracking and gesture control. I also did some research on gesture recognition algorithms. I think a good starting point is having detection based purely on the velocity of a keypoint (e.g., the left hand moving quickly to the right).
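
As a sketch of that starting point (the Keypoint struct and the thresholds here are illustrative, not our actual OpenPose types or tuned values):

    #include <cmath>

    // Illustrative velocity-based gesture check: flag a "swipe right" when
    // a keypoint (e.g., the left wrist) moves right faster than a threshold.
    struct Keypoint { float x, y, confidence; };

    bool isSwipeRight(const Keypoint& prev, const Keypoint& curr,
                      float dtSeconds, float minConfidence = 0.3f,
                      float minSpeedPxPerSec = 800.0f) {
        if (prev.confidence < minConfidence || curr.confidence < minConfidence)
            return false;  // ignore low-confidence detections
        float vx = (curr.x - prev.x) / dtSeconds;
        float vy = (curr.y - prev.y) / dtSeconds;
        // Require mostly-horizontal, fast rightward motion.
        return vx > minSpeedPxPerSec && std::fabs(vy) < 0.5f * vx;
    }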

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Roughly on schedule. I think that with OpenPose now integrated into the application, developing gesture control should be straightforward.

What deliverables do you hope to complete in the next week?

Complete the gesture control algorithm. Also, I have yet to compile the project on the Jetson, so I plan to do that as well.