Shengxi’s Status Report for Feb 8th

Accomplishments This Week:

  • Planning and Research:

The flow diagram illustrates the high-level process for a 3D reconstruction pipeline using an Intel RealSense camera and side webcams. The process begins with a one-time synchronization and calibration of the RealSense camera with the webcams, so that we know each webcam's pose relative to the RealSense and can later render the reconstructed model from their perspectives. This step ensures that all cameras are spatially and temporally aligned using calibration targets such as checkerboard or AprilTag patterns. The goal is to establish a unified coordinate system across all devices to facilitate accurate data capture and reconstruction.
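
As a rough illustration, the sketch below shows how the checkerboard-based extrinsic calibration between the RealSense and one webcam could look in OpenCV. The 9x6 board, 25 mm square size, and pre-computed per-camera intrinsics are assumptions for the sketch, not our final settings.

```python
import cv2
import numpy as np

# Checkerboard geometry (assumed: 9x6 inner corners, 25 mm squares).
PATTERN = (9, 6)
SQUARE_MM = 25.0

# 3D corner positions in the board's own coordinate frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

def find_corners(gray):
    """Detect and subpixel-refine checkerboard corners in a grayscale image."""
    ok, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not ok:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    return cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)

def calibrate_pair(rs_grays, web_grays, K_rs, d_rs, K_web, d_web, size):
    """Estimate the webcam's pose relative to the RealSense color camera
    from simultaneously captured checkerboard views."""
    obj_pts, rs_pts, web_pts = [], [], []
    for g_rs, g_web in zip(rs_grays, web_grays):
        c_rs, c_web = find_corners(g_rs), find_corners(g_web)
        if c_rs is not None and c_web is not None:
            obj_pts.append(objp)
            rs_pts.append(c_rs)
            web_pts.append(c_web)
    # Fix the per-camera intrinsics; solve only for the rotation R and
    # translation T that map RealSense coordinates into the webcam's frame.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, rs_pts, web_pts, K_rs, d_rs, K_web, d_web, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T
```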

Following calibration, the depth capture phase begins, where the RealSense camera streams depth frames. The captured data is then processed with DynamicFusion, a technique that performs non-rigid surface reconstruction. This step updates and refines the 3D face model in real time, accommodating facial deformations and changes.
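
For reference, a minimal capture loop with the official pyrealsense2 bindings might look like the following; the 640x480 @ 30 fps stream profile is an assumption.

```python
import numpy as np
import pyrealsense2 as rs

# Configure the RealSense for aligned depth + color streaming.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# z16 depth values are multiples of the depth scale (in meters).
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
align = rs.align(rs.stream.color)  # register depth onto the color frame

try:
    while True:
        frames = align.process(pipeline.wait_for_frames())
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        # uint16 depth in sensor units; multiply by depth_scale for meters.
        depth_m = np.asanyarray(depth.get_data()) * depth_scale
        rgb = np.asanyarray(color.get_data())
        # ... hand depth_m off to the DynamicFusion stage ...
finally:
    pipeline.stop()
```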

Once the face model is processed, the pipeline evaluates whether the 3D face model contains holes or incomplete areas. If gaps are detected, the system loops back to DynamicFusion processing to refine the model further and fill in the missing parts. If the model is complete, the pipeline proceeds to facial movement tracking, where the system shifts focus to monitoring the user's facial movements and angles across frames. This stage is comparatively cheap: the completed face moves rigidly between frames, so only a single 6DoF transform needs to be estimated per frame.
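
A sketch of that per-frame rigid tracking step, here using Open3D's point-to-plane ICP as one possible implementation; the 1 cm correspondence threshold and meter units are assumptions.

```python
import numpy as np
import open3d as o3d

def track_rigid_pose(model_pcd, frame_pcd, prev_pose):
    """Estimate the single 6DoF head pose for the current frame by rigidly
    aligning the completed face model to the live depth point cloud.

    Point-to-plane ICP is seeded with the previous frame's pose, so only
    the small incremental head motion has to be recovered each frame."""
    # Point-to-plane ICP needs normals on the target cloud.
    frame_pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, frame_pcd,
        max_correspondence_distance=0.01,  # 1 cm gating, assuming meters
        init=prev_pose,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 homogeneous 6DoF transform

# Per frame, starting from pose = np.eye(4):
#   pose = track_rigid_pose(face_model, depth_cloud, pose)
```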

Finally, the reconstructed model is aligned with the side webcams during the 3D model alignment step, using the 6DoF transform we computed. This ensures the resulting AR overlay stays accurately anchored to the user's face.
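
A minimal sketch of this alignment step, combining the tracked head pose with the calibrated webcam extrinsics and projecting model points into the webcam image via cv2.projectPoints; all variable names are illustrative.

```python
import cv2
import numpy as np

def project_to_webcam(model_pts, head_pose, R_web, T_web, K_web, dist_web):
    """Project reconstructed face-model vertices into a side webcam image.

    model_pts : (N, 3) model vertices in the RealSense frame at capture time
    head_pose : 4x4 rigid 6DoF transform from the tracking step
    R_web, T_web : webcam extrinsics from the one-time calibration
    K_web, dist_web : webcam intrinsics and distortion coefficients
    """
    # Move the model to the head's current position in RealSense coordinates.
    homo = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    pts_rs = (head_pose @ homo.T).T[:, :3]
    # Re-express the points in the webcam frame and project with its intrinsics.
    rvec, _ = cv2.Rodrigues(R_web)
    img_pts, _ = cv2.projectPoints(pts_rs, rvec, T_web, K_web, dist_web)
    return img_pts.reshape(-1, 2)  # pixel coordinates for anchoring the overlay
```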

  • Dynamic Fusion Investigation:
    • Discovered an OpenCV package for DynamicFusion that can be used for non-rigid surface tracking in our system.
    • Began reviewing the relevant documentation and testing standalone implementations (a usage sketch appears after this list).
  • Calibration Code Development:
    • Wrote the AprilTag and checkerboard-based calibration code.
    • Cannot fully test it yet, as the RealSense hardware is required for debugging and system integration (still waiting on the RealSense camera).
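
For the DynamicFusion item above, a minimal usage sketch follows. It assumes an opencv-contrib build with the experimental dynafu module enabled; the Python class names mirror the C++ cv::dynafu::DynaFu interface and may differ between builds.

```python
import cv2

# ASSUMPTION: requires an opencv-contrib build with the experimental
# dynafu module; these names follow the C++ cv::dynafu::DynaFu
# interface and may differ between builds.
params = cv2.kinfu_Params.defaultParams()  # DynaFu reuses the KinFu params
fusion = cv2.dynafu_DynaFu.create(params)

def fuse_frame(depth_u16):
    """Integrate one 16-bit depth frame into the non-rigid fusion volume.

    update() returns False when tracking fails, in which case the volume
    is reset and the caller should restart integration."""
    if not fusion.update(depth_u16):
        fusion.reset()
        return False
    return True

# Once the model is hole-free, export the fused surface for the
# rigid-tracking and alignment stages.
points, normals = fusion.getCloud()
```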

Project Progress:

    • Slightly behind due to missing hardware required for implementation (I need the RealSense to know the input format of the depth data and to integrate DynamicFusion), but I was able to complete the design and most of the implementation.
    • I have tried using Blender to create scenes with a depth map and RGB image to simulate RealSense data for my program (a loading sketch appears at the end of this report). I currently have a working version, but it is slower than the requirement, taking a few seconds per frame.
    • Completed the camera calibration code, but likewise cannot verify that it works due to the missing hardware.
    • Another concern is that OpenCV's DynamicFusion package works locally, but the environment setup is challenging and might be hard to migrate to the Jetson.
  • Actions to Catch Up:
    • Continue refining the non-rigid transform calculation process using DynamicFusion.
    • The RealSense is now ready for pickup; I will integrate its measurements into my code.
    • I also need to evaluate how necessary DynamicFusion is, since it adds a significant delay to the 3D reconstruction pipeline.
  • By next week, I hope to run the RealSense with my code to validate initial results. By the next status report, I should be able to attach images of a 3D reconstructed model of my face produced by the pipeline I wrote.
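
Finally, referring back to the Blender simulation above, here is a sketch of how the rendered frames could be converted into RealSense-style inputs. It assumes the depth is exported as an OpenEXR Z pass in meters; the file names and 1 mm depth unit are illustrative.

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2
import cv2
import numpy as np

def load_blender_frame(rgb_path, depth_exr_path, depth_scale=0.001):
    """Load a Blender-rendered RGB image and Z-pass depth map, converting
    them into RealSense-style frames: BGR uint8 color plus uint16 depth
    in sensor units (depth_scale meters per unit, 1 mm here)."""
    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)
    # Blender's compositor writes the Z pass in meters when saved as EXR.
    depth_m = cv2.imread(depth_exr_path, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_ANYCOLOR)
    if depth_m.ndim == 3:  # some exports replicate Z across all channels
        depth_m = depth_m[:, :, 0]
    # Quantize to uint16 to match the RealSense z16 depth format.
    depth_u16 = np.clip(depth_m / depth_scale, 0, 65535).astype(np.uint16)
    return rgb, depth_u16

# Example (hypothetical file names):
# rgb, depth = load_blender_frame("face_0001.png", "face_0001_depth.exr")
```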
