Team Status Report for March 29th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One major risk is the unreliability of gesture recognition, as OpenPose struggles with noise and temporal consistency. To address this, the team pivoted to a location-based input model, where users interact with virtual buttons by holding their hands in place. This approach improves reliability and user feedback, with potential refinements like additional smoothing filters if needed.

System integration is also behind schedule due to incomplete subsystems. While slack time allows for adjustments, delays in dependent components remain a risk. To mitigate this, the team is refining individual modules and may use mock data for parallel development if necessary.

The camera rig needs a stable stand and motion measurement features. A second version is in progress, and if stability remains an issue, alternative mounting solutions will be explored.

Finally, GPU performance issues could affect real-time AR overlays. Ongoing shader optimizations prioritize stability and responsiveness, with fallback rendering techniques as a contingency if improvements are insufficient.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Gesture-based input has been replaced with a location-based system due to unreliable pose recognition. While this requires UI redesign and new logic for button-based interactions, it improves usability and consistency. The team is expediting this transition to ensure thorough testing before integration.

Another key change is a focus on GPU optimization after identifying shader inefficiencies. This delays secondary features like dynamic resolution scaling but ensures smooth AR performance. Efforts will continue to balance visual quality and efficiency.

Provide an updated schedule if changes have occurred.

This week, the team is refining motion tracking, improving GPU performance, stabilizing the camera rig, and finalizing the new input system. Next week, focus will shift to full system integration, finalizing input event handling, and testing eye-tracking once the camera rig is ready. While integration is slightly behind, a clear plan is in place to stay on track.

Shengxi Wu’s Status Report for March 29

What did you personally accomplish this week on the project?
This week, I focused on preparing a working prototype for the interim demo. I finalized the integration of motion tracking data with the OpenGL rendering pipeline on the Jetson Orin Nano, which now supports stable AR overlays in real time. I implemented basic camera motion smoothing to reduce jitter and improve alignment between the virtual and real-world content. On the performance side, I began profiling GPU usage under different scene conditions, identifying a few bottlenecks in the current shader code.
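For reference, a minimal sketch of the kind of smoothing this involves, assuming a simple exponential moving average over the per-frame camera pose; the struct and the alpha value are illustrative, not the actual project code.

// Illustrative EMA smoothing for per-frame camera pose estimates.
// `alpha` sets the jitter/latency trade-off: lower is smoother but laggier.
#include <array>

struct PoseSmoother {
    std::array<float, 3> smoothed{0.0f, 0.0f, 0.0f};  // x, y, z offset
    float alpha = 0.25f;        // placeholder value; tuned empirically
    bool initialized = false;

    std::array<float, 3> update(const std::array<float, 3>& raw) {
        if (!initialized) {     // seed with the first measurement
            smoothed = raw;
            initialized = true;
        } else {
            for (int i = 0; i < 3; ++i)
                smoothed[i] = alpha * raw[i] + (1.0f - alpha) * smoothed[i];
        }
        return smoothed;        // feeds the AR overlay transform
    }
};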

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
I’m on schedule for the interim demo, though there’s still polishing left. Some experimental features like dynamic resolution scaling are still pending, but the core functionality for the demo is working. To stay on track, I’m prioritizing stability and responsiveness, and will continue optimizing shader code and system performance after the demo.

What deliverables do you hope to complete in the next week?
Next week, I plan to polish the demo for the presentation, focusing on smoother motion tracking, more consistent AR overlays, and better GPU performance. I also want to finish porting the refined blending strategies from macOS to the Jetson, and begin experimenting with fallback rendering techniques to handle lower-performance scenarios gracefully.

Anna’s Status Report for March 29

 

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

I wrote new code for my camera rig to test up-and-down motion, and I now have both up/down and left/right motion working. (I couldn’t upload videos, so I’m providing screenshots instead.)
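For context, a rough sketch of what the rig code does, assuming the standard Arduino Stepper library; the pin numbers, speeds, and step counts are placeholders rather than the actual wiring.

// Illustrative two-axis test sketch; pins and step counts are placeholders.
#include <Stepper.h>

const int STEPS_PER_REV = 200;                        // typical 1.8-degree stepper
Stepper verticalMotor(STEPS_PER_REV, 8, 9, 10, 11);   // up/down axis
Stepper horizontalMotor(STEPS_PER_REV, 4, 5, 6, 7);   // left/right axis

void setup() {
  verticalMotor.setSpeed(30);     // RPM; illustrative values
  horizontalMotor.setSpeed(30);
}

void loop() {
  verticalMotor.step(STEPS_PER_REV / 4);    // up a quarter turn
  delay(500);
  verticalMotor.step(-STEPS_PER_REV / 4);   // back down
  delay(500);
  horizontalMotor.step(STEPS_PER_REV / 4);  // pan right
  delay(500);
  horizontalMotor.step(-STEPS_PER_REV / 4); // pan left
  delay(500);
}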

 

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

 

I am mostly on track, though a little behind. I still have to find a way to make the camera rig stand on its own and to write code for measuring the rotation angle and distance traveled.

 

  • What deliverables do you hope to complete in the next week?

 

I hope to start building my second camera rig and to work on the UI.

 

Steven’s Status Report for March 29th

 

What did you personally accomplish this week on the project?

Gesture recognition turned out to be too unintuitive for the application, and the inputs we received from the pose recognition model were too noisy for reliable velocity estimation. So I pivoted to a location-based input model: instead of making gestures, the user moves their hand to a virtual button on the screen, which registers input if the user “holds” their hand over that button for a period of time. This is a better solution, since estimating position is a lot more reliable than estimating velocity (I don’t think OpenPose is temporally consistent), and visual buttons on the screen provide better feedback to the user.
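A minimal sketch of the dwell logic described above; the bounds, the one-second hold threshold, and the names are all illustrative.

// Illustrative dwell-button logic; thresholds and names are placeholders.
#include <chrono>

struct VirtualButton {
    float x, y, w, h;   // button bounds in screen coordinates
    std::chrono::steady_clock::time_point enterTime{};
    bool hovering = false;
    bool fired = false; // ensures one activation per dwell

    bool contains(float px, float py) const {
        return px >= x && px <= x + w && py >= y && py <= y + h;
    }

    // Call once per frame with the current hand position estimate.
    // Returns true only on the frame the hold threshold is crossed.
    bool update(float handX, float handY, double holdSeconds = 1.0) {
        auto now = std::chrono::steady_clock::now();
        if (!contains(handX, handY)) {
            hovering = false;   // hand left: reset the dwell
            fired = false;
            return false;
        }
        if (!hovering) {        // hand just entered: start the timer
            hovering = true;
            enterTime = now;
        }
        std::chrono::duration<double> held = now - enterTime;
        if (!fired && held.count() >= holdSeconds) {
            fired = true;
            return true;
        }
        return false;
    }
};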

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Behind. We’re currently supposed to be doing system integration, but I’m waiting on my partners’ subsystems and still finishing up my own. We have a little slack time for this, so I am not too worried.

What deliverables do you hope to complete in the next week?

I hope to implement input events and buttons for all the interactions we have planned as features for our project. I also hope to start testing the eye-tracking system (i.e., correcting eye level) once Anna finishes her camera rig, by sending serial commands from the program to the Arduino over USB.
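On the Jetson side, the serial link would look roughly like this; the device path, baud rate, and command format are assumptions, since we haven’t fixed the protocol yet.

// Illustrative Linux serial sketch; path, baud, and commands are placeholders.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstring>

int openArduino(const char* path) {   // e.g. "/dev/ttyUSB0"
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;

    termios tty{};
    if (tcgetattr(fd, &tty) != 0) { close(fd); return -1; }
    cfmakeraw(&tty);                  // raw 8-bit byte stream
    cfsetispeed(&tty, B9600);         // must match Serial.begin(9600)
    cfsetospeed(&tty, B9600);
    tty.c_cflag |= (CLOCAL | CREAD);  // enable receiver, ignore modem lines
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

int main() {
    int fd = openArduino("/dev/ttyUSB0");
    if (fd < 0) return 1;
    const char* cmd = "TILT +5\n";    // hypothetical command format
    write(fd, cmd, strlen(cmd));
    close(fd);
}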

Team Status Report for March 22

 

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

 

A significant risk is the two webcams not working properly. These webcams are the essence of our new feature of providing multiple viewpoints, and since the setup departs from the original design, there are a lot of things that can go wrong. Right now, there are problems with mounting the stepper motor for rotational movement of the camera, since the camera rig will be placed vertically instead of horizontally. This is being managed by using screws and other materials to keep the stepper motor secured to the mounting plate that moves up and down. The contingency plan is to purchase stepper motor mounts that can hold the motors perpendicular to the plate.

The most significant risk currently is the system integration failing. So far, everyone has been working on their tasks fairly separately (software for gesture recognition/eye tracking, software for the AR overlay, hardware for the camera rig + UI). Everyone has made significant progress and is close to the testing stage for the individual parts. However, not much testing or design has gone into how these subprojects will interface. We will discuss this in the coming weeks. Moreover, we have set aside time in the schedule for integration, which gives us ample room to make sure everything works.

Another risk is performance: the software’s compute requirements are high, and the Jetson may not be able to handle them. This was already mentioned in our last team status report, and we are currently working on it.

 

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

 

Yes, the stepper motor had to be mounted facing upward instead of sideways so that the camera can rotate horizontally while moving up and down. This change was necessary to implement our new feature (multiple viewpoints) for users who want to view their face at different angles. The change carries a risk of failure, since there is no easy way to secure the stepper motor in place, and given the camera setup and structure, the camera might have difficulty rotating. These changes require extra effort to make the new modifications work and could incur additional costs (though Anna is trying to work around them). If the motors can’t be secured with screws and other materials, we will purchase stepper motor mounts.

  • Provide an updated schedule if changes have occurred.

No major changes have occurred yet.

 

 

Anna’s Status Report for March 22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

This week, I assembled the whole camera rig and identified the parts I need to officially start testing my code. Currently, I am modifying the camera design from the original one, since ours will be placed vertically instead of horizontally. If I follow the original design, the camera won’t be able to rotate horizontally, so I have to make sure the stepper motor is positioned perpendicular to the camera rig. I also identified that I need a battery connector to connect the battery to the PCB.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am a little behind, given that I was supposed to start testing my code this week. This week, I will purchase the battery connector and start testing my code on the stepper motors even if the cameras aren’t in position yet. I will focus on assembling the camera rig itself and testing the code separately for functionality.

What deliverables do you hope to complete in the next week?

I hope to test my code, fix the bugs I anticipate, and fully build the camera rig.

Steven’s Status Report for March 22

 

What did you personally accomplish this week on the project?

The pose tracking is much more stable now: https://github.com/kevidgel/usar-mirror/commit/5f6e604137110f6559df3144245f885c7efa9c0f. I also worked on making the gesture recognition algorithm more robust to missing or implausible data.
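For context, the robustness changes amount to gating the model’s keypoints before using them; a simplified sketch follows, with placeholder confidence and jump thresholds rather than the tuned values.

// Illustrative keypoint gating; thresholds are placeholders, not tuned values.
#include <cmath>
#include <optional>

struct Keypoint { float x, y, confidence; };

class KeypointGate {
    std::optional<Keypoint> lastGood;
public:
    // Accepts a new sample, or falls back to the last good one if the
    // sample is low-confidence or jumps implausibly far between frames.
    std::optional<Keypoint> filter(const Keypoint& kp,
                                   float minConf = 0.3f,
                                   float maxJumpPx = 80.0f) {
        bool plausible = kp.confidence >= minConf;
        if (plausible && lastGood) {
            float dx = kp.x - lastGood->x;
            float dy = kp.y - lastGood->y;
            plausible = std::sqrt(dx * dx + dy * dy) <= maxJumpPx;
        }
        if (plausible) lastGood = kp;
        return lastGood;   // empty until the first good sample arrives
    }
};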

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

Behind. But I will spend some extra time this week to catch up.

What deliverables do you hope to complete in the next week?

Mainly testing gesture control. If I have time, I may also get it compiling on the Jetson.

 

Shengxi’s Status Report for March 22

What did you personally accomplish this week on the project?
This week, I continued working on the OpenGL rendering pipeline for the Jetson Orin Nano. After resolving the environment setup issues from last week, I successfully got real-time shader compilation and execution working on the device. I debugged and finalized the initial implementation of real-time texture blending using OpenGL shaders, and now the system renders correctly with basic blending. I also started integrating the rendering pipeline with motion tracking data and ran performance tests to evaluate shader throughput. On my Mac, I continued experimenting with advanced blending strategies for more realistic AR overlays, which I plan to port over to the Jetson once stable.
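The core of that blending is a small fragment shader. A simplified version is sketched below as it would be embedded for runtime compilation; the uniform names are illustrative, and a straightforward alpha “over” mix stands in for the project’s actual blend.

// Simplified blending shader, embedded as a C++ raw string; names are
// illustrative placeholders.
const char* kBlendFragSrc = R"glsl(
#version 300 es
precision mediump float;

uniform sampler2D uCameraFrame;   // live webcam texture
uniform sampler2D uOverlay;       // AR overlay with alpha channel
in  vec2 vUV;
out vec4 fragColor;

void main() {
    vec4 cam = texture(uCameraFrame, vUV);
    vec4 ovl = texture(uOverlay, vUV);
    // Standard "over" compositing: overlay alpha decides the mix.
    fragColor = vec4(mix(cam.rgb, ovl.rgb, ovl.a), 1.0);
}
)glsl";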

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?
I’m now mostly back on schedule. Getting past the earlier OpenGL setup challenges unblocked a lot of progress this week. Some shader optimization tasks may still spill into next week; I did not manage to implement everything I had planned due to other workload, but I am optimistic about getting back on track. To do so, I will continue to test and iterate quickly on the Jetson, focusing on resource-efficient rendering and responsive performance.

What deliverables do you hope to complete in the next week?
Next week, I plan to complete integration of motion tracking with the rendering pipeline and refine the AR overlay stability. I’ll also work on final shader optimizations and begin profiling GPU usage to ensure low-latency rendering on the Jetson. If time permits, I’ll explore dynamic resolution scaling or other adaptive techniques to maintain performance under varying scene complexity.
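A simple form of dynamic resolution scaling would look like the following; the thresholds, step size, and bounds are illustrative, and the scaled size would drive an offscreen framebuffer that gets upsampled to the full window each frame.

// Illustrative dynamic-resolution controller; numbers are placeholders.
struct ResolutionScaler {
    float scale = 1.0f;   // fraction of native resolution

    void update(double frameMs, double targetMs = 16.7) {
        if (frameMs > targetMs * 1.1)       // too slow: render smaller
            scale -= 0.05f;
        else if (frameMs < targetMs * 0.8)  // headroom: render larger
            scale += 0.05f;
        if (scale < 0.5f) scale = 0.5f;     // clamp to sane bounds
        if (scale > 1.0f) scale = 1.0f;
    }

    int width(int nativeW) const  { return int(nativeW * scale); }
    int height(int nativeH) const { return int(nativeH * scale); }
};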

Anna’s Status Report for March 15

 

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

I got the UI environment set up so that I can start working on the UI itself. However, I encountered a problem where ImGui is not showing up. This didn’t happen before, so I will debug it with Steven.

I also assembled my PCB so that I can use it to test the camera rig.

I received all the parts for the camera rig, so I can start assembling it and connecting the stepper motors to the PCB.

Finally, I finished writing my Arduino code. I will upload it to the Arduino and hook it up to the PCB and stepper motors to test whether it works.

 

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule. This week, I will finish assembling the camera rig, since I now have all the parts, and will test whether my Arduino code is working. I will also look into why ImGui isn’t showing up even though the build and run succeed.

 

  • What deliverables do you hope to complete in the next week?

I hope to assemble the camera rig and make progress on my code by testing and debugging it. If possible, I hope to start working on the UI.

Team Status Report for March 15th

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The most significant risk currently is the system integration failing. So far, everyone has been working on their tasks fairly separately (software for gesture recognition/eye tracking, software for the AR overlay, hardware for the camera rig + UI). Everyone has made significant progress and is close to the testing stage for the individual parts. However, not much testing or design has gone into how these subprojects will interface. We will discuss this in the coming weeks. Moreover, we have set aside time in the schedule for integration, which gives us ample room to make sure everything works.

Another risk is performance: the software’s compute requirements are high, and the Jetson may not be able to handle them. This was already mentioned in our last team status report, and we are currently working on it.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No changes currently.

Updates & Schedule Changes

The team seems to be roughly on schedule.