Team’s Status Report for 4/12

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One major risk is the unreliability of gesture recognition, as OpenPose struggles with noise and time consistency. To address this, the team pivoted to a location-based input model, where users interact with virtual buttons by holding their hands in place. This approach improves reliability and user feedback, with potential refinements like additional smoothing filters if needed.
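As an illustration, the dwell logic for a virtual button can be sketched as follows (a minimal example; the struct and field names are ours, not the project's actual code):

```cpp
#include <cassert>

// Illustrative dwell-button: the hand must stay inside the button's
// bounds for `dwellFrames` consecutive frames to register a press.
struct DwellButton {
    float x, y, w, h;   // button rectangle in screen coordinates
    int   dwellFrames;  // frames the hand must hold still inside
    int   counter = 0;  // consecutive in-bounds frames so far

    bool contains(float px, float py) const {
        return px >= x && px <= x + w && py >= y && py <= y + h;
    }

    // Feed one frame's hand position; returns true on the frame the
    // dwell threshold is reached (a "press").
    bool update(float px, float py) {
        if (contains(px, py)) {
            ++counter;
            return counter == dwellFrames;
        }
        counter = 0;  // hand left the button: reset the dwell
        return false;
    }
};
```

A smoothing filter, if needed, would simply pre-process the (px, py) samples before they reach `update`.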

System integration is also behind schedule due to incomplete subsystems. While slack time allows for adjustments, delays in dependent components remain a risk. To mitigate this, the team is refining individual modules and may use mock data for parallel development if necessary.
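If mock data becomes necessary, one lightweight pattern is to hide the recognizer behind a small interface so the UI and integration code can be developed against scripted events (a hypothetical sketch; these names are not from the project's codebase):

```cpp
#include <cassert>
#include <queue>
#include <string>

// Hypothetical interface the UI consumes, letting it be developed
// against canned events before the real recognizer is integrated.
struct GestureSource {
    virtual ~GestureSource() = default;
    virtual bool poll(std::string& event) = 0;  // false when no event pending
};

// Mock implementation replaying a scripted sequence of events.
struct MockGestureSource : GestureSource {
    std::queue<std::string> script;
    bool poll(std::string& event) override {
        if (script.empty()) return false;
        event = script.front();
        script.pop();
        return true;
    }
};
```

The real recognizer would later implement the same `GestureSource` interface, so the UI code would not need to change at integration time.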

Finally, GPU performance issues could affect real-time AR overlays. Ongoing shader optimizations prioritize stability and responsiveness, with fallback rendering techniques as a contingency if improvements are insufficient.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Gesture-based input has been replaced with a location-based system due to unreliable pose recognition. While this requires UI redesign and new logic for button-based interactions, it improves usability and consistency. The team is expediting this transition to ensure thorough testing before integration.

Another key change is a focus on GPU optimization after identifying shader inefficiencies. This delays secondary features like dynamic resolution scaling but ensures smooth AR performance. Efforts will continue to balance visual quality and efficiency.

The PCB did not exactly match the electrical components, particularly the stepper motor driver being used. We had ordered a different kind of stepper motor to suit our needs (running for long periods of time), but it required an alternative driver design. So, we made new wire connections to be able to use the stepper motor.

Provide an updated schedule if changes have occurred.

This week, the team is refining motion tracking, improving GPU performance, and finalizing the new input system. Next week, focus will shift to full system integration, finalizing input event handling, and testing eye-tracking once the camera rig is ready. While integration is slightly behind, a clear plan is in place to stay on track. We will begin integrating the camera rig that is ready while the second one is being built.

Anna’s Status Report for 4/12

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).

I found a way to mount the camera rig securely. I also got started on building the 2nd camera rig.

I also got all four stepper motors to work.

Steven and I also integrated gesture recognition with the camera rig for two stepper motors. Extending this to all four stepper motors shouldn't be a problem, since the command differs only slightly.
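For illustration, if the rig is driven by simple serial commands, going from two to four motors may only widen the accepted index range. A hypothetical parser (the `M<index>:<steps>` format is our invention here, not the project's actual protocol):

```cpp
#include <cassert>
#include <cstdio>

// Hypothetical serial command format, e.g. "M3:-200" = motor 3, 200
// steps in the negative direction. Moving from two to four motors
// only changes the valid index range, not the protocol itself.
bool parseCommand(const char* cmd, int& motor, long& steps, int numMotors) {
    int m;
    long s;
    if (std::sscanf(cmd, "M%d:%ld", &m, &s) != 2) return false;
    if (m < 1 || m > numMotors) return false;
    motor = m;
    steps = s;
    return true;
}
```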

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on track. I will finish building the 2nd camera rig this week so that I can get started on the UI.

What deliverables do you hope to complete in the next week?

I hope to finish my 2nd camera rig and work on the UI.

Anna’s Status Report for Mar29

 

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

I wrote new code for my camera rig to test up-and-down motion, and I got both the up/down and left/right motions working. (I couldn't upload videos, so I am providing screenshots of the working motions instead.)

 

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

 

I am mostly on track, though slightly behind. I still have to find a way to make the camera rig stand and to write code for measuring the rotation angle and distance.
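The core of the angle measurement is a degrees-to-steps conversion. A sketch, assuming a 200 step/rev motor with 16x microstepping (both values are placeholders for the rig's actual hardware):

```cpp
#include <cassert>
#include <cmath>

// Convert a desired rotation in degrees to stepper steps. The 200
// steps/rev and 16x microstepping defaults are assumptions; substitute
// the actual motor and driver settings.
long degreesToSteps(double degrees, int stepsPerRev = 200, int microstep = 16) {
    return std::lround(degrees / 360.0 * stepsPerRev * microstep);
}
```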

 

  • What deliverables do you hope to complete in the next week?

 

I hope to start creating my 2nd camera rig and work on the UI.

 

Team Status Report for Mar22

 

  • What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

 

A significant risk is the two webcams not working properly. These webcams are the essence of our new multiple-viewpoint feature, and because the setup deviates from the original design, there is a lot that can go wrong. Right now, there are problems with setting up the stepper motor for the camera's rotational movement, since the camera rig will be placed vertically instead of horizontally. We are managing this by using screws and other materials to keep the stepper motor secured to the mounting plate that moves up and down. The contingency plan is to purchase stepper motor mounts that can hold the stepper motors perpendicularly in place.

The most significant current risk is system integration failing. So far, everyone has been working on their tasks fairly separately (software for gesture recognition and eye tracking, software for the AR overlay, hardware for the camera rig plus the UI). Everyone has made significant progress and is close to the testing stage for the individual parts, but not much testing or design has gone into how these subprojects will interface. We will discuss this in the coming weeks. We have also set aside time in the schedule for integration, which gives us ample room to make sure everything works together.

Another risk is performance: the software's compute requirements are high, and the Jetson may not be able to handle them. This was already mentioned in our last team status report, and we are currently working on it.

 

  • Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

 

Yes, the stepper motor had to be positioned upward instead of sideways so that the camera can rotate horizontally while moving up and down. This change was necessary to implement our new multiple-viewpoint feature for users who want to view their face at different angles. The change risks failing because there is no efficient way to secure the stepper motor in place, and given the camera setup and structure, the camera might have difficulty rotating. It also requires extra effort to make the modifications work and could incur additional costs (though Anna is trying to work around them). If the motors cannot be secured with screws and other materials, we will purchase stepper motor mounts.

  • Provide an updated schedule if changes have occurred.

No major changes have occurred yet.

 

 

Anna’s Status Report for Mar22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

This week, I assembled the whole camera rig and identified the parts I need before I can officially start testing my code. Currently, I am modifying the camera design from the original one, since ours will be placed vertically instead of horizontally. If I followed the original design, the camera would not be able to rotate horizontally, so I have to make sure the stepper motor is positioned perpendicular to the camera rig. I also identified that I need a battery connector to connect the battery to the PCB.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am a little behind given that I was supposed to start testing my code this week. This week, I will purchase the battery connector and start testing my code on the stepper motors even if I don’t get the cameras in position. I will focus on assembling the camera rig itself and testing the code separately for functionality.

What deliverables do you hope to complete in the next week?

I hope to test my code and fix some bugs that I anticipate as well as fully build the camera rig.

Anna’s Status Report for Mar15

 

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours). 

I got the UI environment set up and ready to go so that I can start working on the UI. However, I encountered a problem where ImGui is not showing up. This didn't happen before, so I will have to debug it with Steven.

I also assembled my PCB so that I can use it to test the camera rig.

I also got all the parts for the camera rig so that I can start to assemble the camera rig and connect the stepper motors on the camera rig to the PCB. 

I also finished writing my Arduino code. I will upload it to the Arduino and have it hooked up with the PCB and stepper motors to test if my code works.

 

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am on schedule. This week, I will finish assembling the camera rig, since I now have all the parts, and will test whether my Arduino code works. I will also look into why ImGui isn't showing up even though the build and run succeed.

 

  • What deliverables do you hope to complete in the next week?

I hope to assemble the camera rig and make progress on my code by testing and fixing it. If possible, I hope to start working on the UI.

Team’s Status Report for March8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One of the most significant risks is environment compatibility. Each of us is currently coding independently, and some members work in the Mac ecosystem, so running our software on the Jetson may present unforeseen challenges, such as driver conflicts or performance limitations.
Mitigation: We will rotate access to the Jetson among team members to ensure smooth integration before full-system testing.

Another, more minor risk is performance bottlenecks: 3D face modeling and gesture recognition involve computationally expensive tasks, which may slow real-time performance.
Mitigation: We are each trying different tricks to optimize computation, like using SIMD, and evaluating trade-offs between accuracy and efficiency to achieve the best performance within the required frame-rate bounds.
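One concrete accuracy-for-speed trade is downscaling frames before the expensive recognition pass. A minimal sketch (plain C++ standing in for whatever OpenCV routine is actually used):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Halve a grayscale frame with a 2x2 box filter before running the
// expensive pose/recognition pass: ~4x fewer pixels per frame, at the
// cost of some landmark precision. Width and height assumed even.
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src, int w, int h) {
    std::vector<uint8_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            int sum = src[(2 * y) * w + 2 * x] + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = static_cast<uint8_t>(sum / 4);
        }
    return dst;
}
```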

One risk we faced was uploading our code onto the Arduino. We had anticipated that buying the materials as instructed would leave us ready to code and upload. However, we found out that there is no way to upload code directly to an Arduino Pro Mini, so with our leftover budget we bought two USB-to-serial adapters for around $9 so that we can upload the code.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The overall block diagram remains unchanged, and a few details within the software implementation of the pipeline are still being tested (e.g., exactly which OpenCV technique we use in each module).

However, we will meet next week to go through our requirements again, to make sure that the performance bottlenecks described above can be mitigated within our test requirements, or to loosen the requirements a little to ensure a smooth user experience with the computing power we have.

Updates & Schedule change

So far we are keeping to the schedule. Some changes have been made: UI development has been pulled forward since the hardware parts have not arrived yet. In terms of progress we are optimistic, and we expect to reach our system integration deadline in time. We will also ensure weekly synchronization between modules to prevent any delay in final integration.

A was written by Shengxi Wu, B was written by Anna Paek, and C was written by Steven Lee.

A) The product we designed provides an efficient system that allows seamless operation across various use cases inside and outside Pittsburgh. Since our system has a modular design, it should be deployable in different regions: once the camera rig is up and ready, the setup is easily reproducible by anyone with basic knowledge of our software system. Not only is this product interesting work for academia, it also adheres to industry standards for requirements like robustness and security, since we do not use the cloud and keep all storage local by default. The system thus has the potential to improve AR mirror applications, as well as AR filter applications that reuse part of our software system, in emerging markets, making it a globally viable solution.

B) Our AR mirror meets the specific needs of people from different cultural backgrounds. It will consider and respect different religious values reflected in makeup styles. For example, many Muslim women prefer natural makeup over bold styles, so the AR mirror will have a setting for softer makeup looks. There are also different beauty standards in each country: Korea, for example, favors lighter makeup for a natural look, while other cultures may prefer bolder makeup. As for eyeglasses, the mirror will account for the fact that they are worn for different reasons, whether to look more intelligent or simply for fashion, and depending on the purpose, it will provide a collection of eyeglasses suited to it. Some regions ban certain cosmetic ingredients, so the AR mirror will account for that when generating filters. The mirror will also include privacy protection so that the user's face and identity are protected.

C) One environmental aspect that our product aims to meet is affordability/reusability. Many traditional AR mirrors have proprietary, expensive parts such as transparent displays, which are hard to repair (hence producing more waste). Our AR display aims to be achievable with commonly found parts such as off-the-shelf computer monitors and webcams, so that it’s easier to repurpose used technology into this product.

Moreover, our project aims to be low-power through the use of a resource-constrained, low-power device, the Jetson Nano, to run the AR mirror's software. The use of affordable cameras and basic depth sensors, rather than costly LiDAR or 3D cameras, also helps meet that low-power goal.

Anna’s Status Report for March8

 

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

I am working on the setup for the UI, more specifically, generating build files and building (step 5- last step: https://github.com/kevidgel/usar-mirror). So far, I was able to do all the other previous steps (steps 1-4) and verified that openpose (https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/0_index.md#compiling-and-running-openpose-from-source) is running successfully as shown in the image below. 

Right now, I am having trouble with the part of CMakeLists.txt that involves installing CUDA. I have confirmed with Steven that we will not be using CUDA, so I will ask him how to build without it, since the build fails without CUDA even after silencing the flags.
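For reference, OpenPose's CMake build exposes a GPU_MODE option, and CPU_ONLY is its documented value for building without CUDA; from a fresh build directory, the invocation would look roughly like the following (worth verifying against the installation guide linked above):

```shell
mkdir -p build && cd build
cmake .. -DGPU_MODE=CPU_ONLY
make -j"$(nproc)"
```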

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am slightly behind in that I still need to solder my PCB with the parts I got the day before spring break, and I am still waiting on my USB-to-serial adapter to upload my code to the Arduino. I am also a little behind on setting up the UI, as I plan to work on the UI after assembling the camera rig. In the meantime, I will write and test the Arduino code.

  • What deliverables do you hope to complete in the next week?

I hope to at least build my camera rig and to finish setting up my environment so that I can get started working on the UI. Then, I will plan on writing and testing my Arduino code and integrating it with the gesture recognition. 

Anna’s Status Report for Feb22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

Files/photos: https://docs.google.com/document/d/1AsT0dXenHnLb7vWtu7ljc6i2_zY0NIXrijEcWs9SGDA/edit?usp=sharing

This week, I focused on setting up the user interface (UI) and preparing everything needed to start coding. I spent time looking at UIs from makeup and glasses apps like YouCam Makeup, which helped me get an idea of what the UI should look like. I also checked out some tutorials for Dear ImGui to understand how to implement the UI elements.

Steven shared the GitHub repo with the ImGui backend set up, so I just need to call the library functions in the code to create the UI elements. However, I’ve been having some trouble with generating build files and running the build process. Steven is helping me troubleshoot, and we’re hoping to get everything set up so I can start coding the UI on Monday.

Another part of the project I’m responsible for is the motorized camera control system. I ordered the parts last week, so I’m still waiting for them to arrive. Once I get the parts, I can start assembling and programming the system.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I’m a little behind schedule due to the delays in receiving the parts for the motorized camera control system and the issues I’ve had with building the Dear ImGui project. That said, I’ve been working closely with Steven to resolve the build problems, and I expect to be able to move forward with coding the UI soon. To catch up, I’ll focus on fixing the build issue and getting everything set up so I can start coding the UI by next week. Once the camera control system parts arrive, I’ll focus on assembling and programming it, so I stay on track with both tasks.

What deliverables do you hope to complete in the next week?

I hope to begin assembling the motorized camera control system and start the initial programming once the parts arrive. I also hope to begin coding the UI elements (like the camera angle and filter menus) using Dear ImGui, starting with the basic UI elements and getting them integrated into the project. 

Team Status Report for Feb15

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One of the most significant risks is jitter from the camera, which would ruin the overall experience since users would not be able to see their side profiles and other parts of their face well. To mitigate this, we are implementing a PID control loop to ensure smooth motor movement and reduce vibrations. Additionally, we are testing different mounting and damping mechanisms to isolate vibrations from the motor assembly.
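A minimal positional PID of the kind described can be sketched as follows (the gains and the velocity-command interpretation are illustrative, not tuned values for the actual rig):

```cpp
#include <cassert>
#include <cmath>

// Minimal positional PID: each update returns a velocity command that
// drives the camera angle toward the setpoint, smoothing out abrupt
// moves that cause visible jitter. Gains here are illustrative.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0, prevError = 0.0;

    double update(double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};
```

In practice the returned command would be converted to a step rate for the motor driver, and the gains tuned on the physical rig.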

Contingency plans include having a backup stepper motor with finer resolution and smoother torque, as well as a manual override mode for emergency situations.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The design for the camera control system changed, and the new design is cheaper. We changed it due to the cost and complexity of the previous design. The one we are going with requires fewer 3D-printed parts, which cuts our cost in half, and it will integrate well with the display. The change also simplifies assembly and reduces the overall weight of the system, improving portability.

The cost incurred is minimal, primarily for redesigning and reprinting certain parts. To mitigate these costs, we are using readily available components and not printing parts that we don’t need (like the stepper motor case).

Provide an updated schedule if changes have occurred.

We are behind schedule since we haven’t received the materials and equipment yet. Once we get the materials, we plan on catching up with the schedule by allocating more time for assembly and testing. We’ve also added buffer periods for unforeseen delays and assigned team members specific tasks to parallelize the work.

A: Public health, safety or welfare Considerations (written by Shengxi)

Our system prioritizes user well-being by incorporating touch-free interaction, eliminating the need for physical contact and reducing the spread of germs, particularly in shared or public spaces. By maintaining proper eye-level alignment, the system helps minimize eye strain and fatigue, preventing neck discomfort caused by prolonged unnatural viewing angles. Additionally, real-time AR makeup previews contribute to psychological well-being by boosting user confidence and reducing anxiety related to cosmetic choices. The ergonomic design further enhances comfort by accommodating various heights and seating positions, ensuring safe, strain-free interactions for all users.

B: Social Factor Considerations (written by Steven)

As a display with mirror-like capabilities, we aim to pay close attention to how our product affects body image and self-perception. We plan to make the perspective transforms accurate and the image filters reasonable, so we don't unintentionally reinforce unrealistic beauty norms or contribute to negative self-perception. This will be achieved through user testing and accuracy testing of our reconstruction algorithms. Also, one of the goals of this project is to keep the cost lower than competitors' (enforced by our limited budget of ~$600) so that lower-income communities have access to this technology.

C: Economic Factors (written by Anna)

The UsAR mirror provides a cost-efficient and scalable solution, costing no more than $600. For production, the UsAR mirror has costs in hardware, software, and maintenance/updates. It uses affordable yet high-quality cameras such as the RealSense depth camera and webcams. The RealSense depth camera lets users have filters properly aligned to a 3D reconstruction of the face, maximizing the experience while minimizing the cost. The camera control system has an efficient yet simple design that doesn't require many materials or incur significant costs. As for the software, there is no cost: it uses free, open-source libraries like OpenCV, Open3D, OpenGL, and OpenPose, and the Arduino code that controls the side-mounted webcams was developed at no cost.

For distribution, the mirror is lightweight and easy to handle and install. The mirror is a display that’s only 23.8 inches, so it is easy to carry and use as well as easy to package and ship. For consumption, UsAR mirror will be greatly used by retailers who can save money on sample products and the time spent for customers to try on all kinds of glasses. Moreover, because customers are able to try on makeup and glasses efficiently, this reduces the percentage that they will likely come back to return products, making the shopping experience and business on the retail end more convenient. These days, customers are longing for a more personalized and convenient way of shopping, and UsAR mirror addresses this demand.