Team’s Status Report for March 8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One of the most significant risks is environment compatibility. Each of us is currently coding independently, and some members have to work from the Mac ecosystem, so running our software on the Jetson may present unforeseen challenges, such as driver conflicts or performance limitations.
Mitigation: We will rotate access to the Jetson among team members so that each module is verified on the target hardware and integration is smooth before full-system testing.

Another risk is performance bottlenecks: 3D face modeling and gesture recognition involve computationally expensive tasks, which may slow real-time performance.
Mitigation: We are each exploring optimizations such as SIMD vectorization, and evaluating trade-offs between accuracy and efficiency, for example by downscaling frames before heavy processing (sketched below), to stay within the required frame-rate bounds.
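
To make this trade-off concrete, here is a minimal sketch of frame downscaling in C++ with OpenCV. The capture source, the 0.5 scale factor, and the placeholder processing step are illustrative assumptions, not our actual module code:

```cpp
// Sketch: trade accuracy for speed by running the expensive per-frame
// work on a downscaled copy of each frame. Assumes OpenCV.
#include <opencv2/opencv.hpp>

void processFrame(const cv::Mat& frame, double scale) {
    cv::Mat smallFrame;
    // Downscale before the expensive step; INTER_AREA is well suited
    // to shrinking images.
    cv::resize(frame, smallFrame, cv::Size(), scale, scale, cv::INTER_AREA);

    // ... run the expensive module (e.g. landmark detection) on smallFrame ...
    // Coordinates found on smallFrame must be scaled back by 1/scale
    // before overlays are drawn on the full-resolution frame.
}

int main() {
    cv::VideoCapture cap(0);          // default webcam (placeholder source)
    if (!cap.isOpened()) return 1;
    cv::Mat frame;
    for (int i = 0; i < 300 && cap.read(frame); ++i) {
        processFrame(frame, 0.5);     // 0.5 halves each dimension,
                                      // roughly 4x less work per frame
    }
    return 0;
}
```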

One risk that we faced was uploading the code onto the Arduino. We had anticipated that we would just need to buy the materials as instructed to be ready to code and upload to the Arduino. However, we found out that the Arduino Pro Mini has no onboard USB interface, so code cannot be uploaded to it directly; with our leftover budget, we bought two USB-to-serial adapters for around $9 so that we can upload the code.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The overall block diagram remains unchanged, and a few details within the software implementation of the pipeline are still being tested (e.g., exactly which OpenCV technique we use in each module).

However, we will meet next week to go through our requirements again, either to make sure that the performance bottlenecks described above can be mitigated within our test requirements, or to loosen the requirements slightly to ensure a smooth user experience with the computing power we have.

Updates & Schedule Changes

So far we are roughly on schedule. Some changes have been made: UI development has been pulled forward, since the hardware parts have not arrived yet. In terms of progress, we are optimistic that we will reach our system integration deadline in time. We will also synchronize between modules weekly to prevent any latency in final integration.

A was written by Shengxi Wu, B was written by Anna Paek, and C was written by Steven Lee.

A) The product we designed provides an efficient system that allows seamless operation across various use cases inside and outside Pittsburgh. Since our system has a modular design, it should be deployable in different regions: once the camera rig is up and ready, the setup is easily reproducible by anyone with basic knowledge of our software system. Not only is this product interesting work for academia, it also adheres to industry standards for requirements like robustness and security, since it does not involve the use of cloud services and keeps all storage local by default. The system we designed thus has the potential to improve AR mirror applications, as well as AR filter applications built on parts of our software system, for emerging markets, making it a globally viable solution.

B) Our AR mirror meets the specific needs of people from different cultural backgrounds. It will consider and respect religious values reflected in makeup style; for example, many Muslim women prefer natural makeup over bold styles, so the AR mirror will include a setting for softer makeup looks. Beauty standards also differ by country: Korean beauty standards, for example, favor lighter makeup to achieve a natural look, while other cultures may prefer bolder makeup. As for eyeglasses, the mirror will account for the fact that they serve different purposes; they can be used to make one look more intelligent or simply for fashion, and depending on the purpose, the mirror will provide a collection of eyeglasses suited to it. Some regions ban certain cosmetic ingredients, so the AR mirror will account for that when generating filters. The mirror will also include privacy protection so that the user’s face and identity are protected.

C) One environmental goal our product aims to meet is affordability and reusability. Many traditional AR mirrors have proprietary, expensive parts such as transparent displays, which are hard to repair (and hence produce more waste). Our AR display aims to be buildable from commonly found parts such as off-the-shelf computer monitors and webcams, so that it’s easier to repurpose used technology into this product.

Moreover, our project aims to be low-power through the use of a low-power embedded device, the Jetson Nano, to run the software for the AR mirror. Using affordable cameras and basic depth sensors rather than costly LiDAR or 3D cameras also helps keep both power draw and cost down.

Anna’s Status Report for March 8

  • What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

I am working on the setup for the UI, more specifically generating build files and building (step 5, the last step: https://github.com/kevidgel/usar-mirror). So far, I have completed all the previous steps (steps 1-4) and verified that OpenPose (https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation/0_index.md#compiling-and-running-openpose-from-source) runs successfully, as shown in the image below.

Right now, I am having trouble with the CMakeLists.txt configuration, which involves installing CUDA. I have confirmed with Steven that we will not be using CUDA, so I will ask him how to build without it, since the build still fails without CUDA even after silencing the flags. (If I understand OpenPose’s build options correctly, it may need to be explicitly configured for CPU-only mode, e.g. via its GPU_MODE CMake option.)

  • Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I am slightly behind in that I still need to solder my PCB with the parts I got the day before spring break, and I am still waiting on my USB-to-serial adapter to upload my code to the Arduino. I am also a little behind on setting up the UI, since I plan to work on the UI after assembling the camera rig. In parallel, I will write and test the Arduino code.

  • What deliverables do you hope to complete in the next week?

I hope to at least build my camera rig and finish setting up my environment so that I can get started on the UI. Then I plan to write and test my Arduino code and integrate it with the gesture recognition.

Anna’s Status Report for Feb 22

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

Files/photos: https://docs.google.com/document/d/1AsT0dXenHnLb7vWtu7ljc6i2_zY0NIXrijEcWs9SGDA/edit?usp=sharing

This week, I focused on setting up the user interface (UI) and preparing everything needed to start coding. I spent time looking at UIs from makeup and glasses apps like YouCam Makeup, which helped me get an idea of what the UI should look like. I also checked out some tutorials for Dear ImGui to understand how to implement the UI elements.

Steven shared the GitHub repo with the ImGui backend set up, so I just need to call the library functions in the code to create the UI elements. However, I’ve been having some trouble with generating build files and running the build process. Steven is helping me troubleshoot, and we’re hoping to get everything set up so I can start coding the UI on Monday.

Another part of the project I’m responsible for is the motorized camera control system. I ordered the parts last week, so I’m still waiting for them to arrive. Once I get the parts, I can start assembling and programming the system.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

I’m a little behind schedule due to the delays in receiving the parts for the motorized camera control system and the issues I’ve had with building the Dear ImGui project. That said, I’ve been working closely with Steven to resolve the build problems, and I expect to be able to move forward with coding the UI soon. To catch up, I’ll focus on fixing the build issue and getting everything set up so I can start coding the UI by next week. Once the camera control system parts arrive, I’ll focus on assembling and programming it, so I stay on track with both tasks.

What deliverables do you hope to complete in the next week?

I hope to begin assembling the motorized camera control system and start the initial programming once the parts arrive. I also hope to begin coding the UI elements (like the camera angle and filter menus) using Dear ImGui, starting with the basic elements and integrating them into the project.
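
As a rough sketch of what those elements might look like in Dear ImGui (assuming the backend and per-frame render loop from the shared repo are already running; the window title, filter names, and angle range below are placeholders):

```cpp
// Sketch: a filter menu and camera-angle slider drawn with Dear ImGui.
// Called once per frame inside an existing render loop; all names and
// ranges are placeholders.
#include "imgui.h"

void drawMirrorUI(int& selectedFilter, float& cameraAngleDeg) {
    static const char* filters[] = { "None", "Natural Makeup",
                                     "Bold Makeup", "Eyeglasses" };
    ImGui::Begin("UsAR Mirror");                        // control panel
    ImGui::Combo("Filter", &selectedFilter,
                 filters, IM_ARRAYSIZE(filters));       // filter menu
    ImGui::SliderFloat("Camera angle (deg)",
                       &cameraAngleDeg, -45.0f, 45.0f); // pan control
    if (ImGui::Button("Reset")) {                       // back to defaults
        selectedFilter = 0;
        cameraAngleDeg = 0.0f;
    }
    ImGui::End();
}
```

In the existing loop, this function would be called once per frame between the backend’s NewFrame and Render calls.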

Team Status Report for Feb 15

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

One of the most significant risks is camera jitter. Jitter would ruin the overall experience, since users would not be able to see their side profiles and other parts of their face well. To mitigate this, we are implementing a PID control loop to ensure smooth motor movement and reduce vibrations. Additionally, we are testing different mounting and damping mechanisms to isolate vibrations from the motor assembly.
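
For reference, here is a minimal sketch of the kind of PID update we have in mind, written as Arduino-style C++. The gains, update rate, and fixed target angle are illustrative assumptions, not our final tuning or tracking input:

```cpp
// Sketch: PID loop steering a pan motor toward a target angle.
// The derivative term damps overshoot, which is what suppresses
// visible jitter. Gains and inputs are placeholders.
float kp = 2.0f, ki = 0.05f, kd = 0.5f;  // illustrative gains
float integral = 0.0f, prevError = 0.0f;
unsigned long prevTime = 0;

float pidStep(float target, float current) {
    unsigned long now = millis();
    float dt = (now - prevTime) / 1000.0f;  // seconds since last update
    if (dt <= 0.0f) dt = 0.001f;
    prevTime = now;

    float error = target - current;
    integral += error * dt;
    float derivative = (error - prevError) / dt;
    prevError = error;

    return kp * error + ki * integral + kd * derivative;  // speed command
}

void setup() { prevTime = millis(); }

void loop() {
    // target/current would come from the Jetson's tracking data and an
    // encoder or step count; fixed values keep the sketch short.
    float command = pidStep(/*target=*/30.0f, /*current=*/0.0f);
    (void)command;  // would be sent to the motor driver here
    delay(10);      // ~100 Hz control loop
}
```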

Contingency plans include having a backup stepper motor with finer resolution and smoother torque, as well as a manual override mode for emergency situations.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The design for the camera control system changed, but the new design is cheaper. We changed it due to the cost and complexity of the previous design. The one we are going with requires fewer 3D-printed parts, which cuts our cost in half, and it will integrate well with the display. The change also simplifies assembly and reduces the overall weight of the system, improving portability.

The cost incurred is minimal, primarily for redesigning and reprinting certain parts. To mitigate these costs, we are using readily available components and not printing parts that we don’t need (like the stepper motor case).

Provide an updated schedule if changes have occurred.

We are behind schedule since we haven’t received the materials and equipment yet. Once we get the materials, we plan on catching up with the schedule by allocating more time for assembly and testing. We’ve also added buffer periods for unforeseen delays and assigned team members specific tasks to parallelize the work.

A: Public Health, Safety, or Welfare Considerations (written by Shengxi)

Our system prioritizes user well-being by incorporating touch-free interaction, eliminating the need for physical contact and reducing the spread of germs, particularly in shared or public spaces. By maintaining proper eye-level alignment, the system helps minimize eye strain and fatigue, preventing neck discomfort caused by prolonged unnatural viewing angles. Additionally, real-time AR makeup previews contribute to psychological well-being by boosting user confidence and reducing anxiety related to cosmetic choices. The ergonomic design further enhances comfort by accommodating various heights and seating positions, ensuring safe, strain-free interactions for all users.

B: Social Factor Considerations (written by Steven)

Being a display with mirror-like capabilities, we aim to pay close attention to how it affects body image and self-perception. We plan to make the perspective transforms accurate and the image filters reasonable, so we don’t unintentionally reinforce unrealistic beauty norms or contribute to negative self-perception. This will be achieved through user testing and accuracy testing of our reconstruction algorithms. Also, one of the goals of this project is to keep the cost lower than competitors’ (enforced by our limited budget of ~$600) so that lower-income communities have access to this technology.

C: Economic Factors (written by Anna)

UsAR mirror provides a cost-efficient and scalable solution, as our mirror costs no more than $600. For production, the UsAR mirror incurs costs in hardware, software, and maintenance/updates. It uses affordable yet high-quality cameras, like the RealSense depth camera and webcams. The RealSense depth camera allows filters to be properly aligned to a 3D reconstruction of the user’s face, maximizing the experience while minimizing the cost. The camera control system has an efficient yet simple design that doesn’t require many materials or incur significant costs. As for the software, there is no cost: it uses free, open-source libraries like OpenCV, Open3D, OpenGL, and OpenPose, and the Arduino code that controls the side-mounted webcams is developed at no cost.

For distribution, the mirror is lightweight and easy to handle and install. The display is only 23.8 inches, so it is easy to carry and use as well as to package and ship. For consumption, the UsAR mirror will be of great use to retailers, who can save money on sample products and the time customers spend trying on all kinds of glasses. Moreover, because customers can try on makeup and glasses efficiently, they are less likely to return products, making the shopping experience and the retail business more convenient. Customers today want a more personalized and convenient way of shopping, and the UsAR mirror addresses this demand.

Anna’s Status Report for Feb 15

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

This week, I spent a lot of time refining the design and control system for the four stepper motors (one for rotation and one for linear motion on each rig) using a single Arduino. I tested a few different approaches to optimize power consumption and reduce battery usage, but after some troubleshooting, I concluded that the original plan of creating two identical PCBs, rather than one large one, would be more efficient and reliable.

On Monday, I reached out to Techspark to ask about 3D printing costs. After calculating the material needed, I realized it would be too expensive for our budget. So, I spent time researching alternative designs that would minimize the need for 3D-printed components while still meeting our functional needs. I came across a promising design for linear motion, panning/rotation, and object detection that was both simple in terms of assembly and Arduino coding:

Reference Design: https://www.the-diy-life.com/diy-motorised-camera-slider-with-object-tracking/ 

I ordered the necessary parts and made some cost-effective adjustments where needed. I also placed the order for the PCBs and began planning the integration of the camera control system with the mirror. I worked with the team to decide that the linear motion will be controlled through software, while the panning/rotation could be managed by either hardware or software.

Once the design and component selection were finalized, I developed a plan to test the camera control system and integrate gesture recognition. I’ll write additional test scripts and add more input parameters (like distance and duration) to test different motion types. My main goal is to ensure smooth camera movement, eliminating jitter or unexpected behavior for a stable, responsive system.
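
To give an idea of how those parameters might drive a test, here is a small sketch of a parametrized constant-speed move using the AccelStepper Arduino library; the serial command format ("MOVE <distance> <duration>"), pins, and units are assumptions for illustration:

```cpp
// Sketch: a parametrized motion test: move a given distance (steps)
// over a given duration (ms) at constant speed. The command format
// is a made-up convention for testing only.
#include <AccelStepper.h>

AccelStepper slider(AccelStepper::DRIVER, /*stepPin=*/4, /*dirPin=*/5);

void runMoveTest(long distanceSteps, unsigned long durationMs) {
    if (durationMs == 0) return;
    // Constant speed chosen so the move finishes in the requested time.
    float speed = distanceSteps / (durationMs / 1000.0f);
    slider.move(distanceSteps);  // relative target
    slider.setSpeed(speed);      // steps/sec
    unsigned long start = millis();
    while (slider.distanceToGo() != 0 &&
           millis() - start < durationMs + 500) {
        slider.runSpeedToPosition();  // constant-speed stepping
    }
}

void setup() {
    Serial.begin(115200);
    slider.setMaxSpeed(1000.0f);
}

void loop() {
    if (Serial.available()) {
        String cmd = Serial.readStringUntil('\n');
        int s1 = cmd.indexOf(' ');
        int s2 = cmd.indexOf(' ', s1 + 1);
        if (cmd.startsWith("MOVE") && s1 > 0 && s2 > s1) {
            runMoveTest(cmd.substring(s1 + 1, s2).toInt(),
                        (unsigned long)cmd.substring(s2 + 1).toInt());
        }
    }
}
```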

In addition, I spent a couple of hours setting up OpenGL, GLFW, and GLAD to get started on the user interface. The installation process was a bit challenging, as it required careful attention to make sure everything was set up correctly. But after troubleshooting and verifying that all packages installed properly, I now have the environment ready for development. This part was crucial to lay the foundation for the visual interface of the project, and it took a fair amount of effort to get everything working smoothly.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

My progress is slightly behind, mainly due to a delay in finalizing and ordering the parts for the camera rig. However, I’ve now completed the design, and all the parts are ordered and accounted for. With everything in place, I’m ready to start building and coding this coming week. I anticipate a significant ramp-up in progress now that I have everything on hand. I will begin assembling the camera rig and focus on coding the user interface using OpenGL. Specifically, I’ll concentrate on developing the menu system and displaying filter selections.

What deliverables do you hope to complete in the next week?

I aim to complete the assembly of the camera rig and have the menu system fully created and functional by the end of next week.

Team Status Report for Feb 8

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

Our system will feature a camera control mechanism that adjusts the camera’s position based on the user’s movements. The control system consists of three camera rigs: one for linear motion and two for panning and tilting. A RealSense camera will be mounted at the top of the display, capable of horizontal movement along with panning and tilting. Additionally, two webcams with a similar setup will be responsible for vertical movement while also supporting panning and tilting. 

To achieve precise control over the cameras, we will use an Arduino to interface with motorized actuators. The Arduino will process real-time data on the user’s position, movement, and angles collected from computer vision and tracking algorithms (processed on the Jetson). Based on this data, the Arduino will adjust the cameras accordingly, ensuring that the virtual overlays remain properly aligned with the user’s face.
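
A minimal sketch of what that Arduino side might look like, assuming a simple newline-terminated text protocol from the Jetson; the message format ("P<pan>,T<tilt>"), pins, and servo mapping are placeholders rather than a finalized spec:

```cpp
// Sketch: Arduino parsing pan/tilt targets streamed from the Jetson.
// Assumes a hypothetical line protocol like "P15.0,T-3.5\n"; the real
// message format is still to be decided.
#include <Servo.h>

Servo panServo, tiltServo;

void setup() {
    Serial.begin(115200);
    panServo.attach(9);    // placeholder pins
    tiltServo.attach(10);
}

void loop() {
    if (Serial.available()) {
        String line = Serial.readStringUntil('\n');
        int comma = line.indexOf(',');
        if (line.startsWith("P") && comma > 0) {
            float pan  = line.substring(1, comma).toFloat();
            float tilt = line.substring(comma + 2).toFloat();  // skip ",T"
            // Map relative angles (degrees) onto servo positions.
            panServo.write(constrain(90 + (int)pan, 0, 180));
            tiltServo.write(constrain(90 + (int)tilt, 0, 180));
        }
    }
}
```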

One of the most significant risks in our project is ensuring that the camera dimensions are compatible with the premade rig design, particularly for the pan and tilt mechanism. Since the rig has many moving parts, even slight misalignments could lead to unstable movement (especially jittery motion) or poor tracking. To mitigate this, I will adjust the CAD files and verify all measurements before printing the parts. Additionally, I will test the motors beforehand to ensure they function smoothly. To reduce jittery movements, I will implement controlled speed adjustments and include a brief resting period after movement to allow the motors to stabilize.

Another risk is ensuring that the motors respond accurately to the Arduino’s commands. Before integrating the motors into the camera system, I will perform basic functionality tests to confirm their responsiveness. I will also take advantage of Arduino’s built-in motor control libraries to fine-tune movements for precision.
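
As one example of that fine-tuning, here is a short sketch using the AccelStepper library (a widely used third-party Arduino library, named here as our assumption). Ramping speed up and down, plus a brief rest after each move, is the jitter-reduction approach described above; the pins, limits, and demo motion are provisional:

```cpp
// Sketch: acceleration-limited stepper moves with AccelStepper.
// Pins, speed limits, and the back-and-forth demo are placeholders.
#include <AccelStepper.h>

AccelStepper pan(AccelStepper::DRIVER, /*stepPin=*/2, /*dirPin=*/3);

void setup() {
    pan.setMaxSpeed(400.0f);      // steps/sec cap
    pan.setAcceleration(200.0f);  // steps/sec^2 ramp up and down
    pan.moveTo(800);              // example target position
}

void loop() {
    pan.run();                    // non-blocking: one step when due
    if (pan.distanceToGo() == 0) {
        delay(250);               // brief rest so the motor settles
        pan.moveTo(-pan.currentPosition());  // swing back (demo only)
    }
}
```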

To ensure proper synchronization between the camera movement and the AR system, we will conduct individual component testing before proceeding with full system integration. If issues arise, debugging will be more manageable since we will already know which part of the system requires improvement.

Since unforeseen problems could still occur, we have built buffer time into our project schedule to accommodate troubleshooting and necessary modifications.

 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We are replacing the Kinect depth camera with an Intel RealSense camera, along with two web cameras, one on each side. This is necessary because Kinect cameras are no longer in production and are difficult to obtain. The RealSense camera offers the same functionality with only a slight increase in cost ($300 instead of $200). The change won’t affect the overall functionality of the project, but it does require extra coding to integrate and process data from the new camera setup: the hardware cost increase is modest, but it comes at the expense of additional development time for software adjustments. To manage this, we’ll focus on optimizing the code for depth and vision processing, making use of existing libraries and frameworks to streamline integration. We’ll also conduct thorough testing to ensure the new setup maintains the required accuracy and performance.
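
To give a feel for the integration work this adds, here is a minimal sketch of reading depth with librealsense2; the frame count and the probed center pixel are arbitrary choices for illustration:

```cpp
// Sketch: reading depth frames from an Intel RealSense camera with
// librealsense2, the kind of code the camera swap requires.
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    pipe.start();  // default configuration (depth + color)

    for (int i = 0; i < 30; ++i) {  // grab a few frames
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // Distance (meters) at the center pixel, e.g. a coarse check
        // that the user is within the mirror's working range.
        float d = depth.get_distance(depth.get_width() / 2,
                                     depth.get_height() / 2);
        std::cout << "center depth: " << d << " m\n";
    }
    return 0;
}
```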

Provide an updated schedule if changes have occurred.

We are behind schedule since we haven’t received the materials and equipment yet. Once we get the materials, we plan on catching up with the schedule. Steven pushed the eye-tracking implementation to the following week, and Anna pushed the camera control system assembly to the following week, since she couldn’t get the materials and parts yet.

Anna’s Status Report for Feb 8

What did you personally accomplish this week on the project? Give files or photos that demonstrate your progress. Prove to the reader that you put sufficient effort into the project over the course of the week (12+ hours).  

     This week, I worked on designing the control system for the camera’s rotation, tilt, and vertical movement. At first, I considered building a custom camera rig from scratch, using servos for rotation and tilt and linear actuators for vertical movement. However, I realized that integrating these components with the Arduino could be tricky, and that an unpolished design of my own might lead to wasted time and money. To make the process more efficient, I decided to research existing designs that could be adapted for our project, to ensure we have a working design.

     I came across several options, but most were either too complex or lacked clear instructions. One design stood out because it allowed for pan, tilt, and vertical movement, which is exactly what we need for our augmented reality mirror. I would have three of these set up on the mirror: one mounted horizontally at the top for the RealSense camera and two mounted vertically on the sides for the webcams. However, the design required a lot of material prep, had minimal step-by-step guidance, and involved a more complicated assembly.
Reference: https://www.youtube.com/watch?v=hEBjbSTLytk

     I also spent time researching different motor options for controlling the cameras, focusing on cost and ease of implementation. After looking at multiple designs, I chose one that looked similar, except that it included 3D-printable parts, making it much easier to put together. This design also provides a full list of required parts, estimated costs, and dimensions, which helped me confirm that it would work with our webcams and budget. I will have to adjust the dimensions of the parts to fit the RealSense camera (which is longer). I also made sure it could be smoothly integrated into the overall project.
Reference: https://www.instructables.com/Automatic-Arduino-Powered-Camera-Slider-With-Pan-a/

     I decided to use an Arduino as the main controller because it’s easy to program, has strong support for motor control, and works well with both servo motors and linear actuators. Its built-in libraries make it simple to create precise movements, allowing for smooth camera adjustments in all directions. Plus, it leaves room for future improvements, like adding user controls or automating movement based on environmental data.

Is your progress on schedule or behind? If you are behind, what actions will be taken to catch up to the project schedule?

     My progress is a little behind. This week we had our proposal presentation, and I started designing the camera control system when I was supposed to start assembling it. I researched all the necessary materials and assessed whether the design is feasible for our project, given the fixed budget and time frame. I will go to Techspark on the 9th to see which materials I can borrow and fill out the purchase form so that I can get the materials as soon as possible. Once I have all the materials, I can start building this week, so that I can program the control system and integrate it into the mirror later on.

What deliverables do you hope to complete in the next week?

     Next week, I hope to have my digitally fabricated parts ready so that I can assemble them the week after next. I will have to adjust the dimensions in the STL files so that our cameras fit in the designs. I also hope to have my other materials delivered so that everything is ready for the build the week after next.