What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?
Our system will feature a camera control mechanism that adjusts the camera’s position based on the user’s movements. The control system consists of three camera rigs: one for linear motion and two for panning and tilting. A RealSense camera will be mounted at the top of the display, capable of horizontal movement along with panning and tilting. Additionally, two webcams with a similar setup will be responsible for vertical movement while also supporting panning and tilting.
To achieve precise control over the cameras, we will use an Arduino to interface with motorized actuators. The Arduino will act on real-time data about the user's position, movement, and viewing angle, collected by the computer vision and tracking algorithms running on the Jetson. Based on this data, the Arduino will adjust the cameras so that the virtual overlays remain properly aligned with the user's face.
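As a rough sketch of the math involved, the tracked user position can be converted into pan and tilt angles for a rig. The function name and coordinate convention below (x right, y up, z forward from the camera, in meters) are assumptions for illustration, not part of our actual firmware:

```python
import math

def pan_tilt_from_position(x, y, z):
    """Convert a tracked user position (camera-rig frame: x right,
    y up, z forward, meters) into pan and tilt angles in degrees."""
    pan = math.degrees(math.atan2(x, z))                   # rotate left/right toward the user
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))   # rotate up/down toward the user
    return pan, tilt
```

For example, a user centered one meter in front of the rig gives `(0.0, 0.0)`, and a user one meter to the right at the same distance gives a pan of 45 degrees.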
One of the most significant risks in our project is ensuring that the camera dimensions are compatible with the premade rig design, particularly for the pan and tilt mechanism. Since the rig has many moving parts, even slight misalignments could lead to unstable movement (especially jittery motion) or poor tracking. To mitigate this, I will adjust the CAD files and verify all measurements before printing the parts. Additionally, I will test the motors beforehand to ensure they function smoothly. To reduce jittery movements, I will implement controlled speed adjustments and include a brief resting period after movement to allow the motors to stabilize.
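The controlled speed adjustment described above can be sketched as a simple rate limiter with a settle pause after each write. The step size, pause duration, and `write_angle` callback are placeholder assumptions, not measured values:

```python
import time

MAX_STEP_DEG = 2.0   # largest angle change allowed per update (assumed value)
SETTLE_S = 0.05      # brief rest after each move so the motor stabilizes (assumed value)

def step_toward(current, target, max_step=MAX_STEP_DEG):
    """Move at most max_step degrees toward target; returns the new angle."""
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + max_step if delta > 0 else current - max_step

def move_smoothly(current, target, write_angle, max_step=MAX_STEP_DEG):
    """Drive the motor to target in limited steps, pausing briefly after
    each write. write_angle is a hypothetical callback that sends the
    commanded angle to the Arduino."""
    while current != target:
        current = step_toward(current, target, max_step)
        write_angle(current)
        time.sleep(SETTLE_S)
    return current
```

Limiting the per-update step keeps the rig from slewing abruptly, and the short rest after each step gives the mechanism time to damp out before the next command.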
Another risk is ensuring that the motors respond accurately to the Arduino’s commands. Before integrating the motors into the camera system, I will perform basic functionality tests to confirm their responsiveness. I will also take advantage of Arduino’s built-in motor control libraries to fine-tune movements for precision.
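One way to make these functionality tests repeatable is to define the command format early and test encoding and decoding on their own. The ASCII frame below is a hypothetical sketch of how the Jetson might address individual motors over serial, not our finalized protocol:

```python
def build_command(motor_id, angle):
    """Encode a motor command as a simple ASCII line, e.g. 'M1:90.0'.
    Hypothetical frame format; motor_id selects the rig motor."""
    if not 0 <= angle <= 180:
        raise ValueError("servo angle out of range")
    return f"M{motor_id}:{angle:.1f}\n"

def parse_command(line):
    """Inverse of build_command, mirroring the Arduino-side parsing."""
    head, value = line.strip().split(":")
    return int(head[1:]), float(value)
```

Because the two functions are exact inverses, a round-trip check (`parse_command(build_command(...))`) can run on the bench before any motor is wired up.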
To ensure proper synchronization between the camera movement and the AR system, we will conduct individual component testing before proceeding with full system integration. If issues arise, debugging will be more manageable since we will already know which part of the system requires improvement.
Since unforeseen problems could still occur, we have built buffer time into our project schedule to accommodate troubleshooting and necessary modifications.
Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?
We are replacing the Kinect depth camera with an Intel RealSense camera plus two web cameras, one on each side. This change is necessary because Kinect cameras are no longer in production and are difficult to obtain. The RealSense camera offers the same functionality at only a slight increase in hardware cost ($300 instead of $200). The change won't affect the overall functionality of the project, but it does require extra coding to integrate and process data from the new camera setup, so the main cost is additional development time rather than hardware. To manage this, we'll focus on optimizing the code for depth and vision processing, making use of existing libraries and frameworks to streamline integration. We'll also conduct thorough testing to ensure the new setup maintains the required accuracy and performance.
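Much of the depth-processing work reduces to back-projecting a pixel and its depth reading into a 3-D point, which the RealSense SDK provides via `rs2_deproject_pixel_to_point`. The sketch below shows the underlying pinhole model for an undistorted stream; the intrinsic values in the example are made-up placeholders, not our camera's calibration:

```python
def deproject(pixel_x, pixel_y, depth_m, fx, fy, cx, cy):
    """Back-project a pixel with a depth reading into a 3-D point in the
    camera frame (meters), using the standard pinhole model. fx, fy are
    focal lengths in pixels; cx, cy is the principal point."""
    x = (pixel_x - cx) / fx * depth_m
    y = (pixel_y - cy) / fy * depth_m
    return x, y, depth_m
```

With assumed intrinsics `fx = fy = 615, cx = 320, cy = 240`, the image center at 1 m depth maps to `(0.0, 0.0, 1.0)`, which is a quick sanity check once real calibration values are in hand.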
Provide an updated schedule if changes have occurred.
We are behind schedule because we have not yet received the materials and equipment. Once they arrive, we plan to catch up. Steven pushed the eye-tracking implementation back one week, and Anna pushed the camera control system assembly back one week because she could not get the materials and parts yet.
This is also the place to put some photos of your progress or to brag about a component you got working.