Team Status Report for 3/23

Risks

With the base implementation of augmented reality floor mapping done, we are moving on to marking and erasing the overlay. Nathalie and Harshul have discussed multiple implementation strategies for marking coverage and are not yet sure which approach will be most successful; this is something we will determine as we work on it this week. Our initial plan is to combine the work we have each done separately (Nathalie mapping the floor and Harshul creating the logic to change plane color on tap). Specifically, we want to add nodes of a specific shape and a contrasting color, such as red, to the floor plane, with the diameter of each shape equal to the width of the floor vacuum. We still need to figure out how to do that, and once it works, which shape best captures the vacuum's coverage dimensions. This is important because the visual representation of coverage is essential to our project working at all. As a fallback, we have experimented with the World Tracking Configuration logic, which is able to capture our location in space, and we are open to exploring how alternative approaches might solve the problem of creating visual indicators on a frozen floor map. A rough sketch of the marking idea is included below.
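
The sketch below shows one way the "stamp a colored shape onto the floor plane" idea could look in SceneKit, assuming we parent the marker to the ARKit plane anchor's node (whose y-axis is the plane normal). The function name, the 0.3 m vacuum width, and the thin-cylinder-as-disc trick are placeholders, not our final design.

```swift
import ARKit
import SceneKit

/// Stamps a flat red disc onto the detected floor plane at a world-space
/// position, sized to the vacuum head. Hypothetical helper for illustration.
func addCoverageMarker(at worldPosition: simd_float3,
                       to floorNode: SCNNode,
                       vacuumWidth: CGFloat = 0.3) {
    // A very thin cylinder reads as a filled circle lying flat on the floor.
    let disc = SCNCylinder(radius: vacuumWidth / 2, height: 0.001)
    disc.firstMaterial?.diffuse.contents = UIColor.red.withAlphaComponent(0.6)

    let markerNode = SCNNode(geometry: disc)
    // Convert the world-space position into the floor node's local space so
    // the marker stays attached to (and moves with) the plane node.
    markerNode.simdPosition = floorNode.simdConvertPosition(worldPosition, from: nil)
    floorNode.addChildNode(markerNode)
}
```

Whether a circle, square, or capsule best matches the vacuum's actual footprint is exactly the open question we plan to answer this week.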

The core challenge is that if we freeze map updates, we risk odometry drift: as we move around the room, tracking information changes but no longer propagates to the planes already drawn in the scene. Keeping the map dynamic mitigates this, but it then prevents the dimensions of our plane from staying consistent, which makes it difficult to measure and benchmark our coverage requirements. One mitigation method would be custom update renderers that avoid redefining plane boundaries while still allowing anchor positions to change, as sketched below.
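
As a rough illustration of that mitigation, the delegate sketch below skips resizing the plane geometry once a "freeze" flag is set, while still letting ARSCNView keep the anchor node's transform in sync so drift corrections apply. It assumes the standard setup where the plane geometry lives on a child node of the anchor's node; the class and flag names are placeholders.

```swift
import ARKit
import SceneKit

final class FrozenPlaneRenderer: NSObject, ARSCNViewDelegate {
    // Set to true once we want to lock the plane's extent but keep
    // following the anchor so tracking corrections still apply.
    var freezePlaneExtent = false

    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode,
                  for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = node.childNodes.first,
              let plane = planeNode.geometry as? SCNPlane else { return }

        // ARSCNView keeps `node`'s transform in sync with the anchor on its
        // own, so the plane still follows drift corrections. When frozen, we
        // simply stop redefining its boundaries so the measured coverage
        // area stays constant.
        if !freezePlaneExtent {
            plane.width = CGFloat(planeAnchor.extent.x)
            plane.height = CGFloat(planeAnchor.extent.z)
            planeNode.simdPosition = planeAnchor.center
        }
    }
}
```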

Another challenge our group is currently facing is mapping accuracy. While we have addressed this issue before, the problem still stands: we have not been able to get the ARKit mappings to meet the error rates specified in our use case requirements. This is due to the constraints of Apple's hardware and software, and tuning these models may not be viable given the time remaining in the semester. Our group has discussed readjusting the error bounds in our use case requirements, and this is something we plan to flesh out within the week.

We also need to start designing and producing all the hardware components required to assemble our product end to end. The mounts for the Jetson hardware as well as the active illumination LEDs need to be custom made, which means we may go through multiple iterations before finding a configuration that works well with our existing hardware. Since the turnaround is tight with the interim demo quickly approaching, we may not be able to demonstrate our project as an end-to-end product; rather, we may have to show it in terms of the components we have already tested.

Scheduling 

We are now one week away from the interim demo. The last core AR feature we need to implement is plane erasure. We have successfully tracked the phone's coordinates and drawn them in the scene; the next step is to project that data onto the floor plane, which would leave the AR subsystem ready to demo (a sketch of the projection step is below). Since our camera positioning has been finalized, we are moving forward with designing and 3D printing the mounting hardware. The next milestones entail user-friendly integration of our app features as well as communication between the Jetson and the iPhone.
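
One simple way to project the tracked phone position onto the floor is to drop the camera's world-space position straight down to the plane's height, roughly as below. This assumes we already know the floor's world-space y value; the function name and the vertical-projection shortcut are placeholders for our actual logic.

```swift
import ARKit
import simd

/// Projects the phone's current world position straight down onto the
/// detected floor plane and returns the point to mark as covered.
func coveragePoint(for frame: ARFrame, floorY: Float) -> simd_float3 {
    // Column 3 of the camera transform holds the phone's world-space position.
    let cameraPosition = frame.camera.transform.columns.3
    return simd_float3(cameraPosition.x, floorY, cameraPosition.z)
}
```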


