Team Report for 3/16

Risks

Nathalie and Harshul are working on projecting a texture onto the floor with memory, so the overlay persists as the user moves. Through demos and experimentation, it appears that the accuracy of the plane mapping depends on how well the user performs the initial scan. One major risk that could jeopardize the project is the accuracy of this mapping, because both dirt detections and coverage metrics depend on being placed correctly on the map. To mitigate this, we are researching how best to anchor the points during our implementation phases, and we are testing in different kinds of rooms (blank rooms versus ones with many objects on the floor) to validate our scope of a plain white floor.

Erin is working on dirt detection, and we found that our initial algorithm was sensitive to noise. We have created a new dirt detection algorithm that uses many of OpenCV’s built-in preprocessing functions rather than preprocessing the input images ourselves. While we originally thought the algorithms themselves were very sensitive to noise, we have since realized that this may be more of an issue with our image inputs. The new algorithm is less sensitive to shade noise, but it will still classify patterned flooring as dirty. Regardless, we hope to tune the algorithm to be less sensitive to noise and to test its performance at the desired camera height and angle.
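As a rough illustration of the direction we are exploring (not our exact pipeline or tuned parameters), a dirt-detection pass built on OpenCV’s own preprocessing functions might look like the sketch below; the blur kernel, threshold block size, and minimum blob area are placeholder values.

    # Minimal sketch of a dirt-detection pass using OpenCV's built-in
    # preprocessing. Parameter values here are placeholders, not our tuned ones.
    import cv2

    def detect_dirt(frame_bgr, min_area=25):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Gaussian blur suppresses high-frequency shade noise before thresholding.
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        # Adaptive thresholding handles uneven lighting across the floor.
        mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                     cv2.THRESH_BINARY_INV, 31, 7)
        # Morphological opening removes isolated speckle left by the threshold.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        # Keep only blobs large enough to plausibly be dirt rather than texture noise.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        detections = []
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                detections.append(cv2.boundingRect(c))  # (x, y, w, h) in pixels
        return detections

A pipeline like this still flags patterned flooring because the adaptive threshold responds to any local contrast, which is exactly the behavior we are trying to tune out.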

The main risk encountered in working with plane texturing was inaccuracy in how the plane was fitted to the detected plane boundaries. Since this is a core feature, it is a top priority that we plan to address this coming week, and we are meeting as a team to develop a more robust approach. We also need this component finished before we can begin integration, which we expect to be a nontrivial task, especially since it involves all of our hardware.
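To make “a more robust approach” concrete, one option we may explore is a RANSAC-style fit over the sampled boundary points rather than a single fit that outliers can skew. The sketch below assumes we can export those points as an N-by-3 array of world coordinates; the iteration count and inlier tolerance are illustrative, not tuned values.

    # Sketch of a RANSAC plane fit over boundary/feature points (assumed to be
    # an (N, 3) NumPy array). Thresholds are illustrative placeholders.
    import numpy as np

    def fit_plane_ransac(points, iterations=200, inlier_tol=0.01, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        best_inliers = None
        for _ in range(iterations):
            sample = points[rng.choice(len(points), size=3, replace=False)]
            # Plane normal from the cross product of two edges of the sample triangle.
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-9:      # degenerate (collinear) sample, skip it
                continue
            normal = normal / norm
            d = -normal.dot(sample[0])
            # Distance of every point to the candidate plane.
            distances = np.abs(points @ normal + d)
            inliers = distances < inlier_tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine with a least-squares (SVD) fit over the inlier set only.
        inlier_pts = points[best_inliers]
        centroid = inlier_pts.mean(axis=0)
        _, _, vt = np.linalg.svd(inlier_pts - centroid)
        normal = vt[-1]
        return normal, -normal.dot(centroid)  # plane: normal . x + d = 0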


Next Steps

We are currently finalizing the ideal height and angle for the camera mount. Once that is decided, we will threshold and validate our definition of dirt with respect to the selected angle. Once dirt is detected, we will need to record its position and communicate it to the augmented reality parts of our system. We still need to sync as a team on the type of information the augmented reality algorithm would need to receive from the Jetson and camera. For the texture mapping on the floor, we are working on projecting the overlay with the accuracy defined in our technical scope. We will then work on tracking the motion of a specific object (which will represent the vacuum) in the context of the floor mapping. We hope to have a traceable line drawn on our overlay that indicates where we can erase parts of the map.
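Since we have not yet agreed on that payload, the following is only a hypothetical sketch of what the Jetson side might send once dirt is detected, intended to anchor the discussion; the field names, coordinate convention, address, and port are all placeholders.

    # Hypothetical Jetson-side message once dirt is detected. Field names,
    # units, address, and port are placeholders until the team agrees on the
    # actual information the AR overlay needs.
    import json
    import socket
    import time

    AR_HOST = "192.168.0.42"   # placeholder address for the AR device
    AR_PORT = 5005             # placeholder port

    def send_dirt_detection(x_m, y_m, confidence):
        # Position is expressed in floor-plane coordinates (meters) as an example.
        payload = {
            "timestamp": time.time(),
            "position": {"x": x_m, "y": y_m},
            "confidence": confidence,
        }
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(json.dumps(payload).encode("utf-8"), (AR_HOST, AR_PORT))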
