I spent this week outlining which hardware parts would be needed for each piece of functionality. We broke our hardware deliverables into two parts: the ARKit coverage mapping and the back-facing camera. The back-facing camera has many more hardware components, including the Jetson, a Jetson-compatible camera, and the LED light. I spent a lot of time researching actual specs and figuring out which components were a good fit for our project, then placed the orders for our parts through the Purchase Forms. We had planned to use a Raspberry Pi, but the one we wanted already had a priority hold on the ECE inventory list, so we had to pick something else: the Jetson.
I also installed Xcode and set up my developer environment. I expected this to be quick, but I ran into unexpected issues because Xcode requires a lot of disk space to download. I spent a few hours downloading a disk scanner so I could delete things from my computer, backing everything up to a hard drive, and then downloading Xcode itself, which was extremely slow.

In the meantime, I was able to do some initial research on ARKit and how others have mapped rooms and floors in the past. I found Apple's RoomPlan tool, which we may be able to leverage: our task of mapping just the floor is simpler, but a 3D plan of the room is helpful for determining where the borders of the floor are. We can probably use this tool for our initial mapping of the room to develop our 'memory' feature. The floor plan anchoring and mesh-based object detection in these tools are similar to what we want to accomplish when mapping the floor, especially when it comes to detecting which objects we don't want to include. ARKit also provides ARPlaneAnchor.Classification.floor, which represents a real-world floor, ground plane, or similar large horizontal surface.
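To get a feel for how that classification would be consumed, here is a minimal sketch of detecting floor planes in ARKit. The FloorMapper class name and the bare ARSession setup are my own assumptions (a real app would usually drive the session from an ARView or ARSCNView), but ARWorldTrackingConfiguration, ARSessionDelegate, and the .floor classification are the actual ARKit APIs.

```swift
import ARKit

// Minimal sketch: enable horizontal plane detection and log anchors
// that ARKit classifies as real-world floors. FloorMapper is a
// hypothetical name, not part of our codebase.
class FloorMapper: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Plane classification needs an A12 chip or later.
        guard ARPlaneAnchor.isClassificationSupported else { return }
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal] // floors are horizontal planes
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // .floor marks a floor, ground plane, or similar large
            // horizontal surface, per the ARKit classification docs.
            if case .floor = plane.classification {
                print("Floor plane detected at center \(plane.center)")
            }
        }
    }
}
```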
The plan is currently still on schedule, but there isn't much slack left for these next parts, so it's really important that our subsequent tasks are done on time.
Next steps include making sure that our Jetson is able to boot (and reordering if needed), plus taking static images of dirt on the floor and feeding them into a dirt detection model to fully flesh out our thresholds and make sure the object detection model can perform in our context; a rough sketch of that threshold step is below. After that, we are going to use the Jetson to perform the object detection mapping.
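To make the threshold step concrete, here is a minimal sketch of sweeping a confidence threshold over labeled static images. Everything here is a placeholder assumption, not our actual pipeline: the Detection struct, the sample scores, and the labels are invented, and the real scores would come from whatever dirt detection model we run on the Jetson.

```swift
import Foundation

// Hypothetical per-patch result: the model's confidence that a patch
// is dirt, plus a ground-truth label from manually inspecting the photo.
struct Detection {
    let score: Double
    let isDirt: Bool
}

// Precision/recall at a given confidence threshold.
func precisionRecall(_ detections: [Detection], threshold: Double) -> (precision: Double, recall: Double) {
    let predictedDirt = detections.filter { $0.score >= threshold }
    let truePositives = predictedDirt.filter { $0.isDirt }.count
    let actualDirt = detections.filter { $0.isDirt }.count
    let precision = predictedDirt.isEmpty ? 1.0 : Double(truePositives) / Double(predictedDirt.count)
    let recall = actualDirt == 0 ? 1.0 : Double(truePositives) / Double(actualDirt)
    return (precision, recall)
}

// Invented sample data standing in for scores from the static dirt images.
let samples: [Detection] = [
    Detection(score: 0.92, isDirt: true),
    Detection(score: 0.78, isDirt: true),
    Detection(score: 0.65, isDirt: false),
    Detection(score: 0.41, isDirt: false),
]

// Sweep candidate thresholds to balance missed dirt against false alarms
// before committing to a value for the Jetson deployment.
for t in stride(from: 0.3, through: 0.9, by: 0.1) {
    let (p, r) = precisionRecall(samples, threshold: t)
    print(String(format: "threshold %.1f  precision %.2f  recall %.2f", t, p, r))
}
```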