Erin’s Status Report for 2/17

This week I mainly worked on getting started with dirt detection and figuring out what materials to order. Initially, our group had wanted to use an LED for active illumination and then run our dirt detection module on the resulting illuminated imagery, but I found an existing product, which we ended up purchasing, that takes care of this for us. The issue with the original design was that we had to be careful about whether the LEDs we were using were safe for human vision, which raised ethical questions as well as design considerations that my group and I do not know enough about.

I have also started working a little with Swift and Xcode. I watched a few Swift tutorials over the week and toyed around with some of the syntax in my local development environment.

Beyond that, I have started researching model selection for our dirt detection problem. This is a crucial component of our end-to-end system, as it plays a large part in how easily we can achieve our use-case goals. One route I am exploring is Apple's Live Capture feature. The benefit is that it is native to Apple's Vision framework, so I should have no issue integrating it into our existing system. The downside is that this approach is typically used for object detection rather than dirt detection, and dirt particles may be too fine-grained for an object detector to pick up. Another option I am considering is the DeepLabV3 model, which specializes in segmenting the pixels of an image into different objects. For our use case, we just need to differentiate between the floor and essentially anything that is not the floor. If we can detect small particles as objects distinct from the floor, we could move forward with some simple casing on the size of those objects for our dirt detection; a rough sketch of that idea is below. The aim is to experiment with all of these models over the next couple of days and settle on a model of choice by the end of the week.
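To make the DeepLabV3 route concrete, here is a minimal sketch of what the size-based casing could look like once the model is wrapped in a Vision request. This is an experiment scaffold rather than our final pipeline: the detectDirt function name, the tile size, and the pixel-count thresholds are all hypothetical placeholders, and it assumes Apple's published DeepLabV3 Core ML model, whose 513x513 output labels anything it cannot classify as "background" (which I am treating as floor here).

```swift
import Vision
import CoreML
import CoreGraphics

// Sketch: segment the frame with DeepLabV3, treat the "background" class
// as floor, and flag small non-floor clusters as candidate dirt.
// The tile size and pixel-count thresholds below are hypothetical.
func detectDirt(in image: CGImage, model: MLModel) throws -> Int {
    let vnModel = try VNCoreMLModel(for: model)
    var dirtTileCount = 0

    let request = VNCoreMLRequest(model: vnModel) { req, _ in
        guard let result = req.results?.first as? VNCoreMLFeatureValueObservation,
              let segMap = result.featureValue.multiArrayValue else { return }

        let height = segMap.shape[0].intValue   // 513 for Apple's DeepLabV3
        let width  = segMap.shape[1].intValue   // 513

        // Scan the segmentation map in small tiles; a tile with only a few
        // non-floor pixels is treated as a small particle (candidate dirt),
        // while large clusters are treated as ordinary objects.
        let tile = 16                              // hypothetical tile size
        let minDirtPixels = 4, maxDirtPixels = 64  // hypothetical size casing
        for ty in stride(from: 0, to: height, by: tile) {
            for tx in stride(from: 0, to: width, by: tile) {
                var nonFloor = 0
                for y in ty..<min(ty + tile, height) {
                    for x in tx..<min(tx + tile, width) {
                        // Class 0 ("background") stands in for the floor here.
                        if segMap[[y as NSNumber, x as NSNumber]].intValue != 0 {
                            nonFloor += 1
                        }
                    }
                }
                if (minDirtPixels...maxDirtPixels).contains(nonFloor) {
                    dirtTileCount += 1
                }
            }
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(cgImage: image).perform([request])
    return dirtTileCount
}
```

A tile count is obviously a crude proxy for "number of dirt particles"; if this route pans out, a proper connected-components pass over the segmentation map would be the natural next step.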

We are mostly on schedule, but we ran into some technical difficulties with the Jetson. We initially did not plan on using a Jetson; it was a quick change of plans when we were choosing hardware, since the department was low on Raspberry Pis and well stocked on Jetsons. The Jetson also has CUDA acceleration, which suits our use case, since our project is processing-intensive. Our issue with the Jetson may delay end-to-end integration and is stopping Harshul from running certain initial tests he had planned, but I can experiment with my modules independently of the Jetson, so I am currently not blocked. In addition, since we have replaced the active illumination with an existing product component, we are ahead of schedule on that front!

In the next week, I (again) hope to have a model selected for dirt detection. I also plan to help my groupmates write the design document, which I expect to be a fairly time-consuming part of my tasks for next week.
