Erin’s Status Report for 3/9

I spent the majority of my time this week writing the design report, primarily the testing and verification section. Our group had thought about testing protocols for multiple use-case requirements, but I found scenarios we had failed to consider; for example, I added a section on the positioning of the camera. In addition, much of our existing testing and verification content had to be refined, since the granularity at which we had specified our tests was not clear enough. I redesigned several of the testing methods and added a couple more that we had not previously covered in depth in our presentations.

The Jetson Camera Mount test is a new component of our Testing and Verification section that we had not considered before. I designed this section in its entirety this week and have started to execute the plan itself. We had briefly discussed how to mount the camera, but our group had never gotten into the nitty-gritty details of the design. While creating the testing plan, I realized that the camera component would carry additional costs and that mounting it could introduce other complications, which led me to brainstorm additional contingency plans. For example, the camera will be mounted separately from the Jetson computer itself; if we mount the computer higher than the camera, we will need a longer wire. The reasoning behind the separation is to combat overheating and to keep the camera away from external sources of interference. Moreover, I designed the actual testing plan for mounting the camera from scratch. We need to find the optimal angle for the camera so that it captures the entire span of floor covered by the vacuum, and we also need to tune the height of the camera so that there is not excessive interference from dirt particles or vibrations from the vacuum itself. To account for all of these factors, I created a testing plan covering eight different camera angles and three different height configurations (a sketch of the resulting test matrix is below). The number of configurations was determined by a few simple placement tests I conducted with my group using the hardware we already had. The next step, which I hope to have completed by the end of the week, is to execute this test and begin integrating the camera hardware into the existing vacuum system. If a gimbal or a specialized 3D-printed mount is needed, I plan to reach out to Harshul to design it, as he has more experience in this area than the rest of us.
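To make the plan concrete, here is a minimal sketch of how the test matrix is enumerated. The specific angle and height values are placeholders for illustration, not the configurations we settled on from our placement tests.

from itertools import product

# Placeholder values for illustration only; the real candidates come from the
# placement tests we ran with the existing hardware.
CAMERA_ANGLES_DEG = [0, 15, 30, 45, 60, 75, 90, 105]   # 8 candidate tilt angles
MOUNT_HEIGHTS_CM = [10, 20, 30]                         # 3 candidate heights

def test_matrix():
    """Enumerate every angle/height configuration to be evaluated."""
    return list(product(CAMERA_ANGLES_DEG, MOUNT_HEIGHTS_CM))

for angle, height in test_matrix():
    print(f"angle={angle} deg, height={height} cm")  # 24 configurations in total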

Our group is on pace with our schedule, although the ordering of some components of our project has been switched around. Additionally, after accounting for the slack time, we are in good shape to produce a decently demonstrable product by the demo deadline, which is coming up in about a month. I would like to have some of the hardware components pieced together sooner rather than later, though, as I can foresee the Jetson mounts causing issues in the future, and I hope to get those nuances sorted out while we still have time to experiment. I also plan to get the AR system set up to test on my device. One thing that has changed is that Nathalie and Harshul discovered that LiDAR is not strictly necessary to run all of the AR scripts we plan to integrate into our project; the LiDAR scanner simply makes the technology faster and more accurate. We still would not use my phone to demonstrate our project, but with this knowledge I can help them more with the development process. I hope to get this fully set up within the week, although I do not think it will be a blocker for anyone in our group. I recently created a private GitHub repository and pushed all of our existing dirt detection code to the remote so everyone can access it. Once Harshul has refined some of his code for the AR component of our project, I will be able to pull his code and try to run it on my own device. Our group has also discussed our GitHub etiquette: once software development ramps up, we plan on using pull requests and pushing to our own individual branches before merging into the main branch. For now, we are keeping the process lightweight, as we are working on non-intersecting codebases.

Team Status Report for 2/24

We resolved the previous issue of the Jetson not working: we got a new one from inventory, flashed it with the OS, and have it up and running. We performed dummy object detection experiments with particles on a napkin and observed a high false positive rate, a challenge we will work on in the coming weeks. All three of us have started onboarding with Swift.

We changed our use case and technical requirements for cleanliness to measure the actual size of dirt particles instead of the covered area, which was too vague: 15% coverage of an area does not really have meaning in context, so we instead want to measure meaningful dirt particles, specifically those that are >1 mm in diameter and within 10 cm of the camera. We have also created new battery life requirements for the vacuum, namely that it must be active for over 4 hours, and have performed the accompanying mAh calculations. We updated our block diagrams and general design to include a form of wireless power with batteries that we plan on ordering in the coming week. In addition, we discovered that developing with Xcode without a developer account/license means we can only deploy with a cable plugged into our phone. While this is fine for the stage of development we are currently in, we need to purchase at least one developer license so that we can deploy wirelessly. This is the only adjustment that impacted our budget; we did not make any other changes to the costs our project would incur. We do not foresee many more use case/system adjustments of this degree.
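As a rough sanity check on the 4-hour requirement, the calculation looks like the sketch below. The power draw and usable-capacity figures here are assumed nominal values for a Jetson-class board, not our measured numbers.

# Assumed nominal figures (not measured): ~10 W draw from a 5 V supply, with
# 80% of the rated battery capacity treated as usable.
POWER_W = 10.0
VOLTAGE_V = 5.0
RUNTIME_H = 4.0
USABLE_FRACTION = 0.8

current_ma = (POWER_W / VOLTAGE_V) * 1000                 # ~2000 mA draw
required_mah = current_ma * RUNTIME_H / USABLE_FRACTION   # ~10000 mAh
print(f"Required battery capacity: {required_mah:.0f} mAh")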

Our timeline accounted for enough slack that the schedule has remained unchanged, but we definitely need to stay on track before spring break. We managed to find a functioning Jetson, which has allowed us to stay on schedule; this was our challenge from last week, since we did not know what the problem was or how long we would be blocked on the Jetson. Luckily this has been resolved, but we still need to acquire the Apple Developer license so that we can deploy to our phones wirelessly. This week, one of our main focus points will be the room mapping: we want to soon get a dummy app running with ARKit that can detect the edges of a room. Another frontrunner task is to flesh out the rest of our design document.



Erin’s Status Report for 2/24

This week I worked primarily on actually implementing the software component of our dirt detection. We had ordered the hardware in the previous week, but since we designed our workflow so that tasks can proceed in parallel, I was able to get started on a computer vision algorithm for dirt detection even while Harshul was still working with the Jetson. Initially, I had thought that I would use one of Apple's native machine learning models to solve this detection problem, and I had planned on testing several different models (as mentioned in last week's status report) against a number of toy inputs. However, I had overlooked the hardware we are using for this specific problem: the camera we bought is meant to be compatible with the Jetson. As such, I ended up opting for a different particle detection algorithm. The algorithm I used is written in Python, and I drew a lot of inspiration from a particle detection algorithm I found online. I have been working with the NumPy and OpenCV packages extensively, so I was able to tune the existing code to our use case. I tested the script on a couple of sample images of dirt and fuzz against a white napkin. Although this did not perfectly simulate the use case we have decided to go with, it was sufficient for determining whether this algorithm was a good enough fit. I realized that the algorithm could only be tuned so much with its existing parameters, so I experimented with a number of options to preprocess the image. I ended up adjusting the contrast and brightness of the input images, and I found a general threshold that allowed for a low false negative rate while still filtering out a significant amount of noise. Here are the results that my algorithm produced:

Beyond the parameter tuning and the image preprocessing, I tried numerous other algorithms for our dirt detection, running separate scripts and trying tuning methods for each of them. Most of them simply did not fit our use case: they either picked up far too much noise or were unable to terminate within a reasonable amount of time, indicating that their computational complexity was likely too high to be compatible with our project. (A sketch of the approach I settled on is included below.)
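For reference, here is a minimal sketch of the contrast/brightness preprocessing and thresholding pipeline described above. The constants, the sample file name, and the morphological cleanup step are illustrative assumptions rather than the exact parameters I tuned.

import cv2
import numpy as np

# Illustrative constants only; the actual tuned values differ.
ALPHA = 1.8       # contrast gain
BETA = -40        # brightness offset
THRESH = 120      # binarization threshold
MIN_AREA = 4      # minimum blob area in pixels kept as "dirt"

def detect_dirt(path):
    """Return bounding boxes of candidate dirt particles in an image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Boost contrast and shift brightness before thresholding.
    adjusted = cv2.convertScaleAbs(img, alpha=ALPHA, beta=BETA)
    # Dark particles on a light napkin, so invert the binary threshold.
    _, mask = cv2.threshold(adjusted, THRESH, 255, cv2.THRESH_BINARY_INV)
    # Small morphological open to suppress single-pixel noise.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_AREA]

print(detect_dirt("napkin_sample.jpg"))  # hypothetical sample image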

I have also started working on the edge detection, although I have not finished as much as I would have liked at this point.

I am currently a little behind schedule, as I have not entirely figured out how to run an edge detection algorithm for our room mapping. This delay was partly due to our faulty Jetson, which Harshul has since replaced. I plan to work a little extra this week, and perhaps put in some time over the upcoming break to make up for the action items I am missing.

Within the next week, I hope to get a large part of the edge detection working. My hope is that I will be able to run a small test application from either Harshul's or Nathalie's phone (or their iPads), as my phone does not have the hardware required to test this module. I will have to find some extra time when we are all available to sync on this.

Team Status Report for 2/17

We recently encountered an issue that could jeopardize the success of our project: our Jetson appears to be unable to receive any serial input. We are currently troubleshooting and trying to find a workable solution without replacing the unit entirely, although we have accounted for the slack time we may need in case we do have to reorder the component. We will make sure to seek a replacement from the ECE inventory.

Our original plan did not include a Jetson; we had planned to use a Raspberry Pi. Over this past week, we made the decision to deviate from that plan and opted for the Jetson instead, since the Jetson is CUDA accelerated and one of our group members (Harshul) has existing experience working with it. In addition, we have swapped out the active illumination LED for an existing off-the-shelf product. This decision was based mostly on the fact that the component we chose is affordable and that we would otherwise have had to spend a significant amount of time designing our own illumination module to be safe for the human eye.

Our schedule has not changed, but the time it has taken to perform setup and acquire working hardware was longer than expected, which we absorbed with our slack time. Going forward it is important for us to remain on task, and we will do so by setting more micro-deadlines between the larger deadlines that Capstone requires. It is also essential that we work in parallel on getting ARKit floor mapping working, so that the hardware issues we face do not become blockers that cause delays in the schedule.


Erin’s Status Report for 2/17

This week I mainly worked on getting started with the dirt detection and figuring out what materials to order. Initially, our group had wanted to use an LED for active illumination and then run our dirt detection module on the resulting illuminated imagery, but I found an existing product that we ended up purchasing which takes care of this for us. The issue with the original design was that we had to be careful about whether the LEDs we were using were vision safe, bringing up both ethical questions and design facets that my group and I do not know enough about. Moreover, I have started working a little with Swift and Xcode: I watched a few Swift tutorials over the week and toyed around with some of the syntax in my local development environment. I have also started doing research on model selection for our dirt detection problem. This is an incredibly crucial component of our end-to-end system, as it plays a large part in how easily we can achieve our use case goals. I have looked into the Apple Live Capture feature, as this is one route I am exploring for dirt detection. The benefit is that it is native to Apple Vision, so I should have no issue integrating it into our existing system. However, the downside is that this model is typically used for object detection rather than dirt detection, and the dirt particles may be at too fine a granularity for the model to handle. Another option I am currently considering is the DeepLabV3 model, which specializes in segmenting the pixels of an image into different objects. For our use case, we just need to differentiate between the floor and essentially anything that is not the floor. If we are able to detect small particles as objects distinct from the floor, we can move forward with some simple casing on the size of those objects for our dirt detection. The aim is to experiment with all of these models over the next couple of days and settle on a model of choice by the end of the week.
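To get a feel for the DeepLabV3 option, here is a minimal sketch of the segmentation workflow, assuming the pretrained torchvision model. The pretrained label set does not include a "floor" class, and the file name and the size-filtering idea at the end are illustrative assumptions, so this shows the mechanics rather than our final approach.

import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Pretrained DeepLabV3; its label set does not include "floor", so this only
# demonstrates the per-pixel segmentation mechanics.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(path):
    """Return an (H, W) map of per-pixel class indices."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        out = model(batch)["out"][0]   # shape: (num_classes, H, W)
    return out.argmax(0)

labels = segment("floor_sample.jpg")  # hypothetical test image
# The idea would then be to group pixels that differ from the dominant (floor)
# class into connected components and filter them by size for dirt detection.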

We are mostly on schedule, but we ran into some technical difficulties with the Jetson. We initially did not plan on using a Jetson; this was a quick change of plans when we were choosing hardware, as the department was low in stock on Raspberry Pis and high in stock on Jetsons. The Jetson also has CUDA acceleration, which is good for our use case, since we are working on a processing-intensive project. Our issue with the Jetson may cause delays on the end-to-end integration side and is stopping Harshul from performing certain initial tests he wanted to run, but I am able to experiment with my modules independently of the Jetson, so I am currently not blocked. In addition, since we have replaced the active illumination with an existing product, we are ahead of schedule on that front!

In the next week, I (again) hope to have a model selected for dirt detection. I also plan to help my groupmates write the design document, which I foresee being a fairly time-consuming part of my tasks for next week.

Erin’s Status Report for 2/10

This week I worked primarily on the proposal presentation that I gave in class. I spent time practicing and thinking about how to structure the presentation to make it more palatable. Additionally, prior to my Monday presentation, I spent a good amount of time justifying the error values we were considering for our use case requirements. I also worked on narrowing the scope of our project, trying to calibrate the amount of work our group would have to do to something I believe is achievable during the semester. Initially, we had wanted our project to work on all types of flooring, but I realized that the amount of noise picked up on surfaces such as carpet or wood might make the project too difficult. I also spent some time looking at existing vacuum products so that our use case requirements would make sense relative to the products currently on the market. I worked on devising a Gantt chart (shown below) for our group as well; it is aligned with the schedule of tasks that Nathalie created and shows the planned progression of our project through the fourteen-week semester. Finally, I also looked into some existing computer vision libraries and read up on ARKit and RealityKit to familiarize myself with the technologies our group will be working with in the near future.

Our group is on track with respect to our Gantt chart, although I would like us to stay slightly ahead of schedule. We plan on syncing soon and meeting up to figure out exactly what hardware we need to order.

Within the next week, I hope to have better answers to the questions that came up in response to our initial presentation. Furthermore, I hope to make headway on dirt detection, as that is the next planned task that I am running point on. This starts with getting the materials ordered, figuring out how our component may fit on a vacuum, and brainstorming backup plans in case our initial plan for the dirt detection LED falls through.