Team Report for 3/16

Risks

Nathalie and Harshul are working on projecting a texture onto the floor, with memory (persistence across sessions). Through demos and experimentation, it seems that the accuracy of the plane mapping depends heavily on how the user performs the initial scan. One major risk that could jeopardize the project is the accuracy of this mapping, because both dirt detection and coverage metrics depend on being rendered correctly on the map. To mitigate this, we are researching how best to anchor the plane points during our implementation phases, and we are testing in different kinds of rooms (empty rooms versus ones with many objects on the floor) to validate our scope of a plain white floor.

Erin is working on dirt detection, and we found that our initial algorithm was sensitive to noise. We have created a new dirt detection algorithm that relies on many of OpenCV's built-in preprocessing functions rather than preprocessing the input images ourselves. While we originally thought the algorithms themselves were very sensitive to noise, we have since realized that much of the problem lies with our image inputs. The new algorithm is less sensitive to shading noise, but it will still classify surface patterning as dirt. We plan to tune the algorithm to be less sensitive to noise and test its performance at the chosen camera height and angle.

The main risk encountered in working with plane texturing was the inaccuracy with which the projected plane was fitted to the detected plane's boundaries. Since this is a core feature, it is a top priority that we plan to address this coming week, and we are meeting as a team to develop a more robust approach. We also need this component finished before we can begin integration, which we expect to be a nontrivial task, especially since it involves all of our hardware.

 

Next Steps

We are currently finalizing the ideal height and angle for the camera mount. Once that is decided, we will threshold and validate our definition of dirt with respect to the selected angle. Once dirt is detected, we will need to record its position so it can be communicated to the augmented reality parts of our system; we still need to sync as a team on exactly what information the augmented reality component needs to receive from the Jetson and camera. For the texture mapping on the floor, we are working on projecting the overlay with the accuracy defined in our technical scope. After that, we will work on tracking the motion of a specific object (which will represent the vacuum) within the floor mapping, with the goal of drawing a traceable line on our overlay that indicates where we can erase parts of the map.

Erin’s Status Report for 3/16

The focus of my work this week was calibrating the height and angle at which we will mount the Jetson camera. Over the past two weeks, I spent a sizable amount of my time creating a procedure that any member of my group could easily follow to determine the best configuration for the Jetson camera. This week, I performed the tests and produced a preliminary result: I believe that mounting the camera four inches above floor level at a forty-five degree angle will produce the highest quality results for our dirt detection component. Note that these results are subject to change, as our group may conduct further testing at a finer granularity. In addition, I still have to sync with the rest of my group members in person to discuss these findings.

Aside from running the actual experiments to determine the optimal height and angle for the Jetson camera, I also incorporated our active illumination into the dirt detection module. We had previously received our LED component, but we had been running the dirt detection algorithms without it. Incorporating this element into the dirt detection system gives us a more holistic understanding of how well our computer vision algorithm performs with respect to our use case. As shown by the images I used as testing inputs, my setup wasn't perfect—the "background," or "flooring," is not an untextured, patternless white surface. I was unable to perfectly mimic our use case scenario, as we have not yet purchased the boards that we intend to use to demonstrate the dirt detection component of our product. Instead, I used paper napkins to simulate the white flooring required by our use case constraints. While imperfect, this setup suffices for our testing.

Prior to running the Jetson camera mount experiment, I had been operating under the assumption that the outcome of this experiment would depend heavily on the outputs generated by the computer vision script. However, I realized that for certain input images, running the computer vision script was wholly unnecessary; the input image itself did not meet our standards, so that camera configuration should not have been considered regardless of how the computer vision script performed. For example, at a height of two inches and an angle of zero degrees, the camera was barely able to capture anything useful, as shown in Figure 1 (below). There is far too little workable data within the frame; it does not capture the flooring over which the vacuum has just passed. As such, this input image alone rules out this height and angle as a candidate for our Jetson camera mount.

Figure 1: Camera Height (2in), Angle (0°)

I also spent a considerable amount of time refactoring and rewriting the computer vision script that I was using for dirt detection. I have produced a second algorithm which relies more heavily on OpenCV's built-in functions, rather than preprocessing the inputs myself. While the output of my test image (the chosen image from the Jetson camera mount experiment) run through this new algorithm does appear slightly noisier than we would like, I did not consider this a substantial issue. The input image itself was noisy: our use case specifies patternless, white flooring, but the napkins in the image were highly textured. In this scenario, the fact that the algorithm detected the napkin patterning is actually beneficial to our testing, which is a factor I failed to consider the last couple of times I tried to re-tune the computer vision script.

I am slightly behind schedule with regard to the plan described by our Gantt chart. However, this issue can (and will) be mitigated by syncing with the rest of my group. In order to perform the ARKit Dirt Integration step of our scheduled plan, Nathalie and Harshul will need to have the AR component working in terms of real-time updates and localization.

Within the next week, I hope to help Nathalie and Harshul with any areas of concern in the AR component. In addition, I plan to start designing the camera mount, and place an order for the Jetson camera extension cord, as we have decided that the Jetson will not be mounted very close to the camera.

Nathalie’s Status Report for 3/16

The first part of this week I spent brainstorming and researching the ethical implications of our project at scale. The ethics assignment gave me an opportunity to think about the broader societal implications of the technology that we are building, who hypothetical users could be, and potential safety concerns. The Winner article and the Ad Design paper led me to think about the politics of technologies and who is responsible for the secondary implications that arise. Capstone is a small-scale version of real-world projects from industry: I weighed the ethical implications of developing emerging technologies like autonomous driving, ultimately considering how algorithmic design decisions have serious consequences. In the case of autonomous driving, this can involve making decisions about what and whom to sacrifice (e.g., the driver or the child walking across the street). Personally, I think developers need to take more responsibility for the technical choices that we make, and this has led me to think about the potential misuse of, and the decisions behind, this capstone project. In relation to our project, this has led me to consider things I hadn't previously thought of, like the environmental impact of the materials that we are using. We are choosing to build a mount and 3D print a customized part, and I realized that we never really talked about what material we are going to make it from. I want to create these parts out of recycled, biodegradable plastic, because if they were produced at scale, the responsibility would be on us as developers to reduce harmful secondary effects.

I’ve also been working on the augmented reality floor overlay in Swift and Xcode, researching how the ARKit plane detection demo actually works in the backend. This research supports our next step of projecting an overlay/texture onto the actual floor with edge detection and memory, which Harshul and I are currently working on. The augmented reality plane detection algorithm does two things: (1) it builds a point cloud mesh of the environment by creating a map, and (2) it establishes anchor points to position the device relative to the environment it is in. Memory is achieved through loop closure in SLAM, which tries to match key points in the current frame against previously seen frames. If enough key points match (above an established threshold), the frame is considered a match with the previously seen environment. Apple specifically uses Visual Inertial Odometry, which essentially maps real-world points to points on the camera sensor. Sensor readings are very frequent (on the order of 1000 per second) and allow the interpreted position to be updated regularly.
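To make this concrete, here is a minimal sketch of how a plane detection session is typically set up with ARKit and SceneKit. This mirrors the demo app's structure rather than our final implementation, and the class and property names are just illustrative.

```swift
import UIKit
import ARKit
import SceneKit

// Minimal sketch of ARKit plane detection, mirroring the demo app's setup.
// Not our final implementation; names here are illustrative.
class PlaneDetectionViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking runs the visual-inertial odometry described above;
        // enabling horizontal plane detection makes ARKit surface floor planes as anchors.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // ARKit calls this once it has fitted a plane anchor to a detected surface.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected plane with extent: \(planeAnchor.extent)")
    }
}
```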

Plane Detection: https://link.springer.com/chapter/10.1007/978-1-4842-6770-7_9 – This reading was helpful in looking at code snippets for the plane detection controllers in Swift. It details SceneView Delegate and ARPlaneAnchor, the latter of which is useful for our purposes.

Plane Detection with ARKit: https://arvrjourney.com/plane-detection-in-arkit-d1f3389f7410

Fundamentals about ARCore from Google: https://developers.google.com/ar/develop/fundamentals

Apple Developer Understanding World Tracking: https://developer.apple.com/documentation/arkit/arkit_in_ios/configuration_objects/understanding_world_tracking

Designing for AR, Creating Immersive Overlays: https://medium.com/@JakubWojciechowskiPL/designing-for-augmented-reality-ar-creating-immersive-digital-overlays-0ef4ae9182c2

Visualizing and Interacting with a Reconstructed Scene: https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene

Placing Objects and Handling 3D Interaction: https://developer.apple.com/documentation/arkit/arkit_in_ios/environmental_analysis/placing_objects_and_handling_3d_interaction

Anchoring the AR content, updating AR content with plane geometry information (ARKit Apple Developer)

Progress and Next Steps

Harshul and I are working on turning all this theory into code for our specific use case. It's different from the demos because we only need to map the floor. I'm in the process of figuring out how to project a texture onto the floor, using my iPhone as the demo device and the floors around my house as the environment, even though this does not perfectly mimic our scope. These projections will include memory, and next steps include making sure that we can track the path of a specific object on the screen. Erin is working on thresholding dirt detection, and then we will work together to figure out how to map the detected dirt areas onto the floor map. We are on schedule according to our Gantt chart, but we have accounted for some slack time because blockers during these steps would seriously impact the rest of the project. We are setting up regular syncs and updates so that each section being worked on in parallel keeps making progress.
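As a rough starting point for that path-tracking step, here is a sketch of one way to trace a moving object in ARKit, using the device itself as a stand-in for the vacuum; the sampling threshold, marker size, and class name are illustrative choices, not design decisions.

```swift
import ARKit
import SceneKit

// Sketch: trace the path of a tracked object (here, the device itself as a stand-in)
// by sampling its world position each frame and dropping small marker nodes.
final class PathTracer: NSObject, ARSessionDelegate {
    private weak var sceneView: ARSCNView?
    private var lastPosition: simd_float3?

    init(sceneView: ARSCNView) {
        self.sceneView = sceneView
        super.init()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera transform's last column is the device's world position.
        let t = frame.camera.transform.columns.3
        let position = simd_float3(t.x, t.y, t.z)

        // Only drop a new marker every few centimeters to keep the scene light.
        if let last = lastPosition, simd_distance(last, position) < 0.05 { return }
        lastPosition = position

        let marker = SCNNode(geometry: SCNSphere(radius: 0.005))
        marker.simdPosition = position
        sceneView?.scene.rootNode.addChildNode(marker)
    }
}
```

Replacing these breadcrumb markers with an erasable line drawn on the floor texture is the part we still need to work out.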

 

Harshul’s Status Report for 3/16

This week my focus was mainly directed toward understanding the three texturing approaches I outlined in the previous status report and experimenting with AR plane features within our project. Since shader programming with Metal is a rather herculean task, I mainly honed in on SceneKit's material APIs. I worked on updating the initial object placement app to extract a plane from the world instead of an anchor point, by using the type ARPlaneAnchor instead of VirtualObjectAnchor. Using SceneKit I created a plane, and I read up on SCNMaterial to understand how to instantiate a texture and what properties are available to us. I started with a minimal example that simply creates a plane in the world, then added properties to make it translucent and attempted to map it onto the world plane. The first attempt was not fitted well and was opaque. I managed to improve the anchoring of the plane to get it coplanar with the floor, but it is not staying within the plane's boundaries. I am meeting Nathalie tomorrow to sync on our AR progress and brainstorm approaches to anchor the plane as accurately as possible, in line with the plane-fitting tests we outlined in our design report. Right now I am using a hit test, which returns a transform capturing the position and orientation of the tapped point; I think this is overfitting the plane to the tapped point rather than recovering the information about the plane that the transform encodes. One idea I plan on exploring on Sunday is having the user tap multiple times so that a plane can be extracted from the set of world transforms.
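For reference, one approach we could try is sizing the SCNPlane from the ARPlaneAnchor's estimated extent rather than from a single tapped transform, and re-fitting it as ARKit refines the boundary estimate. The sketch below is illustrative, not our current code.

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: fit a translucent SCNPlane to a detected floor anchor and keep it
// within the plane's boundaries as ARKit refines its estimate.
class FloorOverlayRenderer: NSObject, ARSCNViewDelegate {

    // Called when ARKit detects a new plane anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Size the plane from the anchor's estimated extent rather than a tapped point.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))

        // Translucent material so the real floor remains visible underneath.
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.systemBlue.withAlphaComponent(0.4)
        material.isDoubleSided = true
        plane.materials = [material]

        let planeNode = SCNNode(geometry: plane)
        planeNode.simdPosition = planeAnchor.center   // center within the anchor
        planeNode.eulerAngles.x = -.pi / 2            // lay the plane flat (SCNPlane is vertical by default)
        planeNode.name = "floorOverlay"
        node.addChildNode(planeNode)
    }

    // Called as ARKit refines the plane's center and extent.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = node.childNode(withName: "floorOverlay", recursively: false),
              let plane = planeNode.geometry as? SCNPlane else { return }

        // Re-fit the overlay to the refined boundary estimate.
        plane.width = CGFloat(planeAnchor.extent.x)
        plane.height = CGFloat(planeAnchor.extent.z)
        planeNode.simdPosition = planeAnchor.center
    }
}
```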

 

In terms of timeline, I think we're generally on track. Plane detection was scheduled earlier in our plan, but the progress we've made on relocalization has contributed to the camera-angle memory task.

Next steps entail getting the planes to fit exactly, working on erasure, and then building out the app's features for human use. Professor Kim wanted to see an example of our AR apps in action, so we will also be ready to demonstrate the features that we have so far.

Erin’s Status Report for 3/9

I spent the majority of my time this week writing the design report. I worked primarily on the testing and verification section of the design document. Our group had thought about the testing protocols for multiple use-case requirements, but there were scenarios I found we had failed to consider. For example, I added the section regarding the positioning of the camera. In addition, much of the content that we previously had for testing and verification needed to be refined; the granularity at which we had specified our tests was not clear enough. I redesigned several of the testing methods and added a couple more that we had not previously covered in depth in our presentations.

The Jetson Camera Mount test was a new component of our Testing and Verification section that we had not considered before. I designed the entirety of this section this week and have started to execute the plan itself. We had briefly discussed how to mount the camera, but our group had never gotten into the nitty-gritty details of the design. While creating the testing plan, I realized that there would be additional costs associated with the camera component, and that mounting the camera could introduce other complications, which led me to brainstorm additional contingency plans. For example, the camera will be mounted separately from the Jetson computer itself; if we mount the computer higher relative to the camera, we will need a longer cable. The reasoning behind this separation is to combat overheating and to keep the device away from external sources of interference.

Moreover, I designed the actual testing plan for mounting the camera from scratch. We need to find the optimal angle for the camera so it can capture the entire span of space covered by the vacuum, and we also need to tune the height of the camera so that there is not an excessive amount of interference from dirt particles or vibrations from the vacuum itself. To account for all of these factors, I created a testing plan that covers eight different camera angles and three different height configurations. The number of configurations was determined by a couple of simple placement tests I conducted with my group using the hardware we already had. The next step, which I hope to have completed by the end of the week, is to execute this test and begin to integrate the camera hardware into the existing vacuum system. If a gimbal or specialized 3D-printed mount is needed, I plan to reach out to Harshul to design it, as he has more experience in this area than the rest of us.

Our group is on pace with our schedule, although the ordering of some components of our project has been switched around. Additionally, after accounting for slack time, we are in good shape to produce a decently demonstrable product by the demo deadline, which is coming up in about a month. I would like to have some of the hardware components pieced together sooner rather than later, though, as I can foresee the Jetson mounts causing some issues in the future. I hope to get these nuances sorted out while we still have time to experiment.

I also plan to get the AR system set up to test on my device. One thing that has changed is that Nathalie and Harshul discovered that LiDAR is not strictly necessary to run all of the AR scripts that we plan to integrate into our project; the LiDAR scanner simply makes the technology work better in terms of speed and accuracy. We would thus not use my phone to demonstrate our project, but with this knowledge, I can help them more with the development process. I hope to get this fully set up within the week, although I do not think it will be a blocker for anyone in our group.

I recently created a private GitHub repository and have pushed all our existing dirt detection code to the remote so everyone is able to access it. When Harshul has refined some of his code for the AR component of our project, I will be able to seamlessly pull his code and try to run it on my own device. Our group has also discussed our GitHub etiquette—once software development ramps up, we plan on using pull requests and pushing to our own individual branches before merging into the main branch. For now, we plan to operate more informally, as we are working on non-intersecting codebases.

Team Status Report for 3/9


What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We believe that mounting the Jetson components could cause some unforeseen issues. We need to design hardware to hold the Jetson camera, and we may need to create another mount for the Jetson computer. The challenges we are facing include creating a system which is stable, yet does not inhibit access to any part of the computer or the camera. In addition, the computer mount should not cause the Jetson to overheat in any capacity; this adds a constraint on the mount design to ensure that the heatsink is exposed to the air and has good airflow.

The proofs of concept have covered a good amount of the feature set that we need, but a key feature that has not been accomplished yet is projecting a texture onto a surface and modifying that texture. We have made progress with mapping, and the next step this coming week is to project a texture onto the floor plane and to explore how we can modify the texture as we move. To mitigate the complexity of this task, we have identified several candidate approaches we can experiment with to find the one that best fits our needs. Apple provides two AR-capable APIs, SceneKit and RealityKit, and both support projecting textures. We could write a shader, modify the texture in real time, or create another scene node to occlude the existing texture on the plane. This will be a key action item going forward.
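As an illustration of the "modify the texture in real time" option, the sketch below keeps the overlay in a bitmap and redraws it with transparent holes where the vacuum has passed. The class name, texture resolution, and radius are placeholders, and the world-to-texture coordinate conversion is assumed to happen elsewhere.

```swift
import UIKit
import SceneKit

// Sketch of one candidate approach: keep the overlay texture in a bitmap and
// redraw it as the vacuum moves, punching transparent "clean" holes into it.
final class CoverageTexture {
    private let size = CGSize(width: 512, height: 512)  // texture resolution (assumed)
    private var cleanedPoints: [CGPoint] = []            // cleaned spots, already in texture coordinates

    // Record a newly cleaned spot.
    func markCleaned(at point: CGPoint) {
        cleanedPoints.append(point)
    }

    // Render the current coverage state into an image usable as a material's contents.
    func renderImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { ctx in
            // Start fully "uncleaned": a translucent tint over the whole plane.
            UIColor.systemBlue.withAlphaComponent(0.4).setFill()
            ctx.fill(CGRect(origin: .zero, size: size))

            // Erase circles where the vacuum has passed.
            ctx.cgContext.setBlendMode(.clear)
            for p in cleanedPoints {
                let r: CGFloat = 20
                ctx.cgContext.fillEllipse(in: CGRect(x: p.x - r, y: p.y - r, width: 2 * r, height: 2 * r))
            }
        }
    }

    // Push the updated bitmap onto the floor plane's material.
    func apply(to material: SCNMaterial) {
        material.diffuse.contents = renderImage()
    }
}
```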

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We had not previously considered mounting the Jetson computer at a higher point on the vacuum. This alteration will incur an extra cost to cover the extension cable for the Jetson camera. In addition, we previously planned on using only one of the active illumination light components that we purchased, but we are now considering using two. This will not incur any additional cost, as we purchased two units to start.

Schedule 

The schedule has not changed from prior weeks. Our subtasks remain assigned as follows: Erin on dirt detection, Harshul and Nathalie on augmented reality plane mapping & tracking. We are going to sync on progress and reassign tasks in the coming weeks.

ABET Considerations

Part A was written by Harshul, Part B was written by Nathalie, and Part C was written by Erin.

Part A: Global Factors

Some key global factors that we are considering are human-centric design and technology penetration. To make this app accessible to the broadest customer base, it is important to avoid unnecessary complexity in the application, to ensure that the app is intuitive to users, and to leverage built-in accessibility APIs for different language modalities. With respect to technology penetration, we are keenly aware that AR and XR systems are still in the early stages of the product adoption curve, which means that the cost of truly immersive AR solutions like headsets is quite high, and they are not nearly as ubiquitous as smartphones. Since Apple has significant smartphone market share, we felt that building for the iPhone would allow for greater access to and penetration of our app, given the much lower barrier to entry. Additionally, because our application is designed to use Bluetooth and on-device capabilities, the app's functionality will not be constrained if deployed in rural regions with inconsistent or reduced wireless connectivity.

Part B: Cultural Factors

When accounting for cultural factors, it's important to consider what cleanliness means in different cultures. There are different customs and traditions associated with cleaning, and differences in the time of day and frequency at which people vacuum. Further, we are assuming that our users already have access to a vacuum and an outlet, which might not necessarily be the case. For example, based on statistics from Electrolux, Koreans vacuum most frequently, while Brazilians and Portuguese people statistically spend the longest time vacuuming.

Similarly, different cultures have different standards for cleanliness and often keep different decor elements on their floors, which changes the augmented reality mappings in ways that we might not be able to anticipate. Our use case already limits most of these scenarios by specifying a simple environment, but ultimately we still want to think about designing products for the practical world.

While our product's design and use case don't directly deal with religious symbolism or iconography, we must be considerate of the fact that cleanliness has religious significance in certain cultures, so it's worth being mindful of that in any gamification features we add to ensure that we are not being insensitive.

Part C: Environmental Factors

Our project takes environmental factors into account, as we are creating a purely additive product. All the components that we are building can be integrated into an existing vacuum design and will not produce a considerable amount of waste. We initially intended to create an active illumination module using LEDs, but we decided to forego this idea: building a good, vision-safe illumination method would cost raw material—we would have to cycle through multiple iterations of the product, and the final solution might not be as safe as an existing one. As such, we settled for an already-manufactured LED. We also recently discussed a method to mount the Jetson and its corresponding components to the vacuum module. One option that we are heavily exploring is a 3D-printed solution; we can opt for a recycled filament, which would be more environmentally friendly than some other raw material sources. Moreover, our project as a whole aims to help the user achieve a faster, better clean. It does not interfere with any other existing environmental factors in a negative way, and the energy needed to power our system is negligible compared to what a typical college student consumes on a daily basis.

Nathalie’s Status Report for 3/9

This week I spent a lot of time working on the design report and performing experiments to determine the best ways to detect planes, elaborating on the use case requirements and the technical design requirements. This involved writing descriptions of the ethical, social, and environmental considerations, thinking about how our tool interacts with the world around us. Specifically, for the use case requirements, I delved deeper into the main categories of mapping coverage, tracking coverage, and dirt detection cleanliness, adding a section on battery life after discussing this need as a group. We wanted to make sure that the vacuum was actually a usable product and that the Jetson would have sufficient battery life for it to be operational.

Experimenting with ARKit plane detection

We solidified our metrics for each of the technical design requirements through experimentation. By downloading the ARKit plane detection demo onto my iPhone, I was able to see what this looks like in different room environments. I tested the plane detection augmented reality algorithm in different rooms: the ECE capstone room, my living room, the stairs, and the kitchen. By testing in these different environments, I was able to observe the accuracy of the plane detection amid the different obstacles and objects present in each room. Not only did this app detect the surface area and corners of planes, but it also labelled objects: Wall, Floor, Table, Ceiling. Most of the random objects present were mapped with an area and labelled as Unknown, which is acceptable for our purposes.
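For reference, these labels come from ARKit's plane classification, which can be read straight off each detected plane anchor on supported devices. A minimal sketch (illustrative, not our app code):

```swift
import ARKit

// Sketch: reading ARKit's plane classification, which is where the
// Wall / Floor / Table / Ceiling / Unknown labels in the demo come from.
func label(for planeAnchor: ARPlaneAnchor) -> String {
    guard ARPlaneAnchor.isClassificationSupported else { return "classification unsupported" }

    switch planeAnchor.classification {
    case .floor:   return "Floor"     // the only label our overlay actually needs
    case .wall:    return "Wall"
    case .table:   return "Table"
    case .ceiling: return "Ceiling"
    default:       return "Unknown"   // seat, window, door, or unclassified
    }
}
```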


Since the ARKit plane detection has memory, I realized how much the initial mapping/scanning of the room matters for the accuracy of the edge detection. In the pictures on the left, I did an initial zoomed-out scan of the room before narrowing in on a specific corner, whereas on the right-hand side I did not map the room first. We can see the difference in the accuracy of the edges between the floor and the wall – the left picture is much more accurate. Hence, we have accounted for user error in our quantitative error margins of ±100%. We also tested this difference on various iPhones, particularly the iPhone 13 Pro and the iPhone 14 Pro Max. The iPhone 14 Pro Max was more accurate because of its updated LiDAR sensor.

I also did more research into related work, comparing and contrasting how different existing products act as partial solutions to the problem our product addresses. While there are a lot of potential AR applications, this is a relatively new field, so much of the development has yet to fully materialize and mostly involves expensive products like the Apple Vision Pro and the Meta Quest. We are trying to accomplish similar ideas in a more accessible way.

Much of our augmented reality experimentation is on track, and our next steps involve seeing if we can track objects and map them in the plane. In addition, we need to figure out how the hardware (specifically the camera) is going to be placed, to make sure that we can set up the software against a solidified hardware environment. Much of our post-break work is going to involve implementation and making sure that we encounter our roadblocks early rather than late.

Harshul’s Status Report for 3/9

This week I spent the majority of my time working on the design report and working with the ARKit applications on the iPhone. In the report I primarily focused on the intro, the system architecture, the design trade studies, and the sections on the implementation details for object placement and the software implementation. The work on the architecture section involved refining the architecture from the design presentation and clearly identifying subsystems and components, which allows for easier quantification of progress and tasks, as well as clearly defining subsystem boundaries and functionalities for system integration. We had discussed tradeoffs in our component selection, but the testing and research we had done is now quantified in the trade studies: we created comparison matrices with weighted metrics to quantitatively demonstrate the rationale behind our component selection, and performed a tradeoff analysis on whether to purchase or custom-fabricate certain components. Below is an example of one of the trade studies, with a 1-10 Likert scale and weights that sum to 1.0.
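As a quick illustration of how such a weighted matrix is scored (the metrics, weights, and ratings below are placeholders, not our actual values):

```swift
// Weighted-score computation behind a trade-study comparison matrix.
// Metrics, weights, and ratings are placeholders, not our actual values.
let weights: [String: Double] = ["accuracy": 0.4, "cost": 0.3, "ease of integration": 0.3] // sum to 1.0
let candidateRatings: [String: Double] = ["accuracy": 8, "cost": 6, "ease of integration": 9] // 1-10 scale

let weightedScore = weights.reduce(0.0) { total, metric in
    total + metric.value * (candidateRatings[metric.key] ?? 0)
}
print(weightedScore) // 0.4*8 + 0.3*6 + 0.3*9 = 7.7
```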

With respect to implementation details, I ran an ARKit world tracking session as a baseline to test whether we can save existing map information and reload that data to avoid having to localize from scratch every time. This feature would enable us to map out the space initially to improve overall accuracy. The second feature that this app tested was the ability to place an object anchored to a specific point in the world. This tests the user's ability to place an input, as well as the ARKit API's ability to anchor an object to a specific coordinate in the world map and to dynamically store and reload it without needing to re-localize from scratch. From here we plan on integrating this with the plane detection feature outlined in the next subsection, as well as testing mapping a full texture onto a plane in the map instead of just placing an object into the world, as shown in the images below.
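The save/reload flow boils down to archiving an ARWorldMap and feeding it back in as the session's initial map. A minimal sketch (file location and error handling are illustrative):

```swift
import ARKit

// Sketch: persisting an ARWorldMap so the app can relocalize to a previously
// scanned room instead of mapping from scratch.
let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("floorMap.arworldmap")

func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap = worldMap else { return }
        // Anchors we've placed (e.g., the floor plane anchor) are stored inside the map.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    configuration.initialWorldMap = worldMap   // relocalize against the saved map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```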

This was a key proof of concept of our ability to map and localize to the environment, and combining this approach with Nathalie's work on plane detection will be a key next step. Additionally, this work on mapping allowed me to assist in coming up with the test metric for mapping coverage. Researching options and using existing knowledge of the API, I outlined the use of the plane extent to extract the dimensions of a plane, as well as the ability to compute point-to-point distance measurements in AR, in order to define the mapping coverage test and ensure that the dimensional error and drift error on the planes are within the bounds specified by our requirements.
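Both measurements reduce to short helpers on top of ARKit types; here is a sketch of what they might look like (assuming iOS 16's planeExtent API, with helper names of our own choosing):

```swift
import ARKit
import simd

// Sketch of the measurements behind the mapping-coverage test: plane dimensions
// from the detected anchor, and point-to-point distance between two world anchors.
func planeDimensions(of anchor: ARPlaneAnchor) -> (width: Float, length: Float) {
    // planeExtent gives the estimated size of the detected plane in meters.
    (anchor.planeExtent.width, anchor.planeExtent.height)
}

func distanceBetween(_ a: ARAnchor, _ b: ARAnchor) -> Float {
    // The translation component of each anchor's transform is its world position.
    let pa = a.transform.columns.3
    let pb = b.transform.columns.3
    return simd_distance(simd_float3(pa.x, pa.y, pa.z), simd_float3(pb.x, pb.y, pb.z))
}
```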

 

With respect to timelines, things are on track; certain component orders have been moved around on the Gantt chart, but AR development is going well. The key next steps are to take our proof-of-concept applications and start integrating them into a unified app that can localize, select a floor plane, and then mesh a projected texture onto that plane.

Team Status Report for 2/24

We solved the previous issue of the Jetson not working and successfully managed to get a new one from inventory flashed with the OS and running. We performed dummy object detection experiments with particles on a napkin and observed a high false positive rate, which is a challenge that we are going to work on in the coming weeks. All three of us have successfully started onboarding with Swift. 

We changed our use case and technical requirements for cleanliness to measure the actual size of the dirt particles instead of the covered area, because the latter was too vague. We realized that 15% coverage of an area doesn't really have meaning in context, and we instead want to measure meaningful dirt particles, specifically those that are >1 mm in diameter and within 10 cm of the camera. We have also created new battery life requirements for the vacuum, such that it must be active for over 4 hours, and have performed the accompanying mAh calculations. We updated our block diagrams and general design to include a form of wireless power with batteries that we plan on ordering in the coming week. In addition, we discovered that developing with Xcode without a developer account/license means we can only deploy with a cable plugged into the phone. While this is fine for the stage of development we are currently in, we need to purchase at least one developer license so that we can deploy wirelessly. This is the only adjustment that impacted our budget; we did not make any other changes to the costs our project will incur. We do not foresee many more use case or system adjustments of this degree.
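For reference, the battery sizing is a back-of-the-envelope calculation along these lines; the 10 W at 5 V power budget below is an assumed illustration value, not our measured draw:

```swift
// Back-of-the-envelope battery sizing for the 4-hour runtime requirement.
// The 10 W / 5 V power budget is an assumed illustration value, not a measurement.
let powerWatts = 10.0
let supplyVolts = 5.0
let runtimeHours = 4.0

// Current draw (A) * runtime (h) * 1000 = required capacity in mAh at the supply voltage.
let requiredCapacitymAh = (powerWatts / supplyVolts) * runtimeHours * 1000
print(requiredCapacitymAh) // (10 / 5) A * 4 h = 8 Ah = 8000 mAh
```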

Our timeline has accounted for enough slack that the schedule has remained unchanged, but we definitely need to stay on track before spring break. We managed to find a functioning Jetson, which has allowed us to stay on schedule; this was our challenge from last week, because we did not know what the problem was or how long we would be blocked on the Jetson. Luckily this has been resolved, but we still need to acquire the Apple Developer pack so that we can deploy to our phones wirelessly. This week, one of our main focus points will be the room mapping—we want to get a dummy app running with ARKit which can detect the edges of a room. Another one of our frontrunner tasks is to flesh out the rest of our design document.



Erin’s Status Report for 2/24

This week I worked primarily on implementing the software component of our dirt detection. We had ordered the hardware in the previous week, but since we designed our workflow to parallelize well, I was able to get started with developing a computer vision algorithm for dirt detection even though Harshul was still working with the Jetson. Initially, I had thought that I would be using one of Apple's native machine learning models to solve this detection problem, and I had planned on testing multiple different models (as mentioned in last week's status report) against a number of toy inputs. However, I had overlooked the hardware that we were using to solve this specific problem—the camera we will be using is the one we bought to be compatible with the Jetson. As such, I ended up opting for a different particle detection approach. The algorithm I used was written in Python, and I drew a lot of inspiration from a particle detection algorithm I found online. I have been working with the NumPy and OpenCV packages extensively, so I was able to tune the existing code to our use case. I tested the script on a couple of sample images of dirt and fuzz against a white napkin. Although this did not perfectly simulate the use case that we have decided to go with, it was sufficient for determining whether this algorithm was a good enough fit. I realized that the algorithm could only be tuned so far with its existing parameters, and then experimented with a number of options for preprocessing the image. I ended up tuning the contrast and the brightness of the input images, and I found a general threshold that allowed for a low false negative rate while still filtering out a significant amount of noise. Here are the results that my algorithm produced:

Beyond the parameter tuning and the image preprocessing, I also tried numerous other algorithms for our dirt detection, running separate scripts and trying tuning methods for each of them. Most of them did not fit our use case at all: they picked up far too much noise, or they were unable to terminate within a reasonable amount of time, indicating that their computational complexity was likely far too high for our project.

I have also started working on the edge detection, although I have not made as much progress as I would have liked.

I am currently a little behind schedule, as I have not entirely figured out a way to run an edge detection algorithm for our room mapping. This delay was in part due to our faulty Jetson, which Harshul has since replaced. I plan to work a little extra this week, and possibly put in some time over the upcoming break, to make up for the action items I am missing.

Within the next week, I hope to get a large part of the edge detection working. My hope is that I will be able to run a small test application from either Harshul's or Nathalie's phone (or their iPads), as my phone does not have the hardware required to test this module. I will have to find some extra time when we are all available to sync on this topic.